
Google Cloud Platform (GCP) Cloud SQL Disaster Recovery

This blog post describes how to roll your own disaster recovery for GCP Cloud SQL. The backup process is automated and costs far less than a hot standby; recovery, however, is manual.

Introduction

One of the benefits of using the cloud is the ability to track and manage costs at a very granular level and to easily change your architecture and processes to bring those costs down. Unfortunately, the default behavior of many cloud tools often leads to a higher bill rather than a lower one. In particular, companies can easily double the cost of their systems by running a hot standby for disaster recovery when one isn't needed. Often, a database can afford to be down for a few hours (or even days) in an emergency; what it cannot afford is significant data loss, and the data must be available in the new disaster recovery (DR) environment. If you're using GCP Cloud SQL instances, there isn't a good built-in process that solves this problem without incurring higher costs. Here we'll describe a way to implement cold disaster recovery using Cloud Functions and backups stored in a different GCP region. For this process, we'll assume you already have a running Cloud SQL instance.

GCP Cloud SQL Disaster Recovery

This process uses a combination of Cloud Functions and on-demand backups to store an instance's data in a different region from the production system.

Step One: Define the DR Region

The first step is to define the DR region for the Cloud SQL instance. This is done by logging in to GCP and running the following PowerShell commands from your local machine. This simple location parameter definition should be saved as request.json in the same directory as the script below:

    {
        "location": "europe-west4"
    }

This script reads the file created above and requests an on-demand backup of the instance, stored in the new location (a POST to backupRuns creates a backup in the region named in request.json rather than in the instance's home region):

    gcloud auth login
    $cred = gcloud auth print-access-token
    $headers = @{ "Authorization" = "Bearer $cred" }

    Invoke-WebRequest `
        -Method POST `
        -Headers $headers `
        -ContentType: "application/json; charset=utf-8" `
        -InFile .\request.json `
        -Uri "https://www.googleapis.com/sql/v1beta4/projects/YourProjectName/instances/myinstancename/backupRuns" |
        Select-Object -Expand Content
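To confirm where a backup landed, you can list the instance's backup runs and include the location field. This is a quick check, assuming the same instance name as above; the location column should show europe-west4 for backups created with the request above:

    gcloud sql backups list --instance=myinstancename --format="table(id, windowStartTime, status, location)"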

Step Two: Create a Cloud Function

Next, you must create and schedule a Cloud Function to back up the instance on a regular basis. These backups are saved to the DR region. The requirements.txt file should appear as follows:

    # Function dependencies, for example:
    # package>=version
    google-auth
    requests

The Cloud Function itself is simply a call to the REST API (note that an HTTP-triggered Python Cloud Function takes a single request argument):

    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    def backup_cloudsql(request):
        # Default service-account credentials with the Cloud SQL admin scope.
        credentials, project = google.auth.default(
            scopes=['https://www.googleapis.com/auth/sqlservice.admin'])
        authed_session = AuthorizedSession(credentials)

        # Better to list instances & loop through results, but this is a blog post so... ;)
        response = authed_session.post(
            'https://www.googleapis.com/sql/v1beta4/projects/{}/instances/myinstancename/backupRuns'.format(project),
            json={'location': 'europe-west4'})  # keep the on-demand backup in the DR region
        return (response.text, response.status_code)
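The function can then be deployed with gcloud. The runtime and region here are illustrative choices, and the entry point defaults to the function name:

    gcloud functions deploy backup_cloudsql --runtime=python310 --trigger-http --no-allow-unauthenticated --region=us-central1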

Step Three: Schedule the Backup

Lastly, create a Cloud Scheduler job to run the Cloud Function on whatever schedule meets your recovery point objective (RPO).
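As a sketch, assuming an HTTP-triggered function deployed as above and a service account allowed to invoke it (the URL and service-account email are hypothetical placeholders), a job that takes a backup every four hours might look like this:

    gcloud scheduler jobs create http cloudsql-dr-backup --schedule="0 */4 * * *" --uri="https://us-central1-YourProjectName.cloudfunctions.net/backup_cloudsql" --http-method=POST --oidc-service-account-email="scheduler-invoker@YourProjectName.iam.gserviceaccount.com"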

Conclusion

This post has described how to back up a Cloud SQL instance to a secondary region so the backups can be restored in case of a disaster. It's important to note that after a GCP region goes down, you still need to create the Cloud SQL instances themselves in the new region before restoring the backups into them. It's definitely a good idea to have this scripted and ready to go so you don't lose time during an outage.
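As a minimal sketch of such a recovery script (the instance name, tier, and database version are illustrative assumptions; the replacement instance must match the source's database version), it might look like this:

    gcloud sql instances create myinstancename-dr --region=europe-west4 --database-version=POSTGRES_14 --tier=db-custom-4-16384
    gcloud sql backups list --instance=myinstancename
    gcloud sql backups restore BACKUP_ID --restore-instance=myinstancename-dr --backup-instance=myinstancename

Pick the most recent successful BACKUP_ID from the list output before running the restore.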
