
MySQL encrypted streaming backups directly into AWS S3

Overview

Cloud storage is an increasingly popular option for offsite storage and disaster recovery. This post walks through streaming MySQL backups directly into Amazon S3 storage, and the steps can likely be adapted for other, non-MySQL processes as well.

Steps

To perform this task we need to stream the backup, encrypt the stream, and upload it to S3. There are several ways to do each step, and I will walk through multiple examples so you can mix and match a solution that fits your needs. The AWS S3 CLI tools that I use for the upload can also encrypt the data, but to keep these steps open for customization I am going to do the encryption in the stream.
  1. Stream MySQL backup
  2. Encrypt the stream
  3. Upload the stream to AWS S3

Step 1 : Stream MySQL Backup

There are a number of ways to stream the MySQL backup, and much depends on your backup method. We can stream a logical backup with mysqldump, or we can use the file-level backup tool Percona XtraBackup to stream the backup. Here are examples of both.

mysqldump

mysqldump writes its output to STDOUT by default; that is why we normally redirect it with the greater-than sign into a .sql file. Since mysqldump is already streaming the data, we simply pipe the results into our next step instead.
[code lang="bash"]
[root@node1 ~]# mysqldump --all-databases > employee.sql

becomes

[root@node1 ~]# mysqldump --all-databases |
[/code]
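Because every stage reads STDIN and writes STDOUT, other filters can slot into the same pipeline. As one hedged example (not part of the original steps), a compressor such as gzip can sit between the dump and the encryption stage to shrink what gets uploaded. The sketch below shows the idea with a stand-in stream so it can run anywhere:

```bash
# Hypothetical sketch: compress the backup stream before the next stage.
# In the real pipeline this would look like:
#   mysqldump --all-databases | gzip | <encryption step>
# Demonstrated here with echo as a stand-in for mysqldump, round-tripped
# through gzip/gunzip to show the stream survives intact:
echo "test data" | gzip | gunzip
# prints "test data"
```

Any compressor that works as a STDIN/STDOUT filter (pigz, xz, zstd) drops in the same way; just remember to insert the matching decompressor when restoring.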

xtrabackup

xtrabackup will stream the backup, but needs a little more assistance to tell it to do so. You can reference Percona's online documentation ( https://www.percona.com/doc/percona-xtrabackup/2.4/innobackupex/streaming_backups_innobackupex.html ) for all of the different ways to stream and compress backups using xtrabackup. We will be using the stream-to-tar method.
[code lang="bash"]
innobackupex --stream=tar /root > /root/out.tar

becomes

innobackupex --stream=tar ./ |
[/code]
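The tar stream behaves like any tar archive written to STDOUT, and Percona's documentation notes that a streamed archive must be extracted with tar's -i option. Since innobackupex itself needs a running MySQL instance, the sketch below illustrates the same stream-to-tar idea with ordinary local directories (the /tmp paths are made up for the demo):

```bash
# Sketch: archive a directory to STDOUT, pipe it, and unpack it on the
# other end with tar -xi (the -i flag ignores the zero blocks that a
# streamed archive can contain).
mkdir -p /tmp/tar_demo/src /tmp/tar_demo/dst
echo "hello" > /tmp/tar_demo/src/file.txt
tar -C /tmp/tar_demo/src -cf - . | tar -C /tmp/tar_demo/dst -xif -
cat /tmp/tar_demo/dst/file.txt
# prints "hello"
```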

Step 2 : Encrypt The Stream

Now that we have the backup process in place, we want to make sure our data is secure. We will encrypt the data before sending it up to AWS S3 so that it is protected in transit and at rest. We can accomplish this a couple of ways. The first tool I am going to look at is GnuPG (https://www.gnupg.org/), an open-source implementation of the OpenPGP encryption standard. The second is another very popular tool, OpenSSL ( https://www.openssl.org/). Below are examples of how I set them up and tested their execution with streaming.

GnuPG

I will create a public and private key pair protected by a passphrase, which will be used to encrypt and decrypt the data. If you are going to do this for production or sensitive data, please ensure that your private key is kept safe and secure. Key creation asked me for a passphrase, and decryption asked for it again to complete the process; that interactive step is not shown in the examples below. To accept a stream, you simply don't provide an input file name; to stream the output, you simply don't provide an output parameter.
KEY PAIR CREATION
[code lang="bash"]
[root@node1 ~]# gpg --gen-key
gpg (GnuPG) 2.0.14; Copyright (C) 2009 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: root
Name must be at least 5 characters long
Real name: root@kmarkwardt
Email address: markwardt@pythian.com
Comment:
You selected this USER-ID:
    "root@kmarkwardt <markwardt@pythian.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg-agent[1776]: directory `/root/.gnupg/private-keys-v1.d' created
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

After typing for what felt like FOREVER, to generate enough entropy:

gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 1EFB61B1 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
pub   2048R/1EFB61B1 2016-04-29
      Key fingerprint = 8D98 2D23 3C49 F1E7 9CD2  CD0F 7163 EB03 1EFB 61B1
uid                  root@kmarkwardt <markwardt@pythian.com>
sub   2048R/577322A0 2016-04-29

[root@node1 ~]#
[/code]
SAMPLE USAGE
ENCRYPT
[code lang="bash"]
[root@node1 openssl]# echo "test" | gpg --output install.log.gpg --encrypt -r root
[root@node1 openssl]# cat install.log.gpg
(unreadable binary ciphertext)
[/code]
DECRYPT
[code lang="bash"]
[root@node1 ~]# gpg --decrypt -r root --output install.log.decrypted install.log.gpg

You need a passphrase to unlock the secret key for
user: "root@kmarkwardt <markwardt@pythian.com>"
2048-bit RSA key, ID 577322A0, created 2016-04-29 (main key ID 1EFB61B1)

can't connect to `/root/.gnupg/S.gpg-agent': No such file or directory
gpg: encrypted with 2048-bit RSA key, ID 577322A0, created 2016-04-29
      "root@kmarkwardt <markwardt@pythian.com>"
[root@node1 ~]# ls
install.log.decrypted  install.log.gpg
[/code]
ENCRYPT STREAM
[code lang="bash"]
[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root

or

[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root
[/code]

OpenSSL

As with GnuPG, we will generate a public and private key protected by a passphrase. OpenSSL also supports other modes, such as encrypting with just a password and no keys, or with keys and no passphrase. I am using keys with a passphrase, as this is the most secure of those options.
KEY PAIR CREATION
[code lang="bash"]
[root@node1 openssl]# openssl req -newkey rsa:2048 -keyout privkey.pem -out req.pem
Generating a 2048 bit RSA private key
.......................................+++
........+++
writing new private key to 'privkey.pem'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

[root@node1 openssl]# openssl x509 -req -in req.pem -signkey privkey.pem -out cert.pem
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key
Enter pass phrase for privkey.pem:

[root@node1 openssl]# ls -al
total 20
drwxr-xr-x  2 root root 4096 May  5 10:47 .
dr-xr-x---. 9 root root 4096 May  4 04:38 ..
-rw-r--r--  1 root root 1103 May  5 10:47 cert.pem
-rw-r--r--  1 root root 1834 May  5 10:43 privkey.pem
-rw-r--r--  1 root root  952 May  5 10:43 req.pem
[root@node1 openssl]# rm -rf req.pem
[/code]
SAMPLE USAGE
ENCRYPT
[code lang="bash"]
[root@node1 openssl]# echo "test" | openssl smime -encrypt -aes256 -binary -outform DER cert.pem > test.dat
[root@node1 openssl]# cat test.dat
(unreadable binary ciphertext)
[/code]
DECRYPT
[code lang="bash"]
[root@node1 openssl]# openssl smime -decrypt -in test.dat -inform DER -inkey privkey.pem -out test.txt
Enter pass phrase for privkey.pem:
[root@node1 openssl]# cat test.txt
test
[/code]
ENCRYPT STREAM
[code lang="bash"]
[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem

or

[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem
[/code]
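The passphrase prompts above make scripted testing awkward. The whole encrypt/decrypt cycle can also be exercised non-interactively; the sketch below generates a throwaway self-signed key pair with -nodes (no passphrase, so suitable for testing only; the subject name is a made-up placeholder) and round-trips a stream through smime:

```bash
# Sketch: non-interactive openssl smime round trip in a scratch directory.
# -nodes skips the passphrase; production keys should keep one.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout privkey.pem -out cert.pem \
    -subj "/CN=backup-test" 2>/dev/null

# Encrypt a stream with the cert, then decrypt it with the private key:
echo "test" \
  | openssl smime -encrypt -aes256 -binary -outform DER cert.pem \
  | openssl smime -decrypt -inform DER -inkey privkey.pem
# prints "test"
```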

Step 3 : Stream to Amazon AWS S3

Now that we have secured the data, we will pipe it into an Amazon AWS S3 bucket. This provides an offsite copy of the MySQL backup that you can transition to long-term storage or restore into an EC2 instance. There are several ways to do this, but I will only be looking at one: the Amazon-provided AWS CLI tools, which can copy files into S3 and accept streamed input.
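One caveat before wiring everything together: by default, bash reports only the exit status of the last command in a pipeline, so a failed mysqldump or encryption step can still leave a "successful" but truncated upload behind. A minimal sketch of a guard, assuming bash (the real backup stages appear only as comments, with a deliberately failing stand-in so the behavior can be demonstrated anywhere):

```bash
#!/bin/bash
# set -o pipefail makes a pipeline's exit status non-zero if ANY stage
# fails, not just the last one.
set -o pipefail

# The real pipeline would look like:
#   mysqldump --all-databases | gpg --encrypt -r root \
#     | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.gpg
# Stand-in with a stage that fails mid-pipeline:
if false | cat; then
    echo "backup succeeded"
else
    echo "backup failed" >&2      # prints "backup failed"
fi
```

Without pipefail, the same `false | cat` pipeline would report success, because cat (the last stage) exits 0.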

AWS CLI

To tell the AWS CLI S3 cp command to accept STDIN, you simply put a dash in place of the source file, which allows the command to accept a stream to copy. The AWS CLI tools can also encrypt objects as they are copied into S3, but I wanted to show other methods as well so you can customize your own solution. You can also stream the download of an S3 object, which allows for decompression as you download the data, or any number of other options.
UPLOAD STREAM
[code lang="bash"]
echo "test" | aws s3 cp - s3://pythian-test-bucket/incoming.txt
[/code]
BACKUP / ENCRYPT / UPLOAD STREAM
[code lang="bash"]
# MySQL Dump -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | openssl smime -encrypt -aes256 -binary -outform DER cert.pem | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.dat

# Xtrabackup -> OpenSSL Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | openssl smime -encrypt -aes256 -binary -outform DER cert.pem | aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.dat

# MySQL Dump -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# mysqldump --all-databases | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.sql.gpg

# Xtrabackup -> GPG Encryption -> AWS S3 Upload
[root@node1 ~]# innobackupex --stream=tar ./ | gpg --encrypt -r root | aws s3 cp - s3://pythian-test-bucket/mysqldump.tar.gpg
[/code]

References
  • https://www.percona.com/doc/percona-xtrabackup/2.4/innobackupex/streaming_backups_innobackupex.html
  • https://linuxconfig.org/using-openssl-to-encrypt-messages-and-files-on-linux
  • https://www.gnupg.org/gph/en/manual/c14.html
  • https://www.gnupg.org/gph/en/manual/x110.html
  • https://www.openssl.org/docs/manmaster/apps/openssl.html
  • https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
   
