World Backup Day – The MySQL perspective

Posted in: MySQL, Open Source, Technical Track

Today is World Backup Day, reminding us of the importance of backups. Here at Pythian we LOVE YOUR DATA and, as database consultants, we are very much focused on making sure that our clients’ data is always secure and recoverable in case of a disaster. Backups are a crucial factor in that line of work. My colleague Jared Still has written quite an extensive blog post on Oracle Databases and backups. I will guide you through the main principles of MySQL backups.

Logical vs Physical

In the MySQL world (as in many other technologies) we identify two main types of backups: logical and physical. A logical backup extracts the raw data in a format that can be reloaded into a new server while that server is running.

In this case, think of tools like mysqldump. This tool SELECTs all your data and writes it out as CREATE, INSERT and ALTER statements. For each table that you include in your dump, you will have a CREATE TABLE statement that represents the structure of the table at the time of the dump. Each row in the table is represented by an INSERT statement, and any foreign key constraints are added after the INSERTs as ALTER TABLE statements. The advantage of this method is that it’s fairly easy to execute, some form of mysqldump is included by default in every MySQL distribution out there, and your backups are human-readable. This last part makes it easier if you want to make any modifications to the data before loading it into a new server.
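
As a rough sketch, a typical mysqldump round trip looks like the following. The host, user and database names are placeholders, and the exact options you want will depend on your setup:

```shell
# Dump one database to a file. --single-transaction gives a consistent
# snapshot of InnoDB tables without holding locks for the whole dump;
# --routines and --triggers include stored code alongside the schema.
mysqldump --host=localhost --user=backup_user -p \
  --single-transaction --routines --triggers \
  mydatabase > mydatabase_$(date +%F).sql

# Restoring is simply replaying the statements against a running server:
mysql --host=localhost --user=backup_user -p mydatabase < mydatabase_2024-01-01.sql
```

Because the dump is plain SQL, you can inspect or edit it with any text tool before reloading it.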

Physical backups, on the other hand, make a copy of the data files, as if you physically took the hard drive storing the data and stowed it in a safe place. Of course, taking the hard drive out of the server requires you to bring the server down if you do not want to damage the device or its contents. This is not good for the high availability of your system, but it might be the fastest way to back up a large data set: take down the server, take out the hard drive and store it in a safe place like a vault. This is what we refer to as a cold backup.

Whereas cold backups are fairly easy and fast, most people do not like taking their databases down. This brings us to physical HOT backups: creating a consistent copy of the data files while the server remains up and operational. Several tools are available to perform such a backup.

The first one to jump to mind is mysqlbackup, or MySQL Enterprise Backup (MEB to friends). This tool is included in the MySQL Enterprise suite because someone once decided that online physical backups are something for Enterprise customers only. Good call from a money-making perspective, bad call from a LOVE YOUR DATA perspective. Luckily there are open-source tools available to work around the requirement of an Enterprise license. Percona actively develops xtrabackup and releases new versions periodically. This tool is capable of creating online (hot) backups of MySQL tables stored in the InnoDB storage engine. Because of their non-transactional nature, MyISAM tables cannot be backed up consistently without holding a global lock. MariaDB has its own variant of xtrabackup, called mariabackup, to ensure compatibility with MariaDB-specific features. So when running MariaDB, you will want to consider mariabackup for your online backups.

How does xtrabackup work?

So what is so special about xtrabackup that it can make an online physical backup? Its basic operation is based on the InnoDB crash recovery principles. xtrabackup copies your MySQL data files to the backup location, which by itself yields an inconsistent copy of the data. At the same time, it starts tracking the InnoDB redo (or transaction) logs, which contain all the changes happening on the InnoDB tables. To keep track of which changes it needs, xtrabackup uses the Log Sequence Number (LSN), an auto-incrementing value that MySQL assigns to each log record it writes. This way xtrabackup knows exactly which LSNs are already in the copied files, which log records it needs to process to get the database to a consistent state and, especially, in which order to apply them.
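
In practice, the copy-then-apply-redo-logs flow described above maps onto two xtrabackup invocations. The target directory is a placeholder; credentials are assumed to come from a client config file such as ~/.my.cnf:

```shell
# Step 1: copy the data files while tracking the redo log in parallel.
xtrabackup --backup --target-dir=/backups/full

# Step 2: "prepare" the backup by replaying the captured redo log
# records against the copied files, the same mechanism InnoDB uses
# for crash recovery, leaving a consistent data directory.
xtrabackup --prepare --target-dir=/backups/full

# To restore: stop MySQL, empty the data directory, then
#   xtrabackup --copy-back --target-dir=/backups/full
# and fix file ownership before starting the server again.
```

A backup that has not been prepared is not yet usable; the prepare step is what turns the inconsistent copy into a consistent one.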

Once the InnoDB data is copied, xtrabackup also supports any non-transactional tables you have, for example MyISAM. To back these up consistently, it places a global lock on those tables and copies their data files to the backup target location. As you can imagine, this will cause some disruption to your applications, so you should try to avoid storing data in these non-transactional storage engines.

What about snapshots?

Some people run on storage hardware that has snapshot features. Snapshots are generally pretty fast, but your hardware has to support them. In (public) cloud environments this is a common way of quickly creating backups of the data volumes attached to virtual machines. This can work for your MySQL databases too. However, you still need to take consistency into account. The way you can implement this with MySQL is to issue a “FLUSH TABLES WITH READ LOCK;” command, which globally locks your database against writes while reads can still happen. Remember to keep the session holding the lock open, as terminating the session also releases the lock. While the session is holding the lock, you can tell your OS to freeze the filesystem and then call the storage APIs to create a snapshot of your volume. Once the snapshot is confirmed, you can unfreeze the filesystem and release the global lock.
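
The lock → freeze → snapshot → thaw → unlock sequence could be sketched roughly as follows. The paths, volume ID and the snapshot command itself are placeholders for whatever your storage or cloud provider offers (an AWS EBS snapshot is used purely as an example):

```shell
# Keep one background session holding the global read lock open;
# SELECT SLEEP keeps the connection (and therefore the lock) alive.
mysql -e "FLUSH TABLES WITH READ LOCK; SELECT SLEEP(60);" &
LOCK_PID=$!

fsfreeze --freeze /var/lib/mysql            # freeze the filesystem

aws ec2 create-snapshot --volume-id vol-0abc123 \
  --description "mysql-$(date +%F)"         # provider-specific snapshot call

fsfreeze --unfreeze /var/lib/mysql          # thaw the filesystem

kill $LOCK_PID                              # ending the session releases the lock
```

The important property is ordering: the lock and the freeze must both be in place before the snapshot is initiated, and the snapshot must be confirmed before either is released.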

In my personal experience this process generally finishes in under 15 seconds; however, it is still recommended to run it on a slave server (which, in theory, does not take any writes) rather than on a master instance. When running in an on-premises datacenter, your snapshot will still reside on the same storage enclosure as the original data, so you might want to ship the snapshots off to another location for safekeeping. Public cloud providers will send your filesystem (block storage) snapshots off to their respective object storage solutions (e.g. AWS will store your snapshots in S3, GCP will store them in GCS).

Speed up mysqldump

While physical backups will usually be your primary backup solution because they have a shorter recovery time, it might make sense not to put all your eggs in one basket. Doing a combination of physical and logical backups might help you in different recovery scenarios. Using mysqldump will work for this, but mysqldump processes all the tables in a single thread, one table at a time. Modern servers have multiple cores available, which is ideal for parallelizing your workloads. In this case, a tool like mydumper/myloader would be a good addition to your toolset. This is a logical dump tool that you can configure to dump (or load) multiple tables at a time in multiple threads. This will significantly speed up your dump/load process when your data is split out over multiple tables across your database.
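
A minimal mydumper/myloader round trip might look like this; the thread count and output directory are illustrative values you would tune for your hardware:

```shell
# Dump using 4 parallel threads, one table per worker at a time,
# compressing the per-table output files.
mydumper --threads=4 --compress --outputdir=/backups/dump

# Reload the dump with 4 parallel threads into a running server,
# replacing any tables that already exist.
myloader --threads=4 --directory=/backups/dump --overwrite-tables
```

Because each worker handles a different table, the speedup depends on your data being spread over multiple tables rather than concentrated in one giant table.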

Testing your backups

Regardless of the backup method you implement, testing and validating the backups is key to actually getting them restored in a time of need. Relying on your tools and scripts is good; testing the actual restores is better. Nothing is worse than having to tell your boss, “yeah well… erm… sorry, we thought we had backups, but it seems we did not have a valid backup of that particular piece of data that we lost…”. Commission a server in your environment where you periodically and automatically restore a random backup taken since the last restore test, and try to actually spin up a working database from it. Maybe even have that instance join your replication topology as a slave, to verify that you could get it to come online and catch up with the cluster. It will make you feel much safer! Additionally, scripting and automating this will save you the trouble of having to figure out all the restore steps with an anxious manager standing by your terminal, watching you do the restore live during a disaster!
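
A restore-test job along those lines could be sketched as follows, here assuming xtrabackup-style backups. All paths, the port and the directory naming convention are hypothetical:

```shell
# Pick the newest backup directory (naming convention is an assumption).
BACKUP=$(ls -1d /backups/full-* | tail -n 1)

# Prepare it and copy it into a throwaway data directory.
xtrabackup --prepare --target-dir="$BACKUP"
xtrabackup --copy-back --target-dir="$BACKUP" --datadir=/restore-test/data
chown -R mysql:mysql /restore-test/data

# Start a temporary instance on a non-default port and socket.
mysqld --datadir=/restore-test/data --port=3307 \
  --socket=/restore-test/mysql.sock &

# Trivial smoke test: can we connect and list the schemas we expect?
mysql --socket=/restore-test/mysql.sock -e "SHOW DATABASES;"
```

From here you could go further: point the instance at your master as a replica and confirm it catches up, which exercises the whole recovery path end to end.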

Conclusion

Backups are important for data safety and therefore also for business continuity. At Pythian we LOVE YOUR DATA and can help you to implement or review your backup infrastructure, strategies, and recovery plans. We have specialists in the most well known and most used database technologies around the globe. Reach out to our teams to see how we can help protect your business against disasters and/or inevitable human errors. Being certain that your backups are secured and recoverable will make your teams sleep better, feel better, and perform better!



About the Author

Lead Database Consultant
Matthias has been passionate about computers since the age of 10. He has been working with them ever since. Currently he's a Lead Database Consultant in one of the MySQL teams at Pythian where he's the technical lead for his team. Together with his team he works to provide the best possible service to the customers.
