Upgrade to MySQL 5.1.56 on Bacula server using 5.0.x and MyISAM tables

Posted in: Technical Track

Hello there, it’s me again, with another blog about a DBA situation that a typical Linux Administrator may find themselves in.

In this blog post, I’m going to review a recent MySQL upgrade I performed on one of the systems I help administer. This is a real-world example of an upgrade project, and hopefully when we’re done there may even be an overall performance boost.

There are several reasons to perform upgrades (of any kind); for me an important one is staying current with security and bug fixes, but general performance improvements and new features are always welcome as well.

This system is running Bacula, an open source enterprise backup system. In this particular case Bacula is configured to store data in a MySQL database. The data stored include status reports, backup content lists, lists of all the files on all the systems, schedules and other related information. While everything has been operating “generally fine” for several years, this also means several components of the system are quite out of date.

The system is using CentOS 5.3 and the versions of MySQL and Bacula that shipped with it (5.0.45 and 2.2.8 respectively), both of which are quite old by today’s standards. Officially, MySQL 5.0.x is now completely out of support from Oracle/MySQL.com, so upgrading is quite important.

The OS definitely could/should be updated to CentOS 5.6 (current at the time of this writing), but that actually wouldn’t make a significant dent in the version of MySQL (or likely Bacula) itself. The OS and Bacula upgrade discussions I’ll leave to the appropriate System Administration team; I’ll just be reviewing what was done with MySQL itself.

As the main application (Bacula) will not be upgraded, any new features in MySQL 5.1.56 won’t really be used unless it’s something I can configure in the my.cnf file.

Initially my plan was to use the remi yum archives (https://rpms.famillecollet.com/), which I have seen used in several situations. The MySQL RPMs in these yum repositories generally replace the RPMs that are shipped in RHEL/CentOS, but are current GA versions. For non-DBAs, I find these archives a really good way of moving to new releases without a lot of effort outside the regular system upgrading activities.

Unfortunately the server in question doesn’t have direct external Internet access. It only has access to some internal repositories, and access to/from other internal systems. I figured that if I’m going to have to copy RPM files manually anyway, I might as well use the official Oracle/mysql.com packages, which are generally the preferred method (speaking as a DBA).

The overall plan was/is the following:

1) Schedule outage with all monitoring tools, and notify the appropriate people of a pending outage to the backup system
2) Confirm current backup is in place for the existing database and system files
3) Copy packages downloaded from mysql.com to a location accessible to the system in question (in my case an NFS mounted home directory)
4) Stop mysql and bacula-director

/etc/init.d/mysql stop
/etc/init.d/bacula-dir stop

5) remove mysql and mysql-server, ignoring the complaint about libmysqlclient dependencies

rpm -e --nodeps mysql mysql-server

6) install the packages downloaded from mysql.com

rpm -Uvh MySQL-server-community-5.1.56-1.rhel5 MySQL-shared-compat-5.1.56-1.rhel5 MySQL-client-community-5.1.56-1.rhel5 MySQL-devel-community-5.1.56-1.rhel5
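
Before moving on, it’s worth a quick sanity check that the new packages actually landed. A minimal sketch (the package names match the ones installed above; output will vary slightly by architecture):

```shell
# Confirm the 5.1.56 community packages are now installed
rpm -qa | grep -i '^MySQL-'

# The client binary should now report 5.1.56
mysql --version
```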

7) upgrade the mysql system tables and access controls using mysql_upgrade

/usr/sbin/mysqld --user=mysql --skip-grant-tables &
mysql_upgrade
killall mysqld

8) restart mysql to confirm it starts up fine with no config changes.

/etc/init.d/mysql start
tail -f /var/log/mysqld.log

9) update my.cnf to enable some new features (innodb plugin)

vi /etc/my.cnf

and add the following lines to the [mysqld] section to load the InnoDB plugin (these are the standard entries from the MySQL 5.1 reference manual; the shared library name may vary by platform):

ignore-builtin-innodb
plugin-load=innodb=ha_innodb_plugin.so;innodb_trx=ha_innodb_plugin.so;innodb_locks=ha_innodb_plugin.so;innodb_lock_waits=ha_innodb_plugin.so

10) restart mysql and confirm it has now loaded the innodb plugin, and is still operating fine.

/etc/init.d/mysql restart
tail /var/log/mysqld.log -n 20

you should see something like this:

InnoDB: The InnoDB memory heap is disabled
InnoDB: Mutexes and rw_locks use GCC atomic builtins
InnoDB: Compressed tables use zlib 1.2.3
110505 16:38:33 InnoDB: Initializing buffer pool, size = 768.0M
110505 16:38:33 InnoDB: Completed initialization of buffer pool
110505 16:38:33 InnoDB: highest supported file format is Barracuda.
110505 16:38:33 InnoDB Plugin 1.0.15 started; log sequence number 18094312675
110505 16:38:33 [Note] Event Scheduler: Loaded 0 events
110505 16:38:33 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.56-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
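
Beyond the log, you can also confirm the plugin is active from the client. SHOW PLUGINS and the innodb_version variable are both standard in 5.1 with the plugin loaded (a quick sketch; exact output will differ):

```shell
# The InnoDB entry should show it is ACTIVE and loaded from ha_innodb_plugin.so
mysql -e "SHOW PLUGINS" | grep -i innodb

# innodb_version should report the plugin version (1.0.x), not the builtin
mysql -e "SHOW VARIABLES LIKE 'innodb_version'"
```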

11) Finally, restart the Bacula director

/etc/init.d/bacula-dir start

Other than a stray config value in the old my.cnf file that was now being interpreted differently (basedir), this part of the upgrade went smoothly.

When we reached this point, I confirmed with the SysAdmin responsible for backups that Bacula was still able to perform its usual duties, and that the old version of Bacula seemed to be fine with the new version of MySQL.

We paused at this stage for about a week so the system could do a full run-through of the weekly backups, and so we could see how well the system performed and whether there was any real difference (so far).

Very little speed difference was expected at this point, because the only new features enabled relate to the InnoDB storage engine, which was not actually in use yet: all the Bacula tables were still using the MyISAM storage engine, the default engine in older versions of MySQL such as 5.0. This of course means the next major stage of the upgrade was to convert all the Bacula tables from MyISAM to InnoDB, for the reasons I’ll mention below.
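
To see exactly which engines are in use before converting anything, information_schema makes this a one-liner (the ‘avail’ account here is the same one used for the conversion later in this post):

```shell
# Count Bacula's tables per storage engine - expect everything under MyISAM before the conversion
mysql -uavail -p -e "SELECT engine, COUNT(*) AS tables \
  FROM information_schema.tables \
  WHERE table_schema='bacula' GROUP BY engine"
```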

I found several references on the web to Bacula supporting the InnoDB storage engine perfectly well, so I’m not concerned about differences between the storage engines affecting actual functionality.

There are plenty of differences between InnoDB and MyISAM, but InnoDB has one big advantage over MyISAM that should matter for Bacula: the locking method. MyISAM uses table-level locking, while InnoDB does row-level locking.

This means that if you have parallel inserts and updates all going into the same table (as when you’re backing up multiple systems at the same time), MyISAM’s table-wide locks can cause problems – and indeed they have.

In this specific situation, we have seen locked threads on this database during the weekly backups (Sundays at 4:00am), so I expect we could see a bit of a performance improvement and far fewer locked threads.
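
If you want to quantify that contention rather than just catching locked threads in the act, the server keeps a counter for it. A steadily climbing Table_locks_waited is the tell-tale sign (standard status variables; a sketch only):

```shell
# How often a table lock could not be granted immediately
mysql -e "SHOW GLOBAL STATUS LIKE 'Table_locks%'"

# Spot-check for sessions currently sitting in the Locked state
mysql -e "SHOW PROCESSLIST" | grep -i 'Locked'
```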

Ok, so now I needed to convert all the Bacula tables to the InnoDB engine. By far the easiest way to do this is to run a series of “ALTER TABLE {table} engine=innodb;” statements. As this system has definite times of day when there is no activity, this is a safe option.

The main drawback of using the ALTER TABLE method to do the conversion is the disk space required. During the conversion, the old table and the new table will both exist at the same time.

The entire database is about 7GB in size and there is currently about 7GB of free space on the file system being used by MySQL. Normally this would be fine if there were a number of medium-sized tables, as only one table is duplicated at a time. Unfortunately this is not the case with Bacula – it has many small tables and one really large table called ‘File’. This means converting ‘File’ may be an issue.
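
To confirm where the space is going (and that ‘File’ really is the problem child), information_schema helps again. A sketch, assuming the default datadir of /var/lib/mysql on this server:

```shell
# Free space on the filesystem holding the MySQL datadir
df -h /var/lib/mysql

# Per-table sizes in MB, largest first - 'File' should dominate
mysql -uavail -p -e "SELECT table_name, \
  ROUND((data_length+index_length)/1024/1024) AS size_mb \
  FROM information_schema.tables \
  WHERE table_schema='bacula' ORDER BY size_mb DESC"
```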

At this point I’m attempting to do the remainder of the upgrade without the help of the SysAdmin, which means I no longer have ‘sudo’ or ‘root’ access. This rules out several of the typical options for redistributing disk space (like soft-linking db files to other file systems, etc.).

I did eventually come up with a solution that worked, thanks to the way Bacula operates and clears old data – I used ‘OPTIMIZE TABLE’. This actually reclaimed a fairly significant amount of disk space, leaving me with 9GB free, which is more than enough to do the conversion.

mysql> optimize table File;
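
The same cleanup can be applied to every table in the schema at once with mysqlcheck, which ships with the client packages (equivalent to running OPTIMIZE TABLE on each table in turn; shown here as an alternative, not what I actually ran):

```shell
# OPTIMIZE every table in the bacula schema in one pass
mysqlcheck -uavail -p --optimize bacula
```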

Once the disk space issue was covered, I needed to generate the list of alter statements I want to execute for changing the engine. The following command was used to generate the list:

bash# mysql -uavail -p bacula -s -e 'select "alter table ",table_name," engine=innodb;" from information_schema.tables where tables.table_schema="bacula"'

The command generated this output which was then pasted into the mysql console:

alter table BaseFiles engine=innodb;
alter table CDImages engine=innodb;
alter table Client engine=innodb;
alter table Counters engine=innodb;
alter table Device engine=innodb;
alter table File engine=innodb;
alter table FileSet engine=innodb;
alter table Filename engine=innodb;
alter table Job engine=innodb;
alter table JobMedia engine=innodb;
alter table Location engine=innodb;
alter table LocationLog engine=innodb;
alter table Log engine=innodb;
alter table Media engine=innodb;
alter table MediaType engine=innodb;
alter table Path engine=innodb;
alter table Pool engine=innodb;
alter table Status engine=innodb;
alter table Storage engine=innodb;
alter table UnsavedFiles engine=innodb;
alter table Version engine=innodb;
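
The paste step can be skipped entirely by feeding the generated statements straight back into a second mysql client. A sketch (adding -N suppresses the column header so only the ALTER statements come through):

```shell
# Generate the ALTER statements and execute them in one pipeline
mysql -uavail -p bacula -s -N -e \
  "SELECT CONCAT('ALTER TABLE ', table_name, ' ENGINE=InnoDB;') \
   FROM information_schema.tables WHERE table_schema='bacula'" \
  | mysql -uavail -p bacula
```

Note that -p will prompt for the password on both ends of the pipe; redirecting the SELECT output to a file first and then sourcing it avoids the double prompt.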

I didn’t accurately measure the total time it took, but I would guess it took about an hour to convert everything. Most tables took less than a minute, except the ‘Filename’ table, which took about 5 minutes, and the ‘File’ table, which accounted for the rest.

We’ve now completed the major milestone for this upgrade – Bacula is now using the InnoDB storage plugin for all of its tables with the newer version of MySQL.

As suspected while working on this project, there was not a huge difference in the overall time it takes to perform the weekly backups, although there still is an improvement. The backups with a large number of files (300,000+) now complete faster than they previously did – approximately 20 minutes faster for a 7hr backup. This was likely due purely to the improvements in locking.


