Or, There is a Use for GUI Tools!
Recently I was working on a problem for a customer. They are converting a server with two InnoDB data files to innodb_file_per_table:
$ ls -l ibdata*
-rwxr-xr-x 1 mysql mysql 2.0G Oct 14 13:10 ibdata1
-rwxr-xr-x 1 mysql mysql 350G Oct 14 13:10 ibdata2
I honestly don’t recall ever seeing (or hearing about) so large a data file.
The method chosen for conversion boils down to this:
- stop and start the server to enable innodb_file_per_table
- alter all tables to myisam
- stop server and delete ibdata file
- restart server
- convert tables back to InnoDB
- add foreign keys
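The server-side settings involved in the first step can be sketched as a my.cnf fragment. This is only an illustration; the data file size shown here is made up, not the customer's actual value:

```ini
[mysqld]
# Each new InnoDB table gets its own .ibd file
innodb_file_per_table = 1
# A much smaller shared tablespace; autoextend lets it grow if needed
innodb_data_file_path = ibdata1:128M:autoextend
```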
This post isn’t about how we did it, or about whether it was the best way, or anything like that. This post is really about the last two steps.
There were over 500 InnoDB tables and something like 375 foreign keys, so I needed to somehow create the ALTER TABLE statements for these 500 tables. To drop the foreign keys, I used the following SELECT statement:
SELECT CONCAT("mysql -u root -pPASS ", table_schema, " -e 'ALTER TABLE ", table_name, " DROP FOREIGN KEY ", constraint_name, ";' &") AS list_of_fks FROM information_schema.KEY_COLUMN_USAGE WHERE REFERENCED_TABLE_NAME IS NOT NULL ORDER BY TABLE_NAME, COLUMN_NAME;
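For a hypothetical schema db1 with a table orders carrying a constraint fk_orders_customer (names made up for illustration), each row of the result is a background-able shell command like:

```shell
mysql -u root -pPASS db1 -e 'ALTER TABLE orders DROP FOREIGN KEY fk_orders_customer;' &
```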
This created a list of statements that I could then put in a batch file, edit a little, and have them run in parallel.
I used the following SELECT statement to find all InnoDB tables and prepare a command to convert them to MyISAM:
SELECT CONCAT("mysql -u root -pPASS ", TABLE_SCHEMA, " -e 'ALTER TABLE ", TABLE_NAME, " ENGINE=MYISAM;' & ") FROM INFORMATION_SCHEMA.TABLES WHERE ENGINE='INNODB';
Again, the output can be put in a batch file, modified a little, and run in parallel.
Once everything was converted to MyISAM, I shut down the database, removed the InnoDB log and data files, and modified the my.cnf so that innodb_file_per_table was enabled and the InnoDB data file was much smaller. That was the fairly straightforward part. Now the fun began: I needed to convert all the tables back to InnoDB and add the FKs back.
Now ordinarily, I’m a command-line kind of guy. However, having used diff before for comparisons, I decided to go another route. Based on some previous experience, I chose SQLyog for the job. Before I began any conversion of the tables, I dumped the schema of the database and loaded it into a MySQL server running on my laptop.
Once all the conversions were done, I used a tool in SQLyog called the “Schema Synchronization Tool”. Essentially, you point the tool at a “source” database and a “target” database. (It is very important to keep track of which is which.) While choosing the source and target, you also choose the database that you want to compare. Once this is done, you simply click Compare, and it checks both databases and returns a nice list of SQL statements.
Since we had 500 tables to convert I didn’t just copy the SQL into a file and import it. I created more files that contained lines similar to:
mysql -u root -pXXX -e "ALTER TABLE .... " &
This way, they ran in the background. At one point I had over 200 alter statements running at the same time.
My one complaint about SQLyog is that the SQL it generated was sometimes inefficient. It would give me one SQL statement to ALTER TABLE engine=InnoDB and then three separate statements for three different FKs. Each statement has to copy the table, so this takes four times as long as combining them all into one. It is perfectly feasible to do this:
ALTER TABLE X ADD FK X1, ADD FK X2, ADD FK X3, engine=innodb;
Big time savings!
Unfortunately, I did run across what amounts to a bug. Not in SQLyog, but in MySQL. You can run this:
ALTER TABLE X ADD FK X1, ADD FK X2, ADD FK X3;
and MySQL will happily take the statement and run it. Hours later, when it finishes, you find out that the table is still MyISAM and the FKs were just turned into indexes. Stupid!
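Because of that gotcha, it is worth verifying the result afterwards. This check is my suggestion, not part of the original workflow; it lists any user tables that are still MyISAM:

```sql
-- List tables that did not actually convert to InnoDB
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema');
```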
Sheeri pointed out that the MySQL Workbench program does something very similar. I haven’t had a chance to look into it yet. Having worked with MySQL Workbench before, I am sure it does a schema sync very well. Either way, trust me on this—if you have a significant schema synchronization project, you need to look into one of these tools.
7 Comments
“The FOREIGN KEY and REFERENCES clauses are supported by the InnoDB storage engine, which implements ADD [CONSTRAINT [symbol]] FOREIGN KEY (…) REFERENCES … (…). See Section 13.2.6.4, “FOREIGN KEY Constraints”. For other storage engines, the clauses are parsed but ignored. The CHECK clause is parsed but ignored by all storage engines. See Section 12.1.9, “CREATE TABLE Syntax”. The reason for accepting but ignoring syntax clauses is for compatibility, to make it easier to port code from other SQL servers, and to run applications that create tables with references.”
https://dev.mysql.com/doc/refman/5.0/en/alter-table.html
For alter tables in/to MyISAM, you want a very big myisam_sort_buffer_size
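For what it's worth, that variable can be raised per session before running the conversions; the value here is only an example, not a recommendation:

```sql
-- Example only: raise the MyISAM sort buffer for this session's ALTERs
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;
```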
Dropping all the foreign keys beforehand shouldn’t be necessary; they’d be dropped in the ALTER TABLE. Saves one whole table copy operation per table ;-)
I’ve got some customers with a bigger ibdata1… it’s not a problem often, but it can become unmanageable, obviously. So you can add one more separate data file for future growth of existing tables, and enable innodb-file-per-table for any new tables.
A complete dump/reload is not always an option… requires downtime and people don’t always have a neat master/slave setup that can be flipped – which would be ideal for this.
Very cool information. I’ll have to check out SQLyog and give workbench another look-see. Do you know if it works on the OSS version or just on SE?
Interesting side note that I had not heard before today. (We had a MySQL consultant on hand today.) He said that once you get over 200 tables, innodb_file_per_table is a bad plan due to all of the files that need to be opened. I hadn’t heard that before. I haven’t done any benchmarks but since we are using innodb_file_per_table I probably will be. If it is true, you might want to switch back to the old way. At least you’ll have the tools to help you.
Cheers,
Tom
Hey, why convert to MyISAM? Just ALTER … ENGINE=InnoDB works fine when innodb_file_per_table changed.
Arjen — that’s what we tried first, but unfortunately if the table is referenced as a foreign key in another table, you can’t convert it to MyISAM:
https://bugs.mysql.com/bug.php?id=12555
That is, if tbl1 references tbl2, you can’t convert tbl2 to MyISAM. You can alter tbl1 first, and then tbl2.
Andrii — it works just fine, however the problem is that you then still have the large ibdata files — which are now wasted, because most of the 352G in the ibdata files is now fragmented space, which cannot be recovered.
Tom,
The OSS version does the reverse engineering also.