In general, MySQL is quite fast at restoring data, but I observed that restoring a 20 GB backup was taking much longer than usual. This can happen when the server does not have enough memory, or when key_buffer_size is set too low; re-indexing the data can then take a very long time. On a CentOS 5.2 server with 6 GB of RAM, I noticed key_buffer_size was set to just 800M, which is quite low. You should set it to at least 20-25% of total RAM. After increasing the value to 2 GB, MySQL was able to reload the data quite fast.
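For reference, the relevant part of my.cnf after the change looked roughly like this (a sketch; tune the number to your own RAM):

```ini
# /etc/my.cnf (excerpt)
[mysqld]
# roughly 20-25% of this server's 6 GB of RAM;
# too low and re-indexing during restore crawls,
# too high (over ~50% of RAM) and the box may start paging
key_buffer_size = 2G
```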
Though this did the trick for me, there are other helpful suggestions you can try to speed up your backup/restore process. A few are as follows:
* Increase the value of the key_buffer_size variable in my.cnf to 20-25% of available RAM, as mentioned above. Be aware that if you make the value too large, say more than 50% of total RAM, your machine may start to page and become extremely slow. This is because MySQL relies on the operating system to perform file system caching for data reads, so you must leave some room for the file system cache. Also remember that on a 32-bit system the maximum value key_buffer_size can take is 4 GB; setting it any higher might crash your MySQL server.
* For the InnoDB engine, another very important setting to consider is innodb_flush_log_at_trx_commit = 2. Set it back to 1 after the restore if you need full ACID durability.
* Again for InnoDB, if your schema uses foreign keys you can disable foreign key checks during the restore and re-enable them at the end (SET FOREIGN_KEY_CHECKS=0 before the import, SET FOREIGN_KEY_CHECKS=1 after).
* A few Maatkit tools such as mk-parallel-dump / mk-parallel-restore add the ability to dump/restore with multiple threads, utilizing the available CPUs, but beware that Maatkit itself does not recommend using these tools for backups.
* Try using extended inserts in dumps, but note that the client has a fixed-size buffer for each line it reads, and extended inserts may exceed that limit, so you may not be able to restore very large dumps (raising max_allowed_packet on both server and client can help).
* Backup/dump data with the --tab option to make use of mysqlimport, which is faster than mysql < dumpfile.
* You can try importing data with multiple threads, i.e. start an individual process to import each table.
* Change the storage engine if possible. Importing into a heavily transactional engine like InnoDB is painfully slow; inserting into a non-transactional engine like MyISAM is much, much faster.
* You can also try turning off foreign key checks and disabling autocommit during the load; the MySQL manual recommends committing in large batches when bulk-inserting into InnoDB rather than committing every row.
* Also check usage of --disable-keys; note that the ALTER TABLE ... DISABLE KEYS statements it emits only affect nonunique indexes on MyISAM tables, so it will not help an InnoDB restore.
* If you think the network may be the bottleneck, try --compress when using mysqldump against a remote server.
* Last but not least, you can take a raw directory copy and restore it when required, provided the MySQL versions match. Just stop your MySQL server, copy the whole data or mysql directory, compress it, and store it at your backup location; restore the same way when needed. It is one of the fastest methods available, and I would recommend it if you have a separate slave host that takes regular backups. You can also look at LVM snapshot based backups (e.g. mylvmbackup) for more efficiency.
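Several of the InnoDB tips above can be combined in the session that runs the import. A hedged sketch (the variables are standard MySQL session/global settings, but verify behaviour on your own version; the dump path is a placeholder):

```sql
-- run once, as a user with SUPER, before the import
SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- relax durability for the load

-- in the session doing the import
SET foreign_key_checks = 0;   -- skip per-row FK validation
SET unique_checks = 0;        -- skip secondary unique-index checks (InnoDB)
SET autocommit = 0;           -- commit in one big batch, not per statement

SOURCE /path/to/dumpfile.sql;  -- placeholder path
COMMIT;

-- restore the defaults afterwards
SET unique_checks = 1;
SET foreign_key_checks = 1;
SET GLOBAL innodb_flush_log_at_trx_commit = 1;  -- back to full durability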
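The --tab / mysqlimport combination from the list might look like this (database name and directory are placeholders; the dump directory must be writable by the mysqld process):

```shell
# dump: writes one .sql file (schema) and one .txt file
# (tab-separated data) per table into /tmp/dump
mysqldump --tab=/tmp/dump --user=root --password mydb

# restore: recreate the tables first ...
cat /tmp/dump/*.sql | mysql mydb
# ... then bulk-load the data, which uses LOAD DATA INFILE under the hood
mysqlimport --user=root --password mydb /tmp/dump/*.txt
```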
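And the raw directory copy from the last tip, as a sketch (paths assume a stock Linux install with the data directory at /var/lib/mysql; adjust to match your my.cnf and init scripts):

```shell
# cold backup: only safe while mysqld is stopped
service mysql stop
tar czf /backup/mysql-data.tar.gz -C /var/lib mysql
service mysql start

# restore on a server running the same MySQL version
service mysql stop
tar xzf /backup/mysql-data.tar.gz -C /var/lib
chown -R mysql:mysql /var/lib/mysql
service mysql start
```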
Do you know any other tips/tricks that can speed up taking or restoring a MySQL dump? Please share by leaving a comment below.
More related and helpful articles:
* Optimize MySQL on a large Database Server
* Recover or reset root password of MySQL and PostgreSQL Servers
* How to setup MySQL Cluster 7.0 in Redhat based Linux
* Optimize and fix MySQL Server running slow without any load
* Quick and simple php based script to check replication status of multiple MySQL Servers