I have about 30–40 million records exported with mysqldump. Importing them locally (via `source xxx.sql`) feels very slow: for one table whose rows are fairly large, a single INSERT of 3000+ rows takes 20–30 seconds on average. On the local target database, innodb_flush_log_at_trx_commit is already set to 2. Does the import speed have anything to do with the net_buffer_length parameter used at export time? Is there any way to speed it up, other than importing the file with LOAD DATA INFILE? Thanks!
Don't create indexes on the target table before importing the data. Create them after the import finishes; the import will be much faster.
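A minimal sketch of that idea, assuming a hypothetical table `t` with a secondary index `idx_col` on column `col` (names are placeholders, not from the question):

```sql
-- Drop the secondary index before the bulk load...
ALTER TABLE t DROP INDEX idx_col;

-- ...run the import here (e.g. source dump.sql from the mysql client)...

-- ...then rebuild the index in a single pass over the loaded data,
-- which is far cheaper than maintaining it row by row during INSERTs.
ALTER TABLE t ADD INDEX idx_col (col);
```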
Another way is to copy the .frm, .MYD, and .MYI files directly (MyISAM tables only).
Here are a Q&A thread and an article for you; the article benchmarks several methods, so I hope it helps: "MySQL any way to import a huge (32 GB) sql dump faster?" and "Testing the Fastest Way to Import a Table into MySQL".
I increased MySQL's innodb_buffer_pool_size (the old 8M default is definitely too small). The speed seems to have improved, but it is still limited by disk I/O; tuning the log settings might squeeze out a bit more.
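For reference, a my.cnf fragment along those lines; the sizes are illustrative assumptions for a local import box, not measured values:

```ini
[mysqld]
# Sized for a machine with a few GB of free RAM; adjust to yours.
innodb_buffer_pool_size        = 2G
# Larger redo logs mean fewer checkpoint flushes during a bulk load.
innodb_log_file_size           = 512M
# Already set by the poster: write the log at commit, fsync ~once/sec.
innodb_flush_log_at_trx_commit = 2
```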
mysql -uroot -proot < data.sql
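Building on that redirection, a common trick is to wrap the dump in session settings that skip per-row checks during the bulk load. A minimal sketch; the file names and the sample INSERT are placeholders standing in for the real mysqldump output:

```shell
#!/bin/sh
# Sketch: prepend session settings that disable per-row overhead
# (autocommit, unique and FK checks), then append one final COMMIT.
DUMP=data.sql
WRAPPED=import.sql

# Stand-in for the real dump, so this script runs as-is.
printf 'INSERT INTO t VALUES (1);\n' > "$DUMP"

{
  echo 'SET autocommit=0;'
  echo 'SET unique_checks=0;'
  echo 'SET foreign_key_checks=0;'
  cat "$DUMP"
  echo 'COMMIT;'
} > "$WRAPPED"

# Then load it in one shot:
#   mysql -uroot -p mydb < import.sql
```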