Performance optimization of batch SQL inserts in MySQL
For systems that handle large amounts of data, the problem is not only slow queries: simply getting the data into the database can take a long time. This is especially true for reporting systems, where the daily data import may run for several hours or more than ten hours. It therefore makes sense to optimize database insertion performance.
Insert multiple rows with one SQL statement
A common approach issues one INSERT statement per row:

INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);
This can be rewritten as a single multi-row INSERT:

INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0), ('1', 'userid_1', 'content_1', 1);
The tests compare importing the same data as individual single-row statements versus as merged multi-row statements. The merged form is more efficient mainly because it reduces the volume of log data (binlog and InnoDB transaction log) that must be flushed to disk, and it also cuts the number of SQL statements to parse and the amount of network I/O.
Perform insert processing within a transaction
START TRANSACTION;
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);
...
COMMIT;
The tests compare writes executed without an explicit transaction against the same writes wrapped in a transaction. Grouping the inserts into one transaction avoids the per-statement commit overhead that autocommit would otherwise incur, so the import runs faster.
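An equivalent way to group the inserts, not shown in the original text, is to turn off autocommit for the session; a minimal sketch:

-- Disable autocommit so individual INSERTs stop committing (and flushing the log) on their own.
SET autocommit = 0;
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);
-- ... more INSERT statements ...
COMMIT;
SET autocommit = 1;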
Ordered insertion of data
Ordered insertion means that records are inserted in primary-key order. For example, the following statements arrive out of primary-key order:
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('2', 'userid_2', 'content_2', 2);
Rewritten in primary-key order:

INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('0', 'userid_0', 'content_0', 0);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('1', 'userid_1', 'content_1', 1);
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES ('2', 'userid_2', 'content_2', 2);
This follows from the B+tree index that InnoDB uses. If each inserted record lands at the end of the index, index positioning is very cheap and the index needs little adjustment. If the record lands in the middle of the index, B+tree pages must be split and merged, which consumes more computing resources and lowers the positioning efficiency for the inserted record; with large data volumes this also leads to frequent disk operations.
The tests compare insert performance for randomly ordered data versus data sorted by primary key.
A further technique is to drop secondary indexes before the bulk insert and rebuild them after the import completes, so each index is built once instead of being maintained row by row.
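A minimal sketch of this approach; the secondary index name `idx_uid` on the `uid` column is assumed for illustration, and the primary key itself is left in place:

-- Drop the secondary index before the bulk load (index name idx_uid is assumed for illustration).
ALTER TABLE `insert_table` DROP INDEX `idx_uid`;
-- ... run the bulk INSERT statements here ...
-- Rebuild the index once, after all rows are in.
ALTER TABLE `insert_table` ADD INDEX `idx_uid` (`uid`);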
Comprehensive performance test
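This test combines the techniques above. A minimal sketch of what such a combined import looks like, assuming the rows are already sorted by primary key (the row values are illustrative):

START TRANSACTION;
-- Merged, ordered INSERTs: many rows per statement, sorted by primary key.
INSERT INTO `insert_table` (`datetime`, `uid`, `content`, `type`) VALUES
  ('0', 'userid_0', 'content_0', 0),
  ('1', 'userid_1', 'content_1', 1),
  ('2', 'userid_2', 'content_2', 2);
-- ... further merged INSERT statements ...
COMMIT;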
Notes
Merged SQL statements are subject to a length limit, controlled by the max_allowed_packet configuration item. The default is 1M; it was raised to 8M during testing.
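To inspect the current limit and, with sufficient privileges, raise it on a running server; the 8M value is just the one used in the tests, and a global change only takes effect for new connections:

-- Check the current packet/statement size limit.
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it to 8M for testing; new connections pick up the new value.
SET GLOBAL max_allowed_packet = 8 * 1024 * 1024;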
Keep transactions to a reasonable size. MySQL has the innodb_log_buffer_size configuration item; once the data written within a transaction exceeds this value, InnoDB flushes the log data to disk and efficiency drops. A better approach is to commit the transaction before the buffered data reaches this value.
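A way to check the configured buffer size; whether it can be changed at runtime depends on the MySQL version, so adjusting it may mean editing the server configuration and restarting:

-- Check the InnoDB log buffer size (in bytes).
SHOW VARIABLES LIKE 'innodb_log_buffer_size';
-- Example startup setting in my.cnf (the value is illustrative):
--   [mysqld]
--   innodb_log_buffer_size = 16M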