Remember a MYSQL update and optimization

WBOY
Release: 2016-08-08 09:19:00
Original

Introduction

Today (August 5, 2015 5:34 PM) I adjusted the structure of a table in the database: I added several fields and then had to backfill the existing rows. The backfill logic: match each row on an existing field, url, and fill in the newly added fields type and typeid. I wrote a shell script to do the refresh, and after running it I was baffled: why was it so slow? The table has only one joint index, uin_id, and looks (simplified) like this:

<code>CREATE TABLE `funkSpeed` (
  `uin` bigint(20) unsigned NOT NULL DEFAULT 0,
  `id` int(11) unsigned NOT NULL DEFAULT 0,
  `url` varchar(255) NOT NULL DEFAULT '',
  `type` int(11) unsigned NOT NULL DEFAULT 0,
  `typeid` varchar(64) NOT NULL DEFAULT '',
  ......
  KEY `uin_id` (`uin`,`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;</code>

When updating, my approach was:

  • First fetch a batch of rows by id range: select id,url from funkSpeed where id>=101 and id<=200;
  • Traverse the batch and, after matching each row to obtain type and typeid, update it:
    update funkSpeed set type=[type],typeid=[typeid] where id=[id]

Following this approach, the updates turned out to be extremely slow: about 3 to 5 per second on average. There were 320,000+ rows to update, which works out to roughly 24 hours, more than a day. Clearly something had gone wrong.

Finding the problem

The first thing I suspected was that a single updating process was the bottleneck, so I split the id range into segments and started 5 processes, like this:

<code>./update_url.sh 0 10000 &
./update_url.sh 10000 20001 &
./update_url.sh 20001 30001 &
./update_url.sh 30002 40002 &
./update_url.sh 40003 50003 &</code>

After running them I found nothing had changed: still about 3 to 5 updates per second. Thinking it over, the time could not be going into the steps before the database is touched (matching, assembling the SQL statement, ...); the problem had to be in the queries themselves.

Let's take a look at my sql statement:

select id, url from funkSpeed where id>=101 and id<=200;

I tried executing it on the command line, with the following result:

<code>mysql> select id,url from funkSpeed where id>=0 and id<=200;
Empty set (0.18 sec)</code>

It took a full 0.18 seconds, and at that moment I suddenly realized: I was not using the joint index at all. A joint index only takes effect when its leftmost field appears in the condition. I verified this with explain, and sure enough:

<code>mysql> explain select id,url from funkSpeed where id>=0 and id<=200;
+-----------+------+---------------+------+---------+------+--------+-------------+
| table     | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+-----------+------+---------------+------+---------+------+--------+-------------+
| funkSpeed | ALL  | NULL          | NULL | NULL    | NULL | 324746 | Using where |
+-----------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)</code>

The type is ALL and 324,746 rows are examined: every such query is essentially a full table scan taking around 0.2 seconds. At this point it was safe to conclude that the problem lay with the index.

The selects themselves were not the issue: they are few (each one covers a range of 10,000 ids), so their cost is negligible, and there is no way to optimize them anyway short of adding an index on id.
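The leftmost-prefix behavior is easy to reproduce without a MySQL server. The sketch below uses Python's built-in SQLite (table and index names borrowed from this article, data omitted) and its EXPLAIN QUERY PLAN, which follows the same composite-index rule: an index on (uin, id) cannot serve a condition on id alone.

```python
import sqlite3

# SQLite as a stand-in for MySQL: the leftmost-prefix rule for a
# composite index (uin, id) behaves the same way in both engines.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE funkSpeed (
    uin INTEGER NOT NULL,
    id INTEGER NOT NULL,
    url TEXT NOT NULL,
    type INTEGER NOT NULL DEFAULT 0,
    typeid TEXT NOT NULL DEFAULT '')""")
conn.execute("CREATE INDEX uin_id ON funkSpeed (uin, id)")

def plan(sql):
    # Return the query-plan detail strings for a statement.
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

# Filtering on id alone skips the leftmost column (uin): full scan.
print(plan("SELECT id, url FROM funkSpeed WHERE id >= 0 AND id <= 200"))
# Filtering on uin AND id matches the index prefix: index search.
print(plan("SELECT id, url FROM funkSpeed WHERE uin = 1 AND id = 5"))
```

The first plan reports a scan of the table; the second reports a search using the uin_id index.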

The problem lay in

update funkSpeed set type=[type],typeid=[typeid] where id=[id]

An update also has to run a query to locate the rows. My MySQL version is 5.5, which cannot explain update (that only arrived in 5.6), otherwise I could have verified this directly. But the numbers speak for themselves: 320,000+ rows to update, each one taking about 0.2 seconds just to be found, comes to roughly 64,000 seconds, around 18 hours of pure lookup time. Terrifying.

Solving the problem

Once the problem was found, fixing it was easy.

In the select, fetch one more field, uin, changing the query to:

select uin,id,url from funkSpeed where id>=101 and id<=200;

and then update with

update funkSpeed set type=[type],typeid=[typeid] where uin=[uin] and id=[id]

so that the joint index (uin, id) is used. After a quick round of code changes I started a single process to see the effect: sure enough, it was dramatically better, averaging 30+ updates per second. At that rate everything was updated in about 3 hours.
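To make the fix concrete, here is a runnable sketch of the corrected refresh loop, again using SQLite in place of MySQL. The classify() helper is hypothetical: it stands in for the url-matching logic that produces type and typeid, which the article does not show.

```python
import sqlite3

# Hypothetical stand-in for the real url-matching logic.
def classify(url):
    return (1, "video") if "video" in url else (0, "other")

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE funkSpeed (
    uin INTEGER NOT NULL, id INTEGER NOT NULL, url TEXT NOT NULL,
    type INTEGER NOT NULL DEFAULT 0, typeid TEXT NOT NULL DEFAULT '')""")
conn.execute("CREATE INDEX uin_id ON funkSpeed (uin, id)")
rows = [(i, i, "http://example.com/video/%d" % i) for i in range(1, 301)]
conn.executemany("INSERT INTO funkSpeed (uin, id, url) VALUES (?, ?, ?)", rows)

# Fetch a batch by id range, but carry uin along so the update can hit
# the (uin, id) index instead of scanning the whole table.
batch = conn.execute(
    "SELECT uin, id, url FROM funkSpeed WHERE id >= 101 AND id <= 200"
).fetchall()
for uin, row_id, url in batch:
    t, tid = classify(url)
    conn.execute(
        "UPDATE funkSpeed SET type = ?, typeid = ? WHERE uin = ? AND id = ?",
        (t, tid, uin, row_id))
conn.commit()

updated = conn.execute(
    "SELECT COUNT(*) FROM funkSpeed WHERE typeid != ''").fetchone()[0]
print(updated)  # 100 rows refreshed (ids 101..200)
```

The key change is in the WHERE clause of the update: with both uin and id present, the lookup satisfies the leftmost-prefix rule of the (uin, id) index.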




source:php.cn