
Detailed introduction to the lock mechanism in mysql database

不言
Release: 2018-09-10 14:12:55

This article gives a detailed introduction to the locking mechanism in the MySQL database. It should serve as a useful reference for anyone who needs it.

Pessimistic locking and optimistic locking:
Pessimistic locking: as the name suggests, it is pessimistic. Every time you read data, you assume that someone else will modify it, so you lock the data on every read; anyone else who wants the same data then blocks until they can obtain the lock. Traditional relational databases use many such mechanisms (row locks, table locks, read locks, write locks), all of which acquire the lock before the operation.

Optimistic locking: as the name suggests, it is optimistic. Every time you read data, you assume that no one else will modify it, so you do not lock it; instead, when you update, you check whether anyone else has modified the data in the meantime, for example with a version-number mechanism. Optimistic locking suits read-heavy applications and can improve throughput. A database that provides something like a write_condition mechanism is effectively providing optimistic locking.
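A minimal sketch of the version-number approach (the accounts table and its version column below are hypothetical, not from the original text):

```sql
-- Read the row, remembering its current version:
SELECT balance, version FROM accounts WHERE id = 1;
-- Suppose it returned balance = 100, version = 7.
-- Update only if the version is unchanged, bumping it atomically:
UPDATE accounts
SET balance = 120, version = version + 1
WHERE id = 1 AND version = 7;
-- If another transaction updated the row first, 0 rows match,
-- and the application retries or reports a conflict.
```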

Table level: the MyISAM engine locks the entire table. While you hold the lock, other processes cannot write to the table; if you hold a write lock, other processes cannot read either.

Page level: the BDB engine. Table-level locking is fast but causes many conflicts; row-level locking causes few conflicts but is slow. Page-level locking is a compromise: it locks a group of adjacent records at a time.

Row level: the InnoDB engine locks only the specified records, so other processes can still lock the same table to operate on other records.

The characteristics of the three locks can be roughly summarized as follows:
1) Table-level locks: low overhead, fast locking; no deadlocks; the largest locking granularity, the highest probability of lock conflicts, and the lowest concurrency.
2) Page-level locks: overhead and locking time between table locks and row locks; deadlocks can occur; locking granularity between table locks and row locks, with average concurrency.
3) Row-level locks: high overhead, slow locking; deadlocks can occur; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.

Each of the three locks has its own characteristics. From a locking perspective, table-level locks suit applications that are mainly query-based and update only small amounts of data by index conditions, such as web applications; row-level locks suit applications that concurrently update large amounts of different data by index conditions while also running concurrent queries, such as online transaction processing (OLTP) systems.

MySQL table-level locks have two modes:
1. Table shared read lock (Table Read Lock): a read operation on a MyISAM table does not block other users' read requests for the same table, but it blocks write operations on it;
2. Table exclusive write lock (Table Write Lock): a write operation on a MyISAM table blocks other users' read and write operations on the same table.

Reads and writes on a MyISAM table are serial: a write cannot proceed while a read is in progress, and vice versa. Under certain conditions, however, MyISAM does support concurrent query and insert operations, controlled by the system variable concurrent_insert. When its value is 0, concurrent inserts are not allowed. When it is 1, and the MyISAM table has no holes (that is, no rows have been deleted from the middle of the table), MyISAM allows one process to read the table while another process inserts records at the end of the table. When it is 2, concurrent inserts at the end of the table are allowed whether or not the table has holes.

How MyISAM schedules its locks is also a critical question. For example, when one process requests a read lock on a MyISAM table while, at the same time, another process requests a write lock on the same table, which one does MySQL serve first? It turns out that the writer acquires the lock first, even if the read request arrived in the lock wait queue earlier. This has a serious drawback: a heavy stream of write operations can make it very hard for queries to obtain read locks, possibly blocking them forever. Fortunately, MyISAM's scheduling behavior can be adjusted through some settings. By starting the server with the low-priority-updates option, or by setting the corresponding system variable to 1 (set low_priority_updates=1), we lower the priority of write requests so that read requests are served first.
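A minimal sketch of adjusting MyISAM's write-priority scheduling (film_text is the sample table used later in this article; the UPDATE values are hypothetical):

```sql
-- Give read requests priority over writes for this session:
SET LOW_PRIORITY_UPDATES = 1;
-- Or lower the priority of a single statement:
UPDATE LOW_PRIORITY film_text SET title = 'NEW TITLE' WHERE film_id = 1001;
-- The server can also be started with --low-priority-updates
-- to make this the default for all connections.
```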

The biggest differences between InnoDB locking and MyISAM locking are:
1. InnoDB supports transactions (TRANSACTION).
2. InnoDB uses row-level locks.

We know that a transaction is a logical unit of processing composed of a group of SQL statements. It has four attributes, commonly called the ACID properties:
Atomicity: a transaction is an atomic unit of work; its modifications to the data are either all applied or not applied at all;
Consistency: the data must be in a consistent state both when the transaction begins and when it completes;
Isolation: the database provides an isolation mechanism that ensures a transaction executes in an "independent" environment, unaffected by concurrent operations outside it;
Durability: once a transaction completes, its modifications to the data are permanent and are preserved even across system failures.
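To make atomicity concrete, a minimal sketch of a transaction (the accounts table is hypothetical):

```sql
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;      -- both updates become permanent (durability)
-- If anything fails before COMMIT, ROLLBACK undoes both (atomicity)
```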

Problems caused by concurrent transaction processing
Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, thereby supporting more users. However, concurrent transaction processing will also bring some problems, mainly including the following situations.
1. Lost Update: occurs when two or more transactions select the same row and then update it based on the value originally selected; because each transaction is unaware of the others, the last update overwrites the updates made by the other transactions. For example, two editors make electronic copies of the same document. Each editor independently changes their copy and then saves it, overwriting the original document; the editor who saves last overwrites the changes made by the other editor. This problem can be avoided if one editor cannot access the file until the other editor has finished and committed the transaction.
2. Dirty Reads: a transaction is modifying a record, and before it completes and commits, the record is in an inconsistent state; at this moment another transaction reads the same record. Without any control, the second transaction reads this "dirty" data and may do further processing based on it, creating a dependency on uncommitted data. This phenomenon is vividly called a "dirty read".
3. Non-Repeatable Reads: a transaction re-reads data it has previously read, only to find that the data has changed, or that some records have been deleted. This phenomenon is called a "non-repeatable read".
4. Phantom Reads: a transaction re-runs a query with the same conditions as before and finds new rows, inserted by other transactions, that satisfy the conditions. This phenomenon is called a "phantom read".

Transaction Isolation Level
Among the problems caused by concurrent transaction processing listed above, "lost update" should usually be avoided entirely. Preventing lost updates, however, cannot be solved by the database's transaction controller alone; the application must place the necessary locks on the data it is about to update. Preventing lost updates is therefore the application's responsibility.
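A common way for the application to take that responsibility is to lock the row explicitly before updating it; a minimal sketch (the accounts table is hypothetical):

```sql
START TRANSACTION;
-- The exclusive row lock makes a concurrent editor's transaction wait:
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;  -- releases the lock; no update is silently overwritten
```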
"Dirty read", "non-repeatable read" and "phantom read" are really database read-consistency problems, and must be solved by the database through some transaction isolation mechanism. Databases implement transaction isolation in basically two ways.
1. One is to lock the data before reading it, preventing other transactions from modifying it.
2. The other is to generate, without taking any locks, a consistent snapshot of the data as of the time of the request, and to use this snapshot to provide consistent reads at a certain level (statement level or transaction level). From the user's point of view the database seems to provide multiple versions of the same data, so this technique is called multi-version concurrency control (MVCC or MCC), and such databases are often called multi-version databases.

The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the price paid, because transaction isolation essentially "serializes" transactions to some degree, which clearly contradicts "concurrency". Different applications also have different requirements for read consistency and transaction isolation; for example, many applications are not sensitive to "non-repeatable reads" and "phantom reads" and care more about the ability to access data concurrently.
To resolve the conflict between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. Each level provides a different degree of isolation and allows different side effects, so applications can balance "isolation" against "concurrency" to suit their own business logic by choosing an appropriate level. Table 20-5 summarizes the characteristics of these four isolation levels.
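In MySQL the isolation level is chosen per session or per transaction; a minimal sketch:

```sql
-- For all subsequent transactions in this session:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- For the next transaction only:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Check the current setting (the variable is named tx_isolation in
-- older MySQL versions, transaction_isolation from 5.7.20 onward):
SELECT @@transaction_isolation;
```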

Read data consistency and allowed concurrent side effects:

Isolation level   | Read data consistency                                        | Dirty read | Non-repeatable read | Phantom read
Read uncommitted  | Lowest level; only guarantees not reading physically corrupted data | Yes | Yes | Yes
Read committed    | Statement level                                              | No | Yes | Yes
Repeatable read   | Transaction level                                            | No | No  | Yes
Serializable      | Highest level; transaction level                             | No | No  | No

The last thing to note is that specific databases do not necessarily implement all four isolation levels exactly as defined. For example, Oracle provides only the two standard levels Read committed and Serializable, plus its own Read only level. SQL Server supports the four levels defined by ISO/ANSI SQL92 and, in addition, an isolation level called "snapshot", which strictly speaking is a Serializable level implemented with MVCC. MySQL supports all four isolation levels, but with some peculiarities in the implementation: at some isolation levels it uses MVCC consistent reads, while in other cases it does not.
InnoDB has two modes of row locks:
1) Shared lock (S): allows one transaction to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.
(SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE)
2) Exclusive lock (X): allows the transaction that holds it to update the data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set. (SELECT * FROM table_name WHERE ... FOR UPDATE)
To allow row locks and table locks to coexist, InnoDB implements a multi-granularity locking mechanism; internally it also uses two intention locks (both table-level), the intention shared lock and the intention exclusive lock.
1) Intention shared lock (IS): the transaction intends to place shared locks on rows. A transaction must obtain the IS lock on the table before placing a shared lock on a row.
2) Intention exclusive lock (IX): the transaction intends to place exclusive locks on rows. A transaction must obtain the IX lock on the table before placing an exclusive lock on a row.
InnoDB row lock mode compatibility (rows: current lock mode; columns: requested lock mode):

Current \ Requested    X           IX          S           IS
X                      Conflict    Conflict    Conflict    Conflict
IX                     Conflict    Compatible  Conflict    Compatible
S                      Conflict    Conflict    Compatible  Compatible
IS                     Conflict    Compatible  Compatible  Compatible
If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock; otherwise, the transaction waits for the lock to be released.
Intention locks are added automatically by InnoDB and require no user intervention. For UPDATE, DELETE and INSERT statements, InnoDB automatically places exclusive locks (X) on the data involved; for ordinary SELECT statements, InnoDB places no locks at all. A transaction can place shared or exclusive locks on a record set explicitly with the following statements.
1. Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE.
2. Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE.
InnoDB row locks are implemented by locking index entries on an index. In this respect MySQL differs from Oracle, which locks the corresponding rows in data blocks. A consequence of this implementation is that InnoDB uses row-level locks only when the data is retrieved through an index condition; otherwise, InnoDB will use table locks!
In practice, special attention must be paid to this characteristic of InnoDB row locks; otherwise it can cause a large number of lock conflicts and hurt concurrency.
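A minimal sketch of this pitfall, using a hypothetical table (not from the original text):

```sql
-- Hypothetical InnoDB table: id is indexed, name is not
CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(20)) ENGINE = InnoDB;

-- Retrieval through the index: only the matching row is locked
SELECT * FROM t WHERE id = 5 FOR UPDATE;

-- No usable index: InnoDB scans the table and locks every row it
-- examines, which behaves like a table lock for concurrent writers
SELECT * FROM t WHERE name = 'x' FOR UPDATE;
```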

Querying table-level lock contention
Table lock contention:
You can analyze table lock contention on the system by checking the Table_locks_waited and Table_locks_immediate status variables:

mysql> show status like 'table%';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| Table_locks_immediate | 2979  |
| Table_locks_waited    | 0     |
+-----------------------+-------+
2 rows in set (0.00 sec)

If the value of Table_locks_waited is relatively high, there is serious table-level lock contention.
InnoDB row lock contention:
You can analyze row lock contention on the system by checking the InnoDB_row_lock status variables:

mysql> show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| InnoDB_row_lock_current_waits | 0     |
| InnoDB_row_lock_time          | 0     |
| InnoDB_row_lock_time_avg      | 0     |
| InnoDB_row_lock_time_max      | 0     |
| InnoDB_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.01 sec)

MyISAM write lock experiment:
A read operation on a MyISAM table does not block other users' read requests for the same table, but it blocks write requests for it; a write operation on a MyISAM table blocks other users' read and write operations on the same table. In other words, reads and writes on a MyISAM table are serial. As the example in Table 20-2 shows, once a thread obtains a write lock on a table, only the thread holding the lock can update the table; read and write operations from other threads wait until the lock is released.
USER1:

mysql> lock table film_text write;

The current session can perform query, update, and insert operations on the locked table:

mysql> select film_id,title from film_text where film_id = 1001;

USER2:

mysql> select film_id,title from film_text where film_id = 1001;

Waiting
USER1:
Release the lock:

mysql> unlock tables;

USER2:
Get the lock, the query returns:
InnoDB storage engine shared lock experiment

USER1: 
mysql> set autocommit = 0; 
USER2: 
mysql> set autocommit = 0;

USER1:
The current session places a share-mode shared lock on the record with actor_id = 178:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 lock in share mode;

USER2:
Other sessions can still query the record, and can also place a share-mode shared lock on it:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 lock in share mode;

USER1:
The current session tries to update the locked record and has to wait for the lock:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

Waiting
USER2:
If the other session also tries to update the record, a deadlock is detected and its statement is rolled back:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
USER1:
After obtaining the lock, the first session's update succeeds:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;
Query OK, 1 row affected (17.67 sec)
Rows matched: 1  Changed: 1  Warnings: 0

InnoDB storage engine exclusive lock example

USER1: 
mysql> set autocommit = 0; 
USER2: 
mysql> set autocommit = 0;

USER1:
The current session places a FOR UPDATE exclusive lock on the record with actor_id = 178:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 for update;

USER2:
Other sessions can query the record, but cannot place a shared lock on it; they wait until the lock can be obtained:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178;

USER1:
The current session can update the locked record, and releases the lock after the update:

mysql> update actor set last_name = 'MONROE T' where actor_id = 178;

USER2:
The other session obtains the lock and reads the record committed by the first session:

mysql> select actor_id,first_name,last_name from actor where actor_id = 178 for update;

Several important parameters for optimizing update performance
bulk_insert_buffer_size
The bulk insert buffer size. This parameter applies to the MyISAM storage engine and improves efficiency when inserting on the order of 100 to 1000+ records at once. The default value is 8M; it can be doubled repeatedly according to the volume of data.
concurrent_insert
Concurrent insert. When a table has no holes (no deleted records), other processes can insert at the end of the table while another process holds a read lock.
The value can be 0 (concurrent inserts not allowed), 1 (concurrent inserts when the table has no holes), or 2 (concurrent inserts regardless of holes).
The default is 1; set it according to how frequently rows are deleted from the table.
delay_key_write
For the MyISAM storage engine: delayed key writes. When a record is updated, the data is written to disk but the index is not; the index is kept in memory and written to disk only when the table is closed. A value of 0 disables this, 1 enables it; it is enabled by default.
delayed_insert_limit, delayed_insert_timeout, delayed_queue_size
Delayed inserts: data is first handed to an in-memory queue and then inserted gradually. Not all storage engines support this; at present the commonly used InnoDB does not, while MyISAM does. Increase these values according to the actual situation; the defaults are generally sufficient.
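The current values of the parameters above can be inspected directly; a minimal sketch:

```sql
SHOW VARIABLES LIKE 'bulk_insert_buffer_size';
SHOW VARIABLES LIKE 'concurrent_insert';
SHOW VARIABLES LIKE 'delay_key_write';
SHOW VARIABLES LIKE 'delayed_insert%';   -- limit and timeout
SHOW VARIABLES LIKE 'delayed_queue_size';
```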


The above is the detailed content of Detailed introduction to the lock mechanism in mysql database. For more information, please follow other related articles on the PHP Chinese website!
