A database processes SQL statements one at a time. Suppose the flow for purchasing goods looks like this:
sql1: Query product inventory
if (inventory quantity > 0)
{
// Generate the order...
sql2: inventory = inventory - 1
}
When there is no concurrency, the process above looks perfect. But suppose two people place orders at the same time and only one item is in stock. In the sql1 stage, both queries see inventory > 0, so sql2 is executed twice and the inventory ends up at -1. The item is oversold, and we either have to replenish stock or wait for user complaints.
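The interleaving can be reproduced with a tiny, hypothetical simulation (a plain Python variable stands in for the database row): both requests run the sql1 check before either runs sql2, so both checks pass.

```python
stock = 1  # one item left in inventory

# Two concurrent requests: both run the "sql1" check first...
request_a_sees_stock = stock > 0   # True
request_b_sees_stock = stock > 0   # True: stock has not been decremented yet

# ...then each runs "sql2" because its own check passed.
if request_a_sees_stock:
    stock -= 1
if request_b_sees_stock:
    stock -= 1

print(stock)  # -1: oversold
```

The check and the decrement are two separate steps, and nothing stops another request from sneaking in between them; that gap is exactly what the four approaches below close.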
Popular ideas for solving this problem:
1. Use a single extra process to consume a queue: put order requests into a queue and process them one by one. This avoids concurrency problems entirely, but it requires an extra background process, and latency becomes a concern.
2. Database optimistic locking: roughly, query the inventory first and immediately add 1 to it; then, after the order is generated, query the inventory again before the final update to check that it matches the expected quantity. If it does not match, roll back and tell the user the inventory is insufficient.
3. Judge by the update result: add a condition to sql2, i.e. update ... where inventory > 0. If the update affects no rows, the inventory is insufficient and the transaction is rolled back.
4. Use an exclusive file lock: when handling an order request, lock a file with flock. If the lock cannot be acquired, another order is being processed; either wait for it or tell the user directly that the server is busy.
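Option 3 can be sketched with SQLite (the table name goods and its columns are assumptions for illustration): the UPDATE only succeeds while stock is still positive, and the affected-row count tells us whether to commit or roll back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE goods (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO goods VALUES (1, 1)")  # exactly one item in stock

def place_order(conn, goods_id):
    """Atomically decrement stock; return True iff an item was claimed."""
    cur = conn.execute(
        "UPDATE goods SET stock = stock - 1 WHERE id = ? AND stock > 0",
        (goods_id,),
    )
    if cur.rowcount == 0:   # the stock > 0 condition failed: sold out
        conn.rollback()
        return False
    conn.commit()
    return True

print(place_order(conn, 1))  # True: the first order claims the last item
print(place_order(conn, 1))  # False: the second order finds stock exhausted
```

The decrement and the stock check happen in a single statement, so the database itself serializes them and the inventory can never go below zero.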
This article is about option 4. The approximate code is as follows:
//Blocking (waiting) mode
<?php
$fp = fopen("lock.txt", "w+");
if (flock($fp, LOCK_EX)) {   // exclusive lock; blocks until acquired
    // ...process the order...
    flock($fp, LOCK_UN);     // release the lock
}
fclose($fp);
?>

//Non-blocking mode
<?php
$fp = fopen("lock.txt", "w+");
if (flock($fp, LOCK_EX | LOCK_NB)) {   // try to lock without waiting
    // ...process the order...
    flock($fp, LOCK_UN);               // release the lock
} else {
    echo "The server is busy, please try again later";
}
fclose($fp);
?>
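The same pattern is not PHP-specific; here is a minimal sketch of both modes using Python's fcntl.flock (Unix-only; the lock-file name lock.txt and the function name process_order are assumptions for illustration):

```python
import fcntl
import os
import tempfile

def process_order(lock_path, blocking=True):
    """Take an exclusive lock on lock_path, then do the order work.

    Returns True if the order was processed, False if the lock was held
    by another request (non-blocking mode only: "server busy").
    """
    with open(lock_path, "w") as fp:
        flags = fcntl.LOCK_EX if blocking else fcntl.LOCK_EX | fcntl.LOCK_NB
        try:
            fcntl.flock(fp, flags)            # exclusive lock
        except BlockingIOError:
            return False                      # lock held elsewhere
        try:
            # ...query stock, generate the order, decrement stock...
            return True
        finally:
            fcntl.flock(fp, fcntl.LOCK_UN)    # release the lock

lock_file = os.path.join(tempfile.gettempdir(), "lock.txt")
print(process_order(lock_file, blocking=False))  # True: no contention here
```

Because only one request at a time can hold the exclusive lock, the check-then-decrement sequence inside the locked region behaves as if orders arrived one by one.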