Analyze the problems in the following code:
// Distributed lock service
public interface RedisLockService {
    // Acquire the lock
    public boolean getLock(String key);
    // Release the lock
    public boolean releaseLock(String key);
}

// Business service
public class BizService {
    @Resource
    private RedisLockService redisLockService;

    public void bizMethod(String bizId) {
        try {
            // Acquire the lock
            if (redisLockService.getLock(bizId)) {
                // Duplicate-business check
                if (!bizValidate(bizId)) {
                    throw new BizException(ErrorBizCode.REPEATED);
                }
                // Execute the business logic
                doBusiness();
                return;
            }
            // Failed to acquire the lock
            throw new BizException(ErrorBizCode.GET_LOCK_ERROR);
        } finally {
            // Release the lock
            redisLockService.releaseLock(bizId);
        }
    }
}
The code above looks fine at first glance, but it hides a serious problem: when releasing the lock, it never verifies that the current thread actually acquired it. Consider this interleaving:
Thread 1 and Thread 2 call the business method at the same time.
Thread 2 acquires the lock successfully and starts processing the business.
Thread 1 fails to acquire the lock, but its finally block still releases the lock successfully.
Thread 3 then acquires the lock successfully; because Thread 2's business has not finished yet, the duplicate-business check passes and Thread 3 sees no duplication exception.
As a result, Thread 2 and Thread 3 execute the business repeatedly.
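The interleaving above can be reproduced with a minimal in-memory stand-in for the lock service. This is only an illustrative sketch (the class and key names are hypothetical, and a `ConcurrentHashMap` replaces Redis), but it shows exactly why an ownership-blind release breaks mutual exclusion:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for RedisLockService, used only to
// demonstrate the race described in the article.
public class NaiveLockDemo {
    static final ConcurrentHashMap<String, Boolean> locks = new ConcurrentHashMap<>();

    static boolean getLock(String key) {
        // putIfAbsent returns null when the key was free, i.e. lock acquired
        return locks.putIfAbsent(key, Boolean.TRUE) == null;
    }

    static boolean releaseLock(String key) {
        // No ownership check: any caller can delete the key
        return locks.remove(key) != null;
    }

    public static void main(String[] args) {
        System.out.println(getLock("biz-1"));     // true  : Thread 2 acquires
        System.out.println(getLock("biz-1"));     // false : Thread 1 fails
        System.out.println(releaseLock("biz-1")); // true  : Thread 1 still releases!
        System.out.println(getLock("biz-1"));     // true  : Thread 3 sneaks in
    }
}
```

The third call is the bug: a caller that never held the lock removes it anyway, opening the door for Thread 3.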
The solution is to release the lock only after confirming that this caller acquired it successfully:
public class BizService {
    @Resource
    private RedisLockService redisLockService;

    public void bizMethod(String bizId) {
        boolean getLockSuccess = false;
        try {
            // Try to acquire the lock
            getLockSuccess = redisLockService.getLock(bizId);
            // Lock acquired successfully
            if (getLockSuccess) {
                // Duplicate-business check
                if (!bizValidate(bizId)) {
                    throw new BizException(ErrorBizCode.REPEATED);
                }
                // Execute the business logic
                doBusiness();
                return;
            }
            // Failed to acquire the lock
            throw new BizException(ErrorBizCode.GET_LOCK_ERROR);
        } finally {
            // Only a caller that acquired the lock may release it
            if (getLockSuccess) {
                redisLockService.releaseLock(bizId);
            }
        }
    }
}
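The boolean flag guards against this one caller, but a common hardening step (not part of the article's interface, so treat this as an assumption-laden sketch) is to bind the lock to a unique token per acquirer and release only when the token matches. In real Redis this means `SET key token NX PX <ttl>` plus a Lua script that compares and deletes atomically; the in-memory sketch below mimics that with `ConcurrentHashMap.remove(key, value)`:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical token-based lock sketch; names and storage are illustrative,
// not the RedisLockService from the article.
public class TokenLockDemo {
    static final ConcurrentHashMap<String, String> locks = new ConcurrentHashMap<>();

    // Returns the owner's token on success, null on failure
    static String getLock(String key) {
        String token = UUID.randomUUID().toString();
        return locks.putIfAbsent(key, token) == null ? token : null;
    }

    // Releases only when the caller presents the owner's token
    static boolean releaseLock(String key, String token) {
        return token != null && locks.remove(key, token);
    }

    public static void main(String[] args) {
        String owner = getLock("biz-1");
        System.out.println(owner != null);                 // true
        System.out.println(releaseLock("biz-1", "wrong")); // false: not the owner
        System.out.println(releaseLock("biz-1", owner));   // true
    }
}
```

With a token, even a caller that mistakenly reaches the release path cannot delete a lock it does not own.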
Is the code now safe? Not yet. The second pitfall is that Redis has memory-cleanup mechanisms that can make a distributed lock disappear unexpectedly.
(1) Periodic deletion
Redis periodically checks which keys have expired and deletes the ones it finds.
(2) Lazy deletion
If there are many keys, periodic deletion alone would consume too many resources, so a lazy deletion strategy is also used:
when Redis accesses a key and finds it has expired, it deletes the key on the spot.
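Lazy deletion is easy to sketch with a tiny in-memory cache: the expiry check happens only on read. This is an illustrative toy (class and key names are hypothetical), not how Redis is implemented internally:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of lazy deletion: a key's expiry is only checked when read.
public class LazyExpiryDemo {
    static final class Entry {
        final String value;
        final long expireAtMillis;
        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }
    }

    static final Map<String, Entry> store = new HashMap<>();

    static void set(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    static String get(String key) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expireAtMillis) {
            store.remove(key); // lazy deletion: expired key removed on access
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        set("lock:biz-1", "token", 50); // 50 ms TTL
        System.out.println(get("lock:biz-1")); // token
        Thread.sleep(80);
        System.out.println(get("lock:biz-1")); // null: expired, deleted on read
    }
}
```

The key point for locks: between the moment a key expires and the moment something notices, the store still physically holds it; either a periodic sweep or the next access cleans it up.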
When memory is insufficient, Redis evicts some keys according to its configured eviction policy:
noeviction
Never evict data; new write operations return an error.
volatile-lru
Evict the least recently used key among keys that have an expiration time set.
volatile-ttl
Evict the key closest to expiration among keys that have an expiration time set.
volatile-random
Evict a random key among keys that have an expiration time set.
allkeys-lru
Evict the least recently used key from the entire key space.
allkeys-random
Evict a random key from the entire key space.
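For an instance that stores distributed locks, the `allkeys-*` policies are the dangerous ones, since the lock key itself can be evicted under memory pressure. One common hedge (an operational assumption, not something the article prescribes) is to keep lock keys on an instance configured with `noeviction`, so memory pressure produces write errors instead of silently deleting locks:

```
# redis.conf — illustrative settings for an instance that holds lock keys
maxmemory 256mb
# Never evict keys under memory pressure; writes fail instead,
# so a held lock is never silently removed by eviction.
maxmemory-policy noeviction
```

The `maxmemory` value here is purely an example; the relevant choice is the policy.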
There are at least two scenarios in which these mechanisms cause a distributed lock to fail:
Scenario 1: Redis runs short of memory and reclaims keys under the allkeys-lru or allkeys-random policy, so the lock key itself may be evicted and the lock fails.
Scenario 2: A thread acquires the distributed lock successfully but takes too long to process. The lock expires and is cleaned up, another thread then acquires the lock successfully, and the business is executed repeatedly.
The general solution is to add a second line of defense at the database layer. For example, an inventory-deduction business can use optimistic locking in the database:
update goods set stock = stock - #{acquire} where sku_id = #{skuId} and stock - #{acquire} >= 0
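The point of that UPDATE is that the guard `stock - #{acquire} >= 0` and the deduction happen atomically, and a failed guard simply updates zero rows. That effect can be simulated in memory with a compare-and-set loop; this is only an analogy under stated assumptions (an `AtomicInteger` standing in for the database row, hypothetical method names), since a real system relies on the database to enforce it:

```java
import java.util.concurrent.atomic.AtomicInteger;

// In-memory analogue of the conditional UPDATE: deduct stock only if
// "stock - acquire >= 0" holds, atomically.
public class StockDeductDemo {
    // Returns true if the deduction succeeded ("1 row affected" in SQL terms)
    static boolean deduct(AtomicInteger stock, int acquire) {
        while (true) {
            int current = stock.get();
            if (current - acquire < 0) {
                return false; // guard fails: no rows would be updated
            }
            // CAS retries if another thread changed the stock in between,
            // mirroring how the WHERE clause re-evaluates against current data
            if (stock.compareAndSet(current, current - acquire)) {
                return true;
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger stock = new AtomicInteger(5);
        System.out.println(deduct(stock, 3)); // true : 5 -> 2
        System.out.println(deduct(stock, 3)); // false: guard 2 - 3 >= 0 fails
        System.out.println(deduct(stock, 2)); // true : 2 -> 0
    }
}
```

Even if two threads hold the "same" lock because of expiry or eviction, this database-level guard ensures the stock can never go negative.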
The above is the detailed content of "What are the two pitfalls that Redis distributed locks must avoid?" from the PHP Chinese website.