Based on the requirements, there are essentially two questions to answer:
Since each node keeps a local cache, how do we ensure data consistency? When the data on one node changes, how do the other nodes invalidate their copies?
When cached data turns out to be wrong and needs to be resynchronized, how do we invalidate the cache on demand?
The next step was to sit down with the product owner and the other developers and draw a flow chart. The key decisions are as follows:
A configuration table records, per interface, whether caching is needed and whether it is currently enabled, so that the cache can be switched off or invalidated as soon as a notification arrives. A rough sketch of such a switch is shown below.
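The article does not show the configuration table itself. As a purely illustrative stand-in (all names here are made up, and a real project would back this with the database table mentioned above), the switch could be modelled like this:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the configuration table described above.
// In the real project this would be a database table; an in-memory map keeps the sketch self-contained.
public class CacheSwitchRegistry {

    // key: moduleId + ":" + methodId, value: whether caching is currently enabled for that method
    private final Map<String, Boolean> switches = new ConcurrentHashMap<>();

    public void update(String moduleId, String methodId, boolean enabled) {
        switches.put(moduleId + ":" + methodId, enabled);
    }

    // The aspect can call this before touching Caffeine or Redis;
    // false means "bypass the cache and go straight to the database".
    public boolean isCacheEnabled(String moduleId, String methodId) {
        return switches.getOrDefault(moduleId + ":" + methodId, Boolean.FALSE);
    }
}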
Because the consistency requirements here are loose, an occasionally lost message would not cause much harm, so I finally chose Redis's publish/subscribe feature to notify the other nodes to invalidate their local caches.
With those questions answered and the flow chart agreed on, it was time to start writing code (and, inevitably, bugs). The overall idea is a custom annotation implemented with an aspect, keeping the coupling to the business code as low as possible.
Most of the details are explained in the code itself. First, define a CacheManager to integrate with the business. Pay special attention to the maximum number of cacheable entries: too large and the local cache occupies too much memory, too small and the hit rate suffers, so the final size has to be tuned to the actual business.
@Bean(name = JKDDCX)
@Primary
public CacheManager cacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder()
            // expire a fixed time after the last read or write
            .expireAfterAccess(EXPIRE, TIME_UNIT)
            // expire a fixed time after the last write
            .expireAfterWrite(EXPIRE, TIME_UNIT)
            // initial capacity of the local cache
            .initialCapacity(500)
            // maximum number of entries, roughly: number of users * 5 (about 5 distinct parameter combinations per user)
            .maximumSize(1000));
    return cacheManager;
}
Next, define the custom annotation and expose all the parameters that may be needed.
@Target({ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface CaffeineCache {

    // used to look up the parameters configured in the database
    public String moudleId() default "";

    public String methodId() default "";

    public String cachaName() default "";

    // used to switch to the actual CacheManager at runtime
    public String cacheManager() default "";
}
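To make the usage concrete, here is a hedged example of annotating a business method (the service, method, and parameter values are invented for illustration; moudleId and methodId are assumed to match rows in the configuration table):

import java.util.Collections;
import java.util.List;

import org.springframework.stereotype.Service;

@Service
public class OrderQueryService {

    // Hypothetical business method; the aspect intercepts it because of @CaffeineCache.
    @CaffeineCache(moudleId = "order", methodId = "listByUser", cachaName = "orderList")
    public List<String> listOrderNosByUser(String userId) {
        // ... in reality this would query the database ...
        return Collections.emptyList();
    }
}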
The cache listener is mainly responsible for keeping data consistent across nodes: when one node updates its cache, the others are notified so they can react accordingly. The core technique is Redis's publish/subscribe feature combined with an implementation of the MessageListener interface.
One more detail: the Redis KEYS command is usually disabled in production environments, so the matching keys have to be collected another way, with SCAN.
public class CacheMessageListener implements MessageListener {

    private static final Logger logger = LoggerFactory.getLogger(CacheMessageListener.class);

    // collaborators referenced below; assumed to be injected by Spring
    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private CacheConfig cacheConfig;

    @Override
    public void onMessage(Message message, byte[] pattern) {
        CacheMessage cacheMessage = (CacheMessage) redisTemplate.getValueSerializer().deserialize(message.getBody());
        logger.info("Received cache-clear message from Redis, start clearing the local cache, the cacheName is {}, the key is {}",
                cacheMessage.getCacheName(), cacheMessage.getKey());
        // redisCaffeineCacheManager.clearLocal(cacheMessage.getCacheName(), cacheMessage.getKey());

        /**
         * If @CaffeineCache is used on a class, every interface of that class is cached.
         * The logic below is: keep the Redis entry for the current call's input key and delete all the
         * other Redis entries of this module (for example, the module's table was updated, but the
         * current call has only just cached the Redis entry for its own parameters, so the rest is stale).
         */
        String prefixKey = RedisConstant.WXYMG_DATA_CACHE + cacheMessage.getCacheName();
        Set<String> keys = redisTemplate.execute((RedisCallback<Set<String>>) connection -> {
            Set<String> keysTmp = new HashSet<>();
            // KEYS is disabled in production, so walk the keyspace with SCAN instead
            Cursor<byte[]> cursor = connection.scan(new ScanOptions.ScanOptionsBuilder()
                    .match(prefixKey + "*")
                    .count(50)
                    .build());
            while (cursor.hasNext()) {
                keysTmp.add(new String(cursor.next()));
            }
            return keysTmp;
        });
        // keep the key of the current call, delete everything else
        Iterator<String> iterator = keys.iterator();
        while (iterator.hasNext()) {
            if (iterator.next().equals(cacheMessage.getKey())) {
                iterator.remove();
            }
        }
        redisTemplate.delete(keys);
        // clear everything under this cacheName from the local Caffeine cache
        cacheConfig.cacheManager().getCache(cacheMessage.getCacheName()).clear();
    }
}
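The excerpt does not show how this listener gets subscribed to the topic. A minimal sketch, assuming CacheMessageListener is registered as a Spring bean and CacheConfig.TOPIC is the channel name used when publishing (see below), might look like this:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

// Sketch only: wires CacheMessageListener to the Redis topic so that every node receives cache-clear messages.
@Configuration
public class CacheListenerConfig {

    @Bean
    public RedisMessageListenerContainer redisMessageListenerContainer(RedisConnectionFactory connectionFactory,
                                                                       CacheMessageListener cacheMessageListener) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        // subscribe to the same topic the aspect publishes to
        container.addMessageListener(cacheMessageListener, new ChannelTopic(CacheConfig.TOPIC));
        return container;
    }
}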
Then comes the aspect, which does the actual logical processing. Its content follows the flow chart exactly; the code simply implements those requirements.
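The aspect itself is not reproduced in this excerpt, so the following is only a rough sketch of the flow it describes, under several assumptions: the Redis key is built as RedisConstant.WXYMG_DATA_CACHE + cachaName + the method arguments, EXPIRE and TIME_UNIT are the same constants used for the Caffeine configuration above, and the configuration-table check and class-level annotation handling are left out for brevity. It is not the author's exact implementation:

import java.util.Arrays;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.Cache;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

// Sketch only: local Caffeine cache first, then Redis, then the real method; on a miss, back-fill both levels
// and publish a CacheMessage so the other nodes can invalidate their stale entries.
@Aspect
@Component
public class CaffeineCacheAspect {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    @Autowired
    private CacheConfig cacheConfig;

    // Binds method-level @CaffeineCache; class-level usage (ElementType.TYPE) would need an extra @within pointcut.
    @Around("@annotation(caffeineCache)")
    public Object around(ProceedingJoinPoint joinPoint, CaffeineCache caffeineCache) throws Throwable {
        // assumed key layout: prefix + cachaName + the input parameters of this call
        String redisKey = RedisConstant.WXYMG_DATA_CACHE + caffeineCache.cachaName()
                + Arrays.toString(joinPoint.getArgs());

        Cache localCache = cacheConfig.cacheManager().getCache(caffeineCache.cachaName());

        // 1. local Caffeine cache
        Cache.ValueWrapper localValue = localCache.get(redisKey);
        if (localValue != null) {
            return localValue.get();
        }

        // 2. Redis cache; back-fill the local cache on a hit
        Object redisValue = redisTemplate.opsForValue().get(redisKey);
        if (redisValue != null) {
            localCache.put(redisKey, redisValue);
            return redisValue;
        }

        // 3. real method; back-fill both levels (EXPIRE / TIME_UNIT reused from the config above, an assumption)
        Object result = joinPoint.proceed();
        redisTemplate.opsForValue().set(redisKey, result, EXPIRE, TIME_UNIT);
        localCache.put(redisKey, result);

        // notify the other nodes so they clear their stale entries (see the listener above)
        redisTemplate.convertAndSend(CacheConfig.TOPIC, new CacheMessage(caffeineCache.cachaName(), redisKey));
        return result;
    }
}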
Among these steps, the following line is where the Redis message is published:
redisTemplate.convertAndSend(CacheConfig.TOPIC, new CacheMessage(caffeineCache.cachaName(), redisKey));
This is the message body used when publishing to Redis. It is also custom, so you can add more attributes as needed.
public class CacheMessage implements Serializable {

    private static final long serialVersionUID = -1L;

    private String cacheName;

    private Object key;

    public CacheMessage(String cacheName, Object key) {
        super();
        this.cacheName = cacheName;
        this.key = key;
    }

    // getters used by the listener above
    public String getCacheName() {
        return cacheName;
    }

    public Object getKey() {
        return key;
    }
}
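One detail the excerpt leaves implicit: the value serializer of the redisTemplate has to be able to round-trip CacheMessage, because the listener simply casts whatever it deserializes. Since CacheMessage implements Serializable, a minimal sketch using JDK serialization could look like this (the serializer choice is an assumption, not something stated in the article):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.JdkSerializationRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

// Sketch only: a RedisTemplate whose value serializer can handle CacheMessage and the cached results.
@Configuration
public class RedisTemplateConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        template.setKeySerializer(new StringRedisSerializer());
        // JDK serialization works because CacheMessage implements Serializable
        template.setValueSerializer(new JdkSerializationRedisSerializer());
        return template;
    }
}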
That covers the details of how to use Caffeine and Redis to build a custom two-level cache.