How to optimize Redis cache space
Setting the scene
1. We need to store a POJO in the cache. The class is defined as follows:
public class TestPOJO implements Serializable {
    private String testStatus;
    private String userPin;
    private String investor;
    private Date testQueryTime;
    private Date createTime;
    private String bizInfo;
    private Date otherTime;
    private BigDecimal userAmount;
    private BigDecimal userRate;
    private BigDecimal applyAmount;
    private String type;
    private String checkTime;
    private String preTestStatus;

    public Object[] toValueArray() {
        Object[] array = {testStatus, userPin, investor, testQueryTime,
                createTime, bizInfo, otherTime, userAmount, userRate,
                applyAmount, type, checkTime, preTestStatus};
        return array;
    }

    public TestPOJO fromValueArray(Object[] valueArray) {
        // The concrete data types are lost during serialization and must be
        // restored manually (see the sketch at the end of Improvement 1 below)
        return null; // placeholder
    }
}
2. Use the following example as test data
TestPOJO pojo = new TestPOJO();
pojo.setApplyAmount(new BigDecimal("200.11"));
pojo.setBizInfo("XX");
pojo.setUserAmount(new BigDecimal("1000.00"));
pojo.setTestStatus("SUCCESS");
pojo.setCheckTime("2023-02-02");
pojo.setInvestor("ABCD");
pojo.setUserRate(new BigDecimal("0.002"));
pojo.setTestQueryTime(new Date());
pojo.setOtherTime(new Date());
pojo.setPreTestStatus("PROCESSING");
pojo.setUserPin("ABCDEFGHIJ");
pojo.setType("Y");
General practice
System.out.println(JSON.toJSONString(pojo).length());
Serializing directly to JSON prints length=284. This is the simplest and most commonly used approach. The serialized data looks like this:
{"applyAmount":200.11,"bizInfo":"XX","checkTime":"2023-02-02","investor":"ABCD ","otherTime":"2023-04-10 17:45:17.717","preCheckStatus":"PROCESSING","testQueryTime":"2023-04-10 17:45:17.717","testStatus":"SUCCESS ","type":"Y","userAmount":1000.00,"userPin":"ABCDEFGHIJ","userRate":0.002}
Clearly the output contains a lot of redundant data; in particular, the attribute names do not need to be stored at all.
Improvement 1: Remove the attribute names
System.out.println(JSON.toJSONString(pojo.toValueArray()).length());
By using an array structure instead of an object structure, the attribute names are removed; this prints length=144, roughly a 50% reduction. The data now looks like this:
["SUCCESS","ABCDEFGHIJ","ABCD","2023-04-10 17:45:17.717",null,"XX"," 2023-04-10 17:45:17.717",1000.00,0.002,200.11,"Y","2023-02-02","PROCESSING"]
Two problems remain: there is no need to spend bytes storing null, and the dates are serialized as long strings. Such wasteful serialization inflates the data, so the next step is to choose a better serialization tool.
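Before moving on, here is a minimal sketch of what fromValueArray has to do, assuming dates were serialized in the yyyy-MM-dd HH:mm:ss.SSS pattern shown above; the index order must mirror toValueArray exactly:

import java.math.BigDecimal;
import java.text.SimpleDateFormat;
import java.util.Date;

public TestPOJO fromValueArray(Object[] v) throws Exception {
    // The date pattern must match whatever the serializer produced
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
    TestPOJO p = new TestPOJO();
    p.setTestStatus((String) v[0]);
    p.setUserPin((String) v[1]);
    p.setInvestor((String) v[2]);
    p.setTestQueryTime(v[3] == null ? null : fmt.parse((String) v[3]));
    p.setCreateTime(v[4] == null ? null : fmt.parse((String) v[4]));
    p.setBizInfo((String) v[5]);
    p.setOtherTime(v[6] == null ? null : fmt.parse((String) v[6]));
    p.setUserAmount(v[7] == null ? null : new BigDecimal(v[7].toString()));
    p.setUserRate(v[8] == null ? null : new BigDecimal(v[8].toString()));
    p.setApplyAmount(v[9] == null ? null : new BigDecimal(v[9].toString()));
    p.setType((String) v[10]);
    p.setCheckTime((String) v[11]);
    p.setPreTestStatus((String) v[12]);
    return p;
}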
Improvement 2: Use a better serialization tool
// We keep the same array structure, but switch to a third-party serialization tool (MessagePack)
System.out.println(new ObjectMapper(new MessagePackFactory()).writeValueAsBytes(pojo.toValueArray()).length);
A better serialization tool compresses the fields and uses sensible data formats: this prints length=92, about 36% smaller than the previous step.
The result is binary data, so Redis must be read and written through its binary-safe API. Decoded as a string, it looks like this:
��SUCCESS�ABCDEFGHIJ�ABCD� �j�6� ��XX� �j�6�� ��?`bM����@i � �Q�Y�2023-02-02�PROCESSING
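For reference, a minimal round-trip sketch, assuming the org.msgpack:jackson-dataformat-msgpack artifact (which provides MessagePackFactory for Jackson) is on the classpath:

import com.fasterxml.jackson.databind.ObjectMapper;
import org.msgpack.jackson.dataformat.MessagePackFactory;

ObjectMapper mapper = new ObjectMapper(new MessagePackFactory());
// Serialize the value array to MessagePack bytes and store them in Redis as-is
byte[] bytes = mapper.writeValueAsBytes(pojo.toValueArray());
// Read back as Object[]; Date and BigDecimal still need manual restoration
Object[] values = mapper.readValue(bytes, Object[].class);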
Digging further along this line, we can achieve an even more extreme result by choosing the data types manually: smaller data types mean fewer serialized bytes.
Improvement 3: Optimize the data types
In the example above, the three fields testStatus, preTestStatus, and investor are really enumeration-like strings. Representing them with a simpler data type (such as byte or int) instead of String saves further space. Likewise, checkTime can be represented as a long instead of a string, so the serialization tool emits fewer bytes.
public Object[] toValueArray() {
    Object[] array = {toInt(testStatus), userPin, toInt(investor),
            testQueryTime, createTime, bizInfo, otherTime, userAmount,
            userRate, applyAmount, type, toLong(checkTime), toInt(preTestStatus)};
    return array;
}
After this manual adjustment, with smaller data types in place of String, it prints length=69.
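The toInt and toLong helpers above are not part of the original class; a minimal sketch of what they might look like (the status-to-code mapping is purely illustrative, and error handling is elided):

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.HashMap;
import java.util.Map;

// Map enumeration-like strings to small integers; use whatever mapping
// your actual enumeration defines
private static final Map<String, Integer> STATUS_CODES = new HashMap<>();
static {
    STATUS_CODES.put("SUCCESS", 1);
    STATUS_CODES.put("PROCESSING", 2);
}

private static Integer toInt(String value) {
    return value == null ? null : STATUS_CODES.get(value);
}

// "2023-02-02" -> epoch millis; the pattern must match how checkTime is formatted
private static Long toLong(String checkTime) throws ParseException {
    return checkTime == null ? null
            : new SimpleDateFormat("yyyy-MM-dd").parse(checkTime).getTime();
}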
Improvement 4: Consider ZIP compression
Beyond the points above, ZIP compression can shrink the payload further. ZIP compression works well when the content is large or repetitive; if the stored content were an array of TestPOJOs, for example, it would probably be a good fit.
For payloads under about 30 bytes, however, ZIP compression can actually make them larger. With little repetition in the content there is no significant gain, and compression always adds CPU overhead.
After the optimizations above, ZIP compression is no longer a must-have; test against real data to decide whether it pays off.
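If compression does prove worthwhile, here is a minimal sketch using the JDK's built-in java.util.zip; the input is whatever serialized payload the previous steps produced:

import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class ZipUtil {
    // Compresses a serialized payload with the JDK Deflater; decompress
    // with Inflater on the way out of Redis
    public static byte[] zip(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(input.length);
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }
}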
Final implementation
The improvement steps above illustrate the optimization ideas, but deserialization loses the concrete types, and handling that is cumbersome, so deserialization must be designed in as well.
When the cached object is predefined, every field can be handled by hand. In practice, therefore, manual serialization is recommended: it gives fine-grained control, the best compression, and minimal performance overhead.
The msgpack-based code below can serve as a reference. It is test code; package proper Packer and Unpacker utilities yourself:
<dependency>
    <groupId>org.msgpack</groupId>
    <artifactId>msgpack-core</artifactId>
    <version>0.9.3</version>
</dependency>
public byte[] toByteArray() throws Exception {
    MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
    toByteArray(packer);
    packer.close();
    return packer.toByteArray();
}

public void toByteArray(MessageBufferPacker packer) throws Exception {
    // Pack each field in a fixed order; nil marks a null field
    if (testStatus == null) { packer.packNil(); } else { packer.packString(testStatus); }
    if (userPin == null) { packer.packNil(); } else { packer.packString(userPin); }
    if (investor == null) { packer.packNil(); } else { packer.packString(investor); }
    if (testQueryTime == null) { packer.packNil(); } else { packer.packLong(testQueryTime.getTime()); }
    if (createTime == null) { packer.packNil(); } else { packer.packLong(createTime.getTime()); }
    if (bizInfo == null) { packer.packNil(); } else { packer.packString(bizInfo); }
    if (otherTime == null) { packer.packNil(); } else { packer.packLong(otherTime.getTime()); }
    if (userAmount == null) { packer.packNil(); } else { packer.packString(userAmount.toString()); }
    if (userRate == null) { packer.packNil(); } else { packer.packString(userRate.toString()); }
    if (applyAmount == null) { packer.packNil(); } else { packer.packString(applyAmount.toString()); }
    if (type == null) { packer.packNil(); } else { packer.packString(type); }
    if (checkTime == null) { packer.packNil(); } else { packer.packString(checkTime); }
    if (preTestStatus == null) { packer.packNil(); } else { packer.packString(preTestStatus); }
}

public void fromByteArray(byte[] byteArray) throws Exception {
    MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(byteArray);
    fromByteArray(unpacker);
    unpacker.close();
}

public void fromByteArray(MessageUnpacker unpacker) throws Exception {
    // Unpack in the same order; tryUnpackNil() consumes a nil and returns true
    if (!unpacker.tryUnpackNil()) { this.setTestStatus(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setUserPin(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setInvestor(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setTestQueryTime(new Date(unpacker.unpackLong())); }
    if (!unpacker.tryUnpackNil()) { this.setCreateTime(new Date(unpacker.unpackLong())); }
    if (!unpacker.tryUnpackNil()) { this.setBizInfo(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setOtherTime(new Date(unpacker.unpackLong())); }
    if (!unpacker.tryUnpackNil()) { this.setUserAmount(new BigDecimal(unpacker.unpackString())); }
    if (!unpacker.tryUnpackNil()) { this.setUserRate(new BigDecimal(unpacker.unpackString())); }
    if (!unpacker.tryUnpackNil()) { this.setApplyAmount(new BigDecimal(unpacker.unpackString())); }
    if (!unpacker.tryUnpackNil()) { this.setType(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setCheckTime(unpacker.unpackString()); }
    if (!unpacker.tryUnpackNil()) { this.setPreTestStatus(unpacker.unpackString()); }
}
Scenario extension
Suppose we store data for 200 million users, each with 40 fields, with field keys 6 bytes long, and each field read and written individually.
Normally a hash structure springs to mind, but a hash stores every field key alongside its value, which costs extra space: the field keys are redundant data. Following the ideas above, a list can be used instead of a hash, with the position in the list identifying the field.
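A quick back-of-the-envelope calculation shows why: 200,000,000 users × 40 fields × 6 bytes is roughly 48 GB spent on field names alone, before counting any per-entry overhead of the hash structure.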
Tested with the official Redis tooling, the list structure needs 144 GB of space while the hash structure needs 245 GB (when more than 50% of the attributes are empty, test whether this still holds).
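As an illustration of the two layouts, a minimal sketch assuming the Jedis client (key names and field order are hypothetical):

import redis.clients.jedis.Jedis;

public class ListVsHashDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Hash layout: the field names "status" and "amount" are stored per user
            jedis.hset("user:123", "status", "SUCCESS");
            jedis.hset("user:123", "amount", "1000.00");

            // List layout: the index encodes the field (0 = status, 1 = amount, ...)
            jedis.rpush("user:123:v", "SUCCESS", "1000.00");
            String status = jedis.lindex("user:123:v", 0); // read one field by index
        }
    }
}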
In the case above, a few very simple measures and only a few lines of code cut the space by more than 70%. In scenarios with large data volumes and high performance requirements, the following measures are highly recommended:
• Use arrays instead of objects (if many fields can be empty, make sure the serialization tool compresses nulls)
• Use a better serialization tool
• Use smaller data types
• Consider using ZIP compression
• Use a list instead of a hash structure (if many fields are empty, test and compare first)