1. Deploy the database in a distributed fashion to increase peak insert throughput.
2. If the business allows it, use a message queue and insert asynchronously.
3. Use Redis or a similar cache as a buffer, then insert into the database asynchronously.
4. For data that is inserted frequently and never needs relational queries, consider a non-relational database such as MongoDB.
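Points 2 and 3 above boil down to the same write-behind idea: requests only enqueue, and a background worker batches the actual inserts. A minimal sketch, using an in-process `queue.Queue` and a SQLite file as stand-ins for the real queue and database (the table `events` and batch size are made up for the demo):

```python
import contextlib
import os
import queue
import sqlite3
import threading

BATCH_SIZE = 100
q: "queue.Queue" = queue.Queue()
done = object()  # sentinel telling the worker to stop

with contextlib.suppress(FileNotFoundError):
    os.remove("demo.db")  # fresh file for the demo

def writer() -> None:
    # The worker owns its own connection; request handlers never touch the DB.
    db = sqlite3.connect("demo.db")
    db.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, payload TEXT)")
    batch = []
    while True:
        item = q.get()
        if item is not done:
            batch.append(item)
        # Flush when the batch is full, or when the producer is finished.
        if len(batch) >= BATCH_SIZE or (item is done and batch):
            db.executemany("INSERT INTO events VALUES (?, ?)", batch)
            db.commit()
            batch.clear()
        if item is done:
            break
    db.close()

t = threading.Thread(target=writer)
t.start()
for i in range(250):              # "requests" just enqueue, no DB round-trip
    q.put((i, f"event-{i}"))
q.put(done)
t.join()

count = sqlite3.connect("demo.db").execute(
    "SELECT COUNT(*) FROM events").fetchone()[0]
print(count)  # 250
```

In production the in-process queue would be replaced by RabbitMQ/Kafka or a Redis list, so the buffer survives a process restart.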
If real-time requirements are not strict, you can use Redis as a middle layer to receive the data, then have a background scheduled job or a message-subscription mechanism pull the data from Redis and write it into the database in batches.
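This Redis-as-middle-layer pattern can be sketched as below. The `FakeRedis` class is an in-memory stand-in so the example is self-contained; a real deployment would use `redis.Redis()` with the same `lpush`/`rpop` calls, and the key name and batch size are made up:

```python
import json
import sqlite3

class FakeRedis:
    """In-memory stand-in for a Redis client (demo only)."""
    def __init__(self):
        self._items = []
    def lpush(self, key, value):
        self._items.insert(0, value)
    def rpop(self, key):
        return self._items.pop() if self._items else None

def drain(r, db, key="pending:orders", batch_size=100):
    """Scheduled job: pop up to batch_size items from Redis, batch-insert them."""
    rows = []
    while len(rows) < batch_size:
        raw = r.rpop(key)
        if raw is None:
            break
        item = json.loads(raw)
        rows.append((item["id"], item["body"]))
    if rows:
        db.executemany("INSERT INTO orders VALUES (?, ?)", rows)
        db.commit()
    return len(rows)

# Demo: request handlers only LPUSH; a timer calls drain() periodically.
r = FakeRedis()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, body TEXT)")
for i in range(5):
    r.lpush("pending:orders", json.dumps({"id": i, "body": f"order-{i}"}))
print(drain(r, db))  # 5
```

Because items sit in Redis until the next drain, writes become eventually consistent, which is exactly the "not very high real-time requirements" trade-off the answer describes.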
1. Consider optimizing the storage structure.
2. If possible, scale out to multiple machines, multiple databases, and multiple processes.
3. Add a connection pool, middleware (a message queue), and an intermediate store (Redis).
4. Multi-threading...
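On the connection-pool point: the idea is to pre-open a fixed number of connections and hand them out instead of opening one per request. A hypothetical minimal pool (production code would use a real pool library or the driver's built-in pooling; SQLite here is just a stand-in):

```python
import queue
import sqlite3

class ConnectionPool:
    """Minimal pool: pre-open `size` connections, hand them out from a queue."""
    def __init__(self, size: int, dsn: str = ":memory:"):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(sqlite3.connect(dsn, check_same_thread=False))

    def acquire(self):
        # Blocks when all connections are checked out, which naturally
        # throttles concurrent database work.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(size=4)
conn = pool.acquire()
print(conn.execute("SELECT 1").fetchone()[0])  # 1
pool.release(conn)
```

Capping the pool size also protects the database: a traffic spike queues up in the application instead of exhausting the server's connection limit.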
You can search for something like Taobao handling 140,000 orders per second as a reference. There is no one-size-fits-all solution; put simply, they use a server-side cache such as Redis to absorb the requests.
You should also consider a message queue.
You can try adding a caching layer and doing load balancing.
1. Check the logs to find the request paths with excessive traffic. Find the cause first: can the requests be avoided, and are they really necessary?
2. Write a MySQL trigger based on the business requirements and let MySQL do that work.
3. Use Swoole to insert into the database asynchronously.
4. Use a multi-threaded queue.
5. Do not write data directly to the database; write it to Redis first.
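Point 2 (letting the database do the work via a trigger) can be illustrated as follows. The sketch uses SQLite trigger syntax so it is self-contained; MySQL's `CREATE TRIGGER` is similar but not identical, and the `orders`/`order_audit` tables are invented for the demo:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE order_audit (order_id INTEGER, amount REAL);

-- The database does the bookkeeping; application code issues one INSERT.
CREATE TRIGGER after_order_insert AFTER INSERT ON orders
BEGIN
    INSERT INTO order_audit VALUES (NEW.id, NEW.amount);
END;
""")

db.execute("INSERT INTO orders (amount) VALUES (9.5)")
audit = db.execute("SELECT order_id, amount FROM order_audit").fetchall()
print(audit)  # [(1, 9.5)]
```

This removes a round trip from the application, but note that heavy trigger logic shifts load onto the database server itself, so it suits bookkeeping tasks rather than the bulk insert path.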