Highlights of the project:
1. Distributed sessions let multiple servers serve the same logged-in user.
2. Redis is used as a cache to improve access speed and concurrency and to reduce database load, and an in-memory "sold out" mark further cuts Redis accesses.
3. Static pages speed up user access and improve QPS; pages are cached in the browser, and the front end and back end are separated to reduce server load.
4. A message queue handles order placement asynchronously, improving user experience and shaving traffic peaks.
5. Security hardening: double MD5 password hashing, a hidden flash sale interface address, interface rate limiting and anti-abuse, and a math-formula CAPTCHA.
Main knowledge points:
Distributed Session
In practice the flash sale service is not deployed on a single server but across several. If a user logs in on the first server, the first request goes to that server, but the next request may land on a second server, where the user's session information does not exist.
Solution: session sharing. No matter which server handles the request, the session can be retrieved, because a dedicated Redis server stores all users' session information. The session is therefore never lost: whenever a session is needed, it is simply fetched from the cache.
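A minimal sketch of this Redis-backed session lookup (class and method names are illustrative, not the project's actual ones; a ConcurrentHashMap stands in for the Redis server so the example runs standalone — in production every app server would talk to the same Redis instance):

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

class SessionStore {
    // Stands in for the shared Redis server that all app servers query.
    private final Map<String, String> redis = new ConcurrentHashMap<>();

    // On login: generate a token, store the user info under it, and hand
    // the token back to the client (typically as a cookie).
    public String login(String userInfo) {
        String token = UUID.randomUUID().toString();
        redis.put("session:" + token, userInfo);
        return token;
    }

    // On any later request, handled by ANY server: resolve the token
    // against shared Redis, so the session is never "lost".
    public String getUser(String token) {
        return redis.get("session:" + token);
    }

    public static void main(String[] args) {
        SessionStore store = new SessionStore();
        String token = store.login("user-1001");
        System.out.println("resolved: " + store.getUser(token));
    }
}
```

Because the session lives in Redis rather than in any one server's memory, a load balancer can route each request anywhere.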
Redis relieves database pressure
This project makes extensive use of caching: user information (the distributed session), product information, product inventory, orders, rendered pages, and individual objects are all cached, which greatly reduces traffic to the database server.
Universal cache key encapsulation
One problem is how to distinguish the caches of different modules, since two modules might use the same key.
Solution: define an abstract class (e.g. BasePrefix) that encapsulates the key's prefix and its expiration time. Each module inherits from it, so every value a module stores gets a module-specific prefix prepended to its key, and expiration times can be configured uniformly per module.
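A minimal sketch of the prefix-key idea (class names and prefixes are illustrative, not the project's exact ones). Each module subclasses BasePrefix; the real cache key becomes "ClassName:prefix" plus the raw key, so keys from different modules can never collide, and each subclass fixes its own expiry:

```java
abstract class BasePrefix {
    private final int expireSeconds;   // 0 would mean "never expires"
    private final String prefix;

    protected BasePrefix(int expireSeconds, String prefix) {
        this.expireSeconds = expireSeconds;
        this.prefix = prefix;
    }

    public int expireSeconds() { return expireSeconds; }

    // Prepend the subclass name and module prefix to the raw key.
    public String realKey(String key) {
        return getClass().getSimpleName() + ":" + prefix + key;
    }
}

// Each module declares its keys as constants with their own expiry.
class UserKey extends BasePrefix {
    public static final UserKey token = new UserKey(1800, "tk"); // 30-min sessions
    private UserKey(int expireSeconds, String prefix) { super(expireSeconds, prefix); }
}

class GoodsKey extends BasePrefix {
    public static final GoodsKey detail = new GoodsKey(60, "gd"); // short-lived page cache
    private GoodsKey(int expireSeconds, String prefix) { super(expireSeconds, prefix); }
}
```

The cache service then only ever takes a BasePrefix plus a raw key, so prefixing and expiry are applied in one place.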
Page staticization (separation of front and back ends)
The main purpose of page staticization is to speed up page loading: the product detail and order detail pages are built as static HTML (pure HTML), data is fetched from the server only via Ajax, and the static HTML page itself can be cached in the client's browser.
Message queue for asynchronous order placement
The message queue completes order placement asynchronously, improving user experience and shaving traffic peaks.
Idea:
1. On system initialization, the product inventory (stock) is loaded into Redis.
2. When the back end receives a flash sale request, Redis pre-decrements the inventory. If the inventory has already hit zero, there is no need to continue: failure is returned immediately, so the flood of subsequent requests puts no further pressure on the system.
3. Check whether this user already has a flash sale order, to prevent one account from buying multiple items through repeated requests.
4. If inventory is sufficient and this is not a duplicate purchase, the flash sale request is wrapped in a message and enqueued, and code 0 ("queued") is returned to the front end. This is neither success nor failure; the outcome cannot be judged yet.
5. On receiving this, the front end shows a "queuing" state and polls the server by product ID (say, once every 200 ms).
6. A back-end RabbitMQ listener watches the channel named MIAOSHA_QUEUE. When a message arrives, it reads the request, re-checks the database inventory and the duplicate-order condition, and then executes the flash sale transaction as one atomic operation: decrement the inventory by 1, place the order, and write the flash sale order record.
7. Meanwhile the front end polls the MiaoshaResult interface by product ID to check whether an order has been generated: -1 means the flash sale failed, 0 means still queued, and a value > 0 is the product ID, meaning success.
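Steps 1-7 can be sketched as a runnable simulation. Everything external is simulated in memory (names are illustrative): an AtomicInteger stands in for the Redis stock counter, a BlockingQueue for MIAOSHA_QUEUE, and a Set for the order table.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

class MiaoshaFlow {
    final AtomicInteger redisStock;                                  // step 1: stock preloaded
    final Set<String> orderedUsers = ConcurrentHashMap.newKeySet();  // "order table"
    final BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // "MIAOSHA_QUEUE"
    final Map<String, Integer> result = new ConcurrentHashMap<>();   // userId -> -1 / 0 / goodsId

    MiaoshaFlow(int stock) { this.redisStock = new AtomicInteger(stock); }

    // Steps 2-4: pre-decrement in "Redis", reject duplicates, enqueue, return 0 (queued).
    int submit(String userId) {
        if (orderedUsers.contains(userId)) return -1;   // duplicate flash sale
        if (redisStock.decrementAndGet() < 0) {         // pre-reduce inventory
            redisStock.incrementAndGet();
            return -1;                                  // sold out: fail fast
        }
        result.put(userId, 0);                          // 0 = queued, outcome unknown
        queue.add(userId);
        return 0;
    }

    // Step 6: the listener drains the queue and executes the real transaction.
    void consume(int goodsId) {
        String userId;
        while ((userId = queue.poll()) != null) {
            if (orderedUsers.add(userId)) {             // dedupe + "insert order"
                result.put(userId, goodsId);            // success: record goods id
            } else {
                result.put(userId, -1);
            }
        }
    }

    // Step 7: what the front end polls; -1 failed, 0 queued, >0 success.
    int pollResult(String userId) { return result.getOrDefault(userId, -1); }
}
```

The key property the sketch preserves is that once the pre-decrement goes negative, later requests are rejected without ever touching the queue or the "database".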
Security Optimization
Double MD5 password hashing, a hidden flash sale interface address, interface rate limiting and anti-abuse, and a math-formula CAPTCHA.
Elegant code writing
Interface output is wrapped in a Result object.
Error codes are wrapped in a CodeMsg object.
Cache access is encapsulated behind key objects.
Difficulties of the project and problem solving:
1. When stress testing with JMeter at 5000 threads, the system could not keep up and threw exceptions.
Cause: the Redis connection pool was too small. Solution: raise the pool settings in the configuration file, e.g. poolMaxTotal to 1000.
#redis configuration items
redis.poolMaxTotal=1000
redis.poolMaxIdle=500
redis.poolMaxWait=500
2. Heavy use of caching raises problems such as cache penetration, cache avalanche, and cache consistency.
Cache penetration means requesting data that certainly does not exist; such requests pass straight through the cache to the database.
Solution: cache an empty value for the non-existent data so these requests are filtered out.
A cache avalanche is when a large number of requests hit the database because the data was never loaded into the cache, a large portion of cached data expires at the same time, or the cache server goes down.
Solutions:
To prevent an avalanche caused by mass simultaneous expiry, observe user behavior and set expiration times sensibly.
To prevent an avalanche caused by a cache server crash, use a distributed cache in which each node holds only part of the data; when one node goes down, the caches on the other nodes remain available.
Cache pre-warming also helps: load hot data into the cache at startup, so the system is not exposed to an avalanche while the cache is still empty.
For example, give different caches different expiration times. The session cache (the userKey prefix) expires in 30 minutes, and the expiry is refreshed on every user request, extending it by another 30 minutes, so session entries rarely expire in practice.
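One common way to spread out expirations (an illustrative technique and an assumption here, not necessarily what this project does) is to give each key its base TTL plus a small random jitter, so a batch of keys written together does not expire in the same instant:

```java
import java.util.Random;

class TtlJitter {
    private final Random random;

    TtlJitter(long seed) { random = new Random(seed); }

    // e.g. base 1800s (30 min) with up to 300s of jitter
    // -> the key expires somewhere in [1800, 2100) seconds.
    int ttlSeconds(int baseSeconds, int maxJitterSeconds) {
        return baseSeconds + random.nextInt(maxJitterSeconds);
    }
}
```

The computed value would then be passed as the expire argument when writing the key to Redis.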
Cache consistency requires that when the underlying data is updated, the cached copy is kept up to date as well.
Solution:
On reads, try the cache first and return on a hit; on a miss, read the database, write the value back into the cache, and return it.
On updates, update the database first, then invalidate (delete) the corresponding cache entry, so the next read fetches the fresh value from the database.
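This read/update discipline is the cache-aside pattern, sketched below with plain maps standing in for Redis and MySQL so the example runs standalone (names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CacheAside {
    final Map<String, String> cache = new ConcurrentHashMap<>(); // stands in for Redis
    final Map<String, String> db = new ConcurrentHashMap<>();    // stands in for MySQL

    String read(String key) {
        String cached = cache.get(key);
        if (cached != null) return cached;          // cache hit: return directly
        String fromDb = db.get(key);
        if (fromDb != null) cache.put(key, fromDb); // miss: read DB, backfill cache
        return fromDb;
    }

    void update(String key, String value) {
        db.put(key, value);   // 1) update the database first...
        cache.remove(key);    // 2) ...then invalidate (delete) the cache entry
    }
}
```

Deleting rather than rewriting the cache entry on update keeps the logic simple: the next read repopulates the cache from the authoritative database value.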
3. Heavy use of the cache puts a lot of pressure on the cache server itself. How can Redis accesses be reduced?
When Redis pre-decrements inventory, an isOverMap is maintained in local memory as a mark: once a product is out of stock, its flag is set to true. Before each flash sale request touches Redis, the map is checked; if the flag is true, the product is sold out and failure is returned immediately, without a round trip to the Redis server.
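A sketch of that memory mark (names are illustrative; an AtomicInteger stands in for the Redis stock key, and a counter shows how many Redis round trips the local flag saves):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class MemoryMark {
    final Map<Long, Boolean> localOverMap = new ConcurrentHashMap<>(); // the memory mark
    final AtomicInteger redisStock;    // stands in for the Redis stock key
    int redisCalls = 0;                // counts simulated Redis round trips

    MemoryMark(int stock) { redisStock = new AtomicInteger(stock); }

    boolean trySeckill(long goodsId) {
        // Fast path: sold-out flag lives in local JVM memory, no Redis call.
        if (Boolean.TRUE.equals(localOverMap.get(goodsId))) return false;
        redisCalls++;                                   // slow path: hit "Redis"
        if (redisStock.decrementAndGet() < 0) {
            localOverMap.put(goodsId, true);            // mark sold out locally
            return false;
        }
        return true;
    }
}
```

After the first failed pre-decrement, every later request on that server is rejected purely in memory.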
4. Under high concurrency, how do you cope when a large number of requests cannot be processed in time, or even pile up?
Use a message queue to process requests asynchronously. Each incoming request is put onto the queue instead of being processed immediately, and background listeners subscribe to the queues of the different business flows, executing the business logic as messages arrive. This prevents exceptions such as running out of database connections when many requests hit the database at once.
5. How to ensure that a user cannot place repeated orders?
Solution: create a unique index on the flash sale order table over (user id, goodsId). The first insert succeeds, the second fails, and the transaction rolls back, so several simultaneous requests from one user cannot each grab a product.
A unique index means exactly that: once a field (or combination of fields) carries a unique index, the database checks on every insert whether the value already exists and performs the insert only if it does not.
Although this is a small trick, it is very practical in business development. For example, in a high-concurrency business, how does the database prevent two identical order numbers from being inserted concurrently? A unique index is one of the quickest answers; whether to rely on an index or to solve it in business code depends on the company's requirements.
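A sketch of what the unique index buys us. The "(user_id, goods_id) unique key" is simulated by a concurrent set whose add() is atomic, just as the database atomically rejects the second insert of a duplicate key (names are illustrative):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class OrderTable {
    // In MySQL this would be roughly: UNIQUE KEY (user_id, goods_id)
    private final Set<String> uniqueIndex = ConcurrentHashMap.newKeySet();

    // Returns true if the order "row" was inserted, false if the unique
    // index rejected a duplicate (in the real project the surrounding
    // transaction then rolls back).
    boolean insertOrder(long userId, long goodsId) {
        return uniqueIndex.add(userId + ":" + goodsId);
    }
}
```

Even if two requests from the same user race past every application-level check, at most one insert can win.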
6. How to solve the oversold phenomenon?
Oversold scenario: several users read the inventory at the same time, all see sufficient stock, and all issue the decrement at once, driving the inventory negative.
The simplest fix is to let the database enforce the limit when it decrements the stock: add stock_count > 0 to the SQL in the reduceStock method, so the row is updated only while stock_count is still positive, and the database's own atomicity prevents overselling.
@Update("update miaosha_goods set stock_count=stock_count-1 where goods_id=#{goodsId} and stock_count>0")
int reduceStock(MiaoshaGoods goods);  // returns rows updated: 0 means already sold out
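The WHERE stock_count > 0 clause makes "check and decrement" a single atomic step inside the database. The same idea can be sketched with a compare-and-set loop on an in-memory counter (illustrative only; in the project the guard lives in MySQL):

```java
import java.util.concurrent.atomic.AtomicInteger;

class StockGuard {
    private final AtomicInteger stockCount;

    StockGuard(int initial) { stockCount = new AtomicInteger(initial); }

    // Mirrors: UPDATE ... SET stock_count = stock_count - 1 WHERE stock_count > 0.
    // Returns true iff a "row" was updated, i.e. stock really was positive.
    boolean reduceStock() {
        while (true) {
            int current = stockCount.get();
            if (current <= 0) return false;  // the WHERE clause would match no row
            if (stockCount.compareAndSet(current, current - 1)) return true;
        }
    }

    int stock() { return stockCount.get(); }
}
```

However many threads race through reduceStock(), the counter can reach zero but never go negative — exactly the guarantee the SQL condition provides.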
7. What is the process of page staticization and what is browser cache?
The static HTML page is cached in the client's browser; only data travels over the wire, fetched asynchronously through the Ajax interface. Exchanging just the data reduces bandwidth and speeds up user access.
Browser caching stores a copy of requested web resources (HTML pages, images, JS, data, and so on) in the browser. When the same URL is requested again, the caching mechanism decides whether to answer with the local copy or to go back to the origin server. Most commonly, the browser caches pages the user has already visited; when the URL is visited again and the page has not changed, the browser serves the local copy instead of downloading it again, and re-downloads only when the site indicates that the resource has been updated.
8. Flash sale architecture design concept?
Rate limiting: since only a small number of users can succeed in a flash sale, most of the traffic must be throttled, and only a small fraction allowed through to the back end.
When the sale opens, users flood in at once, producing a sharp peak. Peak traffic is a major reason systems get overwhelmed, so turning an instantaneous spike into a steady flow over a period of time is a core idea in flash sale design; caching and message middleware are the usual tools for this peak shaving.
Asynchronous processing: a flash sale system is a high-concurrency system, and an asynchronous processing model greatly increases the concurrency it can absorb. Asynchrony is itself a form of peak shaving.
In-memory caching: the biggest bottleneck in a flash sale system is usually database reads and writes, which are disk I/O and therefore slow. Moving data or business logic into an in-memory cache raises efficiency enormously.
Scalability: to support more users and greater concurrency, design the system to scale elastically, adding machines as traffic grows. During events such as Double Eleven, Taobao and JD.com add large numbers of machines to absorb the transaction peak.
9. Design ideas for the flash sale system architecture?
Intercept requests upstream to reduce downstream pressure: a flash sale system sees huge concurrency but very few requests actually succeed, so if requests are not intercepted at the front, database read-write lock contention builds up and requests eventually time out.
Use caching: caching greatly improves the system's read and write speed.
Message queue: the queue shaves the peak and buffers the flood of concurrent requests. This is the asynchronous processing path: the back-end business pulls request messages from the queue at its own pace and processes them according to its capacity.
10. If inventory was decremented but the user never pays, how is the inventory restored so others can continue to buy?
Set a maximum payment window, say 30 minutes, and run a scheduled background task (for example with a Timer) that polls for pending orders older than 30 minutes (judged by the order status in the database), closes them, and restores the inventory.
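The scan itself can be sketched as below. The logic is pulled out into a method so it runs directly; in the project a scheduled task (a Timer, or a Spring @Scheduled method) would invoke it periodically. The in-memory order map and all names are illustrative:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class OrderCloser {
    static final Duration PAY_TIMEOUT = Duration.ofMinutes(30);

    final Map<Long, Instant> pendingOrders = new ConcurrentHashMap<>(); // orderId -> createdAt
    final AtomicInteger stock;

    OrderCloser(int stock) { this.stock = new AtomicInteger(stock); }

    // One scan: close every pending order older than the timeout and
    // give its reserved stock back so other buyers can claim it.
    int closeExpired(Instant now) {
        int closed = 0;
        for (Map.Entry<Long, Instant> e : pendingOrders.entrySet()) {
            if (Duration.between(e.getValue(), now).compareTo(PAY_TIMEOUT) >= 0) {
                if (pendingOrders.remove(e.getKey(), e.getValue())) {
                    stock.incrementAndGet();  // restore inventory
                    closed++;
                }
            }
        }
        return closed;
    }
}
```

Orders still inside the payment window are untouched; only overdue ones are closed and their stock returned.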
This concludes the Redis knowledge points of the project. (Originally published on the PHP Chinese website.)