Here is the current situation. The machine's CPU is very strong (an i7, I forget the exact model); by "strong" I mean that CPU usage stays very low during queries, as seen in htop. The memory is the weak point: only 8 GB, though it will definitely be increased later. The disk is an ordinary mechanical drive and will also be replaced eventually, but that is for later. What I can observe right now is that when a query touches more than 100,000 documents, the bottleneck appears in IO: IO is saturated, memory usage fills up, and other operations get stuck as a result. The impact is not huge, but it shows up as long waits for pages on the web.
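Before buying hardware, it is worth confirming that the working set really does exceed the 8 GB of RAM. Below is a minimal sketch with pymongo, assuming a mongod running the WiredTiger storage engine; the connection string is a placeholder:

```python
from pymongo import MongoClient

# Placeholder connection string; point it at the actual mongod.
client = MongoClient("mongodb://localhost:27017")
status = client.admin.command("serverStatus")

cache = status["wiredTiger"]["cache"]
configured = cache["maximum bytes configured"]
used = cache["bytes currently in the cache"]
read_in = cache["bytes read into cache"]

print(f"cache: {used / 2**30:.2f} GiB used of {configured / 2**30:.2f} GiB")
# If "bytes read into cache" keeps climbing while queries run, pages are
# being evicted and re-read from disk: the working set does not fit in
# RAM, and the mechanical disk becomes the visible bottleneck.
print(f"read into cache so far: {read_in / 2**30:.2f} GiB")
```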
Currently the following optimizations have been made:
1. Adjusted the Linux stack size limit (ulimit -s).
2. Created indexes.
3. Moved hot data that has not been modified recently (within the last month) into a separate collection (see the sketch after this list).
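For point 3, one common way to materialize such a read-mostly collection is an aggregation that ends in $out. This is only a sketch: the database name, collection names, and the updated_at field are assumptions, not the actual schema:

```python
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # hypothetical database name

# Select documents that have not been modified in the last month and
# write them into a dedicated collection that serves the hot reads.
one_month_ago = datetime.now(timezone.utc) - timedelta(days=30)
db["articles"].aggregate([
    {"$match": {"updated_at": {"$lt": one_month_ago}}},
    # $out must be the last stage; it overwrites articles_stable each run.
    {"$out": "articles_stable"},
])
```

Re-running this periodically (for example from cron) keeps the stable collection fresh without touching the write-heavy one.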
These three optimizations made things better, but problems remain: when a query has multiple conditions, the index is not used and performance is just as bad as before.
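An index being "invalid" on multi-condition queries usually means the query only has single-field indexes to work with, so the planner falls back to a collection scan. A compound index whose leading fields match the query's equality and range predicates typically fixes this; the field names below are hypothetical, and explain() shows whether the index is actually chosen:

```python
from datetime import datetime, timezone
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
coll = client["mydb"]["articles"]  # hypothetical names

# One compound index covering both predicates (equality field first,
# range field second), instead of two separate single-field indexes.
coll.create_index([("category", ASCENDING), ("created_at", ASCENDING)])

cutoff = datetime(2020, 1, 1, tzinfo=timezone.utc)
plan = coll.find({"category": "news",
                  "created_at": {"$gte": cutoff}}).explain()
# The winning plan should contain an IXSCAN stage; a COLLSCAN here means
# the index is still not being used for this shape of query.
print(plan["queryPlanner"]["winningPlan"])
```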
So I feel stuck. How can this be handled well? If there were another server with the same configuration, could the situation be solved through something like sharding?
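On the sharding question: a second identical server alone is not enough, since a sharded cluster also needs config servers and a mongos router. Once such a cluster exists, enabling sharding looks roughly like this; the database, collection, and shard key below are placeholder assumptions:

```python
from pymongo import MongoClient

# Connect to a mongos router, not directly to a mongod.
client = MongoClient("mongodb://mongos-host:27017")  # placeholder host

# Hypothetical database and collection; a hashed shard key spreads
# documents (and therefore IO and cache pressure) evenly across shards.
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.articles",
                     key={"_id": "hashed"})
```

Sharding does split the working set, so two 8 GB machines give you more usable cache in total, but it adds real operational complexity compared with simply adding RAM to one box.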
Thanks
For MongoDB, optimization can be done by: 1. adding more memory; 2. adding more memory; 3. adding as much memory as the motherboard supports.
You could consider switching to a Redis + MySQL solution.
For reference:
1. First find out which of your application's CRUD operations are slow, then optimize around those operations (see the profiler sketch below).
2. Generally speaking, adding memory and replacing the disk with an SSD are the most common fixes. There are also some standard parameter settings worth tuning; refer to MongoDB's production notes and configure accordingly.
https://docs.mongodb.com/manu...
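For point 1, a minimal sketch of finding the slow operations with MongoDB's built-in database profiler; the 100 ms threshold and database name are assumptions:

```python
import pymongo
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["mydb"]  # hypothetical database name

# Profiler level 1 records only operations slower than slowms
# into the capped system.profile collection.
db.command("profile", 1, slowms=100)

# ... run the normal workload for a while, then list the worst offenders.
for op in db["system.profile"].find().sort(
        "millis", pymongo.DESCENDING).limit(10):
    print(op.get("op"), op.get("ns"), op.get("millis"), "ms")
```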
Love MongoDB! Have fun!