
How do I design the fastest way to fetch, cache, and search a large amount of multi-level category data?

WBOY
Release: 2023-03-02 07:04:02
Original

1. The data set is large.
2. The data is organized into multi-level categories.
3. The full data set is fetched up front.
4. On top of the full data set there is a search: for any node whose name matches, all of its ancestors and all of its descendants must be returned, however deep the match sits, and the matching names are highlighted in red.

Right now I use a single method for both the initial load and the search. I first collect the IDs of all matching categories together with their ancestor and descendant IDs (when no keyword is given, this is simply every ID). This set contains many duplicates, so I deduplicate it, then fetch the full records for those IDs, then loop over them to mark the names matching the keyword in red (this part is shared with the search path), and finally loop again to assemble the flat rows into a tree structure.
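The steps described above (collect the branch of every match, deduplicate, highlight, then fold the flat rows into a tree) can be sketched roughly like this. The `id`/`pid`/`name` field names and the convention that a root node has `pid = 0` are assumptions; adjust them to your schema.

```php
<?php
// Minimal sketch: keep every node whose name matches the keyword, plus all
// of its ancestors and descendants; wrap matching names in red; build the
// tree from the surviving flat rows.

function searchTree(array $rows, string $keyword): array
{
    $byId = [];
    $childrenOf = [];
    foreach ($rows as $row) {
        $byId[$row['id']] = $row;
        $childrenOf[$row['pid']][] = $row['id'];
    }

    $keep = [];

    // Walk upward: a match keeps its whole ancestor chain.
    $markAncestors = function (int $id) use (&$byId, &$keep) {
        while ($id !== 0 && !isset($keep[$id])) {
            $keep[$id] = true;
            $id = $byId[$id]['pid'];
        }
    };

    // Walk downward: a match keeps its whole subtree.
    $markDescendants = function (int $id) use (&$childrenOf, &$keep, &$markDescendants) {
        foreach ($childrenOf[$id] ?? [] as $childId) {
            $keep[$childId] = true;
            $markDescendants($childId);
        }
    };

    foreach ($rows as $row) {
        if ($keyword !== '' && strpos($row['name'], $keyword) !== false) {
            $byId[$row['id']]['name'] =
                '<span style="color:red">' . $row['name'] . '</span>';
            $markAncestors($row['id']);
            $markDescendants($row['id']);
        }
    }

    // Fold the surviving flat rows into a tree in one pass, by reference:
    // a kept child attaches to its kept parent, everything else is a root.
    $tree = [];
    foreach ($byId as $id => &$node) {
        if (!isset($keep[$id])) {
            continue;
        }
        if ($node['pid'] !== 0 && isset($keep[$node['pid']])) {
            $byId[$node['pid']]['children'][] = &$node;
        } else {
            $tree[] = &$node;
        }
    }
    unset($node);

    return $tree;
}
```

For the keyword-less initial load you would mark every node as kept instead of running the match loop; everything else stays the same, so the load path and the search path can share the tree-building code.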

My worry is that with a large amount of data, all of this looping will be slow.

Should I optimize by splitting this into two separate methods, one to initialize the full data set and one to search? Or is there a better way?

I have also added a cache (using the S method cache of the ThinkPHP framework) so the initial load does not have to hit the database every time; it just reads the cache. A search, however, still has to query the database. With a small data volume the search is fine, but the number of loop iterations is too high. Is there a way to run a secondary filter directly over the cached data?
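One option for that "secondary filtering" is to cache the flat category list once and filter it in memory, so a search never goes back to the database. Below is a minimal sketch with the cache read/write passed in as callables; in ThinkPHP 3.x these would be thin wrappers around `S($key)` and `S($key, $value, $ttl)`. The `category_flat` key and the 600-second TTL are arbitrary choices for illustration.

```php
<?php
// Load the flat category list through a cache; fall back to the database
// loader only on a cache miss.
function cachedCategories(callable $cacheGet, callable $cacheSet, callable $loadFromDb): array
{
    $rows = $cacheGet('category_flat');
    if ($rows === false) {                    // miss: query once, then cache
        $rows = $loadFromDb();
        $cacheSet('category_flat', $rows, 600);
    }
    return $rows;
}

// "Secondary filtering": run the keyword match over the cached array in
// memory instead of issuing a LIKE query per search.
function filterCached(array $rows, string $keyword): array
{
    return array_values(array_filter(
        $rows,
        fn (array $row): bool => strpos($row['name'], $keyword) !== false
    ));
}
```

A whole-table cache only pays off while the table fits comfortably in memory, and the key must be invalidated whenever a category is added, renamed, or moved.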
Thanks for sharing

Reply content:


  1. Don't go by feel: in most cases an in-memory loop is several orders of magnitude faster than a database query, unless you are issuing a query from inside the loop.

  2. In most cases, if a SELECT ... WHERE id IN (...) hits the primary key or an index and the result set stays within a few hundred rows, the efficiency is acceptable. Decide based on your actual situation.

  3. If the data volume is genuinely large, there are many filter conditions, or you need tokenized (word-segmented) full-text search, consider a dedicated search engine such as Elasticsearch or Sphinx.
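Point 2 above, a single SELECT over the deduplicated ID set, might look like this with PDO. The `category` table and its column names are assumptions; the placeholder list is generated to match the number of IDs.

```php
<?php
// Fetch all rows for a set of category IDs in one query. With IN (...) on
// the primary key the lookup stays an indexed scan, and deduplicating the
// IDs first keeps the placeholder list as short as possible.
function fetchByIds(PDO $pdo, array $ids): array
{
    if ($ids === []) {
        return [];
    }
    $ids = array_values(array_unique($ids));
    $marks = implode(',', array_fill(0, count($ids), '?'));
    $stmt = $pdo->prepare(
        "SELECT id, pid, name FROM category WHERE id IN ($marks)"
    );
    $stmt->execute($ids);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}
```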

source:php.cn