I was working on a shopping-mall website, and one of the modules let users customize the homepage. My quick solution was to extract the page's HTML content and write it to a file with PHP; when the homepage is requested, I read that file and output it into the template. "Reading the database" in my question refers to reading the customization data, not the HTML content.
I'm not in a hurry now; I just want to understand:
If many people access the site concurrently, is it faster to read from a file, or to read the data from the database?
If a lot of content is written to the file and the file grows large, does reading it become slow?
With the TP (ThinkPHP) framework I'm using, would it be better if I loaded the template with include?
One more question: how do I test concurrency? What keywords should I search for?
PS: I'm new to the job, so please give me some advice.
The database also stores its data in files. Setting aside the database's in-memory cache, simply reading a file is of course faster than reading from the database, because a database query adds parsing and other processing on top of the underlying file read. However, if there are very many files, you also have to consider file-system lookup speed: with too many files, finding one can become slower than a database query.
How long a file read takes naturally depends on the file's size, but if everything in the file is content you need, that time is unavoidable. If you only want part of the file, you can move the file pointer with a seek and read just that portion.
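As a minimal sketch of reading only part of a file by seeking, the path and byte offsets below are illustrative, not from the original site:

```php
<?php
// Hypothetical cache file: 20 bytes of header, then a 17-byte <main> section.
$path = sys_get_temp_dir() . '/homepage_cache.html';
file_put_contents($path, '<header>...</header><main>BODY</main>');

$fp = fopen($path, 'rb');
fseek($fp, 20);           // skip the first 20 bytes (the <header> part)
$slice = fread($fp, 17);  // read only the 17-byte <main> section
fclose($fp);

echo $slice;              // prints "<main>BODY</main>"
unlink($path);
```

This avoids loading the whole file into memory when only a known region is needed.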
The templates of the TP framework are compiled: the template file you wrote is not used during actual execution, the compiled template is. So you can safely use include without worrying about performance.
Strictly speaking, concurrency testing is called stress testing (load testing). Just search for stress-testing tools and plans.
In response to your questions:
Under high concurrency, hitting the database directly will definitely be slow. At the very least, put a cache layer in front of the database. Cache-layer efficiency:
<code>files < memory (memcache, redis) < cache arrays; files themselves are a kind of cache</code>
Looking at your needs, it seems you already have a step that generates static files. Here are a few keywords for you:
ob_start
pseudo-static (URL rewriting)
CDN
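The first keyword, ob_start, is how static files are typically generated in PHP: capture the rendered output in a buffer, then write it to disk. A minimal sketch (the cache path and content are placeholders):

```php
<?php
// Hypothetical path; on a real site this would live under the web root
// so the server can serve the file without running PHP at all.
$cacheFile = sys_get_temp_dir() . '/homepage.html';

ob_start();                       // start capturing all output
echo '<h1>Shop Homepage</h1>';    // normally: render/include the template here
$html = ob_get_clean();           // stop buffering and grab the captured output

file_put_contents($cacheFile, $html);
echo file_get_contents($cacheFile);   // prints "<h1>Shop Homepage</h1>"
unlink($cacheFile);
```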
If there is a lot of file content to write, there is no way around the write itself; the usual approach is to solve it at the architecture level with a cache layer rather than by simply writing files.
If the file being read (especially a PHP file) is large, consider turning on OPcache to speed things up.
Including a template file is an include executed by PHP, so its efficiency is simply that of PHP's include.
If the pages being visited are all static files and only need sub-templates embedded, SSI (in Apache or nginx) will be much faster than PHP's include.
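For reference, an SSI directive looks like the fragment below; this assumes the server has SSI enabled (mod_include with `Options +Includes` on Apache, or `ssi on;` on nginx), and the paths are illustrative:

```html
<!-- Embed a sub-template into a static page without running PHP -->
<!--#include virtual="/header.html" -->
```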
So, the answer to your question: plain HTML is fastest, since no PHP needs to execute, but it has to be generated in advance.
For concurrency testing, start with the keyword apache benchmark, and from there you will find plenty of the material you want.
First, there is not much difference in their speed. Files are a bit faster, but data is easier to manage when it all lives in the database.
Second, it doesn't really matter which is faster. CMS-style sites like this are optimized by generating static pages, and once the pages are static, page-generation time no longer matters.
First, memory > files.
Second, the database also stores its data in files (though the database does have a query cache).
For non-relational data, saving to files is of course faster.
But you cannot store a huge number of files in the same directory (addressing becomes slow). You can shard them into subdirectories based on the file name, for example:
File name: dsaferdfsasxfsfsdf.dat
Take its first two characters as a first-level directory and store it as:
ds/dsaferdfsasxfsfsdf.dat
lokljljoiomlkml.dat >> lo/lokljljoiomlkml.dat
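The sharding scheme above can be sketched as a small helper; `shardedPath` and the `/cache` base directory are hypothetical names, not from the original code:

```php
<?php
// Map a cache file name to a two-level path using its first two characters,
// so no single directory accumulates too many files.
function shardedPath(string $baseDir, string $filename): string {
    $sub = substr($filename, 0, 2);          // e.g. "ds" from "dsafer...dat"
    return $baseDir . '/' . $sub . '/' . $filename;
}

echo shardedPath('/cache', 'dsaferdfsasxfsfsdf.dat'), "\n"; // /cache/ds/dsaferdfsasxfsfsdf.dat
echo shardedPath('/cache', 'lokljljoiomlkml.dat'), "\n";    // /cache/lo/lokljljoiomlkml.dat
```

In practice the first level is often derived from a hash of the name (e.g. `md5`) so files spread evenly across subdirectories.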
TP comes with a simple cache method, S(), which defaults to the file driver. If the cache driver is Memcached or Redis instead, it should be faster than files, provided the Memcached or Redis server is on the local machine or on a gigabit-or-faster LAN.
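To make the file-driver idea concrete, here is a minimal standalone sketch of a file-based cache, loosely in the spirit of what a file driver does; the function names and storage scheme are illustrative, not ThinkPHP's real implementation:

```php
<?php
// Write a value to a per-key cache file (serialized PHP data).
function cache_set(string $key, $value, ?string $dir = null): void {
    $dir = $dir ?? sys_get_temp_dir();
    file_put_contents($dir . '/' . md5($key) . '.cache', serialize($value));
}

// Read a value back, or return null if the key was never cached.
function cache_get(string $key, ?string $dir = null) {
    $dir = $dir ?? sys_get_temp_dir();
    $file = $dir . '/' . md5($key) . '.cache';
    return is_file($file) ? unserialize(file_get_contents($file)) : null;
}

cache_set('homepage_custom', ['banner' => 'spring-sale']);
$data = cache_get('homepage_custom');
echo $data['banner'];   // prints "spring-sale"
```

A memory-backed driver (Memcached, Redis) replaces the disk read/write with a network round trip to RAM, which is why it wins when the cache server is close by.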
For concurrency testing, you can use the ab command (ApacheBench) on Linux and Darwin (OS X): send a batch of requests with a concurrency of, say, 100 (this value has an upper limit that depends on system settings; the default is generally 256).
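A typical invocation looks like the one below; the URL is a placeholder for your own page, and the request counts are just examples (this needs a running server, so run it against your own site):

```shell
# -n: total number of requests, -c: how many to run concurrently
ab -n 1000 -c 100 http://127.0.0.1/
```

ab prints requests per second, mean time per request, and a latency distribution, which is enough to compare the file-read and database-read versions of your homepage.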