eAccelerator and memcached are currently two of the more mainstream caching/acceleration tools available for PHP.
eAccelerator was developed specifically for PHP, while memcached is not tied to PHP and can be used from many other languages as well.
Main functions of eAccelerator:
1. Caches the compiled execution code of PHP files: when the cached code is called again, it is read directly from memory, which greatly speeds up PHP execution by skipping recompilation.
2. Provides shared memory operation functions: users can save their own non-resource data into shared memory and read it back at any time.
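Point 2 can be sketched with eAccelerator's documented shared-memory calls — eaccelerator_put(), eaccelerator_get() and eaccelerator_rm() — for storing, reading and removing non-resource values. The guard lets the script run even where the (long-unmaintained) extension is absent; the key name and sample value are illustrative only.

```php
<?php
// Sketch of eAccelerator's shared-memory API. The 'site_config' key and
// its value are hypothetical examples, not part of the extension.
$available = function_exists('eaccelerator_put');
if ($available) {
    eaccelerator_put('site_config', ['theme' => 'dark'], 60); // keep for 60s
    $config = eaccelerator_get('site_config');                // read it back
    eaccelerator_rm('site_config');                           // remove when stale
} else {
    echo "eAccelerator extension not loaded\n";
}
```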
Main functions of memcached:
Provides shared memory operation functions to save and read data
What the two have in common: both provide shared memory operation functions that can be used to save and read your own data.
The difference between the two:
eAccelerator exists as a PHP extension, so it can read and write shared memory only while PHP is running, and in general that shared memory can only be accessed by the PHP processes on the same machine.
At the same time, eAccelerator can cache the execution code of PHP programs to improve the loading and execution speed of the program.
memcached is mainly a standalone shared-memory server; its PHP extension is only a client library that connects PHP to memcached, much like the MySQL extension. memcached is therefore completely independent of PHP, and the data it holds can be shared by different programs.
Based on the differences between the two, we use them where they are really needed:
eAccelerator is mainly used to speed up a single PHP machine and to cache intermediate data. It is very practical when real-time requirements are high but the volume of data operations is small.
memcached is used in distributed or clustered systems, where multiple servers can share data. It is very practical when real-time requirements are high and the volume of data operations is large.
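The way memcached is typically used in both cases is the cache-aside pattern: read from the cache first, and only fall back to the database on a miss. Below is a minimal sketch of that pattern; FakeCache is a hypothetical in-process stand-in with the same get/set shape as a real memcached client (which returns false on a miss), and fetchUserFromDb() merely simulates a database query.

```php
<?php
// Cache-aside sketch. FakeCache and fetchUserFromDb() are hypothetical
// stand-ins for a real memcached client and a real database query.
class FakeCache {
    private array $store = [];
    public function get(string $key) {
        return $this->store[$key] ?? false;   // false on a miss, like memcached
    }
    public function set(string $key, $value, int $ttl = 0): void {
        $this->store[$key] = $value;          // ttl ignored in this stub
    }
}

function fetchUserFromDb(int $id): array {
    return ['id' => $id, 'name' => "user$id"]; // pretend this hits the DB
}

function getUser(FakeCache $cache, int $id): array {
    $key = "user:$id";
    $user = $cache->get($key);
    if ($user === false) {                    // cache miss: go to the DB
        $user = fetchUserFromDb($id);
        $cache->set($key, $user, 300);        // cache the row for 5 minutes
    }
    return $user;
}

$cache = new FakeCache();
$u = getUser($cache, 7);   // first call queries the "DB" and fills the cache
$u = getUser($cache, 7);   // second call is served straight from the cache
```

With a real memcached deployment, FakeCache would be replaced by a client connected to the shared server, so every machine in the cluster benefits from the same cached entry.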
Correctly understanding memcached
At first, I heard that memcached caches data in memory so you can then operate on that data (both queries and updates), which sounded great: for a while you would not need to touch the database at all.
But I kept wondering about one thing. Querying the cache is straightforward, but how would concurrency be handled when updating in memory? Could memcached really have such a feature built in? If so, that would be amazing.
As it turns out, things are not that way; this understanding of memcached is incorrect.
memcached is the same as any other cache: once the underlying data is updated, the cached items become stale.
The explanations of memcached I later read online, written by more experienced developers, confirm this point.
So you should not expect to update memcached directly and skip the database entirely.
I used to think that the set() method it provides was for updating the database; that was just wishful thinking on my part.
In fact, this method caches database records in memcached and assigns them a validity period.
Now I understand why the content in our memcached had not changed even after I deleted the record from the database.
When we called set(), we did not specify an expiration time, so it defaulted to 0, which means never expire: as long as the memcached server is not restarted (and the item is not evicted to free memory), the entry remains.
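The expiration behavior described above can be sketched in a few lines. ExpiringCache is a hypothetical in-process stub, not the real memcached server; it only models the convention the text describes, where a ttl of 0 means "never expire" and other ttls are seconds from now.

```php
<?php
// Hypothetical stub modeling memcached-style expiration: ttl 0 = never expire.
class ExpiringCache {
    private array $store = [];   // key => [value, expiresAt|null]
    public function set(string $key, $value, int $ttl = 0): void {
        $expiresAt = $ttl === 0 ? null : time() + $ttl;   // 0 => no expiry
        $this->store[$key] = [$value, $expiresAt];
    }
    public function get(string $key) {
        if (!isset($this->store[$key])) return false;
        [$value, $expiresAt] = $this->store[$key];
        if ($expiresAt !== null && time() >= $expiresAt) {
            unset($this->store[$key]);   // lazily drop stale entries on read
            return false;
        }
        return $value;
    }
}

$cache = new ExpiringCache();
$cache->set('record', 'row #42');     // no ttl given: defaults to 0, lives "forever"
$cache->set('session', 'abc', 1);     // expires one second from now
sleep(2);
$stillThere = $cache->get('record');  // still present after the wait
$gone       = $cache->get('session'); // expired: false, like a miss
```

This is exactly why a deleted database row can linger in the cache indefinitely: nothing ever tells the 'record' entry to go away.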
So in our RoR project we use the cache to reduce database reads, but we cannot expect memcached to spare us from updating the database.
If you really never needed to update the database, we would truly have entered a post-database era, haha. Probably unlikely, unless we could guarantee that users arrive in a queue, one after another.
It is better to find other ways to reduce the pressure caused by updates.
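One common such way, consistent with the point above that the database stays the source of truth, is update-then-invalidate: write to the database first, then delete the cache entry so the next read repopulates it. A minimal sketch, where FakeCache and the $db array are hypothetical stand-ins for a real memcached client and a real table:

```php
<?php
// Update-then-invalidate sketch. FakeCache and $db are hypothetical
// stand-ins for a real memcached client and a real database table.
class FakeCache {
    private array $store = [];
    public function get(string $key) { return $this->store[$key] ?? false; }
    public function set(string $key, $value): void { $this->store[$key] = $value; }
    public function delete(string $key): void { unset($this->store[$key]); }
}

$db = ['user:7' => 'Alice'];                  // pretend database row

function updateUser(array &$db, FakeCache $cache, string $key, string $name): void {
    $db[$key] = $name;                        // 1. write to the database first
    $cache->delete($key);                     // 2. then invalidate the cached copy
}

$cache = new FakeCache();
$cache->set('user:7', $db['user:7']);         // cache warmed by an earlier read
updateUser($db, $cache, 'user:7', 'Bob');
$miss  = $cache->get('user:7');               // stale copy is gone: a miss
$fresh = $miss === false ? $db['user:7'] : $miss;  // next read refills from the DB
```

Deleting rather than overwriting the entry keeps the cache from ever holding a value the database write did not actually commit.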