Memcache (memory cache)
Memcache is a distributed, high-speed caching system developed by Brad Fitzpatrick of LiveJournal and is currently used by many websites to improve access speed. For large websites that access the database frequently, the improvement in access speed is very significant. It is open-source software released under the BSD license. [Excerpted from Baidu Encyclopedia]
Official website: http://memcached.org/
Distributed: A structure in which multiple Memcache servers manage data.
Cache system: data queried by the user is cached in memory so that next time it can be obtained directly from memory, reducing disk I/O overhead.
What is NoSQL? (SQL = relational databases)
Answer: MySQL is called a relational database (its main features are a two-dimensional table structure, i.e. rows and columns, and relationships between tables). Other examples: Oracle (Java), DB2, SQL Server.
Non-relational database: a database (a system that stores data) that does not use SQL statements for queries and has no two-dimensional tables in the strict sense; its data structure is essentially one huge hash table (key-value).
Hash table advantage: the time complexity is O(1); as the amount of data grows, query time does not increase by an order of magnitude.
Hash table disadvantage: hash collisions, where different keys hash to the same slot.
key  | value
key1 | Asion
key2 | 12
key3 | shenzhen
key4 | iphone
Let selinux take effect immediately
2.1 Environment preparation
In a Linux environment, build tools such as gcc, g++ (gcc-c++), make (makefile), cmake, autoconf (configure) and libtool are required.
Under Linuxnetworking, use the following command
# yum install -y gcc make cmake autoconf libtool
-y: answer yes automatically; no interactive confirmation is required
2.2 Compile and install memcached
memcached depends on the libevent library, so libevent needs to be installed first. Download the stable versions from their respective official websites.
libevent official website: http://libevent.org/
memcache official website: http://memcached.org/
Compile and install libevent first, then compile and install memcached. At the same time, when installing memcached, you need to specify the installation path of libevent
Specific steps:
a. Install libevent: upload the source package, decompress it, then compile and install
# ./configure --prefix=/usr/local/libevent && make && make install
b. Install memcache, decompress, compile and install
# ./configure --prefix=/usr/local/memcached --with-libevent=/usr/local/libevent && make && make install
2.3 Starting memcached
# /usr/local/memcached/bin/memcached -m 64 -p 11211 -u nobody -vv
Note: Memcached starts successfully at this time, but the information is output to the console.
If you want memcached to be started as a service in the background, just add the -d option (daemon background)
# /usr/local/memcached/bin/memcached -m 64 -p 11211 -u nobody -d
How to check whether the server starts normally?
# ps axu | grep memcached
If you need to view parameter information, use memcached -h to view help:
The communication between the memcached client and server uses a very simple, text-based protocol, similar to the HTTP protocol, so you can interact with the server directly via telnet.
Use telnet to operate it (type quit to exit)
# telnet server_IP 11211
After connecting, use Ctrl+] to enable the telnet echo.
2. Basic commands:
Learn memcache’s add, delete, modify and query commands:
add key flag expire length
key: name
flag: a flag bit; 1 means memcache stores the value as a string
expire: expiration time (see the notes on the expire formats below)
length: data length (B)
※add increase
# add name 1 0 2 # Add a value whose key is name on the memcache server; the length is 2 bytes and expire 0 means it is valid permanently (until evicted)
How to understand expire?
It sets the cache validity period, and there are three formats: 0 (never expires; the item stays until it is evicted), a number of seconds (relative time, used when it is no more than 30 days), or a Unix timestamp (an absolute expiry time).
Note that even "permanent" items can still be evicted under the Least Recently Used (LRU) principle when memory runs out.
※delete Delete
# delete key
※replace replace
# replace key flag expire length
※get Get
# get key
※set If the key already exists, its data is replaced; if not, it is added
# set key flag expire length
If name exists: its value is replaced
If age does not exist: the key-value pair is added
※incr increase
# incr age NUMBER
※decr decreases
# decr age NUMBER
※stats show statistics about the memcache server
# stats
※flush_all Clear all data
# flush_all
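For reference, a short telnet session might look like the following (the key and value are only examples). The data is sent on its own line and must be exactly as many bytes as declared; the server answers with STORED, VALUE ... / END, DELETED and so on:
# telnet 127.0.0.1 11211
set name 1 0 5          <- key=name, flag=1, expire=0, length=5 bytes
asion                   <- the 5 bytes of data
STORED
get name
VALUE name 1 5
asion
END
delete name
DELETED
quit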
Beta (test version): generally has some small bugs that users need to find through use; problems can be reported to the developers and then fixed.
Alpha (internal test version): the version used internally during development. It generally has many bugs, but new features are often added first in this version (usually only early adopters try it, and there may be some compensation for doing so).
Stable (stable version): basically bug-free and able to run stably.
Notes on using ftp:
# cls
# cd
# vim .bashrc
Under Linux, in vim last-line mode:
# :x (lowercase) saves and exits, equivalent to :wq
Under Linux, in vim command mode:
# ZZ (capital) saves and exits
c. Run phpize using its absolute path
# /usr/local/php/bin/phpize
d. Use the configure file generated above to collect system information. There is no need to specify the installation path
# ./configure --with-php-config=/usr/local/php/bin/php-config # tell configure where to find the php-config tool
e. Execute compilation and installation
# make && make install
Note: You can view the structure after the above command is executed
# ls /usr/local/php/lib/php/extensions/no-debug-non-zts-20090626/
Note: How to check the location of the php.ini configuration file under Linux?
Solution: phpinfo();
Note: Under Linux, be sure to back up the configuration file before modifying it
php.ini-backup-2016-1-12
Note: What is a .so file?
A .so file is a shared object (dynamic library) under Linux, similar to a .dll file under Windows.
Note: Under Linux, you can use the following to stop a service:
# pkill -9 httpd
# ps aux | grep httpd
Add the address generated above to the php.ini configuration file, as follows
extension_dir=/usr/local/php/lib/php/extensions/no-debug-non-zts-20090626/
extension=memcache.so
Create a test.php file to test whether php has a memcache module
Use PHP to operate memcache: save and get values (a sketch follows).
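A minimal test.php might look like the sketch below (the host, port, key and value are only examples; it assumes the memcache extension configured above has been enabled in php.ini):
<?php
// test.php - check that the memcache extension is loaded, then set and get a value
var_dump(extension_loaded('memcache'));    // should print bool(true)

$mc = new Memcache();
$mc->connect('127.0.0.1', 11211) or die('could not connect to memcached');

// set(key, value, flag, expire): expire 0 means "never expires" (until evicted by LRU)
$mc->set('name', 'asion', 0, 0);
var_dump($mc->get('name'));                // string(5) "asion"

$mc->close();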
Note: A single value stored in Memcache can occupy at most 1 MB.
Note: Memory fragmentation will always exist, it just depends on which method can minimize memory fragmentation.
1. What is memory fragmentation?
When using this kind of in-memory cache system, constant allocation and freeing of memory leaves small fragments that can no longer be used; this phenomenon is called memory fragmentation. (In the original diagram, the small red blocks represent space that the system can no longer use.)
Memcache manages its memory with a slab allocator (each slab page is 1 MB).
The smallest unit is called a chunk: the container in which one piece of data is stored.
A slab is made up of multiple chunks; within one slab class, all chunks are the same size.
Note: If the 122-byte slab class is full and a new 100-byte item arrives, where will it be stored?
Answer: It definitely will not go into the 144-byte class; it still goes into a 122-byte chunk, and the LRU algorithm is used to free space for it.
When memcached is started, it will organize slab classes according to a certain size, which can be specified by -f
The default is 1.25, and the ratio between adjacent chunks is the increase factor. You can adjust the size of the cache factor according to the business of your website.
Because every business is different, the minimum chunk size it needs also differs. This parameter lets the system fit our own business better, because the chunk sizes can be tuned to the size of our data.
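As a rough illustration of the ladder of chunk sizes that a growth factor produces (the starting size is an assumption; the real value depends on -n and the per-item overhead of your memcached build):
<?php
// Illustrative only: how a growth factor of 1.25 produces successive chunk sizes.
$chunk  = 88;       // assumed starting chunk size in bytes
$factor = 1.25;     // the -f growth factor (default 1.25)
for ($class = 1; $class <= 8; $class++) {
    printf("slab class %d: chunk size %d bytes\n", $class, $chunk);
    $chunk = (int)ceil($chunk * $factor);
}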
Memcached does not internally monitor whether records have expired. Instead, it checks a record's timestamp when the record is fetched to see whether it has expired. This behaviour is called lazy expiration. The benefit is that memcached spends no CPU time on expiration monitoring.
For example: set(name, asion, 0, 3600) will expire after 3600 seconds. After it expires, it will not be automatically deleted. Only when get is queried, it will detect whether it has expired and delete it if it expires.
For example: # add name 1 8 2 — after 8 seconds, does the item become invalid, or does it no longer exist? Analyse it with stats.
Analysis: It is invalid, not non-existent. Only the next time it is retrieved, memcache will detect whether it has expired and delete it if it expires
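A small PHP sketch of this behaviour (host, port, key and value are only examples); the expired item is only noticed, and discarded, at the moment it is fetched:
<?php
// Lazy expiration seen from the client side
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);

$mc->set('name', 'asion', 0, 3);    // expires 3 seconds from now
var_dump($mc->get('name'));         // string(5) "asion"

sleep(4);                           // wait until after the expiry time
// memcached has NOT actively deleted the item; it only checks the timestamp
// now, when we try to get it, so this returns false.
var_dump($mc->get('name'));         // bool(false)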
Memcached will give priority to using the space of timed-out records, but even so, there will be insufficient space when appending new records. In this case, a mechanism called Least Recently Used (LRU) must be used to allocate space.
As the name suggests, this is a mechanism that deletes the least recently used records. When memcached's memory is insufficient (when no new space can be obtained from the slab class), it looks for the least recently used records, frees them, and allocates that space to the new records. From the practical perspective of caching, this model is ideal.
When the data space in memcache (64 MB by default) is full, can data still be stored?
Answer: It can be stored. Expired data needs to be deleted. If it has not expired, delete the least active data to make space for adding data later.
For example: take the 122-byte slab class as an example. When it is full and a 100-byte item comes in, how is that handled? (Even an item set to be permanently valid can be evicted.)
Analysis: Memory management LRU algorithm, FIFO algorithm
Note: If you press Ctrl+S in vim (which freezes the terminal), press Ctrl+Q to resume.
-p listening port
-l the IP address to listen on (by default, all addresses of the local machine)
-d start starts the memcached service
-d restart Restart the memcached service
-d stop|shutdown shut down the running memcached service
-d install install memcached service
-d uninstall uninstall memcached service
-u run as the specified user (only valid when started as root)
-m Maximum memory usage in MB. Default 64MB
Note: If the system is 32-bit, the maximum limit is 2G, if the system is 64-bit, there is no limit.
-M Return an error when memory is exhausted instead of deleting the item
-c Maximum number of simultaneous connections, default is 1024
-f block size growth factor, default is 1.25
-n minimum space allocated per item (key + value + flags); default is 48 bytes
-h show help
-v output warning and error messages
-vv prints the client’s request and return information
-i prints the copyright information of memcached and libevent
Session storage options: 1. files 2. MySQL
Question: If there are too many session files in a folder and retrieval becomes slow, how to deal with it?
Answer: Layered processing
Use Memcache to save, Memcache uses distributed storage (use multiple Memcache for storage)
Use memcache to save session
1. Modify the php.ini file, the configuration information is as follows
session.save_handler = memcache #Represents using memcache to save session
session.save_path ="tcp://127.0.0.1:11211" # Specify the address and port of the memcache server
2. Start the session and save data into it (see the sketch below)
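With the two php.ini settings above in place, opening and saving a session needs no special code; a minimal sketch (file names and values are only examples):
<?php
// set_session.php - with session.save_handler = memcache, the $_SESSION data
// is written to the memcached server configured in session.save_path.
session_start();
$_SESSION['username'] = 'asion';
echo 'session id: ' . session_id();

// get_session.php - a later request carrying the same session cookie reads the
// data back from memcached transparently:
// session_start();
// echo $_SESSION['username'];      // asion
// (the raw data can usually be inspected on the server with: get <session id>)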
What is distributed?
Due to the limited service capability of a single memcache, multiple memcache can be used to provide caching functions. This architecture is called memcache's distributed cache (cluster) storage system
How to achieve it?
It can be understood this way, how to distribute data to various Memcache servers.
Distribution is implemented on the client side. Before saving, a certain algorithm decides which memcache server the data is saved to; when fetching, the same algorithm is applied to the same key, so the data is read back from the corresponding memcache server.
Distributed Algorithm
Hash the key, divide by the number of servers, and use the remainder to decide which memcache server the value is saved to. The hash function is typically crc32, e.g. crc32(key) % 3 for three servers.
The crc32() function can turn a string into a 32-bit integer
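A minimal sketch of this remainder-based distribution in PHP (the three server addresses are only examples):
<?php
// Pick a server by taking crc32(key) modulo the number of servers.
$servers = [
    ['192.168.1.221', 11211],
    ['192.168.1.222', 11211],
    ['192.168.1.223', 11211],
];

function pick_server(string $key, array $servers): array {
    // abs() guards against negative crc32() results on 32-bit builds
    $index = abs(crc32($key)) % count($servers);
    return $servers[$index];
}

list($host, $port) = pick_server('name', $servers);
$mc = new Memcache();
$mc->connect($host, $port);
$mc->set('name', 'asion', 0, 3600);
// To read the value back, the SAME algorithm must be applied to the same key.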
Disadvantage: when a server goes down or a new server is added, almost all cached data becomes invalid because the divisor has changed. A loose formula: hit rate ≈ data still found / total ≈ 1/N, where N is the number of servers.
Problems caused: When memcache goes down, the cached data becomes invalid. At this time, the pressure on MySQL will increase sharply.
At this point MySQL crashes; after a restart it crashes again within a short time; then, a little later (once part of the cache has been rebuilt), it crashes again. As time goes by, MySQL finally becomes stable and the cache system is fully re-established. Because the cached data did not exist, every request had to be forwarded to MySQL. This phenomenon is called the memcache avalanche.
Overview:
Note: Whenever a memcache node goes down, some data is lost. We must find a way to minimise that loss: with consistent hashing, even if one server goes down, only the data that was on that server is affected.
Virtual nodes: spread a failed server's load across the remaining servers (see the sketch below).
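A minimal, illustrative sketch of consistent hashing with virtual nodes (the server list, the 64 virtual nodes per server and the use of crc32 are all assumptions made for the example):
<?php
// Consistent hashing: every server is mapped onto a hash ring many times
// (virtual nodes); a key is stored on the first node clockwise from its hash.
// If one server dies, only the keys on its arcs have to move.
class ConsistentHash {
    private $ring = [];         // hash position => server
    private $positions = [];    // sorted hash positions

    public function addServer(string $server, int $virtualNodes = 64): void {
        for ($i = 0; $i < $virtualNodes; $i++) {
            $this->ring[abs(crc32($server . '#' . $i))] = $server;
        }
        $this->positions = array_keys($this->ring);
        sort($this->positions);
    }

    public function lookup(string $key): string {
        $hash = abs(crc32($key));
        foreach ($this->positions as $pos) {       // walk clockwise
            if ($pos >= $hash) {
                return $this->ring[$pos];
            }
        }
        return $this->ring[$this->positions[0]];   // wrap around the ring
    }
}

$ch = new ConsistentHash();
$ch->addServer('192.168.1.221:11211');
$ch->addServer('192.168.1.222:11211');
$ch->addServer('192.168.1.223:11211');
echo $ch->lookup('name');    // the server that should hold the key "name"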
What causes the avalanche, and how can it be avoided?
Solution 1: the consistent hashing algorithm (limits the impact of a node failure).
Solution 2: set the cache lifetime to a random value within a range (e.g. 3-9 hours) so that entries do not all expire at once.
When the cached data of one memcache node is lost, the cache hit rate of the other memcache nodes also drops (the key distribution changes); the data missing from the cache is then queried from the MySQL database, which in a short time puts huge pressure on the MySQL server and causes it to go down. This is called the cache avalanche phenomenon.
After MySQL is restarted it crashes again within a short time, but by then part of the cache has been rebuilt; after several restarts the cache is completely rebuilt, MySQL no longer crashes, and the system becomes stable.
Solution: Set the cache time to a random value within a range (3-9 hours), so that it will expire at different time periods and allocate the reconstruction work to different times.
Answer:
Answer: Because the design of memcache itself is extremely simple, it has no permission restrictions at all. Why not set permissions? For the sake of simplicity, it only provides a caching function.
192.168.1.221 ---224
3. When using files to save session files, what should be done if there are too many files?
Generally speaking, once there are more than about 65,535 session files in a single directory, reading a session becomes extremely slow, which means PHP code execution becomes very slow. How can this be solved?
Answer:
Hierarchical processing: under one folder, create sub-folders A-Z, and under each of those create A-Z again, then spread the session files across them (see the sketch after this list).
Use memcache for processing: If a single memcache has limited processing capacity, use distributed memcache for processing
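A small sketch of the hierarchical idea (the base path and the two-level depth are assumptions); PHP's own files save handler supports the same idea through a depth prefix in session.save_path, e.g. "2;/tmp/session", provided the sub-directories are created in advance:
<?php
// Spread session files over sub-directories derived from the first characters
// of the session id, so no single directory holds tens of thousands of files.
$base = '/tmp/session';
$sid  = session_id() ?: 'abc123def456';           // example id if no session yet
$dir  = $base . '/' . $sid[0] . '/' . $sid[1];    // two levels deep

if (!is_dir($dir)) {
    mkdir($dir, 0700, true);                      // create the nested folders
}
file_put_contents($dir . '/sess_' . $sid, 'session data...');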
Answer: A load-balancing cluster provides a degree of high availability: if one machine goes down, the service can still be provided, just with reduced capacity. However, the load-balancing cluster itself does not provide a keepalive/monitoring mechanism.
How to use Memcache in your project?
The purpose of Memcache is to cache data and reduce the pressure on MySQL (the typical usage pattern is sketched at the end of this section).
Answer:
extension_dir='Directory generated above'
extension='NAME.so'
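Returning to the question above of how to use Memcache in a project: the usual pattern is cache-aside, i.e. check the cache first, query MySQL only on a miss, then write the result back into the cache. A minimal sketch (the table name, DSN and credentials are only examples):
<?php
// Cache-aside: read from memcache first, fall back to MySQL on a cache miss.
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);
$db = new PDO('mysql:host=127.0.0.1;dbname=test', 'root', 'password');

function get_user(int $id, Memcache $mc, PDO $db) {
    $key  = 'user:' . $id;
    $user = $mc->get($key);
    if ($user !== false) {
        return $user;                        // cache hit: MySQL is not touched
    }
    $stmt = $db->prepare('SELECT * FROM user WHERE id = ?');
    $stmt->execute([$id]);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);

    // Cache the result; a randomised lifetime (see the avalanche notes above)
    // prevents everything from expiring at the same moment.
    $mc->set($key, $user, 0, mt_rand(3, 9) * 3600);
    return $user;
}

print_r(get_user(1, $mc, $db));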