Let me talk about my own use first:
I first used memcache, a server-side key-value cache. I use it to store frequently accessed data that is not very large but needs a fast response.
Later, in another project, redis was used, so I went and studied redis. The first thing I did was install its PHP extension; since there was already a redis server running on the server, I did not install redis itself locally. In practice, the usage is basically the same as memcache, with differences in only a few parameters. Their caching behaviour differs, of course. What exactly are the differences? Below is some material I collected, together with my own summary.
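To make the "basically the same usage" point concrete, here is a minimal sketch of my own (assuming the Memcache PHP extension and phpredis are installed and both servers run locally on their default ports; the key names are made up), caching one value for 60 seconds with each:

<?php
// memcache: flags and the expiration time are parameters of set() itself
$mem = new Memcache;
$mem->connect('127.0.0.1', 11211);
$mem->set('greeting', 'hello', 0, 60);   // value, flags, 60-second TTL in one call
echo $mem->get('greeting'), "\n";

// redis (phpredis): set the value, then set the TTL with a separate expire() call,
// which is the pattern used in the tests later in this post
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('greeting', 'hello');
$redis->expire('greeting', 60);
echo $redis->get('greeting'), "\n";
?>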
Redis and memcache both keep their data in memory; both are in-memory databases. Memcache, however, can also be used to cache other kinds of content, such as pictures and videos.
I have compared redis, memcache, and mongoDB along the following dimensions; feedback is welcome.
1. Performance
All three perform well, and performance should not be a bottleneck for us.
In terms of TPS, redis and memcache are roughly comparable, and both are higher than mongodb.
2. Convenience of data operations
memcache offers only a single data structure (plain key-value).
redis is much richer: for data operations, redis is better and needs fewer network IO round trips (see the sketch after this point).
mongodb supports rich data expression and indexing; it is the most similar to a relational database and supports a very expressive query language.
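As a rough illustration of the difference (a sketch of my own, assuming the Memcache PHP extension and phpredis on local default ports; the keys and values are invented), memcache only stores opaque values, so structured data has to be serialized and rewritten from PHP, while redis has native structures the server can update in a single command:

<?php
// memcache: only opaque key-value pairs; a read-modify-write cycle needs extra round trips
$mem = new Memcache;
$mem->connect('127.0.0.1', 11211);
$mem->set('user:1', serialize(array('name' => 'tom', 'score' => 10)), 0, 60);
$user = unserialize($mem->get('user:1'));
$user['score'] += 5;
$mem->set('user:1', serialize($user), 0, 60);

// redis: hashes, lists and sorted sets are updated server-side in one command each
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->hSet('user:1', 'name', 'tom');      // hash field
$redis->hIncrBy('user:1', 'score', 5);      // server-side increment, one round trip
$redis->rPush('log:user:1', 'logged in');   // list append
$redis->zAdd('rank', 15, 'user:1');         // sorted set with a score
?>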
3. Memory space and data volume
redis added its own VM feature after version 2.0 to break through the limits of physical memory; an expiration time can be set per key (similar to memcache).
memcache lets you adjust the maximum available memory and uses an LRU eviction algorithm.
mongoDB is suited to storing large volumes of data; it relies on the operating system's virtual memory for memory management and is also quite memory hungry, so the service should not be deployed together with other services.
4. Availability (single point of failure)
Regarding the single-point problem:
redis relies on the client to implement distributed reads and writes. During master-slave replication, every time the slave reconnects to the master it depends on a full snapshot; there is no incremental replication, which raises performance and efficiency concerns.
So the single-point problem is relatively complicated; automatic sharding is not supported, and you have to rely on application code to implement a consistent hashing scheme (see the sketch after this point).
An alternative is to skip redis's own replication mechanism and do active replication in the application (writing to multiple stores), or to implement incremental replication yourself, trading consistency off against performance.
memcache itself has no data-redundancy mechanism, nor does it need one; to guard against failures, use a mature hashing or ring algorithm to avoid the jitter caused by a single node going down.
mongoDB supports master-slave, replica sets (which use the paxos election algorithm internally and recover from failures automatically), and auto-sharding, hiding failover and sharding from the client.
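As an illustration of the client-side consistent hashing mentioned above (a sketch of my own, not code from any particular library; the node addresses and the number of virtual nodes are made up), keys are mapped onto a ring of server positions so that adding or removing a node only remaps a small fraction of the keys:

<?php
// Minimal consistent-hash ring: maps a key to one of several cache nodes.
class HashRing
{
    private $ring = array();        // ring position => node address
    private $positions = array();   // sorted ring positions

    public function __construct(array $nodes, $virtualNodes = 64)
    {
        foreach ($nodes as $node) {
            // Several virtual points per physical node give a smoother distribution.
            for ($i = 0; $i < $virtualNodes; $i++) {
                $this->ring[crc32($node . '#' . $i)] = $node;
            }
        }
        $this->positions = array_keys($this->ring);
        sort($this->positions);
    }

    public function lookup($key)
    {
        $hash = crc32($key);
        // The first ring position clockwise from the key's hash owns the key.
        foreach ($this->positions as $pos) {
            if ($pos >= $hash) {
                return $this->ring[$pos];
            }
        }
        return $this->ring[$this->positions[0]];   // wrap around to the start of the ring
    }
}

// Hypothetical node list; in practice these would be your redis (or memcache) servers.
$ring = new HashRing(array('10.0.0.1:6379', '10.0.0.2:6379', '10.0.0.3:6379'));
echo $ring->lookup('key123'), "\n";   // the same key always routes to the same node
?>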
5. Reliability (persistence)
For data persistence and recovery:
redis supports persistence (snapshots and AOF): it relies on snapshots for persistence, and AOF improves reliability, but at some cost to performance.
memcache does not support persistence and is usually used purely as a cache to improve performance.
mongoDB has supported durable persistence (journaling / binlog) since version 1.8.
6. Data consistency (transaction support)
memcache: in concurrent scenarios, use CAS to ensure consistency (a sketch follows at the end of this point).
redis: transaction support is relatively weak; it can only guarantee that the operations in a transaction are executed consecutively.
mongoDB does not support transactions.
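Here is a rough sketch of that CAS pattern (my own illustration; note that it uses the newer PHP "Memcached" extension rather than the "Memcache" class used in the tests below, since only Memcached exposes cas(), and it assumes the php-memcached 3.x API where GET_EXTENDED returns the CAS token):

<?php
// Optimistic concurrency with CAS: the write succeeds only if nobody changed the value since we read it.
$m = new Memcached();
$m->addServer('127.0.0.1', 11211);
$m->set('counter', 0);

do {
    $res = $m->get('counter', null, Memcached::GET_EXTENDED);   // array('value' => ..., 'cas' => ...)
    $newValue = $res['value'] + 1;
    // cas() fails if another client modified 'counter' in the meantime; in that case we retry.
    $ok = $m->cas($res['cas'], 'counter', $newValue);
} while (!$ok);
?>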
7. Data analysis
mongoDB has a built-in data analysis feature (mapreduce); the other two do not support this.
8. Application scenarios
redis: more performance-sensitive operations and computations on smaller data volumes.
memcache: used in dynamic systems to reduce database load and improve performance; good for caching in read-heavy, write-light workloads (for large data volumes you can add sharding).
MongoDB: mainly solves the problem of access efficiency for massive amounts of data.
I have been studying key-value storage recently, so here is a brief note of my impressions. I will not go into the installation and use of memcache and redis; I will only talk briefly about the differences between the two. Some of these impressions and test results may not be enough to settle the question, so please correct me if anything is wrong, because during these couple of days of study I have kept finding flaws in my understanding and rejecting the previous day's ideas every day. Anyway, enough talk. After testing with 500,000 stored records, I found:
Single instructions executed per second:
memcache: about 30,000
redis: about 10,000
Moreover, a major advantage of memcache is that the expiration time can be set directly in the same call, whereas redis (at least with the set-based storage used in the tests below) needs two calls to set both the value and the expiration time. That effectively halves its throughput here, to about 5,000 per second, which is a bit too slow for most needs.
The test code of memcache is as follows:
<?php
$mem = new Memcache;
$mem->connect("127.0.0.1", 11211);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $mem->set("key$i", $i, 0, 3);   // value, flags and a 3-second expiration in a single call
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
The redis test code is as follows (redis1.php). This code takes about 10 seconds to run:
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);   // store each value as a member of a set named "key$i"
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
If the expiration time needs to be set together with the key, execution takes about 20 seconds. The test code is as follows (redis2.php):
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->sadd("key$i", $i);
    $redis->expire("key$i", 3);   // a second round trip just to set the 3-second expiration
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
Later I found online that redis has an interesting feature called transactions, which executes a block of commands sequentially and atomically via multi/exec, so a complete functional unit can be run as one. Unfortunately, testing showed that running the commands inside multi does not reduce the number of requests; on the contrary, the multi and exec commands are themselves sent as requests, so the running time roughly quadruples, i.e. the cost of four commands per iteration. The test code is as follows (redis3.php):
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $redis->multi();              // MULTI and EXEC are sent as requests of their own,
    $redis->sadd("key$i", $i);    // so each iteration costs four round trips
    $redis->expire("key$i", 3);
    $redis->exec();
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
This is a real bottleneck: many companies need to process massive amounts of data, and 5,000 operations per second falls far short of that. And because redis's master-slave support gives it a clear advantage over memcache, for the sake of our future data we still have to use redis. At this point a new approach appears: the pipeline feature provided by phpredis, which really does pack several commands into one request and greatly improves running speed; pushing 500,000 records takes only 58 seconds. The test code is as follows (redis4.php):
<?php
// Connect
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$time_start = microtime_float();
// Save data
for ($i = 0; $i < 100000; $i++) {
    $pipe = $redis->pipeline();   // queue the commands client-side
    $pipe->sadd("key$i", $i);
    $pipe->expire("key$i", 3);
    $replies = $pipe->exec();     // phpredis sends the queued commands together and returns their replies
}
$time_end = microtime_float();
$run_time = $time_end - $time_start;
echo "$run_time seconds\n";
// Close the connection
$redis->close();
function microtime_float()
{
    list($usec, $sec) = explode(" ", microtime());
    return ((float)$usec + (float)$sec);
}
?>
With this approach, the assignment and the expiration-time setting can be packaged into a single request, which greatly improves efficiency.
redis installation: http://mwt198668.blog.163.com/blog/static/48803692201132141755962/
memcache installation: http://blog.csdn.net/barrydiu/article/details/3936270
redis master-slave setup: http://www.jzxue.com/fuwuqi/fuwuqijiqunyuanquan/201104/15-7117.html
memcache master-slave setup: http://www.cnblogs.com/yuanermen/archive/2011/05/19/2051153.html
This article comes from: http://blog.csdn.net/a923544197/article/details/7594814