How to split a Redis database across two machines?
阿神 2017-04-21 10:57:02

I have an existing 4 GB Redis database, and I want to split it across two machines, 2 GB on each. How do I do that?

I looked at Redis Cluster, but everything I found is about single points of failure and master/slave replication, which is not what I need.

How much memory does SF's Redis server have?


All replies (6)
大家讲道理

I'm not sure which client you are using. Most clients ship a basic client-side sharding implementation, but it is purely client-side: when you add or remove nodes, you have to recalculate the hashes and migrate the data yourself. Redis 3 will add server-side cluster support, which will make this much simpler.

Basically, current client implementations are based on consistent hashing, as @TechAd said. For example, the Ruby client offers Redis::Distributed. Splitting 4 GB of data across two machines as you describe is also straightforward; there is already https://github.com/yankov/redis-migra... which you can use directly. Take a look at its implementation, it's actually very simple.
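The idea behind such client-side sharding can be sketched in a few lines of Python (a toy consistent-hash ring; the node addresses and virtual-node count below are made up for illustration, and real clients would hold connections instead of strings):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Toy consistent-hash ring, the idea behind client-side sharding
    such as Ruby's Redis::Distributed. Node names here are hypothetical."""

    def __init__(self, nodes, vnodes=100):
        # Each physical node gets `vnodes` points on the ring so that keys
        # spread evenly and removing a node only remaps a fraction of keys.
        self._ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # Walk clockwise from the key's hash to the first ring point.
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["redis-a:6379", "redis-b:6379"])
shard = ring.node_for("user:1001")   # stable mapping for this key
```

Every client that shares the same node list and hash function routes a given key to the same machine, which is what makes the scheme work without any server-side coordination.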

Hope it helps you.

巴扎黑

I recommend searching for consistent-hashing algorithms. The general idea is to route on the client: hash the key, then take it mod 2, and store the key-value pair on machine 0 or machine 1 accordingly; reads follow the same routing. You can refer to how memcache does it. Here is a reference link: http://blogread.cn/it/article/5271
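The hash-then-mod-2 routing described above can be sketched like this (plain dicts stand in for the two Redis instances; in real code each entry would be a client connection):

```python
import zlib

# Plain dicts stand in for the two Redis machines; in real code these
# would be two Redis client connections.
SERVERS = [{}, {}]

def shard_for(key: str) -> dict:
    # Hash the key, then mod by the server count (2 here). The same
    # routing is applied on writes and on reads.
    return SERVERS[zlib.crc32(key.encode()) % len(SERVERS)]

def put(key: str, value) -> None:
    shard_for(key)[key] = value

def get(key: str):
    return shard_for(key).get(key)

for i in range(10):
    put(f"user:{i}", i)
```

Note the drawback mentioned elsewhere in this thread: with plain mod-N routing, changing the number of machines changes almost every key's destination, which is exactly what consistent hashing mitigates.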

大家讲道理

You can use the approach from the 3L reply, or, most directly, split the data at the business layer and write the mapping into configuration. That makes future splits very convenient and keeps the logic simple. If you really need proper distribution, I'd suggest mongoDB instead.
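A business-layer split driven by configuration might look like the following sketch (the key prefixes and server addresses are hypothetical examples):

```python
# Hypothetical config: each business key prefix is pinned to one server.
# Moving a whole business line to a new machine later is just a config edit.
SHARD_CONFIG = {
    "user":    "redis-a:6379",
    "session": "redis-b:6379",
    "order":   "redis-b:6379",
}

def server_for(key: str) -> str:
    """Route a key like 'user:1001' by its business prefix."""
    prefix, _, _ = key.partition(":")
    return SHARD_CONFIG[prefix]
```

The trade-off is that the split is manual: you must choose prefixes so the two halves end up roughly 2 GB each.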

PHPzhong

If you don't need pipelining, I recommend https://github.com/twitter/twemproxy, a proxy from Twitter: automatic hashing across backends, automatic ejection on failure.
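A minimal twemproxy (nutcracker) pool for two Redis backends might be configured roughly like this (the pool name, addresses, and timeouts are example values; check the twemproxy README for the full option list):

```yaml
redis_pool:
  listen: 127.0.0.1:22121        # clients connect here instead of Redis
  hash: fnv1a_64                 # key hash function
  distribution: ketama           # consistent hashing across servers
  redis: true                    # speak the Redis protocol
  auto_eject_hosts: true         # drop a backend after repeated failures
  server_retry_timeout: 30000    # ms before retrying an ejected host
  server_failure_limit: 3
  servers:
    - 10.0.0.1:6379:1            # host:port:weight
    - 10.0.0.2:6379:1
```

Applications then point at the proxy's listen address and need no sharding logic of their own.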

刘奇

I would like to ask, does twemproxy not support automatic switching between active and standby?

巴扎黑

If you use Redis Cluster for this data migration, see the official introduction:
Assuming you have your preexisting data set split into N masters (where N=1 if you have no preexisting sharding), the following steps are needed in order to migrate your data set to Redis Cluster:
1. Stop your clients. No automatic live-migration to Redis Cluster is currently possible. You may be able to orchestrate a live migration in the context of your application / environment.
2. Generate an append only file for all of your N masters using the BGREWRITEAOF command, and wait for the AOF files to be completely generated.
3. Save your AOF files from aof-1 to aof-N somewhere. At this point you can stop your old instances if you wish (this is useful since in non-virtualized deployments you often need to reuse the same computers).
4. Create a Redis Cluster composed of N masters and zero slaves. You'll add slaves later. Make sure all your nodes are using the append only file for persistence.
5. Stop all the cluster nodes and substitute their append only files with your pre-existing ones: aof-1 for the first node, aof-2 for the second node, up to aof-N.
6. Restart your Redis Cluster nodes with the new AOF files. They'll complain that there are keys that should not be there according to their configuration.
7. Use the redis-trib fix command to fix the cluster so that keys will be migrated according to the hash slots each node is authoritative for.
8. Use redis-trib check at the end to make sure your cluster is ok.
9. Restart your clients, modified to use a Redis Cluster aware client library.
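For reference, the hash slots that redis-trib redistributes keys by are computed as CRC16 of the key mod 16384. A small pure-Python sketch of that mapping (hash tags, i.e. the `{...}` sub-key rule, are omitted for brevity):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster hash slots."""
    return crc16(key.encode()) % 16384
```

Each master owns a contiguous set of slots, so "migrating keys according to hash slots" in step 7 means moving every key whose slot belongs to another node.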
