
Client side highly available Redis Cluster, Dynamo

WBOY
Published: 2016-06-07 16:36:03

I'm pretty surprised no one tried to write a wrapper for redis-rb or other clients implementing a Dynamo-style system on top of Redis primitives.

Basically something like this:

1) You have a list of N Redis nodes.
2) On write, use consistent hashing and write the same thing to M nodes (M configurable).
3) On reads, read from M nodes and pick the most common reply to return to the client. For all the non-matching replies, use DUMP / RESTORE in Redis 2.6 to update the value of nodes that are in the minority.
4) To avoid problems with ordering and complex values, optionally implement some way to lock a key when it's the target of a non-plain SET/GET/DEL ... operation. This does not need to be race-condition free; it is just a good way to avoid ending up with keys in desync.
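Steps 1 and 2 can be sketched roughly like this. It's a minimal sketch with plain in-memory dicts standing in for real Redis nodes; names such as NODES, RING and replicas_for are illustrative, not part of redis-rb or any client library:

```python
import hashlib
from bisect import bisect_right

# N in-memory "nodes"; in a real wrapper each entry would be a Redis client.
NODES = [{"name": "node%d" % i, "data": {}} for i in range(5)]  # N = 5
M = 3  # replication factor, configurable

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

# Consistent-hash ring: each node is placed at the hash of its name.
RING = sorted((_hash(n["name"]), n) for n in NODES)

def replicas_for(key, m=M):
    """Return the m nodes that follow the key's position on the ring."""
    pos = bisect_right([h for h, _ in RING], _hash(key))
    return [RING[(pos + i) % len(RING)][1] for i in range(m)]

def dynamo_set(key, value):
    # Step 2: write the same value to M nodes chosen by consistent hashing.
    for node in replicas_for(key):
        node["data"][key] = value

dynamo_set("key1", "value1")
holders = [n["name"] for n in NODES if "key1" in n["data"]]
```

A real implementation would hash virtual node labels too, so each physical node gets many positions on the ring and keys spread more evenly.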

OK, the fourth point needs some explanation.

Redis is a bit harder to distribute in this way compared to other plain key-value systems because there are operations that modify the value instead of completely rewriting it. For instance LPUSH is such a command, while SET rewrites the value every time.

When a command completely rebuilds a value, out-of-order operations are not a huge issue. Because of latency you can still have a scenario like this:

CLIENT A> "SET key1 value1" in the first node.
CLIENT B> "SET key1 value2" in the first node.
CLIENT B> "SET key1 value2" in the second node.
CLIENT A> "SET key1 value1" in the second node.

So you end up with the same key holding two different values on different nodes (you can use vector clocks, or ask the application which value is the correct one).
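Step 3's majority read is what repairs such a divergence. Here is a hedged sketch, again with plain dicts standing in for Redis nodes, and a simple assignment standing in for a DUMP on a majority node followed by a RESTORE on the minority one:

```python
from collections import Counter

# Three replicas of key1; the middle one lost the ordering race above.
replicas = [{"key1": "value1"}, {"key1": "value2"}, {"key1": "value1"}]

def dynamo_get(key, replicas):
    # Step 3: read from all M replicas and pick the most common reply.
    values = [r.get(key) for r in replicas]
    winner, _count = Counter(values).most_common(1)[0]
    # Read repair: overwrite replicas whose reply is in the minority.
    for r, v in zip(replicas, values):
        if v != winner:
            r[key] = winner  # real code: DUMP from a winner, RESTORE here
    return winner

result = dynamo_get("key1", replicas)
```

After the call every replica agrees again; with a real tie (M even, half and half) you would need the vector clocks or application-level resolution mentioned above.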

However, repairing a problem like this only involves a fast write, since the whole value is rewritten anyway.

If instead the same happens during an LPUSH against lists with a lot of values, a desync in just the last element may force an update of the whole list, which could be slower (even if DUMP / RESTORE are pretty impressive performance-wise, IMHO).

So you could use the first node in the hash ring and the Redis primitives to perform a simple locking operation, in order to make sure that operations such as LPUSH are serialized across nodes.

But to cut a long story short, this would be an interesting weekend project, possibly with useful consequences, as Redis 2.6 now allows you to use DUMP / RESTORE to synchronize a value much faster and atomically.