This article accompanies the slides from a presentation on database sharding. Sharding is a technique for the horizontal scaling of databases that we use at Netlog. If you're interested in high performance, scalability, MySQL, php, caching, partitioning, Sphinx, federation or Netlog, read on ...
This presentation was given on the second day of FOSDEM 2009 in Brussels. FOSDEM is an annual conference on open source software with about 5000 hackers. I was invited by Kris Buytaert and Lenz Grimmer to give a talk in the MySQL Dev Room. The talk was based on an earlier talk I gave at BarcampGent 2.
Overview
- Who am I?
- What is Netlog?
- A history of scaling database systems
- Hitting limits
- Sharding basics
- Sharding schemes
- Implications
- Existing solutions
- Implementation
- Tackling the problems
- Final thoughts
- Slides
- Resources

Who am I?
Currently I am a Lead Web Developer at Netlog, working with php, MySQL and frontend technologies to develop and improve the features of our social network. I've been doing this for 3 years now. For this paper it is important to mention that I am neither a DBA nor a sys-admin, so I approach the problem of scaling databases from an application/developer point of view.
Of course the solutions presented in this presentation are the result of a lot of effort from the Development and IT Services Department at Netlog.
For those of you who are unfamiliar with Netlog, it's best to sketch a little overview of who and what we are, and especially where we come from in terms of userbase and growth. It will help you put things in perspective regarding scalability. At the moment we have over 40 million active members, resulting in over 50 million unique visitors per month. This adds up to 5+ billion page views per month and 6 billion online minutes per month. We're active in 26 languages and 30+ countries, with our 5 most active countries being Italy, Belgium, Turkey, Switzerland and Germany. (If you're interested in more info about the company, check our About Pages and sign up for an account.)
In terms of database statistics, this type of usage results, among other things, in huge amounts of data to store (eg. 100+ million friendships for nl.netlog.com). The nature of our application (lots of interaction) results in a very write-heavy app (with a read-write ratio of about 1.4 to 1). A typical database, before sharding, had an average of 3000+ queries per second during peak time (15h - 22h local time, for nl.netlog.com).
Of course, these requirements do not have to be met by every application, and different applications require different scaling strategies. Nevertheless, we wouldn't have thought (or hoped) we'd be where we are today when we started off 7 years ago as a college student project. We are convinced that we can give you further insight into scalability and share some valuable suggestions.
Below is a graph of our growth in the last year.
This growth has of course resulted in several performance issues. The bottleneck for us has often been the database layer, because this layer is the only layer in the web stack that isn't stateless. The interactions and dependencies in a relational database system make scaling horizontally less straightforward.
Netlog is (being) built and runs on open source software such as php, MySQL, Apache, Debian, Memcached, Sphinx, Lighttpd, Squid, and many more. Our solutions for scaling databases are also built on these technologies. That's why we want to give something back by documenting and sharing our story.
A history of scaling database systems

Like every hobby project, Netlog (then asl.to, "your internet passport") started off, more than 7 years ago, with a single database instance on a - probably virtual - server in a shared hosting environment. As traffic grew and load increased, we moved to a separate server, and eventually to a split setup for MySQL and php (database setup 1).
Database Setup 1: Master (W)
The next step was introducing new databases configured as "slaves" of the "master" database. Because a single MySQL server couldn't serve all the requests from our application, we distributed the read and write traffic to separate hosts. Setting up a slave is pretty easy through MySQL's replication features. In a master-slave configuration you direct all write queries (INSERT/UPDATE/DELETE) to the master database and all (or most) read queries to one or more slave databases. Slave databases are typically kept in sync with the master by reading the master's binlog files and replaying all write queries (database setup 2).
Problems to tackle for this set-up include increased complexity for your DBA team (which needs to monitor multiple servers), and the possibility of "replication lag": your slaves might get out of sync with the master database (because of locking read queries, downtime, inferior hardware, etc.), resulting in out-of-date results being returned when querying the slave databases.
Real-time results aren't required in every situation, but there will be cases where you have to force some read queries to the master database to ensure data integrity. Otherwise you will end up with the painful consequences of (possible) race conditions.
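To make the routing concrete, here's a minimal sketch of what such a read/write split could look like at the application level. The class, method names and write-detection logic are purely illustrative assumptions, not the actual implementation described in this article.

PHP:
// Minimal sketch: send writes (and lag-sensitive reads) to the master,
// spread other reads over the slaves. Connection objects are assumed to
// expose a query() method.
class ReplicatedDB
{
    private $master; // connection to the master (receives all writes)
    private $slaves; // array of slave connections (receive most reads)

    public function __construct($master, array $slaves)
    {
        $this->master = $master;
        $this->slaves = $slaves;
    }

    public function query($sql, $forceMaster = false)
    {
        $isWrite = preg_match('/^\s*(INSERT|UPDATE|DELETE|REPLACE)/i', $sql);
        if ($isWrite || $forceMaster || empty($this->slaves)) {
            // writes, and reads that can't tolerate replication lag, go to the master
            return $this->master->query($sql);
        }
        // all other reads are distributed over the slaves
        $slave = $this->slaves[array_rand($this->slaves)];
        return $slave->query($sql);
    }
}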
Database Setup 2: Master (W) + Slaves (R)
A good idea for the master-slave set-up is to introduce roles for your slaves. Typically you might assign all search, backend and/or statistics related queries to a "search slave", where you don't care that much about replication lag, since real-time results are seldom required for those kinds of use cases.
This system works especially well for read-heavy applications. Say you've got a server at 100% load and a read/write ratio of 4/1: your master server will be executing SELECT queries 80% of the time. If you add a single slave to this set-up, the SELECT capacity doubles and you can handle twice the amount of SELECT queries.
But in a write-heavy application, or a situation where your master database is executing write queries 90% of the time, you'll only win another 10% of capacity by adding a slave, since that slave will be busy syncing with its master for about 90% of the time. The problem here is that you're only distributing read traffic and no write traffic. In fact you're replicating the write traffic. Since the efficiency of a master-slave setup is limited in this way, you end up with lots of identical copies of your data.
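To make that arithmetic concrete, here's a back-of-the-envelope sketch (purely illustrative; it just restates the percentages from the text as a formula):

PHP:
// Rough model: every server (master and slaves) has to replay the full write
// load, so only the remaining capacity of each server is available for reads.
function readCapacity($writeFraction, $slaveCount)
{
    $serverCount = 1 + $slaveCount;               // master + slaves
    return $serverCount * (1.0 - $writeFraction); // total capacity left for SELECTs
}

echo readCapacity(0.2, 0), "\n"; // 0.8 -> read/write 4/1: master spends 80% on SELECTs
echo readCapacity(0.2, 1), "\n"; // 1.6 -> one slave doubles the SELECT capacity
echo readCapacity(0.9, 1), "\n"; // 0.2 -> 90% writes: a slave only wins you another 10%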
At this point you'll have to start thinking about distributing write traffic. The heavier your application relies on write traffic, the sooner you'll have to deal with this. A simple, and straightforward, first step is to start partitioning your application on feature-level. This process is called vertical partitioning.
In your application you identify features (and thereby MySQL tables) that can more or less exist on separate servers. If you have tables that are unrelated and don't require JOINs, why not put them on separate servers? For Netlog we have been able to put most of the tables containing details about the items of a user (eg. photos, blogs, videos, polls, ...) on separate servers. By replicating some important tables (eg. a table with userids, nicknames, etc.) to all separate partitions, you can still access and JOIN with those tables if you need to.
In database setup 3, you see an example where we don't bother our master database anymore for friends or messages related queries. The write and read queries for these features go directly to the database responsible for that feature. These feature-specific hosts are still configured as slaves of the "TOP" master database, because that way we can replicate a few of those really important tables.
A good idea here is to split up the tables for OLAP use cases (data warehouses) from those for OLTP use cases (front-end, real-time features), since these require a different approach and have different needs regarding speed and uptime anyway.
Database Setup 3: Vertical Partitioning
What we did in setup 2, can be easily repeated for each of the vertically partitioned features. If any of your databases have trouble keeping up with the traffic requirements, configure a slave for that database and distribute the read and write traffic. This way you create a tree of databases replicating some tables through the whole system and a database class responsible for distributing the right queries to the right databases (database setup 4).
Database Setup 4: Vertical Partitioning / Replication Tree
If necessary, you might dive deeper into your application and find more features to partition. Unfortunately this becomes harder and harder, because with every feature you split up, you again lose some JOIN functionality you might want or need. And sometimes you're even stuck with a single table that outgrows what a single database host can easily manage. The first feature to hit this single-table-on-a-single-database limit was a table with friendships between our users. This table grew so rapidly that the performance and uptime of the host responsible for this feature couldn't be guaranteed anymore, no matter how many slaves we added to it. Of course you can always choose to scale up, instead of out, by buying boxes with an incredibly insane hardware setup, but apart from being nice to have, they're expensive and they'll still hit limits if you continue growing.
This approach to scaling has a limit (database setup 5) and if you hit that limit, you have to rethink your scaling strategy.
Database Setup 5: Hitting Limits
So, what's next?

What could we do now? Vertical partitioning has helped us a great deal, but we are stuck. Does master-to-master replication help? Will a cluster set-up help? Not really; these systems are designed for high availability and high performance. They work by replicating data and don't offer solutions for distributing write traffic.
What about caching? Oh, how can we forget about caching! Of course, caching will help a great deal in lowering the load on your database servers. (The read/write-ratio mentioned earlier would be completely different if we did no caching.) But the same problem remains: caching will lower the read traffic on your databases, but doesn't offer a solution for write traffic. Caching will delay the moment your database is only returning "1040 Too many connections" errors, but no matter how good your caching strategy is, it can't prevent your visitor metrics going nuts at some point.
You can't split a table vertically, but can you easily split it horizontally? Sharding a table is putting several groups of records of that table in separate places (be it physically or not). You cut your data into arbitrarily sized pieces / fragments / shards and distribute them over several database hosts. Instead of putting all 100+ million friendships records on 1 big expensive machine, put 10 million friendships on each of 10 smaller and cheaper machines.
Sharding, or horizontal partitioning, is a term that was already in active use in 1996 in the MMO (Massive Multiplayer Online) Games world. If you're searching for info on sharding, you'll see it's a technique used by among others Flickr, LiveJournal, Sun and Netlog.
Sharding a photos table over 10 servers with a modulo partitioning scheme
In the image above you see an example of splitting up a photos table over 10 different servers. The algorithm that's used to decide where your data goes, or where you can access it, is eg. a modulo function on the userid of the owner of that photo. If you know the owner of a photo, you know where to fetch the photo's other details, its comments, etc.
Let's have a look at another simple example.
In a non sharded environment, somewhere in your application, you'd find code that looks like this:
PHP:
$db = DB::getInstance(); // fetch a database instance
$db->prepare("SELECT title, message FROM BLOG_MESSAGES WHERE userid = {userID}"); // prepare a query
$db->assignInt('userID', $userID); // assign query variables
$db->execute(); // execute the query
$results = $db->getResults(); // fetch an array of results
In this example we first fetch an instance of our database class that connects to our database. We then prepare a query, assign the variables (here the id of the author, $userID), execute the query and fetch the result set. If we introduce sharding based on the author's $userID, the database we need to execute this query on depends on that $userID (for example, on whether or not it is an even number). One approach is to include the logic of "which user is on which database" in our database class and pass that $userID on to it. You could end up with something like this: you pass the $userID to the DB::getInstance() function, which then returns an object with the connection details based on the result of $userID % 2:
PHP:
$db = DB::getInstance($userID); // fetch a database instance, specific for this user
$db->prepare("SELECT title, message FROM BLOG_MESSAGES WHERE userid = {userID}");
$db->assignInt('userID', $userID);
$db->execute();
$results = $db->getResults();
Instead of passing the $userID as a parameter to your DB class, you could try to parse it from the prepared query you supply to your class, or you could do the calculation of which DB connection you need at a different level, but the key concept remains the same: you need to pass some extra information to your database class so it knows where to execute the query. That is one of the most challenging requirements that has to be met for successful sharding.
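As an illustration, a (highly simplified) DB::getInstance($userID) for the modulo-2 example could look like the sketch below. The host names and the DBConnection class are made-up placeholders, not actual configuration.

PHP:
// Sketch: pick the connection based on $userID % 2 (two shards).
class DB
{
    private static $shardConfig = array(
        0 => array('host' => 'db-shard-0.example.com', 'db' => 'blog_shard_0'),
        1 => array('host' => 'db-shard-1.example.com', 'db' => 'blog_shard_1'),
    );

    public static function getInstance($userID = null)
    {
        if ($userID === null) {
            // no shard key given: fall back to the non-sharded default database
            return new DBConnection('db-master.example.com', 'main');
        }
        $shard = self::$shardConfig[$userID % 2];
        return new DBConnection($shard['host'], $shard['db']);
    }
}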
How to shard your data?

When you want to split up your data, two questions spring to mind: which property of the data (which column of the table) will I use to make the decision on where the data should go? And what will the algorithm be? Let's call the first one the "sharding/partitioning key", and the second one the "sharding/partitioning scheme".
Which sharding key will be used is basically a decision that depends on the nature of your application, or the way you'll want to access your data. In the blog example, if you display overviews of blog messages per author, it's a good idea to shard on the author's $userID. Say your site's navigation is through archives per month or per category, it might be smarter to shard on publication date or $categoryID. (If your application requires both approaches it might even be a good idea to set up a dual system with sharding on both keys.)
What you can do with the "shard key" to find its corresponding shard basically falls into 4 categories:
- Vertical Partitioning: Splitting up your data on feature/table level can be seen as a kind of sharding, where the "shard key" is eg. the table name. As mentioned earlier this way of sharding is pretty straightforward to implement and has a relatively low impact on the application as a whole.
- Range-based Partitioning: In range-based partitioning you split up your data according to several ranges. Blog posts from 2000 and before go to database 1, blog posts from the new millennium go to the other database. This approach is typical for logging or other time-based data. Other examples of range-based partitioning could include federating users according to the first digits of their postal code.
- Key or Hash based Partitioning: The modulo function used in the photos example is a way of partitioning your data based on a hash or other mathematical function of the key. In the simple example of a modulo function you can use your number of shards as the modulo operand. Of course, changing your number of shards would then mean rebalancing your data, which might be a slow process. A way to solve this is to use a more consistent hashing mechanism, or to choose the original number of shards well and work with "virtual shards" (see the sketch after this list).
- Directory based Partitioning: The last and most flexible scheme is one where you have a directory that maps each possible value of your shard key to a certain shard id. This makes it possible to move all data for a certain shard key (eg. a certain user) from shard to shard, by altering the directory. A directory could on the other hand introduce more overhead or be a SPOF (Single Point Of Failure).

As shown in the blog example, you need to know your "shard key" before you can actually execute your query on the right database server. This means that the nature of your queries and application determines the way you partition your data. The demanded flexibility, the projected growth and the nature of your data are other factors that help you decide which scheme to use.
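To illustrate the rebalancing point made for key/hash based partitioning, below is a sketch of the "virtual shards" idea: hash the key into a fixed, generous number of virtual shards and map those onto the (much smaller) set of physical hosts, so that adding hardware only changes the mapping, not the hash. The numbers mirror the 4000 shards / 40 hosts mentioned later in this article; the host names are made up.

PHP:
// The number of virtual shards is chosen once and never changes.
define('VIRTUAL_SHARD_COUNT', 4000);

function virtualShardForKey($userID)
{
    return $userID % VIRTUAL_SHARD_COUNT;
}

// Example mapping: spread 4000 virtual shards over 40 physical hosts.
// Rebalancing means editing this mapping (and moving the affected shards),
// not re-hashing every record.
$virtualToHost = array();
for ($v = 0; $v < VIRTUAL_SHARD_COUNT; $v++) {
    $virtualToHost[$v] = 'dbhost' . str_pad($v % 40, 2, '0', STR_PAD_LEFT);
}

function hostForKey($userID, array $virtualToHost)
{
    return $virtualToHost[virtualShardForKey($userID)];
}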
You also want to choose your keys and scheme so that the data is optimally balanced over the databases and the load on each of the servers in the pool is equal.
The end result of sharding your data should be that you have distributed write-queries to different independent databases, and that you end up with a system of more, but cheaper machines, that each have a smaller amount of the data and thus can process queries faster on smaller tables.
If you succeed, you're online again. Users appreciate it and your DBA is happy, because each of the machines in the setup now has less load and crashes less, so there's no more tossing and turning through the nights. (Smaller tables mean faster queries, and that includes maintenance or ALTER queries, which again helps in keeping your DBA and developers happy.)
Of course, sharding isn't the silver bullet of horizontal database scaling that will easily solve all your problems. Introducing sharding in your application comes with a significant cost of development. Here are some of its implications:
- No cross-shard SQL queries: If you ever want to fetch data that (possibly) resides on different shards, you won't be able to do this with a JOIN at SQL level. If you shard on $userID, a JOIN with data from the same user is possible. However, once you fetch results from several users on a shard, this will probably be an incomplete result set. The key here is to design your application so there's no need for cross-shard queries. Another solution could be the introduction of parallel querying at application level, but then of course you lose the aspect of distributing your database traffic. Depending on the use case, this may or may not be a problem (eg. parallel querying for backend purposes is not as crazy as it may sound).

At the moment Netlog is the 67th most visited website in the world, according to Alexa's ranking. This means that there are at least 66 other websites out there probably facing similar problems as we do. 16 of the 20 most popular websites are powered by MySQL, so we are definitely not alone, are we?
Let's have a look at some of the existing technologies that implement or are somehow related to sharding and scaling database systems, and see which ones could be interesting for Netlog.
MySQL Cluster is one of the technologies you could think would solve similar problems. The truth is that a database cluster is helpful when it comes to high availability and performance, but it's not designed for the distribution of writes.
MySQL Partitioning is another relatively new feature in MySQL that allows for horizontal splitting of large tables into smaller and more performant pieces. The physical storage of these partitions is limited to a single database server though, which makes it irrelevant for the case where a single table outgrows the capacity of a single database server.
HSCALE and Spock Proxy, which both build on MySQL Proxy, are two other projects that help in sharding your data. MySQL Proxy introduces Lua as an extra programming language to instruct the proxy (eg. for finding the right shard for a query). At the time we needed a solution for sharding, neither of these projects seemed to support directory based sharding the way we wanted it.
HiveDB is a sharding framework for MySQL written in Java, requiring the Java Virtual Machine, with a php interface that is currently in its infancy. Being a Java solution makes it less interesting for us, since we prefer the technology we are experts in and our application is written in: php.
Other technologies that aren't MySQL or php related include HyperTable (HQL), HBase, BigTable, Hibernate Shards (*shivers*), SQLAlchemy (for Python), Oracle RAC, etc ... The memcached SQL functions or storage engine for MySQL is also a related project we could mention here.
None of these projects really seemed to come in line with our requirements. But what exactly are they?
Flexible for the hardware department.

So, what did we come up with? An in-house solution, written 100% in php. The implementation is mostly middleware between the application logic and the database class. We've got a complete caching layer built in (using memcached). Since our site is mainly built around profiles, most of the data is sharded on $userID.
In this system we are using the shard scheme below, where a shard is identified by a unique number ($shardID) that also serves as a prefix for the tables in the sharding system. Several shards (groups of tables) sit together in a "shard database", and several of those databases (not instances) are on a certain "shard database host".
So a host holds more than one shard. This allows us to move shards as a whole, or databases as a whole, to help balance all the servers in the pool, and it allows us to play with the number of shards in a database and the number of shards on a server to find the right balance between table size and open files for that server.
When we started using this system in production we had 4000 shards on 40 hosts. Today we've got 80 hosts in the pool.
Shards live in databases, databases live on hosts
On the php side there are two parts to the implementation. The first is a series of management and maintenance related functions that allow us to add, edit and delete shards, databases and hosts in the system, plus a lookup system. The second is a series of classes that provide an API consisting of a database access layer and a caching layer.
The Sharding Management Directory

The directory or lookup system is in fact a single MySQL table translating shard keys to $shardIDs. Typically these are $userID-$shardID combinations. This is a single table with as many records as there are users on Netlog. With only ids saved in that table it's still manageable and can be kept very performant through master-to-master replication, memcached and/or a cluster set-up.
Next to that there's a series of configuration files that translate $shardIDs to actual database connection details. These configuration files allow us to flag certain shards as not available for read and/or write queries. (Which is interesting for maintenance purposes or when a host goes down.)
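Put together, the lookup could work roughly like the sketch below. The table name, cache keys and configuration structure are assumptions made for illustration; the DB and Memcache calls mimic the API style used elsewhere in this article.

PHP:
// Step 1: shard key ($userID) -> $shardID, via the directory table, cached in memcached.
function getShardID($userID)
{
    $memcache = new Memcache();
    $memcache->connect('localhost', 11211);

    $shardID = $memcache->get('usershard_' . $userID);
    if ($shardID === false) {
        $db = DB::getInstance(); // the small, replicated directory database
        $db->prepare("SELECT shardid FROM USER_SHARDS WHERE userid = {userID}");
        $db->assignInt('userID', $userID);
        $db->execute();
        $rows = $db->getResults(); // assuming rows come back as associative arrays
        $shardID = $rows[0]['shardid'];
        $memcache->set('usershard_' . $userID, $shardID, 0); // no TTL; updated when a user moves
    }
    return $shardID;
}

// Step 2: $shardID -> connection details, from the configuration files,
// which also carry the read/write availability flags mentioned above.
function getShardConnectionDetails($shardID, array $shardConfig)
{
    return $shardConfig[$shardID]; // host, database name, table prefix, readable/writable flags, ...
}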
Note: The API we implemented allows for handling more than the typical case I'll discuss next, and also allows for several caching modes and strategies based on the nature and use of its application.
Most records and data in the shard system have both a $userID field and an $itemID field. This $itemID is a $photoID for tables related to photos or $videoID for tables related to videos. (You get the picture ...) The $itemID is sometimes an auto_increment value, or a foreign key and part of a combined primary key with $userID. Each $itemID is thus unique per $userID, and not globally unique, because that would be hard to enforce in a distributed system.
(If you use an auto_increment value in a combined key in MySQL, this value is always a MAX()+1 value, and not an internally stored value. So if you add a new item, delete it again, and insert another record, the auto_increment value of that last insert will be the same as the previously inserted and deleted record. Something to keep in mind ...)
If we want to access data stored in the sharding system, we typically create an object representing a table + $userID combination. The API provides all the basic CRUD (Create/Read/Update/Delete) functionality typically needed in our web app. If we go back to the first example of fetching blog messages by a certain author, we arrive at the following scenario:
Query: Give me the blog messages from author with id 26.
1. Where is user 26? (directory lookup: user 26 lives on shard 5)
2. Fetch the list of blog message ids for user 26 from shard 5.
3. Fetch the details (title, message) for those blog messages from shard 5.

In this process step 1 is executed on a different server (the directory db) than steps 2 and 3 (shard 5). Steps 2 and 3 could easily be combined into one query, but there's a reason why we don't do it, which I'll explain when discussing our caching strategy.
It's important to note that the functionality behind step 2 allows for adding WHERE, ORDER and LIMIT clauses, so you can fetch only the records you need in the order you need them.
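In code, the scenario above might look something like the following sketch. The ShardedTable class and its getList()/getDetails() methods are hypothetical names used only to illustrate the table + $userID object and the two-step (list + details) flow, not the actual API.

PHP:
$userID = 26;

// The object resolves (via the directory) that user 26 lives on shard 5.
$blogMessages = new ShardedTable('BLOG_MESSAGES', $userID);

// Step 2: the list query, with WHERE / ORDER / LIMIT support.
$itemIDs = $blogMessages->getList(array(
    'where' => "visible = 1",
    'order' => "dateAdded DESC",
    'limit' => 20,
));

// Step 3: the details query for those item ids.
$messages = $blogMessages->getDetails($itemIDs);

foreach ($messages as $message) {
    echo $message['title'], "\n";
}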
(One could argue that for the example given here, and the way we are using MySQL here, a relational database isn't really needed and you could use simpler database systems. While that may be the case, there are still advantages to using MySQL for the cases where you bypass this API. It's not bad to have all your data in the same format either, sharded or not. The possible overhead of still using MySQL hasn't been the bottleneck for us so far, but it is certainly something we might consider improving on.)
Shard Management

To keep the servers in the sharding system balanced, we monitor several parameters such as number of users, file size of tables and databases, number of read and write queries, cpu load, etc. Based on those stats we can decide to move shards to new or different servers, or even to move users from one shard to another.
Move operations for single users can be done completely transparently and online, without that user experiencing downtime. We do this by monitoring write queries. When we start a move operation for a user, we start copying his data to the destination shard. If a write query is executed for that user, we abort the move process, clean up and try again later. So a move of a user will succeed if the user himself/herself isn't active at that time, and no other user is interacting with him/her (for features in the sharding system).
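In pseudocode (php-flavoured, with all helper functions being illustrative names rather than real API calls), the move logic boils down to something like this:

PHP:
// Sketch: copy a user's data to the destination shard, but abort as soon as
// a write query for that user is detected during the copy.
function moveUser($userID, $fromShardID, $toShardID)
{
    markUserAsMoving($userID); // start watching write queries for this user

    foreach (getShardedTables() as $table) {
        copyUserRecords($table, $userID, $fromShardID, $toShardID);
        if (writeHappenedDuringMove($userID)) {
            // someone wrote to this user's data: clean up and retry later
            removeUserRecordsFromShard($userID, $toShardID);
            unmarkUserAsMoving($userID);
            return false;
        }
    }

    updateDirectory($userID, $toShardID);               // flip the $userID -> $shardID mapping
    removeUserRecordsFromShard($userID, $fromShardID);  // old copy is no longer referenced
    unmarkUserAsMoving($userID);
    return true;
}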
Moving a complete shard or database at a time is a more drastic approach to balancing the load of servers and requires some downtime which we can keep to a minimum by configuring shards as read only / using a master-slave setup during the switch, etc.
Inherent to this system is that if one database goes down, only the users on that database (or interactions with those users) are affected. We (can) improve the availability of shards by introducing clusters, master-master setups or master-slave setups for each shard, but the chance of a shard database server getting in trouble is slim to none, because of the minor load on shard dbs compared to the pre-sharding era.
Tackling the problems

The difficulties of sharding are partially tackled with these 3 technologies: Memcached, parallel processing and Sphinx.
Memcached

"memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." By putting a memory caching layer between our application logic and the SQL queries to our shard databases, we are able to get results much, much faster. This caching layer also allows us to do some of the cross-shard data fetching previously thought impossible at SQL level.
For those unfamiliar with memcached, below is a very simple and stripped-down example of Memcached usage, where we try to fetch a key-value pair from the cache, and if it's not found we compute the value and store it in the cache, so a subsequent call to the function will instantly return the cached value without stressing the database.
PHP:
function isObamaPresident()
{
    $memcache = new Memcache();
    $memcache->connect('localhost', 11211); // connect to the memcached server
    $result = $memcache->get('isobamapresident'); // fetch
    if ($result === false)
    {
        // do some database heavy stuff
        $db = DB::getInstance();
        $votes = $db->prepare("SELECT COUNT(*) FROM VOTES WHERE vote = 'OBAMA'")->execute();
        $result = ($votes > (USA_CITIZEN_COUNT / 2)) ? 'Sure is!' : 'Nope.'; // well, ideally
        $memcache->set('isobamapresident', $result, 0); // store in cache, no expiration
    }
    return $result;
}
Memcached is used in several ways and at several levels in our application code; for sharding, the main uses include:
- Each $userID to $shardID lookup is cached. This cache has a hit ratio of about 100%, because every time this mapping changes we can update the cache with the new value and store it without a TTL (Time To Live).
- Each record in the sharded tables can be cached as an array. The key of the cache is typically tablename + $userID + $itemID. Every time we update or insert an "item" we can also store the given values in the caching layer, again making for a theoretical hit ratio of 100%.
- The results of "list" and "count" queries in the sharding system are cached as arrays of $itemIDs or as numbers, with the key of the cache being the tablename + $userID (+ WHERE/ORDER/LIMIT clauses) and a revision number.

The revision numbers for the "list" and "count" caches are themselves cached numbers that are unique for each tablename + $userID combination. These numbers are used in the keys of the "list" and "count" caches, and are bumped whenever a write query for that tablename + $userID combination is executed. The revision number is in fact a timestamp that is set to time() when updated or when it wasn't found in cache. This way we can ensure that all data fetched from cache is always correct as of the latest update. (A sketch of this mechanism follows below.)
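A sketch of that revision mechanism (the cache key layout and function names are illustrative, not the actual implementation):

PHP:
// Revision number per tablename + $userID: it is part of every "list"/"count"
// cache key, so bumping it invalidates all of those caches for that user at once.
function getListRevision($memcache, $table, $userID)
{
    $key = 'rev_' . $table . '_' . $userID;
    $revision = $memcache->get($key);
    if ($revision === false) {
        $revision = time(); // not in cache: start a fresh revision
        $memcache->set($key, $revision, 0);
    }
    return $revision;
}

function bumpListRevision($memcache, $table, $userID)
{
    // called on every write query for this tablename + $userID combination
    $memcache->set('rev_' . $table . '_' . $userID, time(), 0);
}

function getCachedList($memcache, $table, $userID, $whereOrderLimit)
{
    $revision = getListRevision($memcache, $table, $userID);
    $key = 'list_' . $table . '_' . $userID . '_' . md5($whereOrderLimit) . '_' . $revision;
    return $memcache->get($key); // false means: run the query and cache the result
}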
If, with this in mind, we again return to the blog example, we get the following scenario.
Query: Give me the blog messages from author with id 26.
1. Where is user 26? (directory lookup, now answered from the cache)
2. Fetch the list of blog message ids for user 26 (from the "list" cache, or from shard 5 on a cache miss).
3. Fetch the details for those blog messages (from the per-record caches, or from shard 5 on a cache miss).

Because of this caching strategy the two separate queries (list query + details query), which seemed a stupid idea at first, result in better performance. If we hadn't split this up into two queries and had cached the list of items with all their details (message + title + ...) in Memcached, we'd be storing many more copies of each record's properties.
There is an interesting performance tweak we added to the "list" caches. Let's say we request the first page of comments (1-20): we actually query for the first 100 items, store that list of 100 in the cache and then return only the requested slice of that result. A likely follow-up call for the second page (21-40) will then always be served from cache. So the window we ask from the database is different from the window requested by the app.
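A sketch of that windowing trick (the helper functions and the zero-based offset convention are assumptions for illustration):

PHP:
// The application asks for eg. items 0-19, but we fetch and cache a window of
// 100 items and return only the requested slice. This sketch assumes a page
// never spans two windows.
define('CACHE_WINDOW_SIZE', 100);

function getListSlice($table, $userID, $offset, $count)
{
    $windowStart = (int) floor($offset / CACHE_WINDOW_SIZE) * CACHE_WINDOW_SIZE;

    $window = getCachedWindow($table, $userID, $windowStart, CACHE_WINDOW_SIZE);
    if ($window === false) {
        // cache miss: query the shard for the whole window and cache it
        $window = queryShardForList($table, $userID, $windowStart, CACHE_WINDOW_SIZE);
        cacheWindow($table, $userID, $windowStart, $window);
    }

    // return only the slice the application actually asked for
    return array_slice($window, $offset - $windowStart, $count);
}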
For features where caching race conditions might be a problem for data consistency, or for use cases where caching each record separately would be overhead (eg. because the records are only inserted and selected, and used for one type of query), or for use cases where we do JOINs and more advanced SQL queries, we use different caching modes and/or different API calls.
This whole API requires quite some php processing that we now do at application level, where previously this was all handled and optimized by the MySQL server itself. Memory usage and processing time at php level scale a lot better than databases though, so this is less of an issue.
Parallel processing

It is not strange to fetch data stored on different shards in one go, because most data is probably available from memory. If we want a friends-of-friends list, one way to do this could be to fetch your own friends, loop over them and fetch their friends, and then process those results to get a list of people your friends know but you don't know yet.
The number of actual database queries needed for this will be small, and even then, the queries are simple and superfast. Problems start to occur when we process this for users who each have a couple of hundred friends. For this we've implemented a system for splitting up certain big tasks into several smaller ones that we can process in parallel.
This parallel processing in php is done by making several web requests to our php server farm, each processing a small part of the task. It is actually faster to process 10 smaller tasks simultaneously than to do the whole thing at once. The overhead of the extra web requests and the cpu cycles it takes to split up the task and combine the results is negligible compared to the gain.
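As an illustration of how such parallel web requests could be fired off from php, here's a sketch using the curl_multi API. The worker URL, parameters and chunk size are made up for the example and don't describe the actual setup.

PHP:
// Split a big task into chunks and process them simultaneously via web requests.
function processInParallel(array $friendIDs, $chunkSize = 100)
{
    $multiHandle = curl_multi_init();
    $handles = array();

    foreach (array_chunk($friendIDs, $chunkSize) as $i => $chunk) {
        $handle = curl_init('http://workers.example.com/friendsOfFriends.php');
        curl_setopt($handle, CURLOPT_POST, true);
        curl_setopt($handle, CURLOPT_POSTFIELDS, array('ids' => implode(',', $chunk)));
        curl_setopt($handle, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($multiHandle, $handle);
        $handles[$i] = $handle;
    }

    // run all requests at the same time
    $running = null;
    do {
        curl_multi_exec($multiHandle, $running);
    } while ($running > 0);

    // collect and combine the partial results
    $results = array();
    foreach ($handles as $handle) {
        $results[] = curl_multi_getcontent($handle);
        curl_multi_remove_handle($multiHandle, $handle);
    }
    curl_multi_close($multiHandle);

    return $results;
}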
Other typical queries that become impossible with sharded data are overview queries. Say you'd like a page of the latest photos uploaded by all users. If your users' photos are distributed over a hundred databases, you'd have to query each of them and then process all of those results. Doing that for several features would not be justifiable, so most of our "Explore" pages (where you browse through and discover content from the community) are served from a different system.
Sphinx

Sphinx is a free and open source SQL full-text search engine. We use it for more than your average input-field-plus-search-button search engine. In fact, a list of the most viewed videos of the day can also be a query result from Sphinx. For most of the data on these overview pages it's not a problem if the data isn't real time, so it's possible to retrieve those results from indexes that are regularly rebuilt from the data on each shard and then combined.
For a full overview of how we use Sphinx (and how we got there), I encourage you to have a look at the presentation of my colleague Jayme Rotsaert, "Scaling and optimizing search on Netlog", who's put a lot of effort into using Sphinx.
Final thoughts

If there are only two things I could say about sharding, it'd be these two quotes:

"Don't do it, if you don't need to!" (37signals.com)
"Shard early and often!" (startuplessonslearned.blogspot.com)

Sounds like saying two opposite things? Well, yes and no.
You don't want to introduce sharding in your architecture, because it definitely complicates your set-up and the maintenance of your server farm. There are more things to monitor and more things that can go wrong.
Today, there is no out-of-the-box solution that works for every set-up, technology and/or use case. Existing tool support is poor, and we had to build quite a bit of custom code to make it possible.
Because you split up your data, you lose some of the features you've grown to like from relational databases.
If you can do with simpler solutions (better hardware, more hardware, server tweaking and tuning, vertical partitioning, sql query optimization, ...) that require less development cost, why invest lots of effort in sharding?
On the other hand, when your visitor statistics really start blowing through the roof, it is a good direction to go. After all, it worked for us.
The hardest part about implementing sharding has been to (re)structure and (re)design the application so that for every access to your data layer, you know the relevant "shard key". If you query the details of a blog message, and blog messages are sharded on the author's userid, you have to know that userid before you can access/edit the blog's title.
Designing your application with this in mind ("What are the possible keys and schemes I could use to shard?"), will definitely help you to implement sharding more easily and incrementally at the moment you might need to.
In our current set-up not everything is sharded. That's not a problem though. We focus on those features that require this scaling strategy, and we don't spend time on premature optimization.
Today, we're spending less ca$h on expensive machines, we've got a system that is available, it can handle the traffic and it scales.
For further questions or remarks, feel free to contact me at jurriaan@netlog.com and subscribe to my blog at www.jurriaanpersyn.com and the Netlog developer blog at www.netlog.com/go/developer/blog.