At work we're in the midst of rolling out a sharded Postgres platform based on http://www.craigkerstiens.com/2012/11/30/sharding-your-datab..., with the sharding implemented at the application level. The biggest piece of complexity in that post is around designing the sharding in such a way that you can gracefully add more shards later.
Having read the pg_shard readme, it's not clear to me how it addresses that issue. I'd need to have a really clear idea how to handle scaling my cluster before committing to a sharding solution.
We'll update our FAQ with a detailed answer to this question.
As a summary, the user specifies the shard count and replication factor when sharding their table. For example, if you have 4 nodes, you might pick 256 as the initial shard count. That way, you'll have ample room to grow as you add new nodes to your cluster.
When you pick 256 shards over 4 worker nodes, pg_shard will create them in a round-robin fashion. Shard #1 will go to nodes A and B, shard #2 to nodes B and C, and so forth. This has the advantage that when one of the worker nodes fails, the remaining 3 nodes take on the additional work evenly. Also, when you add a new node to the cluster, you can gradually rebalance by moving some of the shards to the new node.
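If it helps, the shard creation calls from the pg_shard README look roughly like this (the table and column names here are placeholders, so double-check against the README):

    -- distribute an existing table by a hash of its partition column
    SELECT master_create_distributed_table('events', 'customer_id');

    -- create 256 shards with 2 replicas each, placed round-robin
    -- across the worker nodes
    SELECT master_create_worker_shards('events', 256, 2);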
We have used sharding for managing time-series data, e.g. one shard per day. Is there a way this could work, i.e. where the number of shards continually grows?
That use case isn't yet handled by pg_shard: the extension currently supports only hash partitioning, and what you've described is range partitioning. This is certainly on our immediate feature list, as range and hash partitioning together cover a wide variety of use cases.
However, CitusDB does support range partitioning and has a \stage command that will create new shards from incoming data. If you periodically load data that corresponds to a particular time range (hour, day, week), CitusDB can easily handle creation of additional shards during each load.
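For the time-series case, loading a day's file through CitusDB's psql looks something like the following; the file path and options are made up for illustration, so check the CitusDB docs for the exact \stage syntax:

    -- \stage creates new shards covering the freshly loaded range
    \stage events FROM '/data/events-2014-12-04.csv' WITH CSV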
In our experience, sharding at the application level is to be avoided.
We implemented application-level sharding a few years ago when we had 1 TB of data. We now have over 20 TB, and the application-level business logic is killing us.
Our pain is not DB speed, but application complexity. All of our applications (backup, web, maintenance, archiving, etc.) need the business logic to traverse the shard set.
We are planning to move to DB-level sharding in January (as part of a larger application and infrastructure refresh).
It's great to see this released as open source (LGPL) -- looks like a really useful extension.
I've always been a fan of using PostgreSQL wherever possible, and this extends "where possible". As a YC S11 batchmate especially, I'm really proud of all the great contributions Citus Data has made, and what a useful tool CitusDB is (a bunch of other YC companies use it).
I wonder how it compares to the slightly more well established Postgres-XL.
There is definitely a different approach being taken: Postgres-XL has a supervisor/load balancer, whereas with pg_shard it seems every node is capable of doing all actions.
Hey, Jason from Citus Data here (one of pg_shard's authors).
Yes: if you have high availability needs, PostgreSQL's streaming replication feature can mitigate this failure risk by providing a hot standby for your master; otherwise, regular backups of the master can suffice (pg_shard's metadata is stored in regular PostgreSQL tables).
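If I recall the layout correctly, that metadata lives in ordinary tables under a pgs_distribution_metadata schema on the master, so a normal pg_dump or base backup picks it up; you can inspect it with plain queries like:

    -- shard definitions and where each replica is placed
    SELECT * FROM pgs_distribution_metadata.shard;
    SELECT * FROM pgs_distribution_metadata.shard_placement;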
We have something similar in house: it is basically a proxy that runs on our app server and that our app connects to. It parses incoming queries and routes them to the correct schema/server based on the WHERE clause (it also supports INSERT). We would like to open source it, but right now it is specific to our needs (e.g. a hard-coded hash function).
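To illustrate the general idea only (not our actual proxy, whose hash function is hard coded), the routing decision amounts to hashing the key pulled out of the WHERE clause and mapping it to a schema, e.g. with 8 schemas named shard_0 through shard_7:

    -- hypothetical routing rule: hash the customer id from the WHERE clause
    SELECT 'shard_' || (abs(hashtext('42')) % 8) AS target_schema;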
Comparing this a bit with Postgres-XL, how mindful of data locality do I have to be when querying? Looks like this drops right in for existing apps, but I'd be concerned about long-term performance if I didn't tailor my app code.
You don't need to worry much about data locality with pg_shard. It's designed to push the computation to the data, and as such abstracts away most data locality concerns. The only exceptions are complex analytic workloads such as distributed joins; for those, CitusDB offers an easy upgrade path.
Another difference with Postgres-XL is that pg_shard is an extension to PostgreSQL, rather than a separate product with different components. As such, you can just compile and load it into standard Postgres, and you're ready to go.
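For reference, getting it running on the master is roughly the following; the worker list format is from memory, so check the README for the exact steps:

    -- postgresql.conf:      shared_preload_libraries = 'pg_shard'
    -- pg_worker_list.conf:  one "hostname port" line per worker node
    CREATE EXTENSION pg_shard;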
Nice work. But I wonder how they handle table alterations; I couldn't see them mentioned in the docs. Is it possible at all? If it is, since pg_shard doesn't support transactions, what happens if an alteration fails?
Table alterations are not supported: if we're missing this in our docs, I'll be sure to make it more explicit. They're certainly on the list of things we'd like to support.
At the moment, our customers who do need table alterations accomplish them by using a script that propagates such changes to all worker nodes.
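As a rough sketch of what such a script does (the events table and the tablename_shardid naming are assumptions on my part, so verify the actual shard names on your workers first): apply the change to the master's table and then to every shard placement.

    -- on the master, so the planner sees the new column
    ALTER TABLE events ADD COLUMN note text;
    -- on each worker, once per shard placement of that table
    ALTER TABLE events_10001 ADD COLUMN note text;
    ALTER TABLE events_10002 ADD COLUMN note text;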
Wondering if any Citus Data folks can speak to whether this is being used for production workloads anywhere yet. If so, could you speak to the size/throughput of those deployments?
We have a sharding mechanism in our PHP framework, and in Node as well, which can actually split shards that become too "hot" as determined by you. The whole system stays online during splitting, and only a small part of the system goes offline for one second before the final switchover. There's no need to pre-shard in the beginning; it splits according to usage later, into an unlimited number of shards.
One of the issues I can see already is being able to support existing applications, especially ones with transaction-heavy workflows. I have the same issue with Postgres-XC, which supports transactions but not savepoints.
But this looks like a completely different use case for Postgres, as a sort of pseudo-NoSQL type of DB.