Clicking 'get started' merely leads to a 'please let me waste hours talking with your enterprise sales reps' form.
I should be able to evaluate and try a mature product on my own, and I would rather not waste time doing so if I don't even know what it costs.
Your expectations are totally reasonable. But in a startup you can't do everything at once, so there's no direct download yet. If you contact us, you'll talk directly to the developers and get a download link.
That said, Quobyte is in production for business-critical workloads.
Of course you're right, a startup can't do everything -- so why not just put up a download link rather than require people to talk to you to get one? Given that as a startup you're busy and have a lot to do, why not just post the link on your website?
How are the two related exactly? Is XtreemFS part of the core of the Quobyte product, or is it a dead end, with all new development happening in Quobyte?
Looking at the architecture, there are two major differences:
* XtreemFS has POSIX file system semantics and split-brain-safe quorum replication for file data. With Quobyte, we pushed that further and now have full fault tolerance for all parts of the system, working at high performance for both file and block workloads (Quobyte also does erasure coding). GlusterFS replication is not split-brain safe, and there are many failure modes that can corrupt your data.
* XtreemFS and Quobyte have metadata servers. This allows them to place data for each file individually, since the metadata server stores the location of each file's data. With Quobyte we pushed this quite far and have a policy engine that lets you configure placement (rough sketch below). When the policy is changed, the system can move file data transparently in the background. This way you can configure isolation, partitioning, and tiering. GlusterFS has a pretty static assignment of file data to devices.
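To make the placement point concrete, here is a rough Python sketch of the idea (hypothetical names and policy syntax, not Quobyte's actual API or config format): because the metadata server records where every file's data lives, a policy change can be turned into a background migration plan by comparing current placement against the new policy.

    # Hypothetical illustration only -- not Quobyte's actual policy engine.
    from dataclasses import dataclass

    @dataclass
    class Device:
        name: str
        tier: str        # e.g. "ssd" or "hdd"
        rack: str

    @dataclass
    class FileEntry:
        path: str
        devices: list    # current placement, tracked by the metadata server
        tag: str         # e.g. "hot" or "cold"

    def desired_tier(policy, f):
        # policy maps a file tag to a storage tier, e.g. {"hot": "ssd", "cold": "hdd"}
        return policy.get(f.tag, "hdd")

    def files_to_migrate(policy, files, devices):
        """Return (file, target_devices) pairs whose placement violates the policy."""
        moves = []
        for f in files:
            tier = desired_tier(policy, f)
            if any(d.tier != tier for d in f.devices):
                targets = [d for d in devices if d.tier == tier]
                moves.append((f, targets))
        return moves

    # Example: a file tagged "hot" but sitting on HDD shows up in the plan,
    # which a background mover could then execute transparently.
    ssd = Device("dev1", "ssd", "rack1")
    hdd = Device("dev2", "hdd", "rack1")
    f = FileEntry("/vol/a.dat", [hdd], "hot")
    print(files_to_migrate({"hot": "ssd", "cold": "hdd"}, [f], [ssd, hdd]))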
GlusterFS developer here. Sure, we have bugs just like anybody else. So do you. Nonetheless, just saying that GlusterFS replication "is not split-brain safe" is FUD. Neither is yours, not 100%. Likewise, "static assignment" is simply not true. We move files around to rebalance capacity, we have tiering, and users can explicitly place a file on a particular node (though the mechanisms for that are clunky and poorly documented). I've been fair and even complimentary toward XtreemFS in many blog posts and public appearances. I'd appreciate the same honesty and courtesy in return.
I am sorry that this came across that way. I did not intend to say bad things about GlusterFS, but rather to carve out where the technical differences lie in the big picture.
Also, I actually tried not to make any value judgments. I am using split-brain safety as a technical term, i.e. the P in CAP. My understanding is that GlusterFS does not have this in its system model, and you and the documentation seem to support this: "This prevents most cases of "split brain" which result from conflicting writes to different bricks."
Quobyte generally (and XtreemFS only for files) does quorum replication based on Paxos, where split brain is part of the system model. They are CP: data is not always available, but reads are always consistent as long as a quorum is there. Like Ceph.
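For what that CP behavior means in practice, here is a minimal quorum-read sketch in Python (an illustration of the general quorum idea, not Quobyte's or Ceph's code): an operation only succeeds when a majority of replicas respond, so during a partition the minority side becomes unavailable instead of returning possibly stale or conflicting data.

    def quorum(n_replicas):
        return n_replicas // 2 + 1

    def read_with_quorum(replicas):
        """replicas: list of (version, value), or None for unreachable replicas."""
        responses = [r for r in replicas if r is not None]
        if len(responses) < quorum(len(replicas)):
            raise RuntimeError("no quorum: not available, but never inconsistent")
        # return the value with the highest version seen by the quorum
        return max(responses, key=lambda r: r[0])[1]

    # 3 replicas, one partitioned away: still a quorum of 2, the read succeeds.
    print(read_with_quorum([(7, "new"), (7, "new"), None]))
    # 3 replicas, two unreachable: no quorum, the system chooses C over A.
    # read_with_quorum([(7, "new"), None, None])  -> raises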
I am sorry that I missed the progress on placement. It seems like I need to catch up on what happened after the volume types.
"Split brain safety" is not a commonly used term, and even if that weren't the case I'd say it's not a term that should be thrown around lightly. Also, using Paxos or Raft doesn't guarantee split-brain safety, as aphyr has proven over and over again with Jepsen. So what we have is two systems that take different approaches to quorum and split brain and all that. It seems a bit disingenuous to throw stones at the older open-source project while ignoring the potential for the exact same problems in the newer proprietary one.
FWIW, I do think the current Gluster approach to replication is not sufficiently resistant to split-brain in the all-important edge cases. That's why I've been working on a new approach, much more like Ceph and many other systems - though few of them use Paxos in the I/O path. That's wasteful. Other methods such as chain or splay replication are sufficient, with better performance, so they're more common.
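For contrast, here is a deliberately simplified sketch of chain replication in Python (hypothetical, ignoring failure handling): writes flow from head to tail and are acknowledged only by the tail, reads are served from the tail, so every acknowledged write is already on all replicas and no per-I/O consensus round is needed. Consensus is typically used only out of band, e.g. to reconfigure the chain when a node fails.

    class ChainNode:
        def __init__(self, name, successor=None):
            self.name = name
            self.successor = successor
            self.store = {}

        def write(self, key, value):
            self.store[key] = value
            if self.successor:                     # forward down the chain
                return self.successor.write(key, value)
            return "ack from tail " + self.name    # tail acknowledges to the client

        def read(self, key):
            return self.store[key]                 # reads are served by the tail

    tail = ChainNode("C")
    head = ChainNode("A", ChainNode("B", tail))
    print(head.write("block42", b"data"))          # ack only after all replicas have it
    print(tail.read("block42"))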
As far as I'm aware, nobody wants to pay the price for that. Much like with high-assurance systems in general: demos, prototypes, and so on are available, but see no uptake due to the tradeoffs involved. Or maybe I.P. issues, too.