Actually, @aphyr's analysis is wrong. In fact it's worse than he says: there is exactly a 0% chance of conflict between two transactions that each only write 20 random keys (as they do in this test). This is perhaps counterintuitive, but because there are no reads, any serialization order is possible and therefore there is no chance of conflict.
Equally wrong is his assumption that this has any bearing on the performance of FoundationDB. In fact FoundationDB will perform the same whether the conflict rate is high or low. This isn't to say that FoundationDB somehow cheats the laws of transaction conflicts, just that it has to do all the work in either case. There is no trick or cheat on this test--this same performance will hold on a variety of workloads with varying conflict rates as well as those including reads.
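For concreteness, here's roughly what a blind-write transaction like the ones in this test might look like with the Python bindings. This is a minimal sketch, not the actual benchmark code: the key prefix, value sizes, and API version are my own assumptions.

```python
import os
import fdb

fdb.api_version(300)  # assumption: pick the version matching your installed client
db = fdb.open()

@fdb.transactional
def blind_write_batch(tr):
    # 20 writes, zero reads: the read conflict range stays empty, so this
    # transaction can never be rejected at commit time, no matter what
    # other transactions touch the same keys concurrently.
    for _ in range(20):
        tr[b'bench/' + os.urandom(8)] = os.urandom(16)  # hypothetical key prefix

blind_write_batch(db)
```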
That's true: a high conflict rate would eat into your throughput budget because clients would be retrying.
Of course most real-world workloads are (hopefully!) nowhere near 90% conflicts. If you had a 90% transaction conflict rate you could expect FoundationDB performance to drop by about 30-60% due to retries (and, worse news, you would need to rethink your architecture).
because there are no reads, any serialization order is possible and therefore there is no chance of conflict
I don't understand how this is the case. If clients A and B try to write to the same key, it can be serialized as {A,B} or {B,A}, but in either case, there is some kind of conflict... no?
Nope. Not unless someone actually read that same key in their transaction. Then, with optimistic concurrency at least, you might get a conflict at commit time that basically says "Hey, someone changed that key you read in the meantime so your write might not be correct anymore".
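To make that concrete, here's a sketch of a read-modify-write in the Python bindings; the read is what creates the possibility of a commit-time conflict. The key name and counter encoding are just for illustration.

```python
import fdb

fdb.api_version(300)  # assumption: pick the version matching your installed client
db = fdb.open()

@fdb.transactional
def increment(tr, key):
    # The read below puts `key` into this transaction's read conflict range.
    # If another transaction writes `key` between our read version and our
    # commit, the commit fails with not_committed and fdb.transactional
    # retries this whole function with a fresh read version.
    current = tr[key]
    n = int(current) if current.present() else 0
    tr[key] = str(n + 1).encode()

increment(db, b'counter')  # key name is hypothetical
```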
Ah – I see what you mean now! If they are blind writes, not some kind of CAS operation (reading from and modifying the same key), then there is no conflict.
I thought you meant that clients A and B do their writes, and only if someone at a later time were to observe the value at that key, a conflict would be caused (Heisenberg-style).
Just to further clarify, the transactions (and therefore transaction conflict detection) are more flexible than simple CAS operations. If transaction A reads key K1, and then writes to key K2, it will be rejected as conflicting if transaction B writes to key K1 and is serialized as happening before transaction A is committed (even though no other transaction wrote to key K2).
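A rough way to act out that scenario with hand-built transactions in the Python bindings (key names K1/K2 are from the comment above; values and the printed error code are illustrative):

```python
import fdb

fdb.api_version(300)  # assumption: pick the version matching your installed client
db = fdb.open()

tr_a = db.create_transaction()
tr_b = db.create_transaction()

v = tr_a[b'K1']            # A reads K1; K1 enters A's read conflict range
v.present()                # force the read to complete before B commits
tr_a[b'K2'] = b'derived'   # A writes K2 based on what it read

tr_b[b'K1'] = b'changed'   # B blindly writes K1 (no reads, so B cannot conflict)
tr_b.commit().wait()       # B commits and is serialized before A

try:
    tr_a.commit().wait()   # rejected: B changed K1 after A's read version
except fdb.FDBError as e:
    print(e.code)          # 1020 = not_committed (transaction conflict)
```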
Every transaction in FoundationDB is submitted as a collection of writes/mutations, but also contains records of all keys read (and the consistent version at which they were read). In this test, the transactions all have empty read conflict ranges, and thus cannot conflict with other transactions (but we still have to check!).
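You can also see the conflict ranges directly: the bindings let you declare them by hand. As a sketch (again assuming the Python bindings; names are hypothetical), adding a read conflict key to an otherwise blind write makes it behave as if it had read that key:

```python
import fdb

fdb.api_version(300)  # assumption: pick the version matching your installed client
db = fdb.open()

@fdb.transactional
def write_as_if_read(tr, key, value):
    # A plain blind write carries an empty read conflict range. Declaring the
    # key as a read conflict by hand makes this transaction eligible to lose
    # a conflict check against a concurrent writer of `key`.
    tr.add_read_conflict_key(key)
    tr[key] = value

write_as_if_read(db, b'some_key', b'some_value')  # names are illustrative
```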