Hacker News

Yes --- the current version of Calvin (in the Yale research group) does not have this limitation. We're actually not sure which paper you're talking about, but either way, it's not fundamental to the Calvin approach. In general, if a single server in a replica fails, the other servers within the replica that need data from the failed server can access that data from one of the replicas of the failed server. (We can't speak for FaunaDB, but like the current version of Calvin, it is unlikely they have this limitation.)
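The redirection described above can be sketched as follows. This is a hypothetical illustration, not Calvin's or FaunaDB's actual API: partitions are replicated across replica groups, so a live server can fetch a failed peer's partition from another healthy replica.

```python
# Hypothetical sketch: on single-server failure, serve a partition's
# data from another replica of that partition instead of taking the
# whole replica group down. All names are illustrative.

class Cluster:
    def __init__(self, replicas):
        # replicas: list of dicts mapping partition name -> data
        self.replicas = replicas
        self.failed = set()  # (replica_index, partition) pairs

    def fail(self, replica_index, partition):
        self.failed.add((replica_index, partition))

    def read(self, replica_index, partition):
        # Prefer the local copy; on failure, fall back to any healthy
        # replica holding the same partition.
        order = [replica_index] + [
            i for i in range(len(self.replicas)) if i != replica_index
        ]
        for i in order:
            if (i, partition) not in self.failed:
                return self.replicas[i][partition]
        raise RuntimeError("partition unavailable in all replicas")

cluster = Cluster([{"p0": "a", "p1": "b"}, {"p0": "a", "p1": "b"}])
cluster.fail(0, "p1")
assert cluster.read(0, "p1") == "b"  # served by the surviving replica
```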



My understanding was that the replica would go down in order to recover the failed server. This was a side effect of the way snapshots and command logging worked. You couldn't just restore the snapshot on the failed node because the multipartition commands would have to execute against the entire replica. Instead you would restore the snapshot on every node, and roll forward the entire replica.
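A minimal sketch of the recovery scheme described above, with all names and structures assumed for illustration: because a logged command may write to multiple partitions, a single node cannot replay the log in isolation; every node restores its snapshot and the whole replica rolls the log forward together.

```python
# Hypothetical sketch of snapshot + command-log recovery. The command
# log is replayed deterministically against the entire replica's state,
# since individual commands may span partitions.

def recover_replica(snapshot, command_log):
    # snapshot: partition -> {key: value}; command_log: ordered list of
    # commands, each possibly writing to several partitions.
    state = {p: dict(d) for p, d in snapshot.items()}
    for cmd in command_log:
        for partition, key, value in cmd["writes"]:
            state[partition][key] = value  # deterministic replay
    return state

snapshot = {"p0": {"x": 1}, "p1": {"y": 2}}
log = [
    {"writes": [("p0", "x", 5), ("p1", "y", 6)]},  # multipartition command
    {"writes": [("p1", "z", 7)]},
]
state = recover_replica(snapshot, log)
assert state == {"p0": {"x": 5}, "p1": {"y": 6, "z": 7}}
```

The multipartition command in the log is exactly why replaying on one node alone would leave the replica inconsistent: its writes to `p0` and `p1` must land together.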



