I think you mean latency between client and server. Second, there's a small exponent (say, 1.3) in your simulation work (collision detection and Newtonian motion) that can saturate a CPU quickly.

Your end-to-end latency, from a user clicking something to seeing the result, is 2x latency + simulation time.
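
A rough back-of-the-envelope sketch of that budget (the one-way latency, the per-entity cost, and the 1.3 exponent are illustrative assumptions, not measurements):

    # Sketch: end-to-end response time = 2 * one-way latency + simulation time,
    # with a superlinear (~n^1.3) simulation step (collision detection,
    # Newtonian motion). All constants here are assumptions for illustration.
    ONE_WAY_LATENCY_MS = 25.0   # assumed client <-> server one-way latency
    COST_PER_ENTITY_MS = 1e-4   # assumed base per-entity simulation cost
    EXPONENT = 1.3              # assumed superlinear scaling of the sim step

    def response_time_ms(n_entities: int) -> float:
        simulation_ms = COST_PER_ENTITY_MS * n_entities ** EXPONENT
        return 2 * ONE_WAY_LATENCY_MS + simulation_ms

    for n in (10_000, 100_000, 1_000_000):
        print(f"{n:>9} entities -> {response_time_ms(n):9.1f} ms")

Even with modest constants, the n^1.3 term dominates the fixed network cost quickly as the entity count grows.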

While I like software implementations of hardware optimizations as much as the next guy, what kind of replication distribution do you expect? You're splitting your dataset in RAM across many machines, yet you have them replicating data from each other. How much of your dataset do you expect to be accessed frequently?

Each item can be replicated to many machines, but written by only one machine at a time. Galaxy works best when items that are replicated to many nodes are updated infrequently. This works very well for distributing data structures like B-trees, where the root is read by all nodes but rarely updated, while the leaves remain pretty much confined to a single node and are updated regularly.
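
A toy model of the single-writer replication scheme described above (the class and node names are hypothetical; this is not Galaxy's actual API):

    # Toy model of single-writer replication: an item may be cached on many
    # nodes, but only the current owner may write it, and a write invalidates
    # every other replica. Frequently-written items are therefore cheapest
    # when confined to a single node.
    class Item:
        def __init__(self, owner: str, value: str):
            self.owner = owner          # the one node allowed to write
            self.value = value
            self.replicas = {owner}     # nodes currently holding a copy

        def read(self, node: str) -> str:
            self.replicas.add(node)     # reading replicates the item locally
            return self.value

        def write(self, node: str, value: str) -> None:
            self.owner = node           # ownership moves to the writer (simplified)
            self.replicas = {node}      # all other replicas are invalidated;
            self.value = value          # this makes hot, widely-shared items expensive

    # B-tree analogy: the root is read (hence replicated) everywhere but rarely
    # written; a leaf is usually read and written by a single node.
    root = Item("node-1", "root keys")
    for reader in ("node-2", "node-3", "node-4"):
        root.read(reader)                        # cheap: read-mostly, widely cached
    leaf = Item("node-2", "leaf records")
    leaf.write("node-2", "updated leaf records") # cheap: no remote replicas to invalidate

The asymmetry is the point: reads grow the replica set cheaply, while a write to a widely-replicated item pays an invalidation cost proportional to how widely it was shared.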

This is all for latency reasons. Fault-tolerance is a different story.


Ok, that makes perfect sense for B-trees (as long as you have enough remaining memory per machine for a decent-sized cache). For a game's partitioned scene graph, though, I think the change rate would be rather painful.
