I'm curious what drove the decision to move to an external store (and a multi-node HA config at that) now, rather than a local Go KV library like Badger or Pebble?
Given that the goals seem to be improving performance over serializing a set of maps to disk as JSON on every change, and keeping complexity down for fast and simple testing, a KV library would seem to accomplish both with less effort: no dependence on an external service, and the DB could grow beyond memory if needed. Do you envision going to 2+ control processes that soon?
Was any consideration given to running the KV store inside the control processes themselves (either by embedding something like etcd, or by wiring up a Raft library plus a KV store to reinvent that wheel), since you're replicating the entire DB into the client anyway?
Meanwhile I'm working with application-sharded PG clusters with in-client caches kept coherent via Redis pub/sub, so who am I to question the complexity of this setup haha.