Rama should bundle a write-through cache! Another in-memory JVM cluster thingamabob (Apache Ignite) used to pitch write-through caching as its primary selling point: https://ignite.apache.org/use-cases/in-memory-cache.html#:~:....

Or maybe their pitch is that the streaming bits are so fast you can just await the downstream commit of a write to a depot and it'll be as fast as a normal SQL UPDATE.




Rama is extremely fast, as you can see for yourself by playing with our Mastodon instance.


It’s fast until it’s not. Making a post and then hitting reload and not seeing it can be very jarring for the user. Definitely something to think about.


What do you mean? Every post I do shows up instantly.

Reloading the page from scratch can be slow because Soapbox does a lot of work asynchronously on load (Soapbox is the open-source Mastodon interface we're using to serve the frontend). https://soapbox.pub/


I think the concern is whether this will still be true if Mastodon reaches Twitter scale.


Rama is scalable. So as your usage grows, you add resources to keep up. Scaling a Rama module is a trivial one-line command at the terminal.

Rama's built-in telemetry provides the information you need to know when it's time to scale.


Is there a way to guarantee reading your own writes from a client perspective?


Yes. Depot appends by default don't return success until colocated streaming topologies have completed processing the data. So this is one way to coordinate the frontend with changes on the backend.
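
To make that concrete, here's a minimal sketch of what read-your-own-writes looks like from a Java client. This isn't code from the docs: the module, depot, and PState names are made up (ProfileModule is the hypothetical module sketched a bit further down), and it uses the InProcessCluster test helper to stay self-contained.

    import com.rpl.rama.Depot;
    import com.rpl.rama.PState;
    import com.rpl.rama.Path;
    import com.rpl.rama.test.InProcessCluster;
    import com.rpl.rama.test.LaunchConfig;
    import java.util.Arrays;

    try (InProcessCluster ipc = InProcessCluster.create()) {
      String moduleName = ProfileModule.class.getName();
      ipc.launchModule(new ProfileModule(), new LaunchConfig(4, 2));

      Depot depot = ipc.clusterDepot(moduleName, "*profileDepot");
      PState profiles = ipc.clusterPState(moduleName, "$$profiles");

      // By default this call doesn't return until colocated stream
      // topologies have finished processing the appended record...
      depot.append(Arrays.asList("alice", "Alice A."));

      // ...so this query is guaranteed to observe the write.
      System.out.println(profiles.selectOne(Path.key("alice")));
    }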

Within an ETL, when the computations you do on PStates are colocated with them, you always read your own writes.
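
And here's what that hypothetical ProfileModule could look like, as a sketch of the colocation point in Rama's Java dataflow API (names invented; the depot is partitioned by user id, so the PState write and read below happen on the same task):

    import com.rpl.rama.*;
    import com.rpl.rama.module.StreamTopology;
    import com.rpl.rama.ops.Ops;

    public class ProfileModule implements RamaModule {
      @Override
      public void define(Setup setup, Topologies topologies) {
        // Partition the depot by the first tuple element (the user id).
        setup.declareDepot("*profileDepot", Depot.hashBy(Ops.FIRST));

        StreamTopology s = topologies.stream("profiles");
        s.pstate("$$profiles", PState.mapSchema(String.class, String.class));

        s.source("*profileDepot").out("*tuple")
         .each(Ops.EXPAND, "*tuple").out("*userId", "*displayName")
         // Write to the colocated PState partition...
         .localTransform("$$profiles",
                         Path.key("*userId").termVal("*displayName"))
         // ...and a subsequent read on the same partition is
         // guaranteed to see that write.
         .localSelect("$$profiles", Path.key("*userId")).out("*readBack");
      }
    }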


It makes sense, but wouldn't the write be slow, especially when you have many streaming pipelines?


That's part of designing Rama applications. Acking is only coordinated with colocated stream topologies, so stream topologies consuming that depot from another module don't add any latency.

Internally Rama does a lot of dynamic auto-batching for both depot appends and stream ETLs to amortize the cost of things like replication. So additional colocated stream topologies don't necessarily add much cost (though that depends on how complex the topology is, of course).
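
Also worth noting: the ack level is tunable per append, so a client that doesn't need to wait on stream processing can opt out of it. A sketch, assuming the AckLevel options described in the depot docs (depot and data as in the client example above):

    import com.rpl.rama.AckLevel;

    // Default: returns after replication and after colocated stream
    // topologies have processed the record.
    depot.append(data);

    // Returns once the append itself is replicated, without waiting
    // on colocated stream topologies.
    depot.append(data, AckLevel.APPEND_ACK);

    // Fire and forget: returns without waiting for anything.
    depot.append(data, AckLevel.NONE);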



