
The size limit makes a lot of sense when you look at the transaction model. When you write to FDB, none of the writes are even sent to the server until you attempt to commit the transaction, so this limit is actually a "maximum request size limit" on the final commit request.
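
Roughly, with the Python bindings (just a sketch; write_batch and items are illustrative names):

    import fdb

    fdb.api_version(630)
    db = fdb.open()

    @fdb.transactional
    def write_batch(tr, items):
        # Each assignment only buffers the mutation in the client library;
        # nothing is sent to the cluster yet.
        for key, value in items:
            tr[key] = value
        # When this returns, the decorator commits, and the whole buffered
        # write set goes out as a single commit request -- the request the
        # ~10 MB transaction size limit applies to.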

AIUI, the transaction size does not include values that are read, and for snapshot reads does not include the keys either, so this 10 MB limit only really constrains transactions that are writing a large volume of data.
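
In the Python bindings the difference is just which handle you read through (again a sketch; the key is made up):

    @fdb.transactional
    def read_example(tr):
        # A plain read adds the key to the read conflict range, so the key
        # (but not the value) counts toward the transaction size.
        a = tr[b'config/limits']
        # A snapshot read adds no read conflict range, so neither the key
        # nor the value counts.
        b = tr.snapshot[b'config/limits']
        return a, b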

For those transactions, the documentation suggests writing the data first (using many smaller transactions) and then only updating a pointer to that data in the final transaction.
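
Something like this, if I understand the recommendation right (the key layout and helper names here are illustrative, not from the docs):

    import uuid

    CHUNK = 10_000  # bytes per value; keep well under FDB's per-value size limit

    @fdb.transactional
    def write_chunk(tr, blob_key, index, data):
        tr[blob_key + b'/%08d' % index] = data

    @fdb.transactional
    def set_pointer(tr, pointer_key, blob_key):
        # The final transaction writes only a tiny pointer value.
        tr[pointer_key] = blob_key

    def store_large_value(db, pointer_key, payload):
        blob_key = b'blobs/' + uuid.uuid4().hex.encode()
        # Write the payload with many small transactions...
        for i in range(0, len(payload), CHUNK):
            write_chunk(db, blob_key, i // CHUNK, payload[i:i + CHUNK])
        # ...then flip the pointer in one small final transaction.
        set_pointer(db, pointer_key, blob_key)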

I'm currently building a simple database I'm calling AgentDB on top of FDB. It implements a message-passing system within the database, with guaranteed exactly-once semantics. It's designed to allow business logic to be more easily expressed without having to worry about all the failure modes typically present in a distributed system.

For this, I process messages in batches, with one transaction per batch. If a transaction fails, I retry with a smaller batch size, so in my case the answer would be "yes, to an extent". The user of AgentDB still has to ensure that processing a single message doesn't exceed the transaction size, but they don't have to worry that a batch of messages would exceed that.
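
Roughly like this (a sketch of the idea, not AgentDB's actual code; handle_message stands in for the user's logic):

    @fdb.transactional
    def process_batch(tr, messages):
        for msg in messages:
            handle_message(tr, msg)  # all effects land in this one transaction

    def drain(db, messages):
        batch = len(messages)
        while messages:
            try:
                process_batch(db, messages[:batch])
                messages = messages[batch:]
            except fdb.FDBError as e:
                # 2101 = transaction_too_large: halve the batch and retry.
                # If a single message is itself too large, batch == 1 still
                # fails here, which is the case the user has to avoid.
                if e.code == 2101 and batch > 1:
                    batch //= 2
                else:
                    raise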
