
Transactions do not necessarily get processed serially. For example, if there were two concurrent requests and request #1 had just completed the first UPDATE when request #2 jumps in and performs the first two SELECTs, then request #2 will incorrectly update the balance for card #2. Different database systems offer different ways to serialize transactions, usually at a cost in performance and complexity.
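The interleaving above can be simulated in plain Python, with a dict standing in for the cards table. The schema and amounts are assumptions (the post's actual statements aren't quoted here); assume request #1 is a 100-unit transfer from card #1 to card #2, and request #2 is a concurrent 25-unit credit to card #2 computed from its own SELECTs:

```python
# Dict standing in for the cards table; amounts are illustrative.
balances = {"card1": 100, "card2": 0}

# Request #1 performs its first UPDATE (the debit)...
balances["card1"] -= 100                 # card1: 100 -> 0

# ...request #2 jumps in and performs its two SELECTs:
r2_card1 = balances["card1"]             # sees 0 (after the debit)
r2_card2 = balances["card2"]             # sees 0 (before the credit)

# Request #1 resumes with its second UPDATE (the credit):
balances["card2"] += 100                 # card2: 0 -> 100

# Request #2 now writes card2's balance from its stale reads,
# silently discarding request #1's credit (a lost update):
balances["card2"] = r2_card2 + 25        # card2: 25, not 125
```

Request #2's write is based on a snapshot that never matched any committed state, which is exactly why the final balance is wrong without either request doing anything individually incorrect.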

Note that the post says:

> The only right way to do it is a pessimistic lock (FOR UPDATE clause).

This is not true. Banks deal with this problem all the time. You don't have to use the database engine as the serializer, despite what all the books tell you. My preference would be to explicitly serialize transactions rather than rely on database tricks - i.e. write accounting entries to ledgers and have a service that processes those entries on a single thread. For many scenarios this is more than good enough. If you needed lower latency, you could process everything in memory and use the database purely as a transaction log, replayed on restart for unprocessed entries. In either implementation you could later add optimizations to process independent entries on different threads.
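A minimal sketch of the single-writer approach described above, using a `queue.Queue` in place of a durable ledger table. All names are illustrative, and rejection handling is elided; the point is that only one thread ever mutates balances, so entries are serialized without any database locking:

```python
import queue
import threading

ledger = queue.Queue()   # stand-in for a durable ledger of accounting entries
balances = {}

def post_entry(account, delta):
    """Any request thread may call this; nobody touches balances directly."""
    ledger.put((account, delta))

def process_entries():
    """The serializer: the only code that reads and writes balances."""
    while True:
        item = ledger.get()
        if item is None:                      # shutdown sentinel
            break
        account, delta = item
        new_balance = balances.get(account, 0) + delta
        if new_balance < 0:
            continue                          # reject overdraft; a real system
                                              # would record the rejection too
        balances[account] = new_balance

worker = threading.Thread(target=process_entries)
worker.start()

# Concurrent requests just append entries; ordering is decided by the queue.
post_entry("A", 100)
post_entry("A", -60)
post_entry("A", -60)   # would overdraw: the serializer rejects it
ledger.put(None)
worker.join()
print(balances)        # {'A': 40}
```

Because the balance check and the write happen on the same thread, the check-then-update race from the article cannot occur, regardless of how many request threads are posting entries.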




If you make both updates, check that the balance is > 0, and roll back if it's < 0, all inside a transaction, doesn't that take concurrency out of the picture? If another transaction beats you to the punch, won't the balance check reflect that?


Nope - this can happen even with cards whose balance stays above zero.

Look at what happens if you start two concurrent transfers of half the money from A to B.





