> re: deadlocks, these can be prevented as they are only possible in certain situations, and have mitigations (keep transactions short, never acquire the same locks in a different order, ...).
In principle yes, but this relies on human vigilance; as far as I know there's no automatic checker that can reliably tell you whether your queries can deadlock. Do you review every query before it gets run? And when you miss a case like acquiring locks in the wrong order, it can be weeks or months before it actually bites you.
> But I would still maintain that, even if you have a commit that times out, it does so in an atomic way. It takes at most your "statement timeout" (should be a few seconds probably), and then you can (in deterministic time) show the user an error message. This is still an improvement for the user experience over showing a "your order has been placed" message and then later cancelling it due to overconsumption of inventory.
One of the most fun ways I've seen an SQL system break: user navigates to a page, gets a timeout in their browser; 23 days later the database falls over.
(the page initiated a query for 2 years' worth of data, the database server chugged away through its indices for 23 days and then started trying to stream all the data back).
I agree that it's good to have that kind of fallback behaviour. In the system I currently work on we have something similar: if a complex process doesn't receive an event for over a second (most likely because the thing computing it broke, but it could also just be slow), a simple consumer downstream emits a cancel event (and passes everything else through otherwise), and we take that stream as canonical.

Having something like that by default is a good thing, and one of the things SQL databases do right is that they're a lot more request-response, whereas event sourcing systems can be a bit "shouting into the void". I'd like a system with better support for that kind of case. That said, I think in a lot of cases the SQL defaults aren't great: I don't think I've ever used a database that had a good default timeout setting, and the way SQL databases treat any validation failure as "drop the data on the floor" is rarely what you want in practice.
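The fallback consumer I described can be sketched roughly like this (heavily simplified, with made-up event shapes; the real thing sits on a message bus rather than an in-process queue, but the timeout-then-cancel logic is the same):

```python
import queue

def watchdog(events: queue.Queue, timeout: float = 1.0) -> list:
    """Pass events through unchanged; if none arrives within `timeout`
    seconds, emit a single cancel event and stop."""
    out = []
    while True:
        try:
            event = events.get(timeout=timeout)
        except queue.Empty:
            # Upstream went quiet (crashed, or just slow): emit a cancel
            # so downstream consumers see a definite outcome either way.
            out.append({"type": "cancel"})
            return out
        if event is None:  # explicit end-of-stream marker
            return out
        out.append(event)

q = queue.Queue()
q.put({"type": "order", "id": 1})
q.put({"type": "order", "id": 2})
# No further events arrive, so the watchdog times out and appends a cancel.
result = watchdog(q, timeout=0.1)
print(result)
```

The design choice that matters is taking the watchdog's output stream as canonical: downstream never has to distinguish "slow" from "dead", it just sees either the real events or a cancel.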