Hacker News

Wow, this is actually incredible. One of my biggest gripes with Postgres is going to be solved. Thank you for sending this over!



Thanks.

I forgot to mention that the test case had constant long-running transactions, each lasting 5 minutes, over a 4-hour period for each tested configuration.

This level of improvement was possible by adding a relatively simple mechanism because the costs are incredibly nonlinear once you think about them holistically, and consider how things change over time. The general idea behind bottom-up index deletion is that we let the workload figure out what cleanup is required on its own, in an incremental fashion.

Another interesting detail is that there is synergy with the deduplication stuff -- again, very nonlinear behavior. Kind of organic, even. Deduplication is a feature that I coauthored with Anastasia Lubennikova; it appeared in Postgres 13.


I am not very familiar with this topic, but I need to maintain a large and frequently updated DB that requires periodic VACUUM FULL with a full table lock. Does Postgres suffer from index bloat only, which your fix solves, or is there some other type of bloat affecting the general table data that will still exist after your improvement?


It's not possible to give you a simple answer, especially not without a lot more information. Perhaps you can test your workload with Postgres 14 beta 1, and report any issues that you encounter to one of the community mailing lists.



