Hacker News

This post has a valid point. But the last line makes it clear why they care so much about it.

Yeah, table bloat and transaction ID wraparound are terrible, but easily avoidable if you follow a few simple guidelines. In my experience, the best way to avoid these issues is to set sensible vacuum settings and track long-running queries.

I do hate that some of the defaults in the Postgres configuration are too conservative for most workloads.




> "But making sure that PostgreSQL’s autovacuum is running as best as possible is difficult due to its complexity."

The problem, as the article states it, is that a "sensible" vacuum setting for one table is a terrible setting for another, depending on how large those tables are. On a 100-million-tuple table you'd be waiting 'til there were 20 million garbage tuples before taking action.
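That 20 million figure falls out of the trigger formula in the PostgreSQL docs, where the default scale factor is 0.2 (20% of the table). A quick sketch of the arithmetic:

```python
# Autovacuum vacuums a table once its dead tuples exceed:
#   autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
# (formula from the PostgreSQL docs; defaults are threshold=50, scale_factor=0.2)

def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead tuples required before autovacuum kicks in for this table."""
    return threshold + scale_factor * reltuples

# A 100-million-tuple table with stock settings:
print(autovacuum_trigger(100_000_000))  # 20000050.0 -> ~20 million dead tuples
```

Because the trigger scales linearly with table size, a default that feels aggressive on a small table is far too lazy on a huge one.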


You can set autovacuum reloptions on a per-table basis, if they differ that much for your use case.
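For example, a large append-heavy table can be given a much lower scale factor than the 0.2 default (the table name here is made up; the storage parameters are the real reloptions from the PostgreSQL docs):

```sql
-- Vacuum big_events after ~10,000 dead tuples rather than 20% of the table
ALTER TABLE big_events SET (
    autovacuum_vacuum_scale_factor = 0.0,
    autovacuum_vacuum_threshold    = 10000
);
```

Setting the scale factor to zero makes the trigger a fixed tuple count, which is usually what you want once a table is large enough that percentages stop making sense.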


> I do hate the some of the defaults in the Postgres configuration are too conservative for most workloads.

It is also black magic to tune them.


Paying an overhead cost of 53 bytes per row is also too expensive for MVCC in my opinion.


What last line? The literal last line is "We’ll cover more about what we can do in our next article."

Do you mean this one?

> At OtterTune, we see this problem often in our customers’ databases. One PostgreSQL RDS instance had a long-running query caused by stale statistics after bulk insertions. This query blocked the autovacuum from updating the statistics, resulting in more long-running queries. OtterTune’s automated health checks identified the problem, but the administrator still had to kill the query manually and run ANALYZE after bulk insertions. The good news is that the long query’s execution time went from 52 minutes to just 34 seconds.


It previously had this closing line, with links to their products (https://web.archive.org/web/20230426171217/https://ottertune...):

> A better approach is to use an AI-powered service automatically determine the best way to optimize PostgreSQL. This is what OtterTune does. We’ll cover more about what we can do in our next article. Or you can sign-up for a free trial and try it yourself.

That was removed after the article was posted to HN, at dang's suggestion - he posted about it elsewhere in these comments.



