Hacker News

I'm curious what you feel is specifically missing.



pg_upgrade is a bit manual at the moment. If the database could just be pointed to a data directory and update it automatically on startup, that would be great.
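For context on how manual it currently is, a typical run looks something like the sketch below. Paths and version numbers are illustrative (Debian-style layout assumed); the old cluster must be stopped first, and binaries for *both* versions must be installed:

```shell
# Stop the old cluster first -- pg_upgrade refuses to run against a live server.
/usr/lib/postgresql/15/bin/pg_ctl -D /var/lib/postgresql/15/data stop

# Initialize a fresh data directory with the NEW version's binaries.
/usr/lib/postgresql/16/bin/initdb -D /var/lib/postgresql/16/data

# Run the upgrade; note it needs the bin and data dirs of BOTH versions.
/usr/lib/postgresql/16/bin/pg_upgrade \
  --old-bindir=/usr/lib/postgresql/15/bin \
  --new-bindir=/usr/lib/postgresql/16/bin \
  --old-datadir=/var/lib/postgresql/15/data \
  --new-datadir=/var/lib/postgresql/16/data \
  --link   # optional: hard-link files instead of copying them
```

That's several steps, all offline, which is presumably what "a bit manual" means here.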


I agree. Why is this still manual? It could run pg_upgrade in the background.


It needs binaries for both old and new versions for some reason.


When you say "in the background", what do you mean?

Unless something has radically changed with this latest release, the PostgreSQL database needs to be offline while pg_upgrade is running.


Being able to simply switch from "postgres:15" to "postgres:16" in Docker, for example (I'm aware of pgautoupgrade, but it's external and I'm a bit iffy about using it).
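For reference, swapping in the pgautoupgrade image is meant to be close to that experience. A docker-compose sketch (the image name and tag here are my assumption; check the project's README for the current ones):

```yaml
# docker-compose.yml fragment: replacing the stock postgres image.
services:
  db:
    # was: image: postgres:15
    image: pgautoupgrade/pgautoupgrade:16-alpine  # runs pg_upgrade on the data dir at startup if needed
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: example
```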

What's more, even outside of Docker, running `pg_upgrade` requires both versions to be present (or having the older binaries handy). Honestly, having the previous version's logic for loading and processing the database seems like it would be a small hassle but would improve upgrading significantly...


It would be a major hassle to do so since it would mean keeping around major parts of the code for all old versions of PostgreSQL. Possible to do? Yes. A hassle for the dev team? Yes, a huge one. Worth it? Doubtful.


> It would be a major hassle to do so since it would mean keeping around major parts of the code for *all old versions of PostgreSQL.*

WTF?

It was only implied that it should be able to migrate between consecutive major versions, not all of them (which wouldn't make any sense).

I don't expect PostgreSQL to change so significantly between major versions that upgrading would be such a huge hassle…

TBH PostgreSQL seems to be the only tool/software with such a bonkers upgrade path between major versions…


Patches welcome since you obviously know more than me about the code base. And I obviously meant only major versions when I said all versions. PostgreSQL releases one new major version per year so that adds up quickly.

> I don't expect PostgreSQL to change so significantly between major versions that upgrading would be such a huge hassle…

PostgreSQL would need code to transform the old AST into the new AST, for example to support views and check constraints, so PostgreSQL would need to keep around a version of the AST for every major version plus the code to deparse it. It would also need to keep the code to read every version of the catalog tables (pg_dump partially does this, but only partially, since parts of pg_dump are implemented using server-side functions).

There are also likely a bunch more complications caused by the pluggable nature of PostgreSQL: e.g. custom plan nodes which might make the whole project a non-starter.

So this would at least mean basically a total rewrite of pg_dump, plus maintaining 5 to 10 versions of the AST. But likely much more work, since I have likely forgotten a bunch of stuff which needs to change. A huge project which increases the maintenance burden for a relatively small gain.

> TBH PostgreSQL seems to be the only tool/software with such a bonkers upgrade path between major versions…

This is caused mostly by PostgreSQL being heavily extensible unlike its competitors.


> A huge project which increases the maintenance burden for a relatively small gain.

No, it would be a huge gain for the people who run PostgreSQL. Every long-running PostgreSQL installation has to go through the process and/or potential downtime of figuring out a (potentially forced) upgrade every few years.

Instead of that, PG being able to transparently (or at least automatically) upgrade major versions as needed would remove that absolutely huge pain point.

Other databases have recognised this as being a major problem, then put the time and effort into solving it. The PostgreSQL project should too. :)


> I'm a bit iffy about using it

It's not perfect, as it doesn't (yet) recreate indexes or automatically run `vacuum analyze` afterwards. We're working on those though. :)
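For anyone doing those steps by hand in the meantime, the post-pg_upgrade maintenance looks roughly like this (standard PostgreSQL client tools, run against the new cluster; a sketch, not a complete runbook):

```shell
# pg_upgrade does not carry over planner statistics, so rebuild them
# incrementally across all databases:
vacuumdb --all --analyze-in-stages

# Rebuild indexes in every database (pg_upgrade may also emit its own
# script of objects that need rebuilding; run that if it exists):
reindexdb --all
```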

You do make backups of your database though, yeah?


Yes I do, but for simplicity I just stick with the same PG version and don't touch it once it's deployed on my local RPi :D


Heh Heh Heh. Yeah, that's how most people seem to do it, even on non-docker systems. ;)

Works well for a few years, as PG is super stable. But we tend to see people once they've run their systems longer than that and it's gotten to the point where their PG install is end of life (or some similar circumstance), so they're looking at their upgrade options.

The pgautoupgrade approach is one potential way to fix that, and (in theory) should mean that once they're using it they don't have to worry about major version upgrades ever again (in a positive way).



