
I’d really like it if they improved “alter table” to include dropping/renaming columns and constraints, even if it required rewriting the whole table.



There is nothing stopping you from doing it yourself. The SQLite FAQ[1] even has an entry on it.

Two things hold this back: 1. the code would not be much different whether it lives inside or outside SQLite; 2. the amount of additional test code would be humongous for an operation of that level of complexity.

So, just do it yourself. It's not that hard.

[1]: https://www.sqlite.org/faq.html#q11


A lot of table alterations can be done without rebuilding the table, though a table scan might be needed to validate constraints (ideally as an optional, separate statement). Doing this safely requires the RDBMS to support the functionality.


> There is nothing stopping you from doing it yourself.

Good database migration tools (e.g. Alembic) do this automatically.
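For example, Alembic's batch mode recreates the table behind the scenes on SQLite. A sketch of a migration fragment (table and column names are hypothetical; this only runs inside an Alembic environment):

```python
from alembic import op

def upgrade():
    # On SQLite, batch_alter_table performs the "create new table,
    # copy data, drop old, rename" dance automatically.
    with op.batch_alter_table("users") as batch_op:
        batch_op.drop_column("obsolete")
```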


Any idea on how to do this without blocking writes when SQLite is embedded in a server process?


Well, if you don't mind rewriting tables, just create a table with the new structure, copy the data from the old table into the new one, drop the old table, and rename the new table.


That’s not enough because it ignores all foreign key constraints.


Lock the db exclusively, disable foreign key checking, rebuild and replace the table, then re-enable foreign keys (the old table's constraints are gone after the drop and rename, so the new table must declare its own), and everything should be fine.
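A minimal sketch of that rebuild using Python's sqlite3 (the schema and the dropped column are hypothetical). Note that `PRAGMA foreign_keys` is a no-op inside a transaction, so it must be toggled before `BEGIN`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    PRAGMA foreign_keys = ON;
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT, obsolete TEXT);
    CREATE TABLE posts(id INTEGER PRIMARY KEY,
                       user_id INTEGER REFERENCES users(id));
    INSERT INTO users(name, obsolete) VALUES ('alice', 'x');
    INSERT INTO posts(user_id) VALUES (1);
""")

# Must be done outside a transaction; it is silently ignored inside one.
con.execute("PRAGMA foreign_keys = OFF")

con.executescript("""
    BEGIN EXCLUSIVE;
    -- New structure without the dropped column.
    CREATE TABLE users_new(id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO users_new(id, name) SELECT id, name FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
    COMMIT;
""")

con.execute("PRAGMA foreign_keys = ON")

# Verify nothing now points at a missing row.
print(con.execute("PRAGMA foreign_key_check").fetchall())   # []
print(con.execute("SELECT id, name FROM users").fetchall()) # [(1, 'alice')]
```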


One problem is that you then might need a special case for SQLite in migrations which can lead to different db schemas in tests and production


Use the same database for test and production. Nowadays, it is really easy to install PostgreSQL or MySQL locally or in your test environment. SQLite is great, but not for replacing PostgreSQL or MySQL during tests.


Any idea on how to do this without blocking writes when SQLite is embedded in a server process?


Create a new empty table, set up triggers on the old table to copy across any inserted/updated/deleted rows, backfill the remaining rows incrementally, and finally, inside one transaction, drop the old table and use ALTER TABLE ... RENAME to put the new one in its place.
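A sketch of the trigger part in Python's sqlite3 (names are hypothetical; the backfill is shown as a single statement here, but in practice it would run in batches):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT, obsolete TEXT);
    INSERT INTO users(name, obsolete) VALUES ('alice','x'), ('bob','y');

    -- New structure without the dropped column.
    CREATE TABLE users_new(id INTEGER PRIMARY KEY, name TEXT);

    -- Mirror ongoing writes into the new table.
    CREATE TRIGGER users_ins AFTER INSERT ON users BEGIN
        INSERT OR REPLACE INTO users_new(id, name) VALUES (new.id, new.name);
    END;
    CREATE TRIGGER users_upd AFTER UPDATE ON users BEGIN
        INSERT OR REPLACE INTO users_new(id, name) VALUES (new.id, new.name);
    END;
    CREATE TRIGGER users_del AFTER DELETE ON users BEGIN
        DELETE FROM users_new WHERE id = old.id;
    END;
""")

# A write arriving mid-migration is captured by the triggers.
con.execute("INSERT INTO users(name, obsolete) VALUES ('carol','z')")

# Backfill the pre-existing rows the triggers never saw.
con.execute("INSERT OR IGNORE INTO users_new(id, name) SELECT id, name FROM users")

# Swap: the triggers are dropped along with the old table.
con.executescript("""
    BEGIN EXCLUSIVE;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
    COMMIT;
""")

print(con.execute("SELECT name FROM users ORDER BY id").fetchall())
# [('alice',), ('bob',), ('carol',)]
```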


You're right, it should work!

When you wrote "backfill the remaining rows incrementally", did you mean having a background process or thread that reads a batch of rows from the old table (for example, a few thousand rows), inserts them into the new table, commits, and repeats until all rows are copied? That way, other writers would be blocked only for the duration of each incremental transaction, and would have an opportunity to lock the database for themselves between batches.

Do you have experience doing this on production servers?
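That loop might look like this in Python's sqlite3 (table names and the batch size are hypothetical; each batch is one short write transaction, so other writers can grab the lock in between):

```python
import sqlite3

BATCH = 2  # tiny for illustration; thousands in practice

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE old_table(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE new_table(id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO old_table(name) VALUES ('a'),('b'),('c'),('d'),('e');
""")

last_id = 0
while True:
    with con:  # one short write transaction per batch
        rows = con.execute(
            "SELECT id, name FROM old_table WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH),
        ).fetchall()
        if not rows:
            break
        # OR IGNORE skips rows already copied by the mirroring triggers.
        con.executemany(
            "INSERT OR IGNORE INTO new_table(id, name) VALUES (?, ?)", rows
        )
        last_id = rows[-1][0]

print(con.execute("SELECT count(*) FROM new_table").fetchone()[0])  # 5
```

Keying the batches on the primary key (rather than OFFSET) keeps each scan cheap and makes the loop safe to resume after an interruption.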



