First Contact with SQLite (brandur.org)
105 points by ingve 60 days ago | hide | past | favorite | 92 comments



> On off days, I sometimes wonder if I’m bought into some narratives too strongly. Like, is Postgres really the world’s best database? Experiences like this certainly cement my conviction. Yes, it is.

I find these 'different therefore wrong' takes to be immature.

Yes, SQLite is idiosyncratic in comparison to other relational database engines. There are reasons behind those idiosyncrasies: SQLite is designed for different use cases than those other engines, and therefore makes different design decisions.

Ultimately, all computer programs are solutions to problems, and the approach to solving a problem depends on the nature of the problem. A list of grievances and a Boolean judgment is useless without stating the problem that the author is trying to solve.


The wacky approach to column types came from SQLite's origins of being closely integrated with Tcl. Knowing that doesn't somehow mean it was a good default worth carrying on for decades.


Firstly, while it's the default, it's not mandatory. As the author discovered, you can use STRICT to type the columns and have those types enforced.

The reason it remains unaltered by default is that one of the goals (and accomplishments) of SQLite is that the on-disk data file is completely backwards compatible and cross-platform. This is a very important feature in some situations, and not one to be lightly tossed aside because some old default or system is not in vogue anymore.
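The difference is easy to see from Python's bundled sqlite3 module. A minimal sketch (table names are made up; the second half assumes a SQLite 3.37+ library, where STRICT tables were introduced):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Default tables: declared types are only "affinities", so anything goes anywhere.
conn.execute("CREATE TABLE lax (n INTEGER)")
conn.execute("INSERT INTO lax VALUES ('not a number')")  # accepted silently
print(conn.execute("SELECT typeof(n) FROM lax").fetchone()[0])  # prints: text

# STRICT tables (SQLite 3.37+) actually enforce the declared types.
if sqlite3.sqlite_version_info >= (3, 37, 0):
    conn.execute("CREATE TABLE strict_t (n INTEGER) STRICT")
    try:
        conn.execute("INSERT INTO strict_t VALUES ('not a number')")
    except sqlite3.IntegrityError as exc:
        print(exc)  # cannot store TEXT value in INTEGER column
```

Note that the default table doesn't reject the bad value or coerce it; the string is simply stored as-is, which is exactly the behavior the article tripped over.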


Put another way, "default to what's best for all users collectively" is not strictly equivalent to "default to what's most intuitive to unfamiliar users".

The latter is the luxury of end-user software unburdened by decades of legacy compatibility obligations.


STRICT only works for simple types though, as the article noted, so you can't do

    CREATE TABLE mytable (
      id INTEGER PRIMARY KEY, 
      created DATETIME, 
      mything JSON
    ) STRICT;


That's the point of STRICT, isn't it? Two of the three types in your SQL statement aren't valid SQLite data types.


The point is that they are types supported by most other sql databases and supported as input/output by sqlite functions, but you can't validate that stored data is the correct type.


A lot of people get really obsessive about the "world's best" thing (take motorcycles, for example) while they themselves lack the technical skill to actually require that level of equipment. Looking for the "best" whatever all the time is a sign you don't know what you are doing.


I've felt this way for a long time, and I enjoy using lower quality things sometimes just to confirm it's worth the investment into something more advanced.

Like violin, I still play my $50 special I bought in 2005, because it sounds fine and I'm terrible so it's Good Enough.

On the other hand, I am building my second computer keyboard that's going to cost me $300+ because, well, I make a living with these things and I use it 60+ hours a week. I have wrist problems, so a better keyboard literally translates into more hours billed.


I would be happy if SQLite offered a native DATETIME type. That has been in the SQL spec since 1992, and is supported by every RDBMS that exists on the planet today. But nooo - it gets stored as TEXT and causes all kinds of issues for folks who don't know this.
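You can see this for yourself with a few lines of Python's sqlite3 (a sketch; the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "DATETIME" is accepted in the DDL, but it isn't a real SQLite type:
# the column just gets NUMERIC affinity, and an ISO 8601 string lands as TEXT.
conn.execute("CREATE TABLE events (created DATETIME)")
conn.execute("INSERT INTO events VALUES ('2024-01-01T12:00:00')")
print(conn.execute("SELECT typeof(created) FROM events").fetchone()[0])  # prints: text
```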


Advanced alter table operations do involve creating a new table, but the pattern for doing that is actually pretty robust: you start a new transaction, create the new table, copy the data across and then atomically swap the table names before you commit: https://www.sqlite.org/lang_altertable.html#otheralter

My favourite feature of my sqlite-utils CLI tool is the "transform" command which implements this pattern for you (also available as a Python library method): https://sqlite-utils.datasette.io/en/stable/cli.html#transfo...
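A minimal sketch of that pattern using Python's sqlite3, for a "drop column"-style change on a hypothetical items table (the full recipe in the linked docs also handles indexes, triggers, and foreign keys):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, junk TEXT)")
conn.execute("INSERT INTO items (name, junk) VALUES ('widget', 'x')")

# Build the new table under a temporary name, copy the rows, then swap,
# all inside one transaction so either everything happens or nothing does.
with conn:
    conn.execute("CREATE TABLE items_new (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO items_new (id, name) SELECT id, name FROM items")
    conn.execute("DROP TABLE items")
    conn.execute("ALTER TABLE items_new RENAME TO items")

print(conn.execute("SELECT name FROM items").fetchall())  # prints: [('widget',)]
```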


This does not work (easily) when other tables have "foreign key" constraints to the table. You need to manually recreate them.


Yeah, my solution doesn't automatically handle that (yet).

That's why it provides a --sql option - if it doesn't entirely handle your particular case you can instead get it to generate SQL for you without executing it, so you can make modifications you need before running it.


There are things that different databases are best for.

This seems very bluntly anti-SQLite, and thinks PostgreSQL is the best, so I'd be interested to see a guide for (two things I've used SQLite for in the last week):

* Using postgreSQL to store data in an iPhone app

* Making a small Python script which uses PostgreSQL, and then seeing how much work it is to send that to someone else so they can use your work and extend it (send the database, plus instructions for installing PostgreSQL and getting everything set up; make sure it works on Linux, Mac, and Windows).


The author is ignorant of SQLite's history and philosophy, as well as of the real use cases SQLite solves by embedding a lightweight SQL-ish database into applications.

Looking at his "about" section, he worked mostly with web APIs (Stripe, Heroku) and is a self-proclaimed fan of Postgres. Maybe when he acquires some experience working with embedded applications, without boatloads of resources available or a stable network stack, he might appreciate SQLite for what it is, since the comparison to Postgres is nonsensical from that viewpoint.


I didn't take this post to be overly negative, just a reflection of their experience in working with the tech.

For anyone coming from a heavyweight SQL background, it is indeed pretty easy to be caught off guard with SQLite. That doesn't reduce its value of course.


the clue to the difference is, perhaps, however, right there in the name....


None of the things mentioned in the article appear to have any significant impact on the resource usage and/or embeddability if they were implemented, so it's totally fair to draw comparisons here.

If you want a more apples-to-apples comparison, you could swap out Postgres for DuckDB (which aims to follow Postgres's SQL dialect), and all of the mentioned points should still hold.


> None of the things mentioned in the article appear to have any significant impact on the resource usage and/or embeddability if they were implemented, so it's totally fair to draw comparisons here.

Which is why I mention "history and philosophy" on the same line of my comment.

The `ALTER TABLE` documentation on SQLite [0] has a very clear reasoning on why it's implemented the way it is, and gives steps on how to reproduce more common usage of `ALTER TABLE` in other SQL engines.

To directly quote from their docs:

> Why ALTER TABLE is such a problem for SQLite

> Most SQL database engines store the schema already parsed into various system tables. On those database engines, ALTER TABLE merely has to make modifications to the corresponding system tables.

> SQLite is different in that it stores the schema in the sqlite_schema table as the original text of the CREATE statements that define the schema. Hence ALTER TABLE needs to revise the text of the CREATE statement. Doing so can be tricky for certain "creative" schema designs.

> The SQLite approach of storing the schema as text has advantages for an embedded relational database. For one, it means that the schema takes up less space in the database file. This is important since a common SQLite usage pattern is to have many small, separate database files instead of putting everything in one big global database file, which is the usual approach for client/server database engines. Since the schema is duplicated in each separate database file, it is important to keep the schema representation compact.

> Storing the schema as text rather than as parsed tables also gives flexibility to the implementation. Since the internal parse of the schema is regenerated each time the database is opened, the internal representation of the schema can change from one release to the next. This is important, as sometimes new features require enhancements to the internal schema representation. Changing the internal schema representation would be much more difficult if the schema representation was exposed in the database file. So, in other words, storing the schema as text helps maintain backwards compatibility, and helps ensure that older database files can be read and written by newer versions of SQLite.

> Storing the schema as text also makes the SQLite database file format easier to define, document, and understand. This helps make SQLite database files a recommended storage format for long-term archiving of data.

> The downside of storing the schema as text is that it can make the schema tricky to modify. And for that reason, the ALTER TABLE support in SQLite has traditionally lagged behind other SQL database engines that store their schemas as parsed system tables that are easier to modify.

[0] https://www.sqlite.org/lang_altertable.html


> embedding a lightweight SQL-ish database into applications

Current personal project is a game with crafting. I'm tired of games with limited inventory and no search / filters so I've been enjoying SQLite to manage that. The fact the db is a simple file is awesome to manage backups and you don't have to start a server to tinker with it: DB Browser, open file and you're done.

Also kudos to the team behind the Godot SQLite wrapper.


I feel like the recent hype around SQLite made people use it for a lot of stuff that is not really suitable for SQLite. It has too many caveats especially around data types.

I'm not saying it does not deserve the attention; it is a fantastic piece of software. But if I had the option to use PostgreSQL for something, I'd never get close to choosing SQLite over it. SQLite shines when you don't need or want something more feature-packed.


> if I had the option to use PostgreSQL for something I'd never ever get close to choosing SQLite over it

I had the option for a recent project. It's a niche forum-like application with around 2,000 users. Went with a monolithic design and vertical scaling (if needed), so SQLite was perfect. Every dynamic HTML page renders in under 1ms. Litestream for live-replication to a couple S3 buckets.

Running PostgreSQL for something like this would be a pain in the ass, and add a minimum 10ms of latency to every request. There would genuinely be more maintenance required for the PostgreSQL server/daemon than the application itself. It just makes no sense.


> There would genuinely be more maintenance required for the PostgreSQL server/daemon than the application itself. It just makes no sense.

I think this is the key point the original post was missing. When you’re working on a project with a DB, you should factor in the cost/overhead of the DB as well. Postgres is a wonderful DB. But it is also big and requires work to keep it running. For many projects, the extra overhead is well worth it.

(And if you’re using a “cloud” DB, the overhead is still there, but you’re explicitly paying for the privilege of making it someone else’s problem. )

But for smaller projects, or ones with less DB requirements, something small like SQLite is much more appropriate. And it nearly removes the DB overhead from the maintenance equation.


And the value of maintenance-free differs by environment. A lot of my projects are one-offs where I'm not getting paid to maintain a database system, so I don't want to maintain a database system.


Is that 10ms of network latency? I remember Postgres being pretty low latency if you run it on the same host, although the point about maintenance makes complete sense.


Yea ~10ms is my experience using localhost TCP, using a Unix socket would probably be faster.


the real shock is how many websites, powered by wordpress, use mysql instead!


> if I had the option to use PostgreSQL for something I'd never ever get close to choosing SQLite over it.

I'm using both Postgres and SQLite for active projects. Postgres is great for a multi-user blog I run where the DB is hosted, backed up, etc. The same site would run fine and slightly faster with local SQLite (which it used to) but having Postgres lets me use Render.com's built in management features, which are nice.

SQLite works great for anything app-like. I have a script that OCRs certain video game screenshots and saves the OCR data in a searchable database, for example. Postgres would be complete overkill for this and would add nothing but hassle. I don't want to bother with keeping a separate database going for that in my Postgres server. I just want a folder of files, and SQLite works perfectly for that. I could use Postgres but it would offer zero useful benefits and might be a bit slower (even with a local server) due to my sloppy code.

They are just different tools for different tasks, with some overlap.


I think it is time for a real version/generation 4 of SQLite which drops some backwards compatibility (e.g. file format) and has

    1) strict always enforced.
    2) full datatypes (ints, floats, datetime, jsonb)
    3) all "ALTER TABLE" functionality, even if it has to rewrite the table


Not to be contrary but... why?

Why make a new version that breaks compatibility with the old version?

Why make a new version just so it behaves like all the other database engines out there? Isn't having difference the point of having choices?


As I interpret the record format (https://www.sqlite.org/fileformat.html#record_format), new native datatypes (datetime, jsonb) cannot be added without breaking the file format.

When file format is no longer backwards compatible, it makes sense to make a new major (semantic) version, so you don't have a situation where version 3.50 can read a file but version 3.43 cannot.


Agreed. Most of that can be achieved with SQL (how hard is it to add STRICT?) or the code you use to interact with SQLite. I don't see how it's worth sacrificing backwards compatibility to achieve those things, which are mostly non-issues in practice.


I don't see what is so appealing about backwards compatibility. The database is an implementation detail and its specifics should not block anything.

Seeing several comments like yours is puzzling to me, because I can't understand what the devs preaching for it gain from SQLite's extremely conservative backwards compatibility policy.

For example, PostgreSQL has `pg_upgrade`. You run that after you upgrade its major version and it's a bulletproof and easy transition 99% of the time.


Like with most things, context matters.

If you have a "closed" system (such that "you" control the database, and all access to it) then upgrades are "easy" to do. You just upgrade the server and all clients.

If your system is more diverse then it's harder, because in that case server and clients have to be upgraded together. If I'm using multiple different programs (Accounting, Payroll, Access control, etc.), possibly from multiple vendors, then coordinating everything to happen at the same time can be impossible. If the server is not backwards compatible with old clients then you can get stuck.

In the case of SQLite for example lots of systems rely on the stability of the file format. Breaking that would not be welcome.


Don't hate on the author.

He heard the hype (SQLite), decided to try it to see what it is about, found out it's not really for him, and wrote a brief blurb about it on his blog.


Worth noting that this is from Brandur's "atoms" series: https://brandur.org/atoms - "Multimedia particles in the style of a tweet"

So it's more a short set of notes (like a TIL) than a full-fledged blog post. Brandur's long-form writing has a different tone: https://brandur.org/articles


I find comparing SQLite with Postgres moot to begin with. I use SQLite when I don't want to run a database server out-of-band, or when I want to be able to copy an entire database by copying a single file. For that, it is unparalleled: easily the best in the world, by far.

I don't understand the comparison here at all.


You can reflect on your early experience with SQLite while simultaneously reflecting on why Postgres is a great database. If it wasn't that way, we'd all be running SQLite for our web services, but we're not.

All technologies, and especially databases, are best applied with thoughtful consideration of the context. I didn't get the sense the author was suggesting their SQLite use case would actually be better served by Postgres. Just the same as a use case best served by Postgres is not suitable for SQLite.


Sure. I can also reflect on my early experience with a Ferrari Enzo while I simultaneously reflect on how much I love my horse, but it should be clear that that's just irrelevant musing, rather than a direct comparison.

> I didn't get the sense the author was suggesting their SQLite use case would actually be better served by Postgres.

I got exactly that sense, from "I tried SQLite, it lacked these things. Postgres really is the best database".


Your example underscores why it doesn't read that way for me. A Ferrari Enzo and a horse are such different transportation methods that the two barely overlap in use case. A horse needs no gasoline, can work off road, and is much quieter. The Ferrari can travel much faster and farther in a go, doesn't get spooked by loud noises, and doesn't require as much daily maintenance.

I wouldn't bring a Ferrari Enzo to do a horse's job, nor a horse to do a Ferrari Enzo's job.

The same is even more true for Postgres and SQLite. Postgres is a poor fit for an on-device database for a mobile app or embedded device, and SQLite is a poor fit to power a high traffic horizontally scaled web service.


> On off days, I sometimes wonder if I’m bought into some narratives too strongly. Like, is Postgres really the world’s best database? Experiences like this certainly cement my conviction. Yes, it is.

I really don't see how you can read this as anything other than "SQLite is worse than Postgres". He basically says "sometimes I doubt if Postgres is the best database, but then I use SQLite, and it really is".


Take from it what you want, but he did not say that SQLite is worse, that's an inference. He just said Postgres is the best database in the world, and that's quite valid from the perspective of features, correctness, and performance.

SQLite is a tradeoff: It's very small and to make it that small, some traditional database aspects need to be discarded. That small size makes it useful when size is a factor.


> If it wasn't that way, we'd all be running SQLite for our web services, but we're not.

Noting that every interactive site run under the Hwaci[^1] umbrella does, e.g. sqlite's own forum and source control system.

[1]: The company behind sqlite.


I want:

1. Strictness by default and with no escape hatches;

2. Proper data types like all dates / times / datetimes / JSON etc.

3. In-process / embedded database. Almost all projects I worked on over the course of a 22-year career did not need a separate node / VM / pod for a database.

PostgreSQL / MySQL and SQLite are not some ideal polar opposites. There is potential for a lot of cross-pollination of features and the ability to achieve the perfect DB.

There. Now you understand the comparison.


Turso's SQLite fork libSQL[1] has an extension/improvement that adds the ability to alter columns and drop constraints (albeit not via a DROP CONSTRAINT clause). I'm not affiliated, but have been using libSQL recently and am finding it to be a very pleasant experience.

Although conceptually I agree that SQLite's limited type system is frustrating, if your usecase allows, an ORM might help with not having to think about it or touch it directly.

[1] https://github.com/tursodatabase/libsql/tree/main


My mind was blown when I realized I can write a bash command that pipes a CSV file into SQLite, runs a SQL query on it, and spits out results (all completely in RAM, without the need for a temporary SQLite file on disk).
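The same idea works in-process too, for anyone who'd rather not shell out. A sketch with made-up data, using Python's stdlib csv and sqlite3 modules against an in-memory database:

```python
import csv
import io
import sqlite3

# Hypothetical CSV input, standing in for whatever you'd pipe in from a file.
raw = "name,qty\napples,3\npears,5\n"

conn = sqlite3.connect(":memory:")  # all in RAM, no file on disk
conn.execute("CREATE TABLE t (name TEXT, qty INTEGER)")
rows = csv.reader(io.StringIO(raw))
next(rows)  # skip the header row
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
print(conn.execute("SELECT sum(qty) FROM t").fetchone()[0])  # prints: 8
```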


You need to look into DuckDB.


He should read the docs and not try to transfer his previous knowledge, especially if it doesn't really fit. He would have an easier time and be less frustrated.


I wish SQLite would officially fork itself and:

- Make breaking changes as needed (it was first released in 2000 and has fantastic backwards compatibility, but hardware & OSes have radically changed over the last 24 years, as have the use cases).

- put more focus on client/server use cases

- make things more 'strict' (types, checks, etc)

Note: I say this with tremendous love for SQLite. There are just so many attempts by companies & projects to morph SQLite into what it's not designed for that you might as well embrace these use cases and make an official fork to support them.


I guess I don't really understand the point of expecting distinct solutions to all converge toward the same general case. All of the criteria on your list are already fulfilled by Postgres, MySQL, etc. -- why dilute the optimality of SQLite for its specialized use cases just to target other use cases that are already served superbly with other tools?

This is like arguing how great bicycles would be if they had four wheels, enclosed cabins, and gasoline engines.


> All of the criteria on your list are already fulfilled by Postgres, MySQL, etc.

They are not; they introduce management of another node / VM / pod, which I'll boldly say at least 80% of all projects everywhere do not need.

I'd kick a puppy if that means we can get in-process / embedded PostgreSQL.

> why dilute the optimality of SQLite

What does that even mean? Such a strange wording, as if it's a competition or a fight.

> This is like arguing how great bicycles would be if they had four wheels, enclosed cabins, and gasoline engines.

No, that's akin to arguing that a bicycle would benefit from crash protection if it came with just 2-3kg extra weight.


> I'd kick a puppy if that means we can get in-process / embedded PostgreSQL.

Can you explain your thinking re a solution being both in-process/embedded and focusing on client/server use cases? On the surface, these seem contradictory to me.

> What does that even mean? Such a strange wording, as if it's a competition or a fight.

I don't know about a "competition or a fight" but forking a project to target contradictory use cases definitely involves trade-offs.


> Can you explain your thinking re a solution being both in-process/embedded and focusing on client/server use cases? On the surface, these seem contradictory to me.

What seems contradictory, not sure I understand?

In my consulting and contracting practice I have only ever had 3 projects that actually needed a big dedicated database. Everything else would have done just fine with an in-OS-process model like SQLite. But SQLite is too lax with data typing and I am not keen on 40% of the invoice for my customers to be "+300% extra data validation code because SQLite devs used and loved TCL". Sorry for the snark, but TCL influencing SQLite is a historical reality, if my memory hasn't betrayed me that is.

I'd love it if we had the same DB engine have an embedded and client-server variants. You start off with the embedded and if the project grows then you simply modify its config and don't have to change one line of code in your project (though obviously your platform team has to then provision it but that's a given).

Today this is sadly a fantasy and does not exist. I want it to exist.

And why PostgreSQL? Well, I like data strictness, and PG has a lot of desirable features like DDL transactions and enum types.


> What seems contradictory, not sure I understand?

In-process embedding implies making the DB engine part of the program itself, within a single runtime instance, in the same way as you'd import any other library. Client/server architecture is a situation in which one program is communicating with another one, running somewhere else, over some sort of messaging channel. On the surface, these seem to be mutually exclusive usage models.

> I'd love it if we had the same DB engine have an embedded and client-server variants.

It sounds like you want something that uses similar syntax and defaults as Postgres, but is used in an embedded fashion, a la SQLite. This is a reasonable idea, but it seems like you want to focus on embedded use cases, not client/server, and this sounds like something that would be best implemented as a third solution entirely, not a fork of either SQLite or Postgres.


> This is a reasonable idea, but it seems like you want to focus on embedded use cases, not client/server,

Not sure how I was unclear (sorry if I was), my take was basically "I want PostgreSQL[-like] engine that can work in embedded and client-server mode depending on project" really. I want more choice than we have right now, that is my wish.

> and this sounds like something that would be best implemented as a third solution entirely, not a fork of either SQLite or Postgres

I don't see why. Technically there are hurdles, sure, but there always are anyway -- I believe in the case of both SQLite and PostgreSQL it's either lack of resources or lack of motivation to go outside their niche. Whatever the case I am not judging them, it's their project and I am just a rando who wants to work less on their storage / validation layer.

But yeah, I don't see why must we get a 3rd player necessarily. You might still turn out to be correct, mind you, I am just saying that it's not necessarily the case that this DB engine (that will have both embedded and client-server modes) must be a separate project.

I love SQLite but I always end up having to write a lot of validation and at one point you do ask yourself whether your energy should not go somewhere else.

Time will tell, I suppose.


It will never happen. Please, move on.


>SQLite doesn’t have data types on columns. Data types (and there are only five) are on values only, so anything can go anywhere.

You can use CREATE TABLE STRICT[1] to get a typed table.

[1] https://www.sqlite.org/stricttables.html


The article mentions that too.


I think what the author misses is that SQLite's choices make a lot of sense in a world where working with the DB is super easy and you can move a lot of the complexity to the code around it. With traditional databases it used to be the case that you manage everything in the DB, because it's expensive to call it, and because you don't always know who's going to call it, but that's clearly not a typical scenario for SQLite.


> I think what the author misses is that SQLite's choices make a lot of sense in a world where working with the DB is super easy and you can move a lot of the complexity to the code around it.

One concrete example of that is sqlite's own source control system, the Fossil SCM. Within Fossil, sqlite does _lots_ of the heavy lifting, replacing tens of thousands of lines of C code[^1]. Richard Hipp (of sqlite fame) recently mused that sqlite takes on at least the following distinct database roles in that project:

- Document database (how SCM records are natively stored[^2]).

- Graph database (queries which extract the lineages of projects' artifacts from directed acyclic graphs[^3]).

- Key-value store for config data of arbitrary types (all in the same table).

The first two can be done with any SQL db, but the latter requires sqlite's particular flexibility. Never once (literally never once) in the development of fossil has that flexibility caused us (==its many contributors) any grief.

[^1]: as a fossil contributor since 2008, i can say with complete confidence that that is no exaggeration.

[^2]: <https://fossil-scm.org/home/doc/trunk/www/fossil-is-not-rela...>

[^3]: <https://core.tcl-lang.org/tcl/timeline?c=2024-06-30> is a good example


All of the issues are well documented. It would make much sense to make one's first contact with the documentation, to decide if a given piece of software is suitable for one's needs. SQLite is like a bicycle; PostgreSQL is like a freight train. Each has its use.


SQLite's idiosyncrasies make more sense when you realize:

1. It started as a way for the author to access databases from TCL, in which everything is a string. Sounds kind of mad now, but that was the kind of thing you did back in the 90's.

2. SQLite is fanatically backwards compatible. That means that once you get a system that works, it will continue to work through all newer versions of SQLite. But that also means that you can't suddenly decide to enforce foreign keys or column types by default, because it will break loads of systems that worked just fine before.


Regarding 2, IIRC they guarantee that backward compatibility to their supporters. So it's very unlikely the desires expressed elsewhere will be fulfilled by the SQLite team.

The code is Public Domain, though. Anyone should feel free to scratch their itch.


As long as one can convert database dumps between the two without too much trouble, this tired "debate" seems silly.

Perhaps the debate might be over how much trouble is too much. AFAICT automated "find and replace" can cure just about every "incompatibility".


Funny! Next do a "First contact with c++ as a Python developer without reading a manual"


Do you think reading the manual is supposed to make those problems disappear, or make them less frustrating for some reason? This kind of personal attack, just because their opinion differs from yours, is low effort, especially given that they clearly mentioned "official recommendation" in multiple places, implying they did go through the manual.


to be fair, it sounds like the author only referred to the manual after making uninformed decisions.


Do you read, in an exhaustive manner, the manual of each thing you ever buy? Or each programming language and library you ever used?

And if you decide you want to "win" here and thus stubbornly reply "yes" then I'd say "you are the vanishing minority".

Approaching a piece of tech that's yet unknown to you with expectations is natural. Sometimes with tragic results, but still completely natural for us humans.


This is a tangent and I'm not trying to win an argument, but I read the manuals for databases I use, along with many other bits of documentation like language specs. If anyone reading this is thinking of doing so then I highly recommend it.

My approach is to firstly skim the index and look at anything interesting, secondly just use the docs as a reference while I learn the technology, and thirdly come back and read them cover-to-cover (skimming over the boring/irrelevant bits).


Is there really an SQLite hype going on? I am getting whiplash from this pivot: from extreme complexity (Microservices! Kubernetes!) to extreme simplicity (Server-side rendering! Zero build! EC2!)


"First Contact" with anything new is usually jarring. It'd be much more useful to hear a review after using the thing every day for a year.


https://www.sqlite.org/quirks.html does list a few of these


Fact of the matter is that SQLite shouldn’t be used in a non embedded context. There’s a reason people use Postgres.

There are some neat things you can do with it like using HTTP range queries to query it directly from object store or Litestream, but my point stands.


Fact of the matter is SQLite is appropriate and better for a lot of low-write scenarios on a single machine.


It's just your opinion, not a fact.


My decision on SQLite vs Postgres is usually something like fixed and variable costs. For each little thing, Postgres is typically nicer to use. But the fixed cost of standing up and maintaining a dedicated database is high enough that I rarely need to go that route. (Hosted services just make that fixed cost into explicit $) And now with litestream, the server-side story has gotten nicer for SQLite, so I expect the percentage of time I need pg to continue to drop.


I use SQLite with my flask app - it has 5/6 users connected at once max - and it's excellent. I've occasionally run into issues with alembic when running migrations. I had intended to move to something "beefier" but I see no point unless I really have to scale up.

It's straightforward, fast, and easy for a casual DB programmer. Everything is wrapped in SQLAlchemy anyway, so all the complicated logic is in my Python code.

Which have been around ALTER TABLE and constraints.
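For context, SQLite's ALTER TABLE supports only renaming tables/columns, adding columns, and (in newer versions) dropping columns; anything else, like adding a constraint, means recreating the table, which is what Alembic's batch mode does under the hood. A rough sketch of that dance in plain sqlite3 (table and constraint invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT);
    INSERT INTO users (email) VALUES ('a@example.com');
""")

# SQLite's ALTER TABLE can't change a column's type or constraints,
# so the migration recreates the table and copies the rows over.
conn.executescript("""
    BEGIN;
    CREATE TABLE users_new (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE  -- the constraint we wanted to add
    );
    INSERT INTO users_new SELECT id, email FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
    COMMIT;
""")

print(conn.execute("SELECT email FROM users").fetchall())
# [('a@example.com',)]
```

Alembic's `batch_alter_table` wraps this same copy-and-swap so migrations don't have to spell it out.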


Hi. Interesting. Is there any special pattern to follow to make flask/sqlite work with multiple users? What about the deployment? Thanks.
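For what it's worth, the pattern most writeups point to is enabling WAL journal mode plus a busy timeout, so a handful of concurrent readers don't block on the single writer. A minimal sketch in plain sqlite3 (the file path is a stand-in):

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "app.db")  # stand-in for the real db file

conn = sqlite3.connect(db_path, timeout=5.0)  # wait up to 5s if the db is locked
conn.execute("PRAGMA journal_mode=WAL")       # readers no longer block the writer
conn.execute("PRAGMA synchronous=NORMAL")     # common pairing with WAL

print(conn.execute("PRAGMA journal_mode").fetchone()[0])
# wal
```

WAL mode persists in the database file, so it only needs to be set once; Litestream also requires it, since it replicates by shipping the WAL.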


I needed to use SQLite from php to develop a WordPress plugin. (Yeah, yeah, I have heard most if not all the jokes about that particular stack.)

Here are some notes about my experience, offered to anybody who might be able to use them. https://www.plumislandmedia.net/reference/sqlite3-in-php-som...


SQLite makes its own design decisions to keep itself embeddable and low-footprint. There are many choices on the server but few on embedded devices like mobile phones. I have faced many problems using SQLite in a cross-platform mobile app, such as corruption and deadlock. However, there are almost no proven alternatives to SQLite.


How do you typically deal with "missing features" from SQLite (e.g. stored procedures)? Do you use extensions like https://github.com/nalgeon/sqlean in production?


Don't Application-Defined SQL functions fulfill the role of stored procedures?

https://www.sqlite.org/appfunc.html
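Most language bindings expose the same hook; Python's stdlib `sqlite3`, for example, can register a scalar function without any C (the `slugify` function here is made up):

```python
import sqlite3

def slugify(title):
    # Tiny example UDF: lowercase and replace spaces with dashes.
    return title.lower().replace(" ", "-")

conn = sqlite3.connect(":memory:")
# Register a scalar SQL function backed by Python: name, arg count, callable.
conn.create_function("slugify", 1, slugify)

print(conn.execute("SELECT slugify('Hello SQLite World')").fetchone()[0])
# hello-sqlite-world
```

They run in the application process, though, so unlike stored procedures they're only available to connections that register them.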


Yes, but you have to write them in the application's host language (C, in my case); I am more confident in SQL, and SQL fits better with the rest of my data stack.


Indeed. In practice, SQLite is the only embedded SQL OLTP DB. It doesn't have a rival. What we need is more competition in this space. I for one would love to have something with a feature set and strictness comparable to Postgres. A drop-in replacement would be extra nice.


"I completely misunderstand the purpose of SQLite, so here are some rants"


It’s interesting to read a lot of push back to the points here.

I recently built a product with the backend using SQLite as the data store and ran into all these issues and many more. It is frustrating. I use SQLAlchemy and Alembic. It seemed everywhere I turned, the docs said “it works this way in all databases, except SQLite where X isn’t supported or you have to do Y differently.”

I think with litestream and D1 and other web SQLite tech emerging, you see the sentiment: “if you don’t have Google-scale, you can easily serve using disk-backed SQLite, plus enjoy skipping RTT network latency to DB.” Then, when someone does that and has a bad time, the comments instead go: “SQLite is only for embedded data stores.”

Personally, if I had to do it again, I would stick to the most boring tech for the target stack: Postgres and Django’s ORM.


I'm frustrated by someone reaching the conclusion that there can ever be a "best database". I spent several hours today writing a bash script that jammed CSV files into sqlite3 and ran some queries on them. Using postgres for this would have been insane, and I can imagine writing the exact opposite of this article if I had tried to do so: "It stores everything all over the place! There's no easily configurable in-memory storage option! All the docs and packages expect you to be running it as a service! Running my script takes an entire ansible playbook just to set things up! Why the hell is there a user table already? This thing uses hundreds of megs of memory at rest sometimes!"

SQLite is not just for embedded data stores. However, it makes trade-offs to achieve ease of use and performance on certain workloads. If it tried to be postgres then it would be an inferior competitor; instead, it is an alternative that suits some use cases better and others worse. If you're trying to build a web app that can be deployed and backed up as two files, an executable and a db, then you'll probably want sqlite. If you're trying not to shoot yourself in the foot with "oh god why can't I do a full outer join and why are all the dates weird" then use postgres.

I imagine programmers would also find cockroachdb frustrating compared to postgres, if they didn't benefit from anything it offered and used it anyway.
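That CSV-into-sqlite3 workflow, sketched in Python rather than bash (the data is inlined for illustration; the sqlite3 CLI's `.mode csv` + `.import` does the same from the shell):

```python
import csv
import io
import sqlite3

# Stand-in for a CSV file on disk.
csv_text = "name,amount\nalice,3\nbob,5\nalice,4\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (name TEXT, amount INTEGER)")

rows = csv.reader(io.StringIO(csv_text))
next(rows)  # skip the header row
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    ((name, int(amount)) for name, amount in rows),
)

for name, total in conn.execute(
    "SELECT name, SUM(amount) FROM sales GROUP BY name ORDER BY name"
):
    print(name, total)
# alice 7
# bob 5
```

No server, no service, no user table: the whole thing is one in-memory (or one-file) database that disappears when the script ends.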


>> I would stick to the most boring stack

Just use SQL. Very straightforward and easy to use, understand, and maintain.


I think this is the best advice. The further you are from your data store, the more you are relying on the use cases others have considered. Just because a library needs features that something like SQLite doesn’t provide doesn’t mean that you need that feature. But every extra abstraction in your stack carries this danger.

I also think that the closer you are to your DB, the more you appreciate which features you actually need. A developer should know how their data is ultimately stored. And if an ORM hides this from you, that’s a problem.

Note: an ORM doesn’t need to hide this and can be a rational way to manage DB storage. But it’s then up to the developer to manage that abstraction. I’ve done this before, where I used an ORM but still knew exactly what SQL was going to be generated. But for many, especially new devs, the ORM is a black box.


> Just use SQL. Very straightforward and easy to use, understand, and maintain.

In the end SQLAlchemy and Alembic generate and execute SQL. The weird behaviours are due to SQLite's idiosyncrasies.


I'm curious why you used SQLAlchemy and Alembic when (based on your description) Django's support for SQLite sounds a lot nicer? According to the docs, there are a handful of caveats with the Django ORM and SQLite, but nothing too bad it would seem.

https://docs.djangoproject.com/en/dev/ref/databases/#sqlite-...


Good question! I was (am) using FastAPI, not Django. Another decision that was easy to get started with, but I think is turning out to be more difficult to scale. Maybe.



