
It's not running the queries, just sanity-checking them against table structures and the like. It's actually quite nice and worth checking out.
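
(A minimal sketch of what that checking looks like in practice; the users table, its columns, and the pool here are hypothetical, and the macro needs DATABASE_URL pointing at a dev database at build time:)

    use sqlx::PgPool;

    // Hypothetical schema: users(id BIGINT PRIMARY KEY, name TEXT)
    async fn fetch_user(pool: &PgPool, user_id: i64) -> Result<(), sqlx::Error> {
        // sqlx::query! verifies this SQL, its parameter types, and the
        // output columns against the database schema while compiling.
        let user = sqlx::query!("SELECT id, name FROM users WHERE id = $1", user_id)
            .fetch_one(pool)
            .await?;
        println!("{}: {:?}", user.id, user.name);
        Ok(())
    }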

The real issue is that Diesel beats SQLx in performance benchmarks, at least last I checked. I'm hopeful the coming rewrite/upgrades to SQLx will narrow that gap.




Doesn't it need to run introspection queries to check the table structures? That's still significant overhead compared to code-only compilation, and it requires an additional component (the database) in your build stack, whose schema must stay in sync with other developers' copies and with whatever deployment target you intend to push the compiled program to. It also means that if you build against a local copy of the DB and deploy to a remote with its own DB, you have another source of disparity between compile time and runtime (I assume such an error would be caught quickly, so maybe it's less about additional "risk" than about increased operational complexity).


I mean, the DB schema has to stay in sync with the code no matter what, right?

This is just ensuring it is valid at compile time.

On deploy you just run your migrations first. Obviously you have to do some planning.
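
(As a sketch, one way to do that with sqlx itself is to embed the migrations and apply them at startup; ./migrations is the conventional directory layout:)

    use sqlx::PgPool;

    async fn run_migrations(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
        // Embeds the ./migrations directory into the binary at build time
        // and applies any migrations the target database hasn't run yet.
        sqlx::migrate!("./migrations").run(pool).await
    }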

Honestly, after building a semi-large app with it, it's a major productivity boost.

And yes, it does actually run the queries. It populates dummy data and runs them in a transaction, from what I can tell.


The .sql files, or migrations, or whatever else is in your repo represent an intent, or contract, defined by your application. But those files aren't the source of truth; the database is the authority. Whether a SQL query is or is not valid is, by definition, something that can only be evaluated at runtime.


Yes, with sqlx, the migrations must be the source of truth for your DB schema. You cannot (ideally) manage them elsewhere. This is why the sqlx migration tool is so good: you kinda have to use it.

It's honestly very nice. The code, my migrations, my DB browser, all in one window in my IDE.
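
(The typical sqlx-cli flow, with a hypothetical migration name:)

    # creates a timestamped .sql file under migrations/ to edit by hand
    sqlx migrate add create_users
    # applies any pending migrations to the database in DATABASE_URL
    sqlx migrate run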

And the query is validated at compile time by running it with mock inputs inside a transaction. It works really well.

Even if you disagree that the DB must be the source of authority, and grant that it can differ, the development boon of the process is worth the 5% chance that some issue crops up later, which can probably be resolved easily.


a .sql migration file is like a .tf Terraform file: it describes intent

your infrastructure is not that .tf definition; it's whatever is currently deployed to EC2

in the same way, your schema is not that .sql file; it's whatever the database says it is :)


I don't really get your point. Yes, the DB validates the requests. But that's based on the schema. The schema is determined by the migrations. You run the new migrations before any deploy.

You can keep pushing this argument, but it doesn't mean anything in any practical sense.


my point is basically this: when the code in your application interacts with the DB and encounters a (runtime) error from a mismatch between the app's understanding of the DB schema and the DB's understanding of the DB schema, is that error handled gracefully? if yes, good! if no, because the app code assumes this is somehow impossible, that's a problem. that's all.
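
(Concretely, in sqlx terms, that drift shows up as a runtime Err you can choose to handle; a minimal sketch, with the users table again hypothetical:)

    use sqlx::PgPool;

    async fn list_ids(pool: &PgPool) -> Vec<i64> {
        // The macro's compile-time check can't stop the *runtime* database
        // from disagreeing with the build-time one; that disagreement
        // arrives here as an Err rather than being impossible.
        match sqlx::query!("SELECT id FROM users").fetch_all(pool).await {
            Ok(rows) => rows.into_iter().map(|r| r.id).collect(),
            Err(e) => {
                // the graceful path: log and degrade rather than unwrap
                eprintln!("query failed (schema drift?): {e}");
                Vec::new()
            }
        }
    }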


> Doesn't it need to run introspection queries to check the table structures?

Yes, but it can also run in 'offline' mode where it builds against saved metadata [0]

0: https://github.com/launchbadge/sqlx/blob/main/sqlx-cli/READM...
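
(Roughly, per that README: `cargo sqlx prepare` caches the query metadata so later builds don't need a live database:)

    # cache query metadata from a live database (re-run when queries change)
    cargo sqlx prepare
    # subsequent builds check queries against the cache, no DB connection
    SQLX_OFFLINE=true cargo build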


Yes, that's exactly the issue. You accept that build-time friction in exchange for not hitting type errors in your queries at runtime. Whether that tradeoff is a good one depends on your situation and preferences.

(You can also check in the cached results of the introspection queries, so you only need the DB in the build stack when you touch DB-related code.)



