
I'd not mind writing raw SQL if it were anything other than a big, untyped blob of crap as far as my language is concerned. I want compile-time safety in my SQL. Tbh, I'm surprised that raw SQL folks don't promote some kind of non-ORM but compile-time type-safe SQL implementation. The runtime-ness of SQL strings has always blown me away.

I imagine it would be pretty easy too. Move SQL out of the programming language (i.e., into files). Verify the syntax. Compare the SQL to the DB to ensure validity. Bam, compile-time verified SQL. Though I've never used a setup like that, as that is what my ORM does, just in-language.
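For what it's worth, Rust's sqlx crate works roughly this way: its query! macro connects to a development database at build time and fails the build if the SQL doesn't parse or doesn't match the schema. A minimal sketch, assuming an illustrative users table with a NOT NULL email column:

    use sqlx::PgPool;

    // sqlx::query! reads DATABASE_URL at *build* time, checks that the SQL
    // parses, that users/email exist, and infers Rust types for the columns.
    async fn find_email(pool: &PgPool, user_id: i64) -> Result<String, sqlx::Error> {
        let row = sqlx::query!("SELECT email FROM users WHERE id = $1", user_id)
            .fetch_one(pool)
            .await?;
        // email comes back as String (not Option<String>) because the column
        // is NOT NULL; a schema change that breaks this query breaks the build.
        Ok(row.email)
    }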




Nim has a library (ormin) that does this, although it is still far from production quality.

It parses your SQL (at compile time), including the "create table" definitions, does type checking, and rebuilds the query with bound parameters (so that it is SQL-injection safe and the DB optimizer can make a plan once and reuse it).

Obviously, it needs to be aware of specific SQL dialects if you need to use them... so it's unlikely to be very popular.


In Java I like to use jOOQ for this. It generates classes and constants from your database schema. Then you can use the library to build your queries with those generated constants and classes.

This gives you type safety and the ability to use IDE refactoring tools.


Nice. Yeah, in Rust I use Diesel for this as well. I imagine any ORM worth its salt handles this same thing. Even code generation from the DB is basically a verification of the schema.
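For the curious, the flow looks roughly like this: "diesel print-schema" generates a schema.rs from the live database, and queries are built against those generated items (a minimal sketch; the users table is illustrative):

    // schema.rs, generated from the live DB by `diesel print-schema`:
    diesel::table! {
        users (id) {
            id -> BigInt,
            email -> Text,
        }
    }

    use diesel::prelude::*;

    fn find_email(conn: &mut PgConnection, user_id: i64) -> QueryResult<String> {
        use self::users::dsl::*;
        users
            .filter(id.eq(user_id))
            .select(email)
            // A column or type mismatch here is a compile error, not a runtime one.
            .first(conn)
    }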


Your compiler doesn’t — and can’t — know what’s in your database.

To the extent you create a system that requires this constraint, you’ve created a brittle (and soon to be broken) system.

(I suppose there may be some case where all the data is known up-front and will always be updated in lock-step with the consuming code/services, but that's rare in my experience.)


> Your compiler doesn’t — and can’t — know what’s in your database.

There's no reason you couldn't compile code against a schema, at least the portions that would be exposed to the application anyway (independent of the data content and non-exposed backing parts of the schema), just as you do against header files (independent of the implementation). It is a form of coupling, but it's coupling that exists anyway in DB-consuming code; it's just not typically statically verified, and so is prone to unnecessary run-time breakage.

Unfortunately, you need tooling to statically analyze SQL schemas in whatever flavor of SQL you are using (and vendor differences will matter here), including inferring types through view definitions, etc., then you need tooling to map that to the type system of your implementation language, and potentially you need extension points so that you can do custom mapping for custom types on the database side.
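A sketch of the kind of type mapping such tooling would need, with an extension point for custom DB types (all names illustrative; real tooling would also thread nullability through as Option<T> and infer types across views):

    use std::collections::HashMap;

    // A (very) simplified model of SQL column types.
    enum SqlType {
        BigInt,
        Text,
        Bool,
        Custom(String), // e.g. a Postgres enum or domain type
    }

    // Maps a SQL type to a Rust type name, consulting a user-supplied
    // registry for custom types.
    fn rust_type(t: &SqlType, registry: &HashMap<String, String>) -> String {
        match t {
            SqlType::BigInt => "i64".into(),
            SqlType::Text => "String".into(),
            SqlType::Bool => "bool".into(),
            SqlType::Custom(name) => registry
                .get(name)
                .cloned()
                .unwrap_or_else(|| panic!("no mapping registered for DB type {name}")),
        }
    }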

Unless your DB and application platform share tight common control or are near ubiquitous, getting someone to make the investment to do this and keep it current is hard. It's not too hard to imagine Microsoft doing it for the SQL Server / .NET combo or Oracle doing it for Oracle DB / Java, but a general solution, maintained well enough to be usable, is harder to see getting the kind of support it would need.


Sure, you can validate your database's schema against types in your code. Nothing wrong with that either, as far as it goes.

But that isn't a guarantee of anything at runtime.

You could make a runtime requirement that the schema and code match, but you're going to pay a price for that tight coupling. E.g. you'll need to put in place a mechanism to be able to update all your database clients and all your database schemas atomically. Once you get a decent amount of data in your database or scale horizontally, this could become infeasible (e.g., due to the downtime to update the schema) or impossible (e.g., because you don't have complete control of all databases, data, and database clients).

If you are going to go in this direction, you're usually going to be better off targeting a mapping layer/API, not the base tables. The coupling is not so rigid (e.g., there's room for simultaneous support for multiple versions of the data access API/schema). This all exists, of course: ORMs and other higher-level data access libraries generally do this kind of thing to a greater or lesser degree. Importantly, the mapping layer/data access API should live with the database. That is, it should be developed and deployed with the database (directly or in controlled parallel).
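As a concrete illustration, Diesel's table! macro can point at a versioned view just as easily as at a base table, so the application compiles against the access layer rather than the raw schema (the view name is illustrative):

    // users_v2 is a database view; the base tables can evolve underneath it
    // as long as the view keeps presenting this shape to clients.
    diesel::table! {
        users_v2 (id) {
            id -> BigInt,
            email -> Text,
        }
    }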


> Your compiler doesn’t — and can’t — know what’s in your database.

At compile time, yes it can. Not inherently the compiler itself, but at compile time the process can fail to build if your models do not match the schema in the DB. Diesel, for example, keeps the schema in code and (optionally) compares it to that of the DB, ensuring that at compile time your code matches the DB's schema.

That of course doesn't handle changes to the schema post-build, but I hope no one is ad-hoc modifying their DB :)

edit: And this becomes even more of a local, isolated solution if your ORM is also managing your migrations. Which, in the case of Diesel, it also (optionally) does.
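For reference, the diesel_migrations crate can even embed the migrations in the binary so the application can bring a database up to date itself (a minimal sketch):

    use diesel_migrations::{embed_migrations, EmbeddedMigrations, MigrationHarness};

    // Embeds the SQL files under ./migrations into the binary at build time.
    pub const MIGRATIONS: EmbeddedMigrations = embed_migrations!("migrations");

    fn run_migrations(conn: &mut impl MigrationHarness<diesel::pg::Pg>) {
        // Applies any migrations the target database hasn't seen yet.
        conn.run_pending_migrations(MIGRATIONS).expect("migrations failed");
    }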


Maybe I should have added: it doesn't know what's in your database at runtime.

It's not that you'll be ad hoc modifying your DB. It's that your schema and your application-side entities will change over time (continuously during development, and somewhere between continuously and periodically in production, depending on how you release). Generally you won't be able to guarantee that they are updated in lock-step, or you won't want to pay the price to ensure that they always are.



