Ent: An Entity Framework for Go (github.com/ent)
173 points by thunderbong on Oct 30, 2022 | 66 comments



I think Ent for Go has a ton of potential. Although I haven’t used this library, I have spent a lot of time studying the design space of ORMs because I’m currently iterating on an internal library that does ORM-like things at Notion. I really like the Ent approach because it allows working with a graph-based data model directly in the embedding language, instead of forcing developers to learn a new query language like Datomic datalog or SPARQL, while simultaneously avoiding lock-in to a specific data store backend.
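For a flavor of what that looks like, an Ent query is chained Go against generated code; very roughly something like this (the entity and edge names here are made up, and `client`/`user` come from Ent's codegen):

    // Walk the graph in plain Go: users named "a8m" -> their friends -> the friends' pets.
    // `client`, the `user` predicate package, and the Query* edge methods are generated from the schema.
    pets, err := client.User.
        Query().
        Where(user.NameEQ("a8m")).
        QueryFriends().
        QueryPets().
        All(ctx)
    if err != nil {
        return err
    }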

I would like to build something like Ent that is ergonomic and usable on both the client and the server in Typescript, and facilitates easily moving business logic between the two. The context at Notion is that we have a very smart client using something ORM-like and a somewhat dumb server using plain row record values, so while we do share some code between the two, it’s difficult to port larger features from the client to the server. It’s kind of the reverse situation that most people have, which is a smart server pushing GraphQL to a relatively dumb client.

There are a lot of people looking into client-focused or end-to-end database abstractions right now, so the space is exciting. What the upstarts should learn from Ent is:

1. Codegen is powerful, and often easier to understand than a mountain of typelevel magic.

2. A good ORM solution should really build in permissions and composing mutations at a deep level. The previous generation of ORMs like ActiveRecord left this stuff to user space to everyone’s detriment.

3. Embedded DSL as query language is a blessing and a curse. It’s better than forcing engineers to learn a weird alternative to SQL strings, but more frustrating to experts. Use these tradeoffs with caution!


I keep thinking that the query portion should be entirely split from the ORM portion. Query builders have specific uses, but they are not the norm. None of them can model an advanced Postgres query, yet at the same time they provide little (or no) utility for mapping the data coming back from a complex query to objects, which is funny, because to me that would be the definition of "object relational mapper".

I know why ActiveRecord does this: it's way easier to do the join in memory, so that one row is obtained per object rather than an enormous row representing many, but not providing any utility for raw queries seems like a big missing feature.

Edit: to be clear, by query I refer to read operations (select) that are not trivial (find one by id would be trivial)


In a typical ORM I agree about the query builder. Most ORMs encourage the typical use of a relational database where SQL fits the domain well.

But Ent models data as a graph, and the DB behind the scenes is more of an implementation detail. The production version of Ent used at Meta stores data in TAO (https://engineering.fb.com/2013/06/25/core-data/tao-the-powe...); my impression is that product engineers do not write any SQL when working with Ent at Meta.


I might be misreading here, but are you suggesting that the DB behind the scenes is not modeling things in the traditional sense, but rather as a graph? Thinking of Postgres, this could be done by dumping a big JSON object into a generic "graph" table. Is that the case?

That would be acceptable from my point of view. The problem is that using SQL as a graph produces very inefficient queries once you hit a certain scale, but if the DB is already a graph, the problem should be limited.


The way TAO (and Notion, to some degree) deals with this is by tightly coupling the persisted (SQL?) data store to an in-memory cache that makes KV-style reads like “get foo by foo_id” extremely fast. They also limit the kinds of queries developers can write to the ones most likely to be very fast on that infrastructure.

Pushing graph queries down to a single SQL query, so that the SQL DB spends massive CPU time on joins, can be an issue. Instead, these systems can avoid talking to the SQL DB at all, for example when they're on the happy path just chasing graph edges.
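A very rough sketch of that read path in Go (the cache interface and names are made up; this isn't TAO's actual API, and it assumes the usual context/database/sql/fmt imports):

    // Hypothetical cache-aside edge read: the happy path never touches SQL.
    type EdgeCache interface {
        Get(key string) ([]int64, bool)
        Set(key string, ids []int64)
    }

    func friendIDs(ctx context.Context, c EdgeCache, db *sql.DB, userID int64) ([]int64, error) {
        key := fmt.Sprintf("friends:%d", userID)
        if ids, ok := c.Get(key); ok {
            return ids, nil // KV-style read: no joins, no SQL
        }
        // Cache miss: one narrow, index-friendly query instead of a big join.
        rows, err := db.QueryContext(ctx, "SELECT friend_id FROM friendships WHERE user_id = $1", userID)
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        var ids []int64
        for rows.Next() {
            var id int64
            if err := rows.Scan(&id); err != nil {
                return nil, err
            }
            ids = append(ids, id)
        }
        c.Set(key, ids)
        return ids, rows.Err()
    }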


That makes sense. In a certain way, it's doing something similar to react-query


> the DB behind the scenes is more of an implementation detail

Ugh... what a bummer then.


Just to emphasize: the OSS version of Ent models SQL tables in the traditional way (edges as foreign keys and join tables), i.e. the database structure is not obscured and can easily be read by developers and ported to other ORMs if needed.


> at the same time, they provide little (or no) utility for mapping the data coming back from a complex query to objects

Example of a query builder with built-in mapping capabilities (it’s mine):

https://github.com/bokwoon95/sq

I feel some exasperation when I see query builders that throw a query string back to the user and ask them to map the results themselves. That’s easily the most tedious and mistake-prone part of using SQL queries. In the case of my library, projection and mapping are handled by the same callback function so in order to SELECT a field you basically map it and it’s automatically added to the SELECT clause.
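In spirit it looks something like this - a simplified sketch rather than the exact current API, with illustrative table and struct names:

    // Sketch: the mapper callback both declares and scans columns, so the SELECT
    // list and the struct mapping cannot drift apart.
    authors, err := FetchAll(db,
        From(AUTHORS).Where(AUTHORS.COUNTRY.Eq("SG")),
        func(row *Row) Author {
            return Author{
                ID:   row.Int64(AUTHORS.AUTHOR_ID), // referencing the column adds it to SELECT
                Name: row.String(AUTHORS.NAME),
            }
        },
    )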


I skim-read it but couldn't find an example of what I think of as challenging: a join of 3 tables (well, even two).

When you join 3 tables (assuming "has many" and "has many through" relationships), what you get back is enormous rows, and multiple of them, all in tabular form, whereas in the software this is usually represented as a graph. I'd love a library that helps build these massive rows back into relationships.

Please forgive me if your library does this; while I saw the "mapping function", I didn't see anything to help me build back graphs. I can map rows "easily", but I cannot recreate associations easily; it requires a bunch of work.


> I didn't see anything to help me build back graphs

Hmm you've certainly given me something to think about. Thanks.

BTW joins are not challenging, but you made me realize I didn't show any joins in my basic examples. Here is an UPDATE with JOIN in the meantime: https://bokwoon.neocities.org/sq.html#postgres-update-with-j....


Thank you for the examples. I see the joins example, but they seem to be about creating queries, not mapping data.

    sql = "select blog.name, post.content, author.display_name from blog join post on blog.id = post.blog_id join author on post.author_id = author.id"
Assuming the relationship: many blogs have many posts, and posts have one author, I'd expect something along the lines of (pseudo-code):

    schemaOnTheFly = Blogs{}.HasMany(Posts{}.HasOne(Author{})) // Sorry, the syntax for this doesn't really exist
    blogs := query.Exec(sql, params, schemaOnTheFly)

    fmt.Printf("%+v\n", blogs[0].Posts[0].Author)
That's what I'd expect. Do notice that the schema is per-query; I'll let the developer handle the sharing portion of the schema (it might be shared by a few queries).
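For contrast, the manual reassembly I'm hoping to avoid looks roughly like this in Go (types made up to match the query above):

    // Rebuild the graph by hand from the flat joined rows: group by blog, attach posts and authors.
    type Author struct{ DisplayName string }
    type Post struct {
        Content string
        Author  Author
    }
    type Blog struct {
        Name  string
        Posts []Post
    }

    blogsByName := map[string]*Blog{}
    for rows.Next() {
        var blogName, content, displayName string
        if err := rows.Scan(&blogName, &content, &displayName); err != nil {
            return err
        }
        b, ok := blogsByName[blogName]
        if !ok {
            b = &Blog{Name: blogName}
            blogsByName[blogName] = b
        }
        // De-duplicating parents and attaching children by hand is where the tedium lives.
        b.Posts = append(b.Posts, Post{Content: content, Author: Author{DisplayName: displayName}})
    }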


> I keep thinking that the query portion should be entirely split from the ORM portion

This is how SQLAlchemy does it. Lower level query tier you can use directly, and a higher level ORM tier that uses the query tier.


> 1. Codegen is powerful, and often easier to understand than a mountain of typelevel magic.

I disagree here. And I also don't like your framing.

Yes, type-level logic is harder to learn than understanding code you already read every day.

It's also easier to count and calculate with your fingers and use concrete examples rather than learning abstract math. But we still do it, because in the end you learn it once and have a powerful tool that makes you more productive for the rest of your life and allows you to do things you otherwise would never have been able to.

The same is true for code generation, in my opinion. And I believe it will change for Go as well. Go is not getting generics for no reason. It is because the push is strong - Go has more inexperienced developers using it compared to other languages. But over time, they will turn into experienced developers. They will learn more concepts and want to become more productive. So they will gradually ask for more and more features, until Go is nothing like it was before.

> 2. A good ORM solution should really build in permissions and composing mutations at a deep level. The previous generation of ORMs like ActiveRecord left this stuff to user space to everyone’s detriment.

Totally disagree. You can never solve those problems in a generic way for any domain. Just make it easy for the user to do it themselves.

> 3. Embedded DSL as query language is a blessing and a curse. It’s better than forcing engineers to learn a weird alternative to SQL strings, but more frustrating to experts. Use these tradeoffs with caution!

That's because SQL as a language has very, very limited capabilities for abstraction. So every language will create a DSL to deal with it, even languages that can parse all the SQL and validate it at compile time. Same reasons as I gave in 1.)


> But we still do it, because in the end you learn it once and have a powerful tool that makes you more productive for the rest of your life and allows you to do things you otherwise would never have been able to.

Your own words are apt: I disagree here. And I also don't like your framing.

> Go has more inexperienced developers using it compared to other languages. But over time, they will turn into experienced developers. They will learn more concepts and want to become more productive. So they will gradually ask for more and more features, until Go is nothing like it was before.

Ignoring the rest of your comment, as well as the possibly snide remark regarding the experience of engineers who use Go, I believe this neither reflects reality nor the reality I wish to participate in.

Go eschews complexity, and is particularly paranoid about complexity sneaking in under the guise of convenience. Productivity is rarely hindered by having to hand-write a for loop or some boilerplate to handle errors.

Go is designed to be easy to read and easy to write, to be comprehensible by both green and sage developers, and most importantly maintainable. This means preferring explicit behavior over implicit magic. The toolchain prioritizes fast compilation times over aggressive optimization.

I don't write Go to be productive (but I am very productive). I write Go so everyone can be productive. I don't want anyone spending time unraveling your life's work masterpiece. I don't want anyone to ever have to understand Scala or Haskell.

It's cool that you feel it necessary to reach for more powerful, expressive language constructs to be productive. I do not.


None of the industrial languages I’ve looked at (Kotlin, Typescript, Go, Java, Swift, Rust) can do the things I want to do using only typelevel features, especially at practical scale. Eg, transform a schema type that describes my database tables into an easy-to-use query builder and data abstraction API with rich method chaining.

Often in Java/Kotlin projects people end up writing compiler plugins / annotation processors, which is like moving the codegen to build time and making it harder to inspect/understand, while simultaneously spending more developer time on it since those systems may need to run on every compiler invocation. The same goes for Rust; Rust's various macro systems target the problem I have, but Rust users report long compile times due to macro magic. More esoteric languages like Scala and Haskell might have the expressive power I want, but don't seem practical to implement for other reasons.

Of course there’s always the runtime only approach in Ruby/ActiveRecord but that runs into correctness problems at practical scales.

Codegen is usually brittle and often ugly, but it is guaranteed to have more metaprogramming power than in-language typelevel features. Ultimately any such system can be complex; I just hope to build one where the leverage I paid for with complexity is well worth it.


The other benefit of codegen in situations like this (a company-wide schema) is the potential for multi-language bindings. Entgo demonstrates this by bridging a Go ORM, SQL migrations, and a GraphQL schema.

Interesting reading along the same lines is Language Oriented Programming (1994) http://www.gkc.org.uk/martin/papers/middle-out-t.pdf


> Eg, transform a schema type that describes my database tables into an easy-to-use query builder and data abstraction API with rich method chaining.

TypeScript can do this. It was done already halfway-well 6 years ago with way fewer features in TypeScript (https://github.com/brianc/node-sql/blob/master/lib/types.d.t...) - today we can do much better.


I want to derive a Model type with methods from a Row type so I can write something like `const openDiscussions = blockModel.getContent({ where: c => c.getType() !== "page" }).andRecurse().getDiscussions({ where: d => !d.getResolvedAt() })` given an input row type like `type BlockRow = { id: BlockId, type: "page" | "text", content?: BlockId[], discussions?: DiscussionId[] }; type DiscussionRow = { id: DiscussionId, resolvedAt?: Date }`


You cannot use the lambda-plus-operators combo without a preprocessor (like ttypescript), mainly because closure captures cannot be accessed from the function AST.

You will also need to use a runtime DSL to describe the schema, and then derive both the model classes and the model types from it (simply because runtime parts cannot be derived from types).

Other than that, the rest is very possible.


> You will also need to use a runtime DSL to describe the schema

This is kind of the whole ballgame right? How would you do any of this with types if the schema isn't defined until runtime?


I meant that since types are erased at compile time in TypeScript, it's much easier to make a schema description DSL that then serves as the base to derive both the ORM DSL (runtime objects) and the types (compile-time checks). If you go the types-first route, you will be forced to use proxies, which can be more painful and constraining.

The popular choice here is to use classes plus decorators, but it's not the only choice:

https://typegraphql.com/docs/introduction.html#what

The convenient bit here is that classes already have both a type and a runtime representation, and decorators are also available to attach any extra metadata necessary to that runtime representation


FWIW I've given up on the SQL query builder route and instead went with the GraphQL query builder / Hasura route (https://typed-graphql-builder.spion.dev/).

It's technically codegen, but only from the schema; queries are fully typed on the fly.


> More esoteric languages like Scala and Haskell might have the expressive power I want, but don’t seem practical to implement for other reasons.

Well, if you can't use languages that make it possible/feasible, then code generation might indeed be the best option. It's just that your claim sounded quite general, so I had to jump in. :)


> You can never solve those problems in a generic way for any domain.

This is rather final. Honestly curious if this was shown to be the case or just the result of failed earlier attempts.


Totally agree with you, especially about point 1. Codegen is one of the bluntest tools there is and should almost always be avoided.


Why is that? I've done a few toy attempts to build out ORM-like model classes in Typescript that wrap an input schema type using typelevel features, and my profiles of the Typescript compiler show a lot of slowdowns coming from combinatorial explosions in the typechecker. In this language specifically, the type system is quite expressive, but using those features in practice frequently runs into limitations of the checker's implementation, and more so once you have many input types. We have ~80 table types, and often a single table type is a union of variants - our biggest, the type for a Notion block, is a union with 64 variants. I wrote a type that takes such a row type as input and outputs an OO-style Model interface for it, with accessor methods, relational shortcuts, etc., and a matching Proxy that does the same at runtime. It works, and for smaller types it seems like a fine solution. But in practice all the magic gums up Typescript's inference and makes the language service crash into memory limits. Or the compiler just gives up and says “this union type is too complex to represent”.

If I move the same type magic into codegen, the compiler is much happier; plus stack traces lead directly to a simple method implementation instead of jumping into difficult-to-follow proxy machinery. The downside is that the codegen logic that consumes the Typescript compiler API to understand the schema types is more complex than the mapped types before, but it does make it feasible to have my cake (derive an ORM/model API from a schema) and eat it too (not make the compiler so unstable and touchy).


While in that specific case codegen does sound like the right approach, I think in most regular cases you would be able to get away without it. Honestly, a 64-variant type is not really that common.

I think codegen is OK as long as

- it's designed in a way where it doesn't cause users to want or need to modify the generated code (this is the biggest downside of codegen)

- it's generated against a "relatively stable" API (protobuf / swagger descriptions, DB schema, etc).


I said almost never, not never; 64 variants is quite esoteric, I'd say. The reason I dislike codegen so much is that you fix one problem but introduce multiple new problems.

How do you handle the generated code? Do you check it into source control? If so, how do you ensure people do not touch it? Manually modified generated code is one of the worst places to be in for maintenance. You mentioned you wanted language server support; this means the generated code cannot be ephemeral and is continually visible to the rest of the project. When will this generated code need to be regenerated? On a schema change? How do you ensure the generated code always matches the schema?

Like I said, all of these questions can be answered, but it's quite a burden. In fact, if I were you I would have looked into fixing the TypeScript compiler, since the problems you ran into probably weren't fundamental.


My general tactic for codegen hygiene like this is to check in the generated files and, on every CI run, re-run the code generator. After the generator runs, CI asserts there are no changes in Git. If there are any changes, the job fails.

This ensures that generated code is consistent with its inputs (including the generator logic) for every merge to main. It prevents people from editing the generated code, since their edits will cause a diff in CI and fail the build, but no one ever actually loses their work since the codegen does not run automatically/continuously during local development.

Checking in the files also makes it very easy to review changes to the generator since you can always tell how the output is changing.

The total burden for the above system is:

- a 30 line command called `Notion assert-clean`

- A 30 line CircleCI job that loops over the list of code generator commands, calls the command, then calls assert-clean.

We’ve used that tactic for years for simpler stuff like “make sure the SQL dump is consistent with the SQL migrations” and “put all the file names in this directory tree into a typescript file so we can tab complete them”.

The only necessary bit is that your code generator shouldn’t be spitting out enormous mountains of unreadable code.
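For what it's worth, the assert-clean part can be sketched in a few lines of Go (an illustrative version, not our actual script):

    // Hypothetical assert-clean: regenerate everything, then fail the build if anything changed.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) error {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Re-run every generator first (whatever your project uses).
        if err := run("go", "generate", "./..."); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // If the working tree changed, the checked-in generated code was stale.
        out, err := exec.Command("git", "status", "--porcelain").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(out) > 0 {
            fmt.Fprintf(os.Stderr, "generated code is out of date, re-run codegen:\n%s", out)
            os.Exit(1)
        }
    }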


I guess you are aware of Prisma. What is your opinion of it in this context?


Prisma is a nice DB client library but doesn't do any of the things I'm interested in - reactive queries on the client, recursive graph traversal, raising the level of abstraction for the organization. Schematizing the DB & a type-safe DB client are nice, but I'm more interested in stuff one or two levels of abstraction up. Like, after the data comes into memory, Prisma is done and out of the picture. But getting the data into memory is the easy part IMO. Traversing it, adding permissions and business logic, managing & composing mutations, dealing with caching and reactivity... that's the good stuff, and I'm not sure what, if anything, Prisma offers there.


Codegen is a superpower. It has consistently made me 5 to 10 times more productive compared with experienced developers I have worked with.

I have more than once implemented production code where 80% of the code is generated, with only the biz logic hand-coded - including a complex 50+ screen Typescript application developed from scratch in just 3 months.

However, as with any other tool, you have to use codegen intelligently to get the benefits. I have more than once seen codegen being used really badly, making the people using it think that codegen is always a bad idea. Codegen is a complex tool and there are many ways to use it incorrectly.


> 100% statically typed and explicit API using code generation.

Whenever I have to do code generation in my language of choice, I feel that the language should be able to do that without having to generate code. It's annoying to inspect and maintain and usually has quirks that I have to work around.

How do Go developers feel about that?


Code generation is a feature of Go. We use it with Ent to generate clients, services, and the database all together from the same Ent schema.

We even use it to generate clients and servers for other languages with the help of templates.

The language is small and simple enough that maintaining the generated code isn't as big a problem as you might think.
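For anyone who hasn't seen it, the schema itself is plain Go and generation hangs off a go:generate directive; a minimal sketch (the field and edge names are just an example):

    // ent/schema/user.go
    package schema

    import (
        "entgo.io/ent"
        "entgo.io/ent/schema/edge"
        "entgo.io/ent/schema/field"
    )

    // User holds the schema definition for the User entity.
    type User struct {
        ent.Schema
    }

    func (User) Fields() []ent.Field {
        return []ent.Field{
            field.String("name"),
            field.Int("age").Positive(),
        }
    }

    func (User) Edges() []ent.Edge {
        return []ent.Edge{
            edge.To("pets", Pet.Type), // Pet is another schema in this package
        }
    }

    // ent/generate.go -- running `go generate ./...` turns the schema into the typed client.
    package ent

    //go:generate go run -mod=mod entgo.io/ent/cmd/ent generate ./schema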


> Whenever I have to do code generation in my language of choice, I feel that the language should be able to do that without having to generate code.

A good use of code generation is to relieve yourself of the burden of hand-writing repetitive mapping code, especially when modelling the database structure in application code. This is a perfect use case for that. The same is true for generating a client for some external API.

Otherwise I'm inclined to agree with you. A few times, I've written code generators before and realised I could've gotten away with some generic programming.

But that has its own set of gotchas and caveats. Though Go has support for some generic programming now.


ORMs have the problem that the shape of the interface they provide mostly depends on configuration or data (the SQL schema). In static languages at least, this means that code has to be generated at some point. Tooling and convenience wise, there are differences between the approaches, but conceptually, does it really make a huge difference whether this step happens in a separate tool before compilation (seen here), or inside the compiler (in a macro system like Rust’s), or at runtime (as ORMs tend to do in JITted languages like Java and C#)?


> ORMs have the problem that the shape of the interface they provide mostly depends on configuration or data (the SQL schema). In static languages at least, this means that code has to be generated at some point.

Maybe for most languages that's true, but in general I don't think it is.

> Tooling and convenience wise, there are differences between the approaches, but conceptually, does it really make a huge difference whether this step happens in a separate tool before compilation (seen here), or inside the compiler (in a macro system like Rust’s), or at runtime (as ORMs tend to do in JITted languages like Java and C#)?

Between runtime and compile time (or code generation) there is certainly a big difference. Between compile time (or macro time) and code generation, I think there is too. The reason is that, if it happens at compile time, then you have types that you can work with. You can use those types and reshape them to create even more types, e.g. you could take the database schema types and generate GraphQL types. That means the mapping from the database to the types of the programming language has to be done only once, and other libraries can then build upon it without containing this part anymore.

But if you use code generation, essentially every library has to know how to get the types from the database, since it is very hard to take the generated code and transform it into types or code for e.g. GraphQL, unless you parse the generated code and then transform it. Not only is there now a problem with the order of code generation, but I believe it is also inherently harder to parse arbitrary code than to parse arbitrary types, simply because types are much more restricted in what they can express.


I started to like Go's code generation approach.

Code generation makes you very productive, since IDE auto-complete and code search work perfectly and snappily with the generated code.

The only complaint is that it tends to make noise during code reviews, because reviews include the autogenerated code too.


Is it considered OK to commit autogenerated code? With Java I don't do that: it's generated during the build phase into output folders but never committed to the VCS. I've never had any issues with that.


There is nothing wrong with committing generated code with Ent and Go. I don't know about Java, but in Go the codegen does not generally produce any platform-specific code, so committing it to the VCS ensures reproducibility.

It is still OK not to commit the generated code, and things won't break as long as you have pinned the package versions correctly. But some people don't like to put an extra burden on the build phase, so it is totally the organization's decision whether to commit generated code or not.


The advantage of codegen is that it works for any language, whether the language has macros or not. Another advantage is that you can generate code for multiple languages (say TypeScript and C#) from the same spec, making sure the client/server code stays 100% in sync.

Codegen is my favourite super productivity tool. I routinely write production code where only the core biz logic is hand coded. The rest is fully automated. Most Excellent!


I've never had an issue with code generation in Go. However, the generated code does tend to be verbose - hopefully generators will start using generics to reuse the common bits and cut down on the amount of generated code!

The only instance I've had issues with them has been when generating code for things like schema languages (like GraphQL) on large projects where if you click to check the definition of a function/struct, the editor struggles a little to open up a file with tens of thousands of lines of code.


I've been using Ent for some time on a project and it's been quite nice to just be able to write the schema in Go. Testing has been a breeze with the enttest package, hooks work well, and everything feels intuitive to me, unlike with most other ORMs or ORM-adjacent tools.
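For reference, the enttest setup is roughly this small (a sketch: the field names come from a toy schema, and it assumes the sqlite3 driver is imported in the test package):

    // Each test gets its own in-memory SQLite-backed client with the schema migrated.
    // Assumes `_ "github.com/mattn/go-sqlite3"` is imported somewhere in the test package.
    func TestCreateUser(t *testing.T) {
        client := enttest.Open(t, "sqlite3", "file:ent?mode=memory&cache=shared&_fk=1")
        defer client.Close()

        u := client.User.Create().SetName("a8m").SaveX(context.Background())
        if u.Name != "a8m" {
            t.Fatalf("unexpected name: %q", u.Name)
        }
    }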

My preferred package before Ent was Squirrel [1] but I definitely plan to use Ent for future projects.

[1] https://github.com/Masterminds/squirrel


Hey @thunderbong

One of Ent's maintainers here. Thanks a lot for sharing our project.

To all others interested, Ariel (a8m) and I are here following this thread, so feel free to ask us anything and everything!


Thank you, you're doing an amazing job!


Thanks so much for the kind words.

Join us on our discord server (https://discord.gg/qZmPgTE6RX) to share what you're working on!


When I hear "Entity Framework" and "code as schema", I get nervous.

The .NET Entity Framework has caused a lot of problems for clients in the past. I am not as familiar with the new one in .NET Core, which is now just .NET again, I think.

A huge mess managing deltas for the schema.

It is often used by people who know neither SQL nor relational databases, which leads to enormously slow and resource-intensive queries (ones that could be done really easily and fast in raw SQL once you figure out how EF is doing things) and schemas that are suboptimal.

Once you start fighting EF to make it behave better you lose a lot of the benefit of the abstraction in the first place…


We ran into huge problems using Hibernate. Hibernate is such a complex beast and includes so many traps that I think it's just not worth it, at least not when you don't have an absolute Hibernate expert on board. In our case, people don't care as long as it "works" - which of course causes a lot of performance problems down the road.

I'm not opposed to ORMs in general, but there has to be a middle ground.


Related - an Ent-inspired ORM in TypeScript: https://aphrodite.sh/, although with a focus on local software.


Very cool and very similar to some of the ideas I’ve been playing with!


How would this go as a Django replacement? I still use Django for new projects, but find that I end up just putting a GraphQL API on top of the Django ORM. At face value, Ent seems to tick both those boxes, but with the efficiency of a compiled language.


I have no experience with Django, but I do with Ent and GraphQL.

Ent is not a full-featured web framework, so you need to implement many features on your own or use other libraries (e.g. an HTTP server and session management).

If you are only looking for ORM + GraphQL, then I highly recommend trying Entgql, an Ent extension for GraphQL that works with the gqlgen library [1]. Once you define an ORM schema, it will generate the GraphQL Query types for a Relay server. You still need to implement GraphQL Mutations on your own, but at least it will create the Input types for you (for both Create and Update).

[1]: https://github.com/99designs/gqlgen
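The wiring is basically one small entc program that runs Ent's codegen with the entgql extension; roughly like this (paths and option names may vary a bit between versions):

    // ent/entc.go (excluded from normal builds): generate the Ent client plus a GraphQL schema.
    //go:build ignore

    package main

    import (
        "log"

        "entgo.io/contrib/entgql"
        "entgo.io/ent/entc"
        "entgo.io/ent/entc/gen"
    )

    func main() {
        ex, err := entgql.NewExtension(
            entgql.WithSchemaGenerator(),         // emit the Relay-style Query types into a .graphql file
            entgql.WithSchemaPath("ent.graphql"),
        )
        if err != nil {
            log.Fatalf("creating entgql extension: %v", err)
        }
        if err := entc.Generate("./ent/schema", &gen.Config{}, entc.Extensions(ex)); err != nil {
            log.Fatalf("running ent codegen: %v", err)
        }
    }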


I can see how EF was useful in a stateful server setup (tracking changes, etc.), but I just don't get why it's very useful for stateless API servers. Write some SQL and map it using Dapper.


I'd kill for a port of the .NET Entity Framework to JavaScript - the current ORMs for JavaScript are really nothing compared to EF. If you haven't used it, it is super powerful at selecting super complex structured data from a database, and makes it super easy. :) Its query format is also LINQ, so you use the same extension methods to work with in-memory collections as you do with database tables.


Do you have a link that quickly expresses the gist of what makes Entity Framework good/powerful? I tried using it in college for a little project, hit the docs like a brick wall, and went back to Rails. I always struggle “getting to the good parts” with Microsoft’s stuff.


Microsoft's documentation is generally fairly good. I haven't touched much .NET these last 10 years, but I believe it took at least EF6 in the original .NET Framework to have the main good stuff and be productive - add the Core version to the confusion... The latest few Core versions should be good as well.


“Go read the docs” is not helpful! I tried that 8 years ago based on HN sentiment about C# and my experience with the Microsoft docs was that they are very detailed but the level of detail makes it difficult to quickly pick up what it’s like to actually use one of their thingies. I walked into the docs for Entity Framework, spent 6 hours trying to get Visual Studio and their suggested development database set up, then gave up and went back to Rails which took 1 apt-get command and 2 Ruby commands.


Every time I try to create a new project in some sort of actor/graph manner I end up just creating some DB indexes and crons instead. Maybe I'm just not thinking big enough, but I hope I get to work on something that justifies this architecture someday because it looks like fun


I see it was inspired by Entity Framework. Does it have migrations?


Yes. The migration is built on top of Atlas: https://github.com/ariga/atlas
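In the simplest setup, auto-migration is a single call on the generated client at startup (versioned migration files via Atlas are also supported):

    // Create/alter the database tables to match the current Ent schema.
    if err := client.Schema.Create(ctx); err != nil {
        log.Fatalf("failed creating schema resources: %v", err)
    }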


I thought this looked familiar..

Discussed 2 years ago:

https://news.ycombinator.com/item?id=26008521.

(39 points, 9 comments)


I first heard about Ent on HN about 2 years ago. At the time I was evaluating every ORM in Go to find something that would fit the bill, and Ent happened to be exactly what we needed. We've now been using it for 2 years and it's been one of the best bets we've made on a technology choice.

Things that have worked really well:

* Auto DB migrations using Ent+Atlas. Ent implements a lot of great low-level defaults that I just don't want to have to think about (foreign key constraints, indexes, naming of join tables, etc.)

* Generating our GraphQL API from Ent Schema

* Generating Protobuf definitions for internal tools to talk to (we now use Buf for the actual tooling, but having the protos generated saved a huge amount of time).

* Being able to quickly craft really complex multi-edge joins without really thinking through the SQL allows for quick implementation of new features.

* Query optimizations such as using WINDOW clauses for pagination in GraphQL queries (I wouldn't have even thought this was possible).

* The generated code is quite a lot of lines, but it's really nicely structured and idiomatic, making it easy to extend.

There's been heaps of other neat finds along the way, but that's a summary.

Shout out to Ariel and Rotem for being excellent stewards of the Ent community and helping us solve some complex problems along the way.


>* Generating our GraphQL API from Ent Schema

How do you make sure you don't accidentally expose unauthorized data to the user with auto-generated GraphQL APIs? Does Ent have built-in authorization validation?


I should have mentioned this: Ent has built-in authorization at the row level: https://entgo.io/docs/privacy

There are some gotchas with this, but like all auth you need to take the time to think through it and once you do it is extremely powerful. This approach means you can essentially forget about needing to scope queries to specific users throughout your codebase as Ent will automatically apply that part of the query wherever it is needed.

I know there have been some comments on this thread and others about coupling auth to your ORM; I think it's necessary, as it's otherwise too easy to forget this somewhere deep in your code and accidentally expose everything to the wrong user.
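To give a flavor, a policy is declared on the schema itself; a rough sketch (the rule.* helpers here are hypothetical custom rules, and the privacy package is generated into your project):

    // Hypothetical row-level policy: queries are filtered to the viewer, everything else is denied.
    func (Todo) Policy() ent.Policy {
        return privacy.Policy{
            QueryRules: privacy.QueryPolicy{
                rule.FilterTodosByOwner(),  // custom rule: injects a WHERE owner_id = <viewer id> predicate
                privacy.AlwaysDenyRule(),   // deny anything not explicitly allowed above
            },
            MutationRules: privacy.MutationPolicy{
                rule.AllowIfOwner(),
                privacy.AlwaysDenyRule(),
            },
        }
    }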


Thanks for the kind words, Ivan! You're awesome



