My notes on Gitlab's Postgres schema design (2022) (shekhargulati.com)
488 points by daigoba66 10 months ago | 161 comments



> It is generally a good practice to not expose your primary keys to the external world. This is especially important when you use sequential auto-incrementing identifiers with type integer or bigint since they are guessable.

What value would there be in preventing guessing? How would that even be possible if requests have to be authenticated in the first place?

I see this "best practice" advocated often, but to me it reeks of security theater. If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.

The only case I know of where this might be valuable is from a business intelligence standpoint, i.e. you don't want competitors to know how many customers you have. My sympathy for such concerns is quite honestly pretty low, and I highly doubt GitLab cares much about that.

In GitLab's case, I'm reasonably sure the decision to use id + iid is less driven by "we don't want people guessing internal IDs" and more driven by query performance needs.


> I see this "best practice" advocated often, but to me it reeks of security theater. If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.

Yes, but the ability to guess IDs can make this security issue horrible, or much much worse.

If you had such a vulnerability but exposed only UUIDs to users, people would now have to guess UUIDs. Even a determined attacker will have a hard time doing that, or they would need secondary sources to get the IDs. You still have a data breach, but you most likely have time to address it, and then you can assess the amount of data lost.

If you can just <seq 0 10000 | xargs -I ID curl service/ticket/ID> the security issue is instantly elevated onto a whole new level. Suddenly all data is leaked without further effort and we're looking at mandatory report to data protection agencies with a massive loss of data.

To me, this is one of those defense-in-depth things that should be useless. And in many, many cases it has no effect.

But there is truly horrid software out there that has been popped in exactly the described way.


Case in point: a recent security issue GitLab experienced (CVE-2023-7028; arbitrary password reset by knowing one of the account's associated mail addresses) was made worse by a feature of GitLab that few people know about: the "userID" is associated with a meta/internal mail address.

This meant that people could send password resets for any user if they knew their userID. The mail format was like user-1@no-reply.gitlab.com or something.

Since it's a safe bet that "user ID 1" is an admin user, someone weaponised this.


I've already resolved to never use Gitlab entirely on the basis of that CVE but that makes it worse.

Password resets should just never go to an email that hasn't been deliberately attached to an account by the account's owner, full stop. There should not be a code path where it is possible to send any such thing to arbitrary emails. And redirect emails should never be treated as account emails in any way.


Even without that auto-incrementing ID, there are plenty of other options for guessing valid email addresses to use with that exploit. For example, if you're able to figure out the format an organization uses for their email addresses (e.g. first.last@company.com), and you're able to figure out who works at that org (via e.g. LinkedIn), then there's a very good chance you can reset passwords for, say, the company's CTO or other likely-highly-privileged users.

That is: this kind of proves my point. Removing autoincrementing IDs from the equation is of minimal benefit when things have already gone horribly horribly wrong like this. It's a little bit more work on the attacker's part, but not by anywhere near enough for such a "mitigation" to be of much practical benefit.


It’s mentioned in the article. It’s more to do with business intelligence than security. A simple auto-incrementing ID will reveal how many total records you have in a table and/or their growth rate.

> If you expose the issues table primary key id then when you create an issue in your project it will not start with 1 and you can easily guess how many issues exist in the GitLab.


Business intelligence isn't really applicable at the database level with GUIDs... that's way too many abstraction layers down.


>I see this "best practice" advocated often, but to me it reeks of security theater.

The idea of "security theater" is overplayed. Security can be (and should be) multilayered; it doesn't have to be all or nothing. That way, when attackers break one layer (say, authentication), they shouldn't automatically gain easy access to the others.

>If an attacker is able to do anything useful with a guessed ID without being authenticated and authorized to do so, then something else has gone horribly, horribly, horribly wrong and that should be the focus of one's energy instead of adding needless complexity to the schema.

Sure. But by that time, it will be game over if you don't also have the other layers in place.

The thing is that you can't anticipate every contingency. Bugs tend not to preannounce themselves, especially tricky, nuanced bugs.

But when they do appear, and a user can "do [something] useful with an ID without being authenticated and authorized to do so", you'd be thanking all available Gods that you at least made the IDs not guessable - since guessable IDs would have also given them access to every user account on the system.


> Security can be (and should be) multilayered, it doesn't have to be all or nothing.

In this case the added layer is one of wet tissue paper, at best. Defense-in-depth is only effective when the different layers are actually somewhat secure in their own right.

It's like trying to argue that running encrypted data through ROT13 is worthwhile because "well it's another layer, right?".

> you'd be thanking all available Gods that you at least made the IDs not guessable - since guessable IDs would have also given them access to every user account on the system.

I wouldn't be thanking any gods, because no matter what those IDs look like, the only responsible thing in such a situation is to assume that an attacker does have access to every user account on the system. Moving from sequential IDs to something "hard" like UUIDs only delays the inevitable - and the extraordinarily narrow window in which that delay is actually relevant ain't worth considering in the grand scheme of things. Moving from sequential IDs to something like usernames ain't even really an improvement at all, but more of a tradeoff; yeah, you make life slightly harder for someone trying to target all users, but you also make life much easier for someone trying to target a specific user (since now the attacker can guess the username directly - say, based on other known accounts - instead of having to iterate through opaque IDs in the hopes of exposing said username).


>I wouldn't be thanking any gods, because no matter what those IDs look like, the only responsible thing in such a situation is to assume that an attacker does have access to every user account on the system. Moving from sequential IDs to something "hard" like UUIDs only delays the inevitable

Well, there's nothing "inevitable". It's a computer system, not the fulfillment of some prophecy.

You can have an attack vector giving you access to a layer, without guaranteed magic access to other layers.

But even if it "just delays the inevitable", that's a very good thing, as it can be time used to patch the issue.

Not to mention, any kind of cryptography just "delays the inevitable" too. With enough time it can be broken with brute force - might not even take millions of years, as we could get better at quantum computing in the next 50 or 100.


> But even if it "just delays the inevitable", that's a very good thing, as it can be time used to patch the issue.

My point is that in this case, the additional time is nowhere near sufficient to make much of a difference. This is especially true when you consider that an attacker could be probing URLs before finding an exploit, in which case that tiny delay between "exploit found" -> "all users compromised" shrinks to zero.


Bugs happen also in access control. Unguessable IDs make it much harder to exploit some of those bugs. Of course the focus should be on ensuring correct access control in the first place, but unguessable IDs can make the difference between a horrible disaster and a close call.

It's also possible to use auto-incrementing database IDs and encrypt them, if using UUIDs doesn't work for you. With appropriate software layers in place, encrypted IDs work more or less automatically.
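For illustration, a minimal sketch of the idea in plpgsql, modeled on the pseudo_encrypt function from the Postgres wiki: a toy Feistel-style permutation over int. The function name and constants here are illustrative, not a vetted cipher.

    CREATE FUNCTION obfuscate_id(value int) RETURNS int AS $$
    DECLARE
      l int; r int; tmp int; i int := 0;
    BEGIN
      l := (value >> 16) & 65535;
      r := value & 65535;
      WHILE i < 3 LOOP
        -- Feistel round: XOR the left half with a keyed mix of the right half
        tmp := l # ((((1366 * r + 150889) % 714025) / 714025.0) * 32767)::int;
        l := r;
        r := tmp;
        i := i + 1;
      END LOOP;
      -- bijective: distinct inputs always map to distinct outputs
      RETURN (r << 16) + l;
    END;
    $$ LANGUAGE plpgsql STRICT IMMUTABLE;

You expose obfuscate_id(id) in URLs and keep the plain sequence internal.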


> The only case where this might be valuable is business intelligence

Nitpick: I would not call this "business intelligence" (which usually refers to internal use of the company's own data) but "competitive intelligence". https://en.wikipedia.org/wiki/Competitive_intelligence



In general it's a defense-in-depth thing. You definitely shouldn't be relying on it, but as an attacker it just makes your life a bit harder if it's not straightforward to work out object IDs.

For example, imagine you're poking around a system that uses incrementing ints as public identifiers. Immediately, you can make a good guess that there's probably going to be some high privileged users with user_id=1..100 so you can start probing around those accounts. If you used UUIDs or similar then you're not leaking that info.

In GitLab's case this is much less relevant, and it's more of a cosmetic thing.


> In GitLab's case this is much less relevant (...)

Why, though? GitLab is often self hosted, so being able to iterate through objects, like users, can be useful for an attacker.


In my experience self-hosted GitLabs are rarely publicly-accessible in the first place; they're usually behind some sort of VPN.

As for an attacker being able to iterate through users, if that information is supposed to be private, and yet an attacker is getting anything other than a 404, then that's a problem in and of itself and my energy would be better spent fixing that.


This is again a defense in depth thing. In the age of WFH, cracking a corporate VPN is really not that difficult. If you can make an attacker's life harder for low cost you should do it just in case.


Except you ain't really putting up a meaningful obstacle against an attacker here. Compared to the typical effort of cracking a corporate VPN, brute-forcing IDs is downright trivial.

Like I said elsewhere: it's like calling ROT13 "defense in depth".


You're right, fair point.


> What value would there be in preventing guessing?

It prevents enumeration, which may or may not be a problem depending on the data. If you want to build a database of user profiles, it's much easier with incremental IDs than with UUIDs.

It is at least a data leak but can be a security issue. Imagine a server that correctly returns "invalid username OR password" on a wrong password to prevent enumeration. If you can still crawl all IDs and figure out if someone has an account that way it helps filter out what username and password combinations to try from previous leaks.

Hackers are creative and security is never about any single protection.


> If you can still crawl all IDs and figure out if someone has an account that way it helps filter out what username and password combinations to try from previous leaks.

Right, but like I suggested above, if you're able to get any response other than a 404 for an ID other than one you're authorized to access, then that in and of itself is a severe issue. So is being able to log in with that ID instead of an actual username.

Hackers are indeed creative, but they ain't wizards. There are countless other things that would need to go horribly horribly wrong for an autoincrementing ID to be useful in an attack, and the lack of autoincrementing IDs doesn't really do much in practice to hinder an attacker once those things have gone horribly, horribly wrong.

I can think of maybe one exception to this, and that's with e-commerce sites providing guest users with URLs to their order/shipping information after checkout. Even this is straightforward to mitigate (e.g. by generating a random token for each order and requiring it as a URL parameter), and is entirely inapplicable to something like GitLab.
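That mitigation is cheap, too. Something like this (a sketch; table and column names are made up, and gen_random_uuid() is built in from Postgres 13, older versions need the pgcrypto extension):

    CREATE TABLE orders (
        id           bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        access_token uuid NOT NULL DEFAULT gen_random_uuid(),  -- required in the guest-facing URL
        created_at   timestamptz NOT NULL DEFAULT now()
    );

    -- the lookup requires both, so guessing the numeric id alone gets you nothing
    SELECT * FROM orders WHERE id = $1 AND access_token = $2;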


> Right, but like I suggested above, if you're able to get any response other than a 404 for an ID other than one you're authorized to access, then that in and of itself is a severe issue. So is being able to log in with that ID instead of an actual username.

You're missing the point and you're not thinking like a hacker yet. It's not about the ID itself or even private profiles, but the fact that you can build a database of all users with a simple loop. For example your profile here is '/user?id=yellowapple' not '/user?id=1337'.

If it was the latter I could build a list of usernames by testing all IDs. Then I would cross-reference those usernames with previous leaks to know what passwords to test. And hacking an account is not the only use of such an exploit; just extracting all items from a competitor's database is enough in some cases. It all depends on the type of data and what business value it has. Sometimes an incrementing ID is perfectly fine, but it's more difficult to shard across services, so I usually default to UUIDs anyway except when I really want an incrementing ID.

Most of the time things don't have to go "horribly horribly wrong" to get exploited. It's more common to be many simple unimportant holes cleverly combined.

The username can still always be checked for existence on the sign-up step, and there aren't many ways of protecting from that. But it's easier to rate-limit sign-ups (as one should anyway) than viewing public profiles.

Do you leave your windows open when you leave from home just because the burglar can kick the front door in instead? It's the same principle.


> Do you leave your windows open when you leave from home just because the burglar can kick the front door in instead?

Yes (or at least: I ain't terribly worried if I do forget to close the windows before leaving), because the likelihood of a burglar climbing up multiple floors to target my apartment's windows is far lower than the likelihood of the burglar breaking through my front door.

But I digress...

> You're missing the point and you're not thinking like a hacker yet.

I'd say the same about you. To a hacker who's sufficiently motivated to go through all those steps you describe, a lack of publicly-known autoincremented IDs is hardly an obstacle. It might deter the average script kiddie or botnet, but only for as long as they're unwilling to rewrite their scripts to iterate over alphanumerics instead of just numerics, and they ain't the ones performing attacks like you describe in the first place.

> For example your profile here is '/user?id=yellowapple' not '/user?id=1337'.

In either case that's public info, and it's straightforward to build a list of at least some subset of Hacker News users by other means (e.g. scraping comments/posts - which, mind you, are sequential IDs AFAICT). Yes, it's slightly more difficult than looping over user IDs directly, but not by a significant enough factor to be a worthwhile security mitigation, even in depth.

Unless someone like @dang feels inclined to correct me on this, I'm reasonably sure the decision to use usernames instead of internal IDs in HN profile URLs has very little to do with this particular "mitigation" and everything to do with convenience (and I wouldn't be all that surprised if the username is the primary key in the first place). '/user?id=worksonmine' is much easier to remember than '/user?id=8675309', be it for HN or for any other social network / forum / etc.

> Sometimes an incrementing ID is perfectly fine, but it's more difficult to shard across services so I usually default to UUID anyway except when I really want an incrementing ID.

Sharding is indeed a much more valid reason to refrain from autoincrementing IDs, publicly-known or otherwise.


> multiple floors

But not if you lived on the ground floor I assume?

> iterate over alphanumerics instead of just numerics

Why would you suggest another sequential ID as a solution to a sequential ID? I didn't, UUIDs have decent entropy and are not viable to brute force. Don't bastardize my point just to make a response.

> I'm reasonably sure the decision to use usernames instead of internal IDs in HN profile URLs has very little to do with this particular "mitigation" and everything to do with convenience

It wasn't intended as an audit of HN; I was holding your hand while walking you through an example and chose the site we're on. I missed my mark, and I don't think a third attempt will make much difference when you're trying so hard not to get it. If you someday have public data you don't want a competitor to enumerate, you'll know the solution exists.


> But not if you lived on the ground floor I assume?

Still wouldn't make much of a difference when my windows are trivial to break into even when closed.

> Why would you suggest another sequential ID

I didn't.

> UUIDs have decent entropy and are not viable to brute force.

They're 128 bits (for UUIDv4 specifically; other varieties in common use are less random). That's a pretty good amount of entropy, but far from insurmountable.

And you don't even need to brute-force anything; if these IDs are public, then they're almost certainly being used elsewhere, wherein they can be captured. There's also nothing stopping an attacker from probing IDs ahead of time, prior to an exploit being found. The mitigations to these issues are applicable no matter the format or size of the IDs in question.

And this is assuming the attacker cares about the specific case of attacking all users. If the attacker is targeting a specific user, or just wants to attack the first user one finds, then none of that entropy matters in the slightest.

Put simply: there are enough holes in the "random IDs are a meaningful security measure" argument for it to work well as a strainer for my pasta.

> when you're trying so hard not to get it

Now ain't that the pot calling the kettle black.

> If you someday have public data you don't want a competitor to enumerate you'll know the solution exists.

If I don't want a competitor to enumerate it then I wouldn't make the data public in the first place. Kinda hard to enumerate things when the only result of attempting to access them is a 404.


> I didn't

Then how would you iterate it? You said:

>> iterate over alphanumerics instead of just numerics

I assume you mean kind of like youtube IDs where 123xyz is converted to numerics. I wouldn't call brute-forcing iterating, at least not in the sense we're discussing here.

> They're 128 bits (for UUIDv4 specifically; other varieties in common use are less random). That's a pretty good amount of entropy, but far from insurmountable.

If you generate a billion UUIDs every second for 100 years you have a 50% chance to have 1 (one!) collision. It's absolutely useless to try to guess even a small subset.

> And you don't even need to brute-force anything; if these IDs are public, then they're almost certainly being used elsewhere, wherein they can be captured.

Sessions can be hijacked anyway, so why not leave the user session db exposed to the web unprotected. Right? There will always be holes left when you fix one. That doesn't mean you should just give up and make it too easy.

> And this is assuming the attacker cares about the specific case of attacking all users. If the attacker is targeting a specific user, or just wants to attack the first user one finds, then none of that entropy matters in the slightest.

So your suggestion is to leave them all incrementing instead, do I understand you correctly?

> Put simply: there are enough holes in the "random IDs are a meaningful security measure" argument for it to work well as a strainer for my pasta.

It's such a simple thing to solve though that it doesn't really matter.

> If I don't want a competitor to enumerate it then I wouldn't make the data public in the first place. Kinda hard to enumerate things when the only result of attempting to access them is a 404.

There are lots of things you may want to have public but not easily crawlable. There might not even be the concept of users and restrictions. A product returning 404 for everything to everyone might not be very useful to anyone. You've been given plenty of examples in other comments, and I am sure you understand the points.

One example could be recipes, and you want to protect your work while giving real visitors (not users, there are no users to hack) the ability to search by name or ingredients. With incremental IDs you can scrape them all no problem and steal all that work. With a UUID you have to guess either the UUID for all, or guess every possible search term.

Another could be a chat app, and you don't want to expose the total number of users and messages sent. If the only way to start a chat is knowing a public key and all messages have a UUID as PK how would you enumerate this? With incrementing IDs you know you are user 1337 and you just sent message 1,000,000. With random IDs this is impossible to know. Anyone should still be able to add any user if the public key is known, so 404 is no solution.

I'm sure you'll have something to say about that as well. The point is to make it difficult to abuse, while still being useful to real visitors. I don't even understand your aversion to random IDs, are they so difficult to implement? What's the real problem?


> Then how would you iterate it?

Like with any other number. UUIDs are just 128-bit numbers, ranging from 00000000-0000-0000-0000-000000000000 to FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF.

I'll concede that iterating through the entirety of that range would take a very long time, but this still presumes that said iteration in its entirety is necessary in the first place.

> If you generate a billion UUIDs every second for 100 years you have a 50% chance to have 1 (one!) collision.

Maybe, if they're indeed randomly-generated. Are they indeed UUIDv4? Is your RNG up to snuff?

And the probability doesn't need to be as high as 50% to be a tangible danger. A 0.5% chance of an attacker guessing a UUID to exploit is still more than enough to consider that to be insufficient as a layer of security. Meanwhile, you're likely fragmenting your indices and journals, creating performance issues that could be exploited for a denial-of-service. Tradeoffs.

> Sessions can be hijacked anyway, so why not leave the user session db exposed to the web unprotected.

Hijacking a session is harder than finding a publicly-shared ID floating around somewhere.

> So your suggestion is to leave them all incrementing instead, do I understand you correctly?

My suggestion is to use what makes sense for your application. That might be a random number, if you've got a sharded database and you want something resistant to collisions. That might be a sequential number, if you need cache locality or simply want to know at a glance how many records you're dealing with. It might be a number with sequential high digits and random low digits, if you want the best of both worlds. It might be a text string if you just care about human readability and don't care about performance. It might even be multiple fields used together as a composite primary key if you're into that sort of thing.

My point, as was obvious in my original comment and pretty much every descendant thereof, is that of the myriad useful factors around picking a primary key type, security ain't really among them.

> There might not even be the concept of users and restrictions.

In which case everything served from the database should be treated as public - including the quantities thereof.

> You've been given plenty of examples in other comments, and I am sure you understand the points.

Correct, and if those examples were satisfactory for arguing a security rationale for avoiding sequential IDs, then I wouldn't still be here disagreeing with said arguments, now would I?

> One example could be recipes, and you want to protect your work while giving real visitors (not users, there are no users to hack) the ability to search by name or ingredients. With incremental IDs you can scrape them all no problem and steal all that work. With a UUID you have to guess either the UUID for all, or guess every possible search term.

You don't need to guess every possible search term. How many recipes don't use the most basic and common ingredients? Water, oil, sugar, salt, milk, eggs... I probably don't even have to count with my toes before I've come up with a list of search terms that would expose the vast majority of recipes for scraping. This is peak security theater.

> Another could be a chat app, and you don't want to expose the total number of users and messages sent. If the only way to start a chat is knowing a public key and all messages have a UUID as PK how would you enumerate this?

Dumbest way would be to DDoS the thing. At some number of concurrent sessions and/or messages per second, it'll start to choke, and that'll give you the upper bound on how many people are using it, at least at a time.

Smarter way would be to measure request times; as the user and message tables grow, querying them takes longer, as does inserting into them if there are any indices to update in the process - even more so when you're opting for random IDs instead of sequential IDs, because of that aforementioned index and journal fragmentation, and in general because they're random and therefore entail random access patterns.

> I don't even understand your aversion to random IDs, are they so difficult to implement? What's the real problem?

My aversion ain't to random IDs in and of themselves. They have their place, as I have repeatedly acknowledged in multiple comments in this thread (including to you specifically).

My aversion is to treating them as a layer of security, which they are not. Yes, I'm sure you can contrive all sorts of narrowly-tailored scenarios where they just so happen to provide some marginal benefit, but in the real world "sequential IDs are insecure" is the sort of cargo-cultish advice that's impossible to take seriously when there are countless far-bigger fish to fry.

My additional aversion is to treating them as a decision without cost, which they also are not. As touched upon above, they carry some rather significant performance implications (see e.g. https://www.2ndquadrant.com/en/blog/sequential-uuid-generato...), and naïvely defaulting to random IDs everywhere without considering those implications is a recipe for disaster. There are of course ways to mitigate those impacts, like using UUIDs that mix sequential and random elements (as that article advocates), but this is at the expense of the supposed "security" benefits (if you know how old a record is you can often narrow the search range for its ID), and it still requires a lot more planning and design and debugging and such compared to the tried-and-true "just slap an autoincrementing integer on it and call it a day".


> I'll concede that iterating through the entirety of that range would take a very long time

You don't say?

> but this still presumes that said iteration in its entirety is necessary in the first place.

It is, because it's compared to sequential IDs where you know exactly the start and end. No way of knowing with UUIDs.

> Maybe, if they're indeed randomly-generated. Are they indeed UUIDv4? Is your RNG up to snuff?

Stop constantly moving the goalposts, and assume they're used correctly, Jesus Christ. Anytime someone talks about UUID it's most likely v4, unless you want non-random/deterministic v5 or time-sortable v7. But the most common is v4.

> Hijacking a session is harder than finding a publicly-shared ID floating around somewhere.

Even firebase stores refresh tokens accessible by javascript (localstorage as opposed to HTTPOnly). Any extension with sufficient users is more viable than finding a single collision of a UUID. Programs with access to the cookies on the filesystem, etc. It's much easier to hijack sessions than guessing UUIDs.

> Dumbest way would be to DDoS the thing. At some number of concurrent sessions and/or messages per second, it'll start to choke, and that'll give you the upper bound on how many people are using it, at least at a time.

Won't tell you a single thing. Might be a beefy server serving 3 users, or a slow server with lots of downtime for millions of users. They may all be using it at once, or few of them sporadically. No way of knowing which. It's a retarded guesstimate. But this only shows the mental gymnastics you're willing to use to be "right". Soon you'll tell me "I'll just ask my contact at the NSA".

> sequential IDs are insecure

Nobody claimed this. They can be enumerated which may or may not be a problem depending on the data. Which I said in my first comment, and you seem to agree. This entire thread was a complete waste of my time.


> This entire thread was a complete waste of my time.

The feeling is mutual. Have a nice day.


I follow this best practice; there are a few reasons why I do. It doesn't have to do with using a guessed primary ID for some sort of privilege escalation, though. It has more to do with not leaking any company information.

When I worked for an e-commerce company, one of our biggest competitors used an auto-incrementing integer as primary key on their “orders” table. Yeah… You can figure out how this was used. Not very smart by them, extremely useful for my employer. Neither of these will allow security holes or leak customer info/payment info, but you’d still rather not leak this.


> extremely useful for my employer.

I've been in these shoes before, and finding this information doesn't help you as an executive or leader make any better decisions than you could have before you had the data. No important decision is going to be swayed by something like this, and any decision that is swayed probably wasn't important.

Knowing how many orders are placed isn't so useful without average order value or items per cart, and the same is true for many other kinds of data gleaned from this method.


That's not correct. Not every market is the same in its dynamics.

Yes, most of the time that information was purely insightful and was simply monitored. However, at some moments it definitely drove important decisions.


Like what?

What's going to change how a team develops (physical) products? What's a merchandiser or buyer going to learn that influences how they spend millions of dollars or deal with X weeks-on-hand of existing inventory? What's an operations manager going to learn that improves their ability to warehouse and ship product? How's marketing going to change their strategy around segmentation or channel distribution? What's a CEO going to learn that changes what departments or activities they want to invest in?

At best you get a few little tidbits of data you can include in presentations or board decks, but nothing that's going to influence critical decisions on how money is getting spent to get the job done or how time is getting allocated to projects. Worst case you have an inexperienced CEO that's chasing rather than leading, and you just end up copying superficial aspects of your competitors without the context or understanding of why they did what they did.

I've called up execs at competitors and had friendly chats that revealed more about their business in 30 minutes than you could possibly find out through this "method".


One good argument I found [^1] about not exposing primary keys is that primary keys may change (during system/db change) and you want to ensure users have a consistent way of accessing data.

[^1]: https://softwareengineering.stackexchange.com/questions/2183...


It also exposes your growth metrics. When using sequential IDs, one can tell how many users you have, how many users a month you are getting, and all sorts of useful stuff that you probably don't want to expose.

It's how the British worked out how many tanks the German army had.


> This is especially important when you use sequential auto-incrementing identifiers with type integer or bigint since they are guessable.

I thought we had long since moved past that to GUIDs or UUIDs for primary keys. Then if you still need some kind of sequential numbering that has meaning in relation to the other fields, make a separate column for that.
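Roughly this shape (a sketch; assumes Postgres 13+ for gen_random_uuid(), and the names are made up). The random UUID is the only identifier that leaves the system, while the per-project counter carries the human-facing "issue #N" meaning, similar to GitLab's iid:

    CREATE TABLE issues (
        id         uuid   PRIMARY KEY DEFAULT gen_random_uuid(),  -- exposed externally, not guessable
        project_id uuid   NOT NULL,
        iid        bigint NOT NULL,  -- sequential only within its project
        title      text   NOT NULL,
        UNIQUE (project_id, iid)
    );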


Except now people are coming back around, because they’re realizing (as the article mentions) that UUID PKs come with enormous performance costs. In fairness, any non-k-sortable ID will suffer the same fate, but UUIDs are the most common of the bunch.


Whether you think it's a real problem or not, if you want to solve it somehow, I compiled the current options one has in Rails apps:

https://nts.strzibny.name/alternative-bigint-id-identifiers-...


There are also reasons outside infosec concerns. For example where such PKs would be directly related to your revenue, such as orders in an e-commerce platform. You wouldn't want competitors to have an estimate of your daily volume, that kind of thing.


It can be really handy for scraping/archiving websites if they're kind enough to use a guessable id


It really depends, but useful knowledge can be derived from this. If user accounts use sequential IDs, ID 1 is most likely the admin account that was created as the first user.


> For example, Github had 128 million public repositories in 2020. Even with 20 issues per repository it will cross the serial range. Also changing the type of the table is expensive.

I expect the majority of those public repositories are forks of other repositories, and those forks only exist so someone could create pull requests against the main repository. As such, they won't ever have any issues, unless someone makes a mistake.

Beyond that, there are probably a lot of small, toy projects that have no issues at all, or at most a few. Quickly-abandoned projects will suffer the same fate.

I suspect that even though there are certainly some projects with hundreds and thousands of issues, the average across all 128M of those repos is likely pretty small, probably keeping things well under the 2B limit.

Having said that, I agree that using a 4-byte type (well, 31-bit, really) for that table is a ticking time bomb for some orgs, github.com included.


It is still under the limit today with 362,107,148 repositories and 818,516,506 unique issues and pull requests:

https://play.clickhouse.com/play?user=play#U0VMRUNUIHVuaXEoc...


I'm guessing this won't be including issues & PRs from private repos, which could be substantial


Elapsed: 12.618 sec, read 7.13 billion rows, 42.77 GB

This is too long; it seems the ORDER BY is not set up correctly for the table.


Also,

> `repo_name` LowCardinality(String),

This is not a low cardinality:

7133122498 = 7.1B

Don't use low cardinality for such columns!


The LowCardinality data type does not require the whole set of values to have a low cardinality. It benefits when the values have locally low cardinality. For example, if the number of unique values in `repo_name` is a hundred million, but for every million consecutive values, there are only ten thousand unique, it will give a great speed-up.


> LowCardinality data type does not require the whole set of values to have a low cardinality.

Don't mislead others. It's not true unless low_cardinality_max_dictionary_size is set to some other value than the default one: 8192.

It does not work well for a hundred million values.


This is an ad-hoc query. It does a full scan, processing slightly less than a billion rows per second on a single machine, and finishes in a reasonable time with over 7 billion events on GitHub from 2015. While it does not make sense to optimize this table for my particular query, the fact that it works well for arbitrary queries is worth noting.


That query took a long time


I'm convinced that GitHub's decision to move away from Rails was partly influenced by a significant flaw in ActiveRecord: its lack of support for composite primary keys. The need for something as basic as PRIMARY KEY(repo_id, issue_id) becomes unnecessarily complex within ActiveRecord, forcing developers to use workarounds that involve a unique key alongside a singular primary key column to meet ActiveRecord's requirements—a less than ideal solution.

Moreover, the use of UUIDs as primary keys, while seemingly a workaround, introduces its own set of problems. Despite adopting UUIDs, the necessity for a unique constraint on the (repo_id, issue_id) pair persists to ensure data integrity, but this significantly increases the database size, leading to substantial overhead. This is a major trade-off with potential repercussions on your application's performance and scalability.

This brings us to a broader architectural concern with Ruby on Rails. Despite its appeal for rapid development cycles, Rails' application-level enforcement of the Model-View-Controller (MVC) pattern, where there is a singular model layer, a singular controller layer, and a singular view layer, is fundamentally flawed. This monolithic approach to MVC will inevitably lead to scalability and maintainability issues as the application grows. The MVC pattern would be more effectively applied within modular or component-based architectures, allowing for better separation of concerns and flexibility. The inherent limitations of Rails, especially in terms of its rigid MVC architecture and database management constraints, are significant barriers for any project beyond the simplest MVPs, and these are critical factors to consider before choosing Rails for more complex applications.


Do you have any sources on GitHub moving away from Rails? This is the first I've heard of it, and my googlefu has returned zero results around this. Just last year they had a blog post around Building GitHub with Ruby and Rails[0], so your remark caught me off guard.

[0]: https://github.blog/2023-04-06-building-github-with-ruby-and...


Perhaps too late, but Rails 7.1[1] introduced composite primary key support, and there's been a third-party gem[2] offering the functionality for earlier versions of ActiveRecord.

[1] https://guides.rubyonrails.org/7_1_release_notes.html#compos...

[2] https://github.com/composite-primary-keys/composite_primary_...


Whilst I would agree that a monolith can run into scalability issues, I am not sure your characterisation of Rails as such is proportionate.

To say that Rails' architecture is a "significant barrier for any project beyond the simplest MVPs" is rather hyperbolic, and the list of companies running monolithic Rails apps is a testament to that.

On this very topic, I would recommend reading GitLab's own post from 2022 on why they are sticking with a Rails monolith[1].

[1] - https://about.gitlab.com/blog/2022/07/06/why-were-sticking-w...


I can't really comment on GitHub, but Rails supports composite primary keys as of Rails 7.1, the latest released version [1].

About modularity, there are projects like Mongoid which can completely replace ActiveRecord. And there are plugins for the view layer, like "jbuilder" and "haml", and we can bypass the view layer completely by generating/sending data inside controller actions. But fair, I don't know if we can completely replace the view and controller layers.

I know I'm missing your larger point about architecture! I don't have so much to say, but I agree I've definitely worked on some hard-to-maintain systems. I wonder if that's an inevitability of Rails or an inevitability of software systems—though I'm sure there are exceptional codebases out there somewhere!

[1] https://guides.rubyonrails.org/7_1_release_notes.html#compos...


Do we know for sure if gitlab cloud uses a multi-tenanted database, or a db per user/customer/org? In my experience products that offer both a self hosted and cloud product tend to prefer a database per customer, as this greatly simplifies the shared parts of the codebase, which can use the same queries regardless of the hosting type.

If they use a db per customer then no one will ever approach those usage limits and if they do they would be better suited to a self hosted solution.


Unless something has substantially changed since I last checked, gitlab.com is essentially self-hosted GitLab Ultimate with a few feature flags to enable some marginally different behaviour. That is, it uses one multi-tenant DB for the whole platform.


Not according to [1] where the author said

> This effectively results in two code paths in many parts of your platform: one for the SaaS version, and one for the self-hosted version. Even if the code is physically the same (i.e. you provide some sort of easy to use wrapper for self-hosted installations), you still need to think about the differences.

1. https://yorickpeterse.com/articles/what-it-was-like-working-...


I've toyed with various SaaS designs, and multi-tenanted databases always come to the forefront of my mind. It seems to simplify the architecture a lot.


> Having said that, I agree that using a 4-byte type (well, 31-bit, really) for that table is a ticking time bomb for some orgs

A bomb defused in a migration that takes eleven seconds


The migration has to rewrite the whole table; bigint needs 8 bytes, so you have to make room for that.

I have done several such primary key migrations on tables with 500M+ records, they took anywhere from 30 to 120 minutes depending on the amount of columns and indexes. If you have foreign keys it can be even longer.

Edit: But there is another option which is logical replication. Change the type on your logical replica, then switch over. This way the downtime can be reduced to minutes.
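Roughly this shape (a sketch; names and the connection string are placeholders, and the publisher needs wal_level = logical). Logical replication applies rows by value, so the subscriber's copy of the table can already declare the column as bigint:

    -- on the current primary, where id is still int
    CREATE PUBLICATION int_to_bigint FOR TABLE big_table;

    -- on the replica, after creating big_table with id as bigint
    CREATE SUBSCRIPTION int_to_bigint_sub
        CONNECTION 'host=old-primary dbname=app user=replicator'
        PUBLICATION int_to_bigint;

    -- once replication lag is zero: stop writes, let it drain, point the
    -- application at the replica, then drop the subscription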


In practice the only option that I’ve seen work for very large teams and very large relational databases is online schema change tools like https://github.com/shayonj/pg-osc and https://github.com/github/gh-ost (the latter developed for GitHub’s monolith). It’s just too difficult to model what migrations will cause problems under load. Using a binlog/shadowtable approach for all migrations mostly obviates the problem.


Rails migrations really fail to be viable at scale. And it's not really because of the migrations in Rails, it's because changes in PostgreSQL get very, very expensive. Things have gotten better in PG 15, but it's still not quite there yet.

I've run into an integer PK being almost exhausted twice now. In one of these cases I used pg-osc to rewrite an 8TB table on disk without downtime over a period of a week to fix the issue. In the other case it was about 6TB within 2 days. It's doable, but the amount of planning and time it takes is non-trivial.

I so far have not found any other viable solutions either. I keep hearing about logical replication but I haven't seen a great real-world solution at scale so far.


What has worked in my experience:

- decouple migration execution from code deployment. Require that all migrations are in totally separate PRs that can be run well in advance of the code that depends on the db change being deployed

- maintain a stateful migration execution system using an online schema change system supervised by DBAs

- PRs adding migrations still need to be merged and deployed, but can only do so after the actual migration has run.


+1. We have been working on a schema migration tool called Bytebase for 3+ years, and we follow similar principles.


You can also migrate it using logical replication.


This largely depends on the disk. I wouldn't expect that to take 30mins on a modern NVME drive, but of course it depends on table size.


Large tables take hours, if not days. I attempted a test case on AWS using souped-up io2 disks (the fastest, most expensive disks they have) and a beast of a DB server (r5.12xl I think), and it became abundantly clear that at a certain scale you won't be doing any kind of in-place table updates like that on the system. Especially if your allowed downtime is a one-hour maintenance window per week...


I did it on a r6.24xlarge RDS instance and the CPU wasn't doing much during the operation. IO peaked at 40k IOPS on EBS with provisioned IOPS, I'm not sure if a local disk would be any faster but I already know that rewriting the table and creating the indexes are all single threaded so there isn't much you could gain.

Once I got the logical replication setup to work I changed 20 tables on the replica and made the switch with 15 minutes of downtime. That saved me a lot of long nights.

You can get an idea of how long this could take by running pg_repack, it's basically doing the same thing: Copying the data and recreating all the indexes.


> not sure if a local disk would be any faster

Uh, yes. NVMe drives can hit 1,000,000 IOPS, plus the latency is orders of magnitude lower than EBS.


Disk wasn't the limit in my case, index creation is single threaded.


Which is still very IO bound... Wonder what kinda IOPS you were observing? Also they make pretty fast CPUs these days :)


In JavaScript land, postgres bigints deserialize as strings. Is your application resilient to this? Are your downstream customers ready to handle that sort of schema change?

Running the db migration is the easy part.


Depends on the lib. Max safe int size is like 9 quadrillion. You can safely deserialize serial bigints to this without ever worrying about hitting that limit in many domains.


> Max safe int size is like 9 quadrillion.

2^53, to be precise. If your application involves assigning a unique identifier to every ant on the planet Earth (approx. 10^15 ≈ 2^50), you might need to think about this. Otherwise, I wouldn't worry about it.


11 seconds won't fix all your foreign keys, nor all the code written against it that assumes an int type will accommodate the value.


Migrating primary keys from int to bigint is feasible. Requires some preparation and custom code, but zero downtime.

I'm managing a big migration following mostly this recipe, with a few tweaks: http://zemanta.github.io/2021/08/25/column-migration-from-in...

FKs, indexes and constraints in general make the process more difficult, but possible. The data migration took some hours in my case, but no need to be fast.

AFAIK GitLab has tooling to run tasks after upgrade to make it work anywhere in a version upgrade.
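The core of that recipe looks roughly like this (a compressed sketch; table, function and constraint names are placeholders, and the batching, locking and FK handling are where the real effort goes):

    -- 1. add the replacement column and keep it in sync going forward
    ALTER TABLE events ADD COLUMN id_bigint bigint;

    CREATE FUNCTION events_sync_id_bigint() RETURNS trigger AS $$
    BEGIN
      NEW.id_bigint := NEW.id;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_id_bigint_sync
      BEFORE INSERT OR UPDATE ON events
      FOR EACH ROW EXECUTE FUNCTION events_sync_id_bigint();

    -- 2. backfill existing rows in small batches, advancing the range each time
    UPDATE events SET id_bigint = id
      WHERE id BETWEEN 1 AND 10000 AND id_bigint IS NULL;

    -- 3. build the index without blocking writes, then swap the primary key
    CREATE UNIQUE INDEX CONCURRENTLY events_id_bigint_idx ON events (id_bigint);
    ALTER TABLE events DROP CONSTRAINT events_pkey;
    ALTER TABLE events ADD CONSTRAINT events_pkey
      PRIMARY KEY USING INDEX events_id_bigint_idx;
    -- (then rename the columns and fix sequences, FKs and app code; the linked
    -- post covers those steps)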


Being two orders of magnitude away from running out of ids is too close for comfort anyway.


> github.com included

Typo?


The point about the storage size of UUID columns is unconvincing. 128 bits vs. 64 bits doesn't matter much when the table has five other columns.

A much more salient concern for me is performance. UUIDv4 is widely supported but is completely random, which is not ideal for index performance. UUIDv7[0] is closer to Snowflake[1] and has some temporal locality but is less widely implemented.

There's an orthogonal approach which is using bigserial and encrypting the keys: https://github.com/abevoelker/gfc64

But this means 1) you can't rotate the secret and 2) if it's ever leaked everyone can now Fermi-estimate your table sizes.

Having separate public and internal IDs seems both tedious and sacrifices performance (if the public-facing ID is a UUIDv4).

I think UUIDv7 is the solution that checks the most boxes.

[0]: https://uuid7.com/

[1]: https://en.wikipedia.org/wiki/Snowflake_ID
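If your Postgres does not ship a UUIDv7 generator yet, a toy time-prefixed stand-in looks something like this (not spec-compliant UUIDv7, since the version/variant bits are not set, and it assumes the pgcrypto extension for gen_random_bytes(); the function name is made up):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    -- 48-bit millisecond timestamp prefix + 80 random bits, so new keys land
    -- near the end of the index instead of at random positions
    CREATE FUNCTION uuid7_ish() RETURNS uuid AS $$
      SELECT (
        lpad(to_hex((extract(epoch FROM clock_timestamp()) * 1000)::bigint), 12, '0')
        || encode(gen_random_bytes(10), 'hex')
      )::uuid;
    $$ LANGUAGE sql VOLATILE;

    CREATE TABLE items (
        id      uuid PRIMARY KEY DEFAULT uuid7_ish(),
        payload jsonb
    );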


> The point about the storage size of UUID columns is unconvincing. 128 bits vs. 64 bits doesn't matter much when the table has five other columns.

But it's not just the size of that one column, it's also the size of all the places that id is used as a FK and the indexes that may be needed on those FK columns. Think about something like a user id that might be referenced by dozens or even hundreds of FKs throughout your database.


...and this has not just a size impact but also a substantial performance impact.


> 128 bits vs. 64 bits doesn't matter much when the table has five other columns.

!!!!

But those 5 other columns are not indexed.

---

There are three levels of database performance:

1. Indices and data fit in memory.

2. Indices fits in memory, data does not.

3. Neither indices nor data fit in memory.

If you can do #1 great, but if you don't have that, fight like a madman for #2.

---

Doubling your index sizes just makes it harder.


think of the primary keys in a database like typedef void* ie it's your fundamental pointer and the size of it will impact every aspect of performance throughout - memory/disk footprint and corresponding throughput bottlenecks, cpu time comparing keys which is what every operation reduces to in the deepest inner-most loops of joins and lookups etc.

when x86-64 cpus were new the performance impact from switching to 64-bit pointers was so bad we had to create x32/ilp32 and the reason .NET still has "prefer 32-bit" as a default even today.

using 128-bit uuids as PKs in a database is an awful mistake


Prefer 32-bit does nothing for modern .NET targets. This is actually the first time I've heard the term being used in many years, even back in .NET Framework 4.6.x days it wasn't much of a concern - the code would be executed with 64-bit runtime as a default on appropriate hosts.


the 32bitpref corflag isn't part of a .net core target since those are always il, it is more properly a runtime concern as it should be

it's still the default in .net as of 4.8.1 (has been since it was introduced in 4.5 roughly coinciding w/java's pointer compression feature which is also still the default today)


The .NET Framework target might as well not exist :D

Many libraries are straight up dropping NS2.0 TFM and overall community tends to perceive the request to still target it very negatively. Sure, Visual Studio still runs some in-process features on top of it, and so do Office add-ins, but it's an active work-in-progress to get rid of any new code having to be written while still targeting it.

So, in that regard, this setting does not matter.


in .net core there's no setting, you target the x32 abi by choosing a runtime or target platform ending in "32" or "x86"


The v7 isn’t a silver bullet. In many cases you don’t want to leak the creation time of a resource. E.g. you want to upload a video a month before making it public to your audience without them knowing.


> There's an orthogonal approach which is using bigserial and encrypting the keys...

Another variant of this approach: https://pgxn.org/dist/permuteseq/

It is also feasible to encrypt the value on display (when placing it in URLs, emails, &c):

https://wiki.postgresql.org/wiki/Pseudo_encrypt

This maintains many of the benefits of sequential indexes and does allow you to change the key. However, if the key is changed, it would break any bookmarks, invalidate anything sent in older emails -- it would have the same effect as renaming everything.


It very much does when you have a ton of FKs (enforced or not) using such a column, which is thus indexed and used in many joins. Making it twice as hard for the hot part of an index to fit in RAM is never good for performance, nor for the cloud bill.

If you have a column that is used in many joins, there are performance reasons to make it as compact as possible (but not smaller).


If I’ve learned anything in my 7 years of software development it’s that this kind of expertise is just “blah blah blah” that will get you fired. Just make the system work. This amount of trying to anticipate problems will just screw you up. I seriously can’t imagine a situation where knowing this would actually improve the performance noticeably.


Would it ever make sense to have a UUIDv7 as the primary key but then another slug field for a public ID, e.g. one that is shorter and better in a URL, or even one the user can customize?


Yes, sure, but now you have to handle two IDs, and guaranteeing uniqueness across machines or clusters becomes hard.


That and a uuid is going to be unique across all tables and objects, whereas a slug will only be unique within a certain subset e.g. users within an organization. I’ve seen a production issue IRL where someone (definitely not me) wrote a query fetching objects by slug and forgot to include the ‘AND parent_slug = xxx’


Slight nit-pick, but I would pick up the author on the text vs varchar section.

The author effectively wastes many words trying to prove a non-existent performance difference and then concludes "there is not much performance difference between the two types".

This horse bolted a long time ago. It's not "not much", it's "none".

The Postgres Wiki[1] explicitly tells you to use text unless you have a very good reason not to. And indeed the docs themselves[2] tell us that "For many purposes, character varying acts as though it were a domain over text" and further down in the docs in the green Tip box, "There is no performance difference among these three types".

Therefore Gitlab's use of (mostly) text would indicate that they have RTFM and that they have designed their schema for their choice of database (Postgres) instead of attempting to implement some stupid "portable" schema.

[1] https://wiki.postgresql.org/wiki/Don%27t_Do_This#Don.27t_use... [2] https://www.postgresql.org/docs/current/datatype-character.h...


>The author effectively wastes many words trying to prove a non-existent performance difference and then concludes "there is not much performance difference between the two types".

They then also show that there is in fact a significant performance difference when you need to migrate your schema to accommodate a change in length of strings being stored. Altering a table to change a column from varchar(300) to varchar(200) needs to rewrite every single row, whereas updating the constraint on a text column is essentially free: just a full table scan to ensure that the existing values satisfy your new constraint.

FTA:

>So, as you can see, the text type with CHECK constraint allows you to evolve the schema easily compared to character varying or varchar(n) when you have length checks.
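In concrete terms, the pattern looks like this (a sketch reusing the article's 300/200 limits; table and constraint names are made up):

    CREATE TABLE projects (
        id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        name text NOT NULL,
        CONSTRAINT name_length CHECK (char_length(name) <= 300)
    );

    -- tightening the limit later swaps the constraint: a validation scan, no
    -- table rewrite; NOT VALID + VALIDATE keeps the lock short
    ALTER TABLE projects DROP CONSTRAINT name_length;
    ALTER TABLE projects ADD CONSTRAINT name_length
        CHECK (char_length(name) <= 200) NOT VALID;
    ALTER TABLE projects VALIDATE CONSTRAINT name_length;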


> They then also show that there is in fact a significant performance difference when you need to migrate your schema to accommodate a change in length of strings being stored.

Which is a pointless demonstration if you RTFM and design your schema correctly, using text, just like the manual and the wiki tells you to.

> the text type with CHECK constraint allows you to evolve the schema easily compared to character varying or varchar(n) when you have length checks.

Which is exactly what the manual tells you ....

"For many purposes, character varying acts as though it were a domain over text"


And what is wrong with someone figuring out for themselves and explaining to others why some suggestion makes sense logically, rather than just quoting the manual?


> And what is wrong with someone figuring out for themselves....

Whatever floats your boat, of course.

However, the Postgres manual is amongst the best written and most comprehensive in the industry.

Therefore wasting your time proving the Postgres documentation is correct is, well, a rather interesting hobby.

And if I'm expected to read waffle on someone's blog post, I would rather they tell me something I do not know, and not something obvious that is already very clearly and comprehensively stated in a million places, whether the manual, the wiki or elsewhere.

Finally, as I said, it's old, VERY old news. So it's not even like the author is highlighting something brand-new in the latest release. It's been that way basically forever at this stage.


"Foreign keys are expensive" is an oft-repeated, rarely benchmarked claim. There are tons of ways to do it incorrectly. But in your stack you are always enforcing integrity _somewhere_ anyway. Leveraging the database instead of reimplementing it requires knowledge and experimentation, and more often than not it will save your bacon.


Has anyone written about or noticed the performance differences between Gitlab and GitHub?

They're both Rails-based applications but I find page load times on Gitlab in general to be horrific compared to GitHub.


I used Gitlab a few years ago, but back then it had severe client-side performance problems on large pull requests. GitHub isn't ideal with them either, but it manages to be decent.


Yeah, good reason to split up pull requests ;) I do think it improved a lot over the last two years, though


> compared to GitHub.

This is like comparing Chrome and other browsers, even Chromium-based ones.

Chrome and GitHub will employ all the tricks in the book, even if they screw you. For example, think of how many hours of despair I've wasted manually dissecting a git history on my employer's GitHub by opening merge diffs, hitting Ctrl-F, seeing no results and moving on to the next... only to find on the 100th diff that, deep down in the diff, they had hidden the most important file because it was more convenient for them (so one team lead could hit some page-load metric and get a promotion)


I mean GitHub in general has been pretty reliable minus the two outages they had last year and is usually pretty performant or I wouldn’t use their keyboard shortcuts.

There are some complaints here from a former dev about gitlab that might provide insight into its culture and lack of regard for performance: https://news.ycombinator.com/item?id=39303323

PS: I do not use GitLab enough to notice performance issues, but thought you might appreciate the article


> I mean GitHub in general has been pretty reliable minus the two outages they had last year

Huh? GitHub has had major outages practically every other week for a few years now. There are pages of HN threads[1].

There's a reason why githubstatus.com doesn't show historical metrics and uptime percentages: it would make them look incompetent. Many outages aren't even officially reported there.

I do agree that when it's up, performance is typically better than Gitlab's. But describing GH as reliable is delusional.

[1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Delusional? Anecdotal maybe…I was describing my experience so thanks for elaborating.

I only use it as a code repository. Was it specific services within GitHub that failed a lot?


My apologies, that came off a bit harsh.

> Was it specific services within GitHub that failed a lot?

Well, everything from its frontends, to the CI service, to the Git service itself. People weren't able to push, pull, deploy or review code for hours at a time. Just major disruptions all around, which happened regularly.

I do think this has improved slightly over the past few months, but you were lucky if you weren't affected much.


No worries you’re fine. I guess downtime was much more widespread than I could even imagine.

I do remember one time all of us at work joking that we should get the day off when GitHub was down. :D


You can't throw around specific metrics ("two outages they had last year") then claim you were just sharing a personal anecdote when someone calls you out and brings receipts. Just take the L.


I absolutely can because this is a discussion. Being called delusional for sharing my experience on a platform I use _at least_ 40 hours a week was a bit much but in case you missed it, I relented. They brought receipts and I came down from my position because there was none. I wasn't arguing with anyone or saying anyone was wrong about anything.

Do you expect all of your conversations to be backed by facts and citation? Is that why the only comments you make are aggressive and confrontational? Because it's easier than the bar you're setting for everyone else? Yea, I looked back at your history, not a single url.

And now I'm taking a position.

> No worries you’re fine. I guess downtime was much more widespread than I could even imagine.

Did you miss that?


More comments on this submission: https://news.ycombinator.com/item?id=39333220


I always wondered what that extra "I" in the CI variables `CI_PIPELINE_IID` and `CI_MERGE_REQUEST_IID` was for. I always assumed it was a database-related choice, but this article confirms it.


> 1 quintillion is equal to 1000000000 billions

It is pretty wild that we generally choose between int32 and int64. We really ought to have a 5-byte integer type, which would support cardinalities of ~1T.


Yeah it doesn't make sense to pick something that's not a power of 2 unless you are packing it.


I guess that depends on how the index works.


We usually work with some page size anyway; 64 5-byte ints fit nicely into five 512-bit registers, and ~1T is a pretty flexible cardinality limit once you've outgrown ~2B.


It's reasonable not to have auto-increment IDs, but it's not clear to me whether there are benefits to having two IDs, one internal and one external. This increases the number of columns/indexes and makes you always do a lookup first, and I can't see a security scenario where I would change the internal key without changing the external key. Am I missing something?


You always have the information at hand anyway when doing anything per project. It's also more user-friendly to have every project's issues start at 1 instead of starting at two trillion, seven hundred billion, three hundred and five million, seven hundred and seventeen thousand, three hundred and twenty-five.
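
If it helps, the two-ID pattern in question looks roughly like this (a hypothetical sketch, not GitLab's actual DDL):

    CREATE TABLE issues (
        id         bigserial PRIMARY KEY,                     -- global, internal key used for FKs
        project_id bigint    NOT NULL REFERENCES projects (id),
        iid        bigint    NOT NULL,                        -- per-project, user-facing number
        UNIQUE (project_id, iid)
    );

    -- "issue #42 of project 7" resolves via the (project_id, iid) unique index
    SELECT id FROM issues WHERE project_id = 7 AND iid = 42;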


> As I discussed in an earlier post[3] when you use Postgres native UUID v4 type instead of bigserial table size grows by 25% and insert rate drops to 25% of bigserial. This is a big difference.

Does anyone know why UUIDv4 is so much worse than bigserial? UUIDs are just 128-bit numbers. Are they super expensive to generate or something? What's going on here?


UUIDv4s are fully random, while btree indexes perform best with "right-leaning" inserts, i.e. new values that sort after everything already in the index. This makes index maintenance on UUIDv4 columns slow, and was the motivation for the development of UUIDv6 and UUIDv7.
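
For anyone who wants to reproduce the effect, the two shapes being compared are roughly the following (a sketch with made-up table names; gen_random_uuid() is built in since Postgres 13, earlier versions need pgcrypto):

    -- sequential key: every insert lands on the rightmost btree leaf
    CREATE TABLE events_seq (
        id bigserial PRIMARY KEY
    );

    -- random key: inserts scatter across the whole index
    CREATE TABLE events_uuid (
        id uuid PRIMARY KEY DEFAULT gen_random_uuid()
    );

The difference only really shows up once the uuid index no longer fits in memory.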


I'm curious to learn more about this heuristic and how the database leverages it for indexing. What does "right-leaning" mean formally, and what does analysis of the data structure look like in that context? Do variants like B+ or B* trees have the same characteristics?


The 25% increase in size is true but it's 8 bytes, a small and predictable linear increase per row. Compared to the rest of the data in the row, it's not much to worry about.

The bigger issue is insert rate. With UUIDs, your insert rate is limited by the amount of available RAM. That's not the case for auto-incrementing integers! Integers are correlated with time while UUIDv4s are random, so they have fundamentally different performance characteristics at scale.

The author cites 25%, but I'd caution every reader to take this with a giant grain of salt. At the beginning, for small tables of fewer than a few million rows, the insert penalty is almost negligible. If you only benchmarked at that scale, you might conclude there's no practical difference.

As your table grows, specifically as the size of the btree index starts reaching the limits of available memory, Postgres can no longer keep the UUID btree entirely in memory and has to resort to swapping pages to disk. An auto-incrementing integer type won't have this problem, since rows close in time land on the same index page and thus don't need to hit disk at all under the same load.

Once you reach this scale, the difference in speed is orders of magnitude. It's NOT a steady 25% performance penalty, it's a 25x performance cliff. And the only solution (aside from a schema migration) is to buy more RAM.


I think it's because of btrees. Btrees and their pages work better if only the last page is getting lots of writes. UUIDs cause lots of unordered writes, leading to page bloat.


Random distribution in the sort order means the cache locality of a btree is poor: instead of inserts going to the last page, they go all over the place. The locality of batch inserts is then also bad at retrieval time, when related records are looked up randomly later.

So you pay taxes at both insert time and later during selection.


We shouldn’t assume that this schema was designed all at once, but rather is the product of evolution. For example, maybe the external_id was added after the initial release in order to support the creation of unique ids in the application layer.


Is it just me, or is schema design and development in general stuck in the stone ages?

I mainly know dotnet stuff, which does have migrations in EF (I note the point about GitLab not using this kind of thing because of database compatibility). It can point out common data-loss operations while generating them.

However, doing migrations is still always quite scary, especially bigger ones that refactor something. Throw jsonb columns into the mix and I feel it is really easy to screw things up and suffer bad data loss.

For example, renaming a column (at least in EF) will result in a column drop and column create in the autogenerated migrations. Why can't I easily give the compiler/migration tool more context on this?

Also the point about external IDs and internal IDs - why can't the database/ORM do this more automatically?

I feel there really hasn't been much progress on this since migration tooling came around 10+ years ago. I know ORMs are leaky abstractions, but I feel everyone reinvents this stuff themselves and every project does these common things a different way.

Are there any tools people use for this?


One thing I like about hand designing schema is it makes you sit down and make very clear choices about what your data is, how it interrelates, and how you’ll use it. You understand your own goals more clearly.


So many people I encounter seem to think it's the code that's important when building the back end of an application. You see this when people discussing database schemas start comparing, say, Rails to Hibernate. But ORMs emphasise code instead of data, which in my experience is a big mistake.

In my experience, getting the data structures right is 99% of the battle. If you get that right, the code that follows is simple and obvious.

For database applications, this means getting the schema right. To this end, I always start with the underlying table structures, and only start coding once I understand how the various tables are going to interact.

Sadly, too many people think of the database as the annoying hoops we jump through in order to store the results of our code. In my world, the code I write is the minimum required to safely manipulate the database; it’s the data that counts.

Some people seem to think I’m weird for starting with the database (and for using plpgsql), but I think it’s actually a superpower.


This is true for in memory data as well. Object oriented programming is great for some problems, but it's also limiting the way we think about data by putting it close to the code operating on it. ORMs do the same to databases by pretending that rows are objects when that's only one way of modeling your problem.


It’s part of the dirty little secret of why document databases and other NoSQL systems became popular.

Required even less up front thinking about how to model your data. Throw some blobs of JSON into Mongo or whatever, and worry about the rest later.


It wasn’t always code first - you mention Hibernate but 15-20 years ago it was entirely feasible to inherit a database schema or design one up front, and then create performant metadata mappings to a usable object model. That sort of practise was tainted by the Bad Enterprise brushes of verbosity and XML in general, and so everyone moved to some flavour of active record. This allowed programmers to write less code and fit it into neater boxes, at a time when there was enormous demand for basic CRUD web apps, but a lot of power and expressiveness was lost.

Somewhat ironically, many modern enterprises have peeled all the way back to SQL for a huge amount of logic anyway, so I don’t think we’re done caring about database schemas quite yet.


You are absolutely correct; my previous business ended up ejecting almost all database logic from Java ORMs, and moved almost everything to straight SQL manipulated by stored procedures. Doing so resulted in a significant performance and productivity increase, relative to all the layers of nonsense we used to jump through.

One of the difficulties of doing this was that the tooling isn’t great. We had to write our own tools to make it work, but the benefits of going back to SQL were more than worth it.

(Many years later I made an open source version of that tooling, https://github.com/pgpkg/pgpkg)


A simple function call as a wrapper for a well-optimized SQL query just can't be beat for performance. I have never understood why anybody would use an ORM; it's usually as much work to learn and maintain one as it is to just write SQL.


Yes, agreed. I spent a few years away from the technical side of things and when I jumped back in I was horrified at the amount of complexity these things added.

In Java it appeared that JPA was trying to completely abstract the database into the JVM, but it just meant a million annoying cache problems, terrible performance, a serious cognitive load, and poor interoperability with other database users.

The guy who promoted this internally kind of wrote the problems off as being a "leaky abstraction" but it wasn't even that. JPA entirely failed to resolve the object-relational "impedance mismatch" by failing to acknowledge that the boundary between the database and the application is actually a legitimate surface, like any other API surface.

What a nightmare it was.


Yup - and you can’t code your way to real scale either. At real scale the game is all about data structures. Code just gets them from A to B.

Or as they say at Google, the job of SWE is “moving protos”.


Exactly that. Sitting down and thinking about your data structures and APIs before you start writing code seems to be a fading skill.


It absolutely shows in the final product, too.

I wish more companies evaluated the suitability of software based on reviewing the back-end data storage schema. A lot of sins can be hidden in the application layer but many become glaringly obvious when you look at how the data is represented and stored.


There's no right abstraction for it because everyone's data is different. From my experience, what most developers don't realize is that data is more complex than code. Code is merely the stuff that sits on top of the data, shuffling it around... but designing and handling the data in an efficient way is the real engineering problem.

Any abstraction you could come up with wouldn't fit 90% of the other cases.


EF core doesn’t to drop/create for columns in db providers that support renaming columns. It only does it for ones that don’t like MySQL or SQLite


Not a silver bullet for every project but the Django ORM largely solves this with its migrations. You define your table classes and it just generates the migrations.

Throw in a type checker and you're in pretty good shape.

Rust also has sqlx which will type check your code against the DB.


I'm assuming this is why you say it's not a silver bullet, but to make it more explicit: the Django ORM will happily generate migrations that lock crucial tables for long periods of time and bring down your production application in the process.

You still need to know what SQL the migration will run (take a look at `manage.py sqlmigrate`) and most importantly how your database will apply it.


Dealing with a bunch of automigrate headaches in the Prisma ORM convinced me to just drop the layer entirely and write plain old SQL everywhere. It’s forced me to learn a bunch of new stuff, but the app runs faster now that I can optimize every query and migrations are much simpler with a single idempotent SQL setup script I can run to provision whatever deployment of the DB I need. I’m sure some problem spaces might benefit from all the additional complexity and abstraction, but the average app certainly can make do without for a long time.


It's a tradeoff! I think using the ORM to start and then move off it later is valid, depending on how much time you have to get an MVP out.


I also switched to plain SQL migrations and queries because I find it much simpler. I hear this a lot, that ORMs are easier/quicker to use, but I've found that writing plain SQL has almost no friction for me. I mean, learning at least basic SQL is part of the first year in most CS degrees.


There are some things that are really annoying in raw SQL and much easier with ORMs. Dynamic queries are one of those: if you don't want to concatenate SQL strings yourself, you need at least a query builder at that point. The other part is updating/creating entities with many relations; that gets very tedious quickly and is something ORMs can handle for you.

It depends a lot on the specific application, for me those things are so common that I prefer to use an ORM even though I could write the SQL myself. ORMs are easier to use, but you still need to learn what they do under the hood and understand them or you will run into issues along the way.


I’m not suggesting anyone implement the raw binary protocol themselves! The postgres connection library you’re using should be able to handle dynamic queries perfectly fine.


An ORM is NEVER the solution, ever ever ever. Repeat after me: ORMs are not the solution to this problem. They work in your little toy apps with 4 customers but they are nothing but pain on real enterprise grade software.


Also, every company I've worked at has used ORMs in some capacity. Sometimes, not always.

Also, I don't really work on any apps with only four customers: either they have almost a million or zero :P Try again. :)


I don't think insults are really necessary here.

Also ORMs can be very useful, just don't do dumb stuff, like with any technology.

I use them when appropriate.


If you use MySQL, Planetscale’s branching is really amazing. Not using them, but wish I could for that. Gives you a complete diff of what you’re doing, and can also pre-plan migrations and only apply them when you need with their gating.


> Also the point about external IDs and internal IDs - why can't the database/ORM do this more automatically?

It has pretty big implications for how your application code interacts with the database. Queries that involve IDs will need to perform joins in order to check the external ID. Inserts or updates that need to set a foreign key have to perform an extra lookup to map the external ID to the correct FK value (whether it's literally a separate query or a CTE/subquery). Those are things that are way outside the realm of what EF can handle automatically, at least as it exists today.


I think that stuff works about as well as it possibly could. If you think that's painful think about something like DynamoDB where if you didn't really think through the access patterns up front you're in for a world of pain.


I found this post very useful. I'm wondering where I could find others like it?


I recommend the Postgres FM podcast, e.g. available as video on the Postgres TV YouTube channel. Good content on its own, and many resources of this kind are linked in the episode notes. I believe one of the hosts even helped GitLab specifically with Postgres performance issues not that long ago.


I like this one, although it's a bit narrower: https://blog.mastermind.dev/indexes-in-postgresql



Great read! And even better comments here.


So, does anyone use schema.rb in production? Even DHH's Campfire (Once) uses .sql instead of schema.rb.



