> how is it that we can have a whole population of systems (e.g. scientists) eventually coming to a consensus about some phenomenon (e.g. the value of some physical quantity)? It seems necessary to have a unified global system coordinating the whole thing.
Not necessarily. It's not a "global system coordination" thing; it's coming to a consensus that "we will use this reference frame as our starting point", which might look like a global system coordinating. I guess you could say that 'science' initializes a reference frame in which we can all participate, compare answers, reproduce results, etc.
> If that happens to be a bottleneck and you can do better, you should definitely do it in code locally. But these are two ifs that need to evaluate to true
If the OP said what you are saying, I'd probably agree. However, the above statement makes it clear that the OP is saying "put it in the database unless you can prove it doesn't belong there".
That is what I disagree with. There are a lot of reasonable things you can do with a database that aren't the best thing to do from either a system or a performance perspective. It is, for example, reasonable to sort in the database. It's also not something you should do without proper covering indexes, especially if the application can reasonably do the same sort.
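A minimal Postgres sketch of the kind of covering index I mean; the `orders` table and its columns are hypothetical, not from the thread:

```sql
-- Hypothetical table; names are illustrative only.
CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint      NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now(),
    total       numeric     NOT NULL
);

-- A covering index lets the planner satisfy both the filter and the
-- ORDER BY from the index alone (an index-only scan), instead of
-- sorting heap rows at query time.
CREATE INDEX orders_customer_created_idx
    ON orders (customer_id, created_at DESC)
    INCLUDE (total);

-- With the index in place, this sort is essentially free:
SELECT created_at, total
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;
```

Without an index like that, the same ORDER BY forces a sort on every query, which is exactly the case where pulling the rows back and sorting in the application can be the better trade.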
OP has identified a universal norm, the "Law of Large Established Codebases (LLEC)": codebases with single-digit millions of lines of code, somewhere between 100 and 1000 engineers, and a first working version at least ten years old tend to naturally dissipate, increasing the entropy of the system, with inconsistency being one of its characteristics.
OP also states that in order to 'successfully' split an LEC you need to first understand it. He doesn't define what 'understanding the codebase' means, but apparently if you're 'fluent' enough you can be successful. My team is very fluent at successfully deploying our microfrontend without 'understanding' the monstrolith of an application.
I would even go further and make the law a bit more general: any codebase will be in both a consistent and an inconsistent state. Whether you use a framework, a library, or go vanilla, the consistency will be the boilerplate, autogenerated code, and conventional patterns of the framework/library/language. But inconsistency naturally crops up, because not all libraries follow the same patterns, not all devs understand the conventional patterns, and frameworks don't cover all use cases (entropy increases, after all). Point being: being consistent is how we 'fight' entropy, and inconsistency is a manifestation of entropy increasing. But nothing says that all 'consistent' methods are the same, just that consistency exists and can be identified, not that the identified consistency is the same 'consistency'. Take a snapshot of the whole and you will always find consistency and inconsistency coexisting.
LOC is not a good metric for 'you should be able to understand a codebase'. Neither is headcount: too many people, too few people, or (my favorite) 'not enough' (whatever that means). The Mythical Man-Month comes to mind. What I think you're trying to get at is that you need skill to reverse engineer software, and even if you have that skill it takes time (how much?). We work in a multifaceted industry and companies need to build today. On any given project, the probability is small that there's a dev with that skill. We all know the refrain: 'they can do it / they can learn on the job / they'll figure it out'. And then OP's observation comes to fruition.
What would prove this wrong: Cursor being used by all devs, or IDEs adopting AI into their workflows?
Like OP, using Cursor has been a huge productivity boost for me. I maintain a few Postgres databases, work as a fullstack developer, and manage Kubernetes configs. When I use Cursor to write SQL tables or queries, it adopts my way of writing SQL: it analyzed my database folder (context), and when I ask it to create a query, a function, or a table, the output is in my style. This blew me away when I first started with Cursor.
On to React/Next.js projects. In the same fashion, I have my own way of writing components, fetching data, and now writing RSA. Cursor analyzed my src folder, and when asked to create components from scratch, the output was again close to my style. I use raw CSS and class names, and what used to be the obstacle of naming has become trivial with Cursor ("add an appropriate class to this component with this styling"). Again, it analyzed all my CSS files and spits out CSS/classes in my writing/formatting style. And on large projects it's easy to forget the many, many components, packages, etc. that have already been written and integrated. Again, Cursor comes out on top.
Am I a good developer or a bad developer? Don't know. Don't care. I'm cranking out features faster than I ever have in my decades of development. As has been said before, as a software engineer you spend more time reading code than writing it; the same applies to genAI. It turns out I can ask Cursor to analyze packages and spit out code, YAML configuration, and SQL, and it gets me 80% of the way compared to writing from scratch. Heck, if I need types to get the full client/server type-completion experience, it does that too! I have removed many dependencies (Tailwind, tRPC, React Query, Prisma, to name a few) because Cursor has helped me overcome the obstacles those tools assisted with (and I still have TypeScript code hints in all my function calls!).
All in all, Cursor has made a huge difference for me. When colleagues ask me to help them optimize SQL, I ask Cursor to help out. When colleagues ask for generic types for their components, I ask Cursor to help out. Whether it's Cursor or some other tool, integrating AI with the IDE has been a boon for me.
Design review 'should' be a process that overlaps with the technical review. In other words, not isolated from the rest of the org. And that overlap 'should' happen multiple times over the course of the technical review, not just once at the end or only at the beginning (priorities shift, new team, new people, etc.).
As said elsewhere, a lot can change between initial design and release, so having multiple design/technical reviews 'should' be standard. But inherent in design/technical reviews are time, resources, and culture, which many, many companies lack and/or don't include in budgets, project estimates, etc., and/or don't back with a culture of sound development practices.
A shop might have a design team and a bunch of devs, with probably a single person who actually understands how things are connected. Limited time and resources preclude a thorough design/technical review process. But also consider that many companies willfully avoid 'wasting' developer time, setting up meetings, etc.
Turns out Figma's design culture inspired change in their engineering culture, overlapping their processes in a continuous development manner.
Why not both? Have the table column autogenerate a UUID when the insert doesn't supply one, and use the supplied value when it does. This adds flexibility for the scenarios mentioned in the comments.
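A minimal Postgres sketch of that default-or-supplied pattern (the `events` table is hypothetical, and `gen_random_uuid()` generates a v4 UUID; swap in a v7 function if your setup has one):

```sql
-- The column default only kicks in when the INSERT omits the id.
CREATE TABLE events (
    id      uuid  PRIMARY KEY DEFAULT gen_random_uuid(),
    payload jsonb NOT NULL
);

-- Client-supplied UUID (e.g. generated app-side so the client knows
-- the id before the request, or for idempotent retries):
INSERT INTO events (id, payload)
VALUES ('6f1f4c2e-8a0b-4d3e-9c5d-2b7a1e9f0c11', '{"k": "v"}');

-- No id supplied: the database generates one.
INSERT INTO events (payload)
VALUES ('{"k": "v"}')
RETURNING id;
```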
I do this (i.e. if a UUID wasn't supplied in the API call, the backend will generate it), but for me it's only useful for debugging or manual API testing.
Reason: I'm not a machine, so I dislike generating UUIDs by myself ;)
(1) [X] sortable by insertion, [X] timestamp
(2) All of (1) and [X] transferable between databases
(3) Use UUIDv7 as the primary key internally and UUIDv4 externally. Your app or SELECT statement will need to extract the timestamp from the UUIDv7 if you need it (see the sketch after this list). Also, if you're using a DB client you can't just glance at a 'created_at' column to get an idea of when a row was created.
(4) Use UUIDv7 as a primary key for internal & external use.
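Since UUIDv7 front-loads a 48-bit Unix-epoch-milliseconds timestamp, the extraction mentioned in (3) can be done in plain SQL. A Postgres sketch, assuming a hypothetical `events` table with a uuid `id` column:

```sql
-- A UUIDv7's first 48 bits (12 hex chars) are Unix epoch milliseconds.
-- Strip the dashes, take those 12 chars, cast hex -> bit(48) -> bigint,
-- and convert milliseconds to a timestamp.
SELECT
    id,
    to_timestamp(
        ('x' || substring(replace(id::text, '-', '') from 1 for 12))
            ::bit(48)::bigint / 1000.0
    ) AS created_at
FROM events;
```

Wrapping that expression in a view gets you the glanceable 'created_at' back without storing the timestamp twice.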