It's all about clearly stating your intent. With INNER JOIN you're explicitly saying "I want to join these two tables together on this particular relation and work on the result", while with the more basic WHERE form you're saying "just lump these two tables together and then we'll filter out the rows that we actually want to see". The join becomes more of a happy side effect of the filtering, rather than the thing you clearly set out to do.
Not only does writing your code in such a way that it states your intent make it easier to read for other humans, it also makes it easier for compilers/query planners to understand what you're trying to do and turn it into a more efficient process at run-time. Now query planners are usually pretty good at distilling joins from WHERE clauses, but that form does also make it easier for mistakes to creep in that can murder your query performance in subtle and hard-to-debug ways.
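To make that concrete, here's a rough sketch with made-up orders/customers tables (names purely illustrative):

    -- explicit join: the relationship is stated where the tables are combined
    SELECT o.id, c.name
    FROM orders o
    INNER JOIN customers c ON c.id = o.customer_id
    WHERE o.status = 'shipped';

    -- implicit join: same result, but the relationship is buried in the filter
    SELECT o.id, c.name
    FROM orders o, customers c
    WHERE c.id = o.customer_id
      AND o.status = 'shipped';

A decent planner should treat both the same, but the first form puts the relationship front and centre instead of mixing it in with the filtering.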
> it also makes it easier for compilers/query planners to understand what you're trying to do
Hopefully that's not true. SQL's a declarative language, where you describe what you want and the system figures out how to do it. If you describe exactly the same thing using two different syntaxes, the system shouldn't really do anything different. That just makes the programmer's job harder.
Ideally, but not always true. For various versions of MySQL and Postgres, the planner will infer when a semi/anti-join can replace a WHERE [NOT] IN clause, but not always. IIRC there are still edge cases where neither one picks up on it, and so it’s always safer (and much more clear to the reader) to explicitly use WHERE [NOT] EXISTS if that’s what you want.
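As a sketch of what I mean (made-up customers/orders tables, and exact planner behaviour varies by version):

    -- anti-join intent buried in NOT IN; some planner versions handle this poorly,
    -- especially when customer_id is nullable
    SELECT c.id
    FROM customers c
    WHERE c.id NOT IN (SELECT o.customer_id FROM orders o);

    -- explicit anti-join intent, which planners generally handle well
    SELECT c.id
    FROM customers c
    WHERE NOT EXISTS (
      SELECT 1 FROM orders o WHERE o.customer_id = c.id
    );

The NOT IN form also has the extra gotcha that a single NULL coming out of the subquery makes it return no rows at all, which NOT EXISTS doesn't suffer from.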
Also, using the ON clause is consistent with the syntax for outer joins, where you have to use it (because there's a very important logical difference between putting the condition in the ON clause vs in the WHERE clause).
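The classic illustration, again with hypothetical customers/orders tables:

    -- keeps every customer; the condition only restricts which orders get attached
    SELECT c.name, o.id
    FROM customers c
    LEFT JOIN orders o
      ON o.customer_id = c.id AND o.status = 'shipped';

    -- quietly turns the outer join back into an inner join:
    -- customers with no shipped orders get o.status = NULL, which fails the filter
    SELECT c.name, o.id
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    WHERE o.status = 'shipped';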
The thing about Nine Inch Nails' version of 'Hurt' is that it works best within the context of the album 'The Downward Spiral'. As a song by itself it's fine, but it doesn't really hit home unless you've been through the entire journey that the album takes you on. It segues right out of the album's title track and carries over the noisy crackling from that song, which makes it feel incredibly fragile. It's a great way to cap off a very personal and self-reflective album, but take it out of that context and it's just, eh, pretty good I guess.
Johnny Cash's version takes the song and puts it into a new context, specifically as a reflection on Cash's own life and career. It hits very differently that way, and I think it's easier for people to relate to. Both versions are excellent in their own way, and I am grateful to Johnny Cash for bringing this song into the public consciousness.
It's not just innovation: layoffs are also devastating to the morale of the employees who are allowed to stay. This, I feel, is a factor that managers tend to grossly overlook when planning mass layoffs.
AI has inflated the perception of productivity among unwitting managers, which is a dangerous trap to fall into.
It's like a lot of those fabled 10x engineers that I've run into over the years; they may seem like they're incredibly productive and managers love them, but in reality their work is so riddled with issues and rushed, half-baked ideas that they end up costing the other engineers on the team 10x the amount of their time to review and fix it all. And it's those other engineers, who do all of the invaluable but underappreciated cleanup work, who are getting the boot right now.
Oh I'm sure they exist, that's why I said "a lot", not "all". My point is more about how managers perceive the employees they manage. They all want to believe that they found the goose that lays the golden eggs, but in reality so many perceived rockstar engineers are just over-confident or over-eager people mass producing garbage.
That's the important parallel with using AI in production: on the surface it may seem like the solution to all productivity problems, but in reality it still takes a lot of human brainpower to review, assess, fix and maintain whatever the AI model spits out. Especially since generative AIs like to present their work with supreme confidence, and it's getting increasingly hard to separate the wheat from the chaff.
I think a sizeable % of engineers can be 10x-ers as ICs / leads - but it's kind of an "in the zone" thing, when they're experienced in the stack and the product, can navigate the org, are motivated and aren't held back by issues with team or management (or personal issues etc).
But it's like growing plants, in the sense that it only takes one crucial element to be missing to limit growth (or "output").
I've met a few ppl who were _reliable_ 10xers (could navigate a much wider range of techs / domains / situations than others) but I've seen even the best crash and burn at times when the circumstances aren't right.
Widespread use of tools like Copilot hasn’t been happening for long enough for you to draw these generalisations with an appropriate degree of confidence. This just comes across as you misrepresenting your personally-aligned hypothetical as reality.
I have come to the conclusion that a lot of the 10x developer stuff is basically down to the 10x person being the original author of the code.
Every other developer on the project has to do at least 3 times as much work: they have to read through the original code and try to understand what the original author was thinking before they even get to solving the problem at hand.
A true 10x developer would write code good enough that every other developer could maintain it as easily as the original author.
I'm not sure if it would've been John Carmack in this particular case. He was more concerned with the engine and rendering side of things, less with the gameplay logic stuff. More likely it would've been either John Romero or John Cash who wrote this code, but it's hard to say for sure. Best thing we can hope for is that John Romero pitches in with his impeccable memory :D
It's a _fascinating_ snapshot into Quake's development.
I have no idea if his .plan is a record of what he, specifically, was doing, or if he was just capturing what the programming team was doing, but at the very least it makes clear that he was aware of a huge number of very specific game code issues as they were being worked on, and was almost certainly deeply involved.
Hello fellow I-was-once-11-and-fascinated-by-.plan! I wasn't sure if I was the only one. St Louis in 2000 made it seem like there were only a handful of people interested in programming at all.
Hmm... 11 would put me at 1999, so I must've been more like 12 or 13. I remember that's when I started taking gamedev seriously.
An example of how you could fill this in: identify a small subset of the problem that is relatively quick and easy to test. If the entirety of the problem can be solved, then for sure this small subset has to be solvable too. If you can't solve this small sub-problem, then you know there's not much point diving into the larger rabbit hole yet. However, if you do solve the sub-problem, that might show you the potential that exists, it may let you look at adjacent problems using the results of this early test, and, just as important, it will give you additional motivation to keep going.
Lowering the volume does not equal introducing more dynamic range. It was just done to avoid clipping when pasting two bits of track on top of each other.
The masters delivered by Mick Gordon were meant for use in the game. I'm not an expert in game audio engineering, but the brickwall mastering may have been intentional to make the music stand out over the rest of the game audio. Either way, those masters were approved by id Software for use in the game, so there was nothing wrong with them.
The problem is that those exact same masters were then used to produce the OST. To do a proper OST, you would have to go back to the source materials, remix them and produce a new master that is suitable for playback as an album. One that isn't as brickwalled and maintains more dynamic range. This is the important part that was skipped by Chad Mossholder and not caught by id Software's internal QA. If you take an already mastered piece of music meant for a different context and just cut it up and splice it back together, without regard for volume leveling, tempo adjustment or proper balancing, then you're inevitably going to produce garbage.
>Lowering the volume does not equal introducing more dynamic range. It was just done to avoid clipping when pasting two bits of track on top of each other.
If you overlap two tracks and reduce the volume to avoid clipping, the combined track has spikes in volume where they overlap. This is increased dynamic range. But like I said, it is not musically pleasant to listen to, and it's certainly reasonable to complain about it.
You're making an extremely pedantic distinction, which is only correct in a purely technical sense. Which is the worst kind of correct.
Yes, mastering engineers work from track-level dynamic range (usually achieved with slow-response compression) to transient-level dynamic range (fast compression/limiting), and the range in between. When the context for this discussion is about "brickwall limiting", we're talking about very fast, transient-level compression, and your comment mistakes slower dynamic range for the transient-level dynamic range everyone else is discussing.
So, no. In this context, what you're talking about isn't increased dynamic range.
I'm well aware of the difference, as acknowledged by my first post in this thread ("not in a way that's pleasant to listen to"). But it is not pedantry, and it is relevant to the context, because this unusual form of dynamic range provides evidence as to when the dynamic range compression was applied. If it had been applied after the edits described in the article, I do not think we would see the volume spikes.
I do not see a single complaint from Gordon about the dynamic range, only the editing. But there is a paragraph in the article suggesting the dynamic range was intentional:
"Marty says that Chad, apparently working in a hurry, only had my supposed “bricked” in-game score to work with. He points to my so-called “bricked” score and adopts that as the reason behind Chad’s poor editing. But not only is Marty confused and clearly doesn’t understand the mastering process, but he also seems ignorant of how Chad’s editing introduced significant problems."
Note that there is no objective definition of "bricked". I personally tend to prefer higher dynamic range, but I've heard plenty of recordings with higher dynamic range than I would like. It seems very likely that some people prefer lower dynamic range than I do. Arjuna attributed the dynamic range to some unknown "they". There is no evidence for this in the article, or in Arjuna's linked Twitter thread. I believe the most reasonable interpretation is that Gordon deliberately chose low dynamic range as a stylistic choice. If you disagree, how about providing evidence?
Yes, it's clear Gordon has applied a mastering limiter to the tracks he delivered to Bethesda, and given that the style of the music involves heavy processing/effects and multiple levels of compression, a somewhat aggressive mastering limiter seems appropriate here.
But your point about "increased dynamic range" due to the editing errors is a distraction from your claim that Gordon applied a mastering limiter (which he clearly did). It creates ambiguity, because you're using it in a way that's not aligned with common usage in this context. That's part of why you're getting pushback.
In any case, if we want to try to answer the question of why the OST has low dynamic range (in the mastering limiter sense), I am somewhat receptive to ndepoel's argument - it seems reasonable that in-game tracks could be mastered more aggressively, and with lower dynamic range, than what would be appropriate for a proper OST release. Caveat: I haven't done mastering work in the context of game audio so I can't say if that's common practice, but it seems a little more likely than not.
That wouldn't increase the dynamic range; it just prevents the overlaps from clipping. Both tracks would still have the same dynamic range. The relative overall volume might be different for the track you lowered or raised to match the other one, but I doubt the transition between the two would count as increasing the dynamic range.
In order to increase the dynamic range of a mastered track you would have to undo the compression, or master it with less compression in the first place. If you just decrease or increase the overall volume, you get the same dynamic range, only at a quieter or louder listening level.
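To put some rough, made-up numbers on it:

    peaks at -1 dBFS, quiet parts around -21 dBFS              ->  ~20 dB of range
    drop the whole track by 6 dB: peaks at -7 dBFS, quiet parts at -27 dBFS  ->  still ~20 dB of range

The whole thing just sits lower; the distance between loud and quiet doesn't change.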
The problem isn't that it's particularly hard or risky to dismantle the bridge. It's that this specific bridge is an industrial monument that holds a special place in many people's hearts and after the last restoration in 2017 a pledge was made that the bridge would never be dismantled or otherwise harmed again.
Then along comes Mr. Rich Man with his exorbitant needs and suddenly these promises have all become worthless. It's a matter of principle; people are fed up that the rules never seem to apply to the super rich and everything and everyone has to make way for their demands.
Why would you build a bridge that can be opened up and then promise never to open it up again, especially when there’s a shipyard on the other side of it? That makes no sense.
Because it doesn't function as a bridge anymore. It stopped functioning as a bridge back in 1993. It's a monument now, an industrial monument and part of the city's heritage. It's already permanently in the open state and more than high enough to let almost any ship through easily. Except for Mr. Bezos' new superyacht. That's why it would have to be dismantled.
It's a bridge for trains that now use another track, and it is permanently 'open' for normal shipping. Bezos' toy isn't a normal boat so they have to dismantle it, which includes lifting out the movable part in its entirety. This is not without risk to the structure, as has been proven in the past.