I think it's fine if code is difficult for newcomers to understand (of course, to a point). Most programmers are taught in C-like languages and paradigms. Using FP (ML-like) languages is already "difficult to understand".
The question then becomes: how large is the disconnect between the "theory" in the mind of the newcomer(s) vs. the "theory" they need to be useful in the codebase -- and is this gap worth it?
For example, programming with explicit effects (e.g. `IO`) or even just type-safe Futures: getting started isn't too difficult, and it builds up a general theory of effects in the newcomer, which would presumably be useful in many parts of the codebase even outside of async effects, e.g. error handling with `Either`.
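To make that concrete, here's a toy sketch (Scala, standard library only; the `IO` wrapper and `parsePort` helper are made up for illustration, not any real library's API) of what "effects as values" plus `Either`-style error handling can look like:

```scala
// A toy "effect as a value" type: wraps a side effect without running it.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B]         = IO(() => f(unsafeRun()))
  def flatMap[B](f: A => IO[B]): IO[B] = IO(() => f(unsafeRun()).unsafeRun())
}

object EffectsSketch {
  // The type announces the effect: this *describes* reading input, it doesn't do it yet.
  val readInput: IO[String] = IO(() => scala.io.StdIn.readLine())

  // Errors as values: the same Either style works in sync and async code alike.
  def parsePort(s: String): Either[String, Int] =
    s.trim.toIntOption
      .filter(p => p > 0 && p < 65536)
      .toRight(s"not a valid port: $s")

  def main(args: Array[String]): Unit = {
    // Nothing has happened yet; `program` is just a description of work.
    val program: IO[Either[String, Int]] = readInput.map(parsePort)
    // The effect runs only when we explicitly ask for it.
    println(program.unsafeRun())
  }
}
```

The gist: the effect shows up in the type (`IO[...]`), and the same `Either` habits carry over whether the surrounding code is sync or async.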
Everything is difficult for newcomers to understand. Newcomers should be helped to learn new things. Every programming language will be incomprehensible to non-programmers. That's not the target audience.
One dev's "you've just gotta learn how we do things" is another dev's "holy hell why does this codebase have its own mind-bending do-everything hyper-abstractions that don't exist outside this organization". (I'm thinking not of the basic concepts of popular FP languages so much as the powerful abstractions they can be used to create. Though if a company invents its own language, that can very quickly enter the 'overly large gap' range.)
I agree that there's a spectrum here, but IME it's very easy for existing devs who are familiar with everything to underestimate the gap between newcomers' knowledge and their own, or to overestimate how necessary the complexity really is. In the worst case, you end up with brittle spaghetti 'abstractions' where the old-timers swear up and down that everything is perfectly sensible and necessary.
(I've personally seen this kind of overestimated necessity twice, in the context of API design. In both cases, the author of a public library published a new version incompatible with the old version, under the justification that users really ought to learn and pay attention to the fine details of the problem space that the library is solving, and that changing the API is a good way to force their hand. But from an outside perspective, the fine details weren't really important for 99.9% of use cases, and it isn't practical for every dev to be conscious of every tradeoff and design decision in the whole stack.)
> the worst case, you end up with brittle spaghetti 'abstractions' where the old-timers swear up and down that everything is perfectly sensible and necessary.
i had this at my last place. all previous members of the original team were gone. it was myself and a part-time dev (1/2 days per week).
writing new tests would take longer than writing a change - not because we were writing a significant number of tests, but because the “factory” spaghetti-code god class used to create all tests was just a nightmare.
got to the point where i just focussed on manual QA. was faster for each change (small team helped there).
and rewriting from scratch on 20k existing LoC for that repo wasn’t gonna work as we just didn’t have the people or time.
basically — we didn’t have time to deal with the bullshit.
keep it stupid.
for the love of everything good and sacred in the world, please, i beg you, keep it stupid (addressing this generally, not at the parent).
it’s easier to get it right, quickly, when it’s done stupidly.
—
i now want to have a moan into the vast empty space of the interwebs:
the data/mental model for the whole thing was just wrong. split tables for things that had a 1:1 relationship. scheduling code for worker tasks spread across multiple services. multiple race conditions i had to redesign to fix.
oh and no correct documentation. what was there was either wrong or so high level that it was essentially useless.
and roll-your-own-auth.
apparently this was all done under the clean code cargo cult. which tracks, cos there were so many 10-line methods everywhere, which meant jumping around constantly.
Depends on whether we can interpret the final hidden layer. It's plausible we evolve models to _have_ interpretable (final/reasoning) hidden layers that just aren't constrained to the (same representation of the) input/output domains (i.e. tokens).
For various micro-bench reasons I wanted to use a global clock instead of an SM-local one, and I believe this was needed.
Also note that even CUDA has "lower level" operations, e.g. warp primitives. PTX itself is super easy to embed inline, much like asm in C.
You are right. That's my opinion, which is of no interest at all, and factually there was no conviction.
Let me rephrase:
The Streisand effect in full force. Hiring a competent and transparent person—someone who shows remorse, regret, and explains how they’ve grown from a difficult situation—isn’t necessarily the real issue. But hiring someone who shifts the blame onto others for making past incidents public? That’s an entirely different problem.
Although I agree, I think there is a mild hazard with the label of "model minority". Not saying a reaction like that is really warranted in this case, though.
I'm not sure if that's true, but it's presumably what the other person was saying they observed. Though I think that in casual conversation the reality is that P(racism | <such a sentence>) is likely higher than 'baseline', I'd prefer to give the benefit of the doubt on HN.
Also, I much prefer a place where mentioning "race" doesn't immediately trigger strong reactions. "Race" being in quotation marks because I think the original sentence has more to do with the culture of (certain parts of?) Asia than with actual race.
A side analogy: I think that Asians are more likely to like Hello Kitty than non-Asians.