Wish these were more 'angry zoomer tears' inducing, matching the meme format. At their best, these memes reveal a controversial opinion of some expert that cuts through industry bullshit that masquerades as 'best practice'.
A recent one that I found amusing.
IQ55: How much do they read?
IQ100: We evaluate people based on a comprehensive survey that aligns their skills and competencies with our needs.
IQ145: How much do they read?
One thing to keep in mind when looking at complex topics like expert vs. beginner opinions:
> "When You Shouldn't Use the Bell Curve: There are some types of data that don't follow a normal distribution pattern. These data sets shouldn't be forced to try to fit a bell curve. A classic example would be student grades, which often have two modes. Other types of data that don't follow the curve include income, population growth, and mechanical failures."
I did that when I was in my early 20s, got taught to use dedicated TST and UAT environments by more experienced engineers, and now in my 40s I'm telling everyone who's willing to listen that isolated testing is nearly worthless.
The number of issues I can uncover by using telemetry from production is just insane. Any halfway-decent APM will have a dashboard view that summarises errors by how often they occur, which is an automatic priority list.
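As a rough sketch of what I mean (hypothetical event data; a real APM gives you this grouping in its UI or via an export API):

    from collections import Counter

    # Hypothetical error events exported from production telemetry.
    events = [
        {"error": "TimeoutError", "endpoint": "/checkout"},
        {"error": "NullReferenceException", "endpoint": "/profile"},
        {"error": "TimeoutError", "endpoint": "/checkout"},
        {"error": "TimeoutError", "endpoint": "/search"},
    ]

    # Group by error type and sort by frequency: that ordering is the
    # "automatic priority list" I'm talking about.
    for error, count in Counter(e["error"] for e in events).most_common():
        print(f"{count:4d}  {error}")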
Some teams I tell this to just blink at me slowly and say they only do testing in UAT, and that after "sign off" they don't see any further need for monitoring. It's signed off, you see! Done. Finished. Released. End. Of. Story.
Depends on the criticality of your product. I used to work for Duo and testing in production would be a nightmare there. Deploy a bug and doctors are unable to authenticate. That said, the vast majority of products aren't that critical.
I’m interested in the last one, specifically pair programming. I think we’ve lost a lot of serendipitous communal wisdom with remote work, and meetings don’t help. Does anyone have experience with pair programming helping with this, or does it always just feel like another long pointless meeting where everything gets done independently later? I would really like to find the best way to do this.
Pair programming has become 100x better after going fully remote. The context under which it is happening matters a ton though. Doing a meeting that's the equivalent of a PR review session with multiple peers lobbing all sorts of random unfiltered criticism your way, in real time, sucks hard and you shouldn't do that. But pair programming that is spur of the moment where one party says "hey I have this predicament", "what do you think about doing X like this" or "can you help me debug something" and you smash out a solution together in an afternoon is great.
I know people that love it, but my experience has been that it usually feels like an inefficient use of time. I mostly agree with https://matt-rickard.com/against-pair-programming/ on the same site, especially this bit: 'For existing employees, pairs are often stuck at the speed of the least senior pair.'
It has a few powerful but niche use cases, particularly around mentorship (where I think asynchronous feedback is often too little, too late) and when two people are doing work that's closely intertwined.
I've found from a management position it's a useful tool to confirm suspicions about salary thieves. Sometimes I pair them with a no-nonsense dev who will tell me specific problems. Sometimes I pair two problem developers together and see if the pressure of another person makes them build properly.
Mostly, though, pair programming is my favorite way to get people up to speed with a project and to prevent information silos about critical functionality. My devs pair with each other for about a week once a quarter, or as needed.
The beginner/journeyman/expert bell-curve meme images benefit from being posted in context with an existing discussion. By themselves, they don't carry enough information to be useful. It's the reason behind the placement that matters.
I'll use the OP's first item, Kubernetes, as an example because I'm familiar with it.
> [Common Practice]: You should always use Kubernetes and other
> "correct" infrastructure.
>
> Beginners/Experts: Don't Use Kubernetes, Yet. Use the simplest
> abstraction you need for now.
So, first, is that really common practice? Overwhelmingly I've seen Kubernetes struggle for adoption among intermediate-level engineers because they're familiar with local development but still getting used to the basics of running software "in the cloud", and find the complication around replicated distributed systems off-putting.
Second, the reasons for "don't use Kubernetes" are often vastly different between the two "tails" of the bell-curve. A bad reason to reject universal Kubernetes is because the person wants to build everything out of Puppet and bash scripts, held together by human suffering (aka "on-call"). A good reason is to note that some bits of infra are adjacent to (or below) Kubernetes in the dependency graph; you don't necessarily want your distributed scheduler to be a hard dependency for log ingestion or certificate issuing.
Third, I'm suspicious of any sort of advice that can be inverted without being obviously incorrect. Let's try that here (and also split/elaborate the tails):
> [Common Practice]: Don't Use Kubernetes, Yet. Use the simplest
> abstraction you need for now.
>
> Beginners: You should always use Kubernetes and other
> "correct" infrastructure. Best practices exist and should be
> followed without wasting time on the "why".
>
> Experts: Kubernetes should be the standard API between product
> and infrastructure. The additional local complexity introduced
> by containers pays off by reducing global complexity and
> enforcing shared technical contracts between teams.
This more closely matches my experience in the industry (beginners and experts both pushing for Kubernetes, with rejection of it by the "middle"). But! Big but! Note that both versions could be a sample of the same bell curve, depending on where you sliced it. Trying to allocate positions to beginner/journeyman/expert contains an implicit parameter of the writer's own location on the global curve, and which portions of it they have visibility into.
WSL1 is good for maybe 50% of the things I tried, as it has rudimentary or nonexistent support for some syscalls: e.g. more advanced ptrace() commands are not supported, and similarly not all clone() options work. And that's just the beginning of the problems.
It's good for stdio/net code, but code which makes use of more arcane OS functionalities will frequently break.
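For example, a quick probe like this (a sketch using ctypes against glibc; which flags fail will depend on your exact setup) exercises the kind of namespace option that a full kernel supports but an emulation layer may not:

    import ctypes
    import os

    CLONE_NEWUSER = 0x10000000  # namespace flag from <linux/sched.h>

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # unshare(2) takes the same namespace flags as clone(2); creating a
    # new user namespace is the sort of "advanced" option that tends to
    # break outside a real Linux kernel.
    if libc.unshare(CLONE_NEWUSER) != 0:
        err = ctypes.get_errno()
        print(f"unshare(CLONE_NEWUSER) failed: {os.strerror(err)}")
    else:
        print("new user namespace created")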
Also, why run all of Microsoft’s spyware and deal with their forced updates just so you can get through to a slightly slower and less compatible version of Linux?
This one threw me off as well, because the common "middle of the curve" advice is to avoid "premature optimization", but then you get experts who talk about "non-pessimization" as a euphemism for writing optimal code.
Same with the "technical debt" one. The midwit opinion is that you should take it on in the short term so it can be paid off by the business later on. But experts tend to just want to do the right thing(s) faster because they have the capacity to do so and don't sacrifice future speed towards often unjustified short-term goals.
IIRC, the midwit opinion was the original idea of technical debt. To you, is the beginner/expert version a misuse of the term, or has the thinking on technical debt changed?
Basically I wouldn't say that the idea of taking on technical debt in order to pay it off at a later date is beginner or expert. It's simply the norm within most businesses.
The expert probably goes deep on some problems and punts on others, depending on how intractable or avoidable they might be in future, but I wouldn't say there is a hard-and-fast rule they can apply. Also, I suspect they avoid the worst technical debts you get in weaker teams, so even where they appear to leave technical debt behind, it's rarer or easier to mitigate. We can look at their actions instead of their words to see which side of the divide they are truly on...
Beginners may try to optimize some things, but, by virtue of their newness, don't even understand the extent to which they can optimize.
Think of the futility of ex-FAANG engineers who try to implement similar technology (e.g., reproducible and declarative build systems) at a 10-person startup.
Are some of these really beginner opinions? It's fine for the sake of a joke, but some of this reasoning is far more sophisticated than what I've seen in practice from people with little or no experience.
Just to clarify for the boomers: the meme is that beginners and experts share similar ideas, not that they reason about or execute on them in the same way.
Also, the endearing label for those of you in the middle of the bell curve is "midwit".
This meme can mostly be boiled down to accepting the constraints of reality and working around them. The midwit component of the meme is often something like "nooo, but everything has to be perfect".