Lasting and solid foundations are made by experiencing. Doing with your hands. Seeing in reality. Getting burned with the soldering iron. Smelling the flux. Hearing the signals and seeing them on the oscilloscope. We need presence, feeling, the ownership of knowledge as personal experience, not vicarious hand-me-down accounts or diagrams.
As a kid I wired up NAND gates and transistors. When it came to logic it felt like there was something tangible I could reach out and touch through tactile imagination. Building a computer from chips, wire-wrapping hundreds of connections to a 68000, RAM and EEPROM chips, took a whole summer. After that I could see a data-bus and an address-bus. I know what they feel and smell like. I got good at patching dataflow DSP because 20 years earlier I spent hours in the studio patching analogue synths.
Descartes' Error, a book by Antonio Damasio [1], talks about the weakness of purely rationalist epistemology. The foundation is laid long before we are even aware of knowing and learning. That book influenced me to understand cognitive activity as embodied.
This is why we need to let kids fix bikes, fall off skateboards and climb trees. It's why giving them tablets and Chromebooks instead of things that get their hands dirty is no good.
[1] https://www.goodreads.com/work/quotes/100151-descartes-error...
I think that is the reason why I prefer to develop software by debugging: I can "see it in action". So "debugging" is not just about getting rid of bugs, but about getting a better view of your software as it exists when it executes. Just reading the source code doesn't give nearly as good a view. Source code is like a circuit diagram; debugging is like plugging in the power.
You mention physical electronics. After the Raspberry Pi wave I dabbled in electronics, thinking I would have an easy time. Boy, was I wrong :) There are so many things to get right, even for the tiniest circuit. And yeah, even soldering is a sophisticated piece of knowledge with moving parts everywhere (alloy, flux, iron, board geometry and layers, ground planes), and you quickly get a sense of: I need to KNOW how to solve this, well and fast. Not just "use that tool and maybe it will work". It got to the point of rewiring how I approach software.
> A so-called "senior" developer started screaming at the compiler, then at the IDE, then at the operating system, then at his colleagues. He was frustrated.
This is one of the worst traps to fall into. I call it out whenever I can to people who fall into it: it's never the compiler, it's never the CPU, and if you're an application developer, it's never the OS. And if it is, you can only get to that conclusion by assuming it still isn't until, Sherlock Holmes style, you are left with no other choice. Never let it be your working hypothesis; always try to find out how your observations could be consistent with those things working correctly instead.
Working on very low-level code, I do run into actual compiler and CPU bugs, and just two weeks ago or so, towards the end of a lengthy bug investigation, I deeply regretted assuming something in an obscure part of the CPU to be a CPU bug, after the gathered data clearly suggested the CPU was misbehaving. It still wasn't: I missed a crucial half-sentence in the spec.
I get what you mean and mostly agree with it, but compiler / OS / hardware errors are ultimately not that rare. For example:
Not too long ago, the latest Apple clang simply crashed when compiling some C++ codebases I was working on. It was an obvious compiler bug, and we got a fix a few months later.
During the 32-bit to 64-bit transition, it was common that large disk writes would fail silently because some layer truncated the length to 32 bits. Usually the culprit was the standard library, but I once saw it happening in the file system.
For a long time, I assumed that OpenMP had some nontrivial memory overhead, because I often saw simple multithreaded data processing code using more memory than I would have expected. Then one time the overhead was tens of gigabytes and continued growing. When I investigated, it turned out to be degenerate behavior in glibc malloc/free. If you allocated and freed memory in different threads, you could end up with many fragmented arenas where the memory freed by the user could not be reused or released back to the OS.
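If anyone is curious what that allocation pattern looks like, here is a minimal sketch of it, assuming glibc on Linux; the thread count, block sizes and names are all made up for illustration, and the commented-out M_ARENA_MAX cap is just one common mitigation, not necessarily the right fix for every workload:

    /* Sketch of the allocate-in-one-thread, free-in-another pattern that
     * can leave glibc with many fragmented arenas.
     * Build: gcc -O2 -pthread arena_sketch.c -o arena_sketch */
    #include <malloc.h>     /* mallopt, M_ARENA_MAX, malloc_stats (glibc) */
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    enum { N_PRODUCERS = 8, BLOCKS = 100000, BLOCK_SIZE = 256 };

    static void *blocks[N_PRODUCERS][BLOCKS];

    /* Each producer allocates from its own thread, so glibc may hand each
     * thread its own arena. */
    static void *producer(void *arg) {
        long id = (long)arg;
        for (int i = 0; i < BLOCKS; i++) {
            blocks[id][i] = malloc(BLOCK_SIZE);
            memset(blocks[id][i], 0, BLOCK_SIZE);
        }
        return NULL;
    }

    int main(void) {
        /* Possible mitigation: cap the arena count. Same effect as running
         * with MALLOC_ARENA_MAX=2 in the environment. */
        /* mallopt(M_ARENA_MAX, 2); */

        pthread_t t[N_PRODUCERS];
        for (long i = 0; i < N_PRODUCERS; i++)
            pthread_create(&t[i], NULL, producer, (void *)i);
        for (int i = 0; i < N_PRODUCERS; i++)
            pthread_join(t[i], NULL);

        /* The main thread frees everything: the chunks go back to the
         * arenas they came from, where they can sit fragmented instead of
         * being returned to the OS. */
        for (int i = 0; i < N_PRODUCERS; i++)
            for (int j = 0; j < BLOCKS; j++)
                free(blocks[i][j]);

        malloc_stats();     /* glibc: prints per-arena usage to stderr */
        return 0;
    }

Watching RSS while this runs, or enabling the mallopt line (or setting MALLOC_ARENA_MAX in the environment), makes the per-thread arena behavior visible in the malloc_stats() output.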
Bit flips and other memory errors seem to have become more common in recent years, but only on consumer hardware. Maybe it's time to start using ECC memory everywhere.
Yes, these bugs exist. But for the majority of developers they are usually so rare that your working hypothesis should always be that something wrong happened on some upper layer, starting with your own code, unless you are downright forced to accept that there is no way that e.g. the CPU is behaving correctly.
As for bit flips, I tend to wonder how many of those were in reality rarely triggering concurrency bugs, especially with memory ordering involved. I definitely saw some issues that could only be explained through a real bit flip or other spooky hardware failure (and we know those definitively exist anyway), but it takes a very specific set of circumstances to fully exclude, say, a rare memory stomper. For example, an erratic bit-set instruction whose operand address was fetched without the proper barrier or exclusive instruction, getting the address wrong in only 0.2% of common cases, and even then most of the time hitting a word in memory where it doesn't matter or never becomes apparent.
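For what it's worth, that hypothetical scenario is easy to sketch in C11; everything below is invented for illustration, not taken from a real bug. The racy variant reads an index from unsynchronized shared state and does a plain read-modify-write, so a stale index can occasionally land the bit in the wrong word and look like random corruption; the safer variant publishes the index with release/acquire ordering and uses an atomic fetch-or:

    /* Hypothetical sketch of a race that can masquerade as a bit flip:
     * a word index derived from unsynchronized shared state, followed by
     * a non-atomic read-modify-write. All names are invented. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WORDS 1024

    /* Racy version: plain shared variable and plain |=. This is a data
     * race; with another thread updating the index concurrently, the bit
     * can land in a stale (wrong) word every once in a while. */
    static size_t   current_word_racy;
    static uint64_t table_racy[WORDS];

    static void set_bit_racy(unsigned bit) {
        size_t i = current_word_racy;          /* unsynchronized read */
        table_racy[i] |= UINT64_C(1) << bit;   /* non-atomic read-modify-write */
    }

    /* Safer version: publish the index with release, read it with acquire,
     * and make the bit set itself an atomic RMW. */
    static _Atomic size_t   current_word;
    static _Atomic uint64_t table[WORDS];

    static void publish_word(size_t i) {
        atomic_store_explicit(&current_word, i, memory_order_release);
    }

    static void set_bit_safe(unsigned bit) {
        size_t i = atomic_load_explicit(&current_word, memory_order_acquire);
        atomic_fetch_or_explicit(&table[i], UINT64_C(1) << bit,
                                 memory_order_relaxed);
    }

    int main(void) {
        publish_word(3);
        set_bit_safe(7);
        set_bit_racy(7);    /* fine single-threaded; a time bomb otherwise */
        printf("word 3 = %#llx\n",
               (unsigned long long)atomic_load(&table[3]));
        return 0;
    }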
> that your working hypothesis should always be that something wrong happened on some upper layer, starting with your own code, unless you are downright forced to accept that there is no way that e.g. the CPU is behaving correctly
This is true for production hardware. I work with a lot of pre-release hardware, so I start every bug with the latest errata open. Even then it's wrong to squint at the errata documentation until it might kind of look like your bug. Rather, you do a quick look to make sure there's no clear smoking gun, and then continue to debug at the upper layer. It's ultimately usually not that hard to demonstrate an actual hardware bug with a reduced test case once you have a general idea of what is wrong; if I had a nickel for each time I've heard "this is so random it must be a hardware bug" only to demonstrate it was a software race condition, I would be a wealthy man.
Yup, I work with pre-release hardware too, and you are spot on. The general audience I was addressing is usually not hit by CPU errata, and anyone who is should know about it.
Usually those bugs have such a large surface area that you can google and find a bunch of other people with the exact same issue. The problem is the folks who blame the lower layer without having concrete proof of that being the actual cause. "The runtime is the issue? Alright, point me to the GitHub issue or Stack Overflow entry that shows exactly that".
Sometimes it really is the tool underneath you that is broken: I've been fortunate enough not to hit a CPU bug, but I sure have hit OS bugs, and definitely database bugs, and runtime environment bugs.
In those cases, however, the bugs have to be understood as features, because your chances of changing them might be very low. If your database doesn't match documented behavior, or sometimes refuses to return the right thing, the application developer has to find a way to get the result they need anyway. You can report the bug, or if you are lucky send a PR, but until it's all approved and released, bugs have to be worked around and accepted: Little is fixed by screaming.
This is not limited to software debugging, it's a life problem. When facing a problem you can either look to yourself (painful, because it means both work and facing your own personal limitations) or to outside things that you have no responsibility for and limited control over.
The worst part is that sometimes it really is one of those outside things.
But if you're working with a shit framework, and you've countless times prudently assumed the bug must be in your code only to be forced to finally conclude (with a stripped-down repro) that some basic functionality is broken in the framework, then let it be your working hypothesis that it's the framework.
In some niche cases it can be the tooling. I remember having code that ran well with ASAN and then more or less segfaulted (or was non-functional, I don't remember) when I finally decided to disable it. The irony...
I won't say the cause yet, so that you can try to guess.
Hint: there was a problem in the code, but the runtime behavior was correct and ASAN didn't show any error or warning.
I can't find the exact quote, but I believe Richard Feynman said at one point something about how creating theories is easy; the hard part is making sure your new theory matches every single other theory out there.
Physics research today is exactly like that. I have to compare against tens of experiments to demonstrate that my theory about a single thing is even alright.
But the experiments get published in better journals, LOL.
The analogy is fun. If you believe something false, everything you build upon it is also questionable (though not necessarily false - it might be true for other reasons).
Beginners need some (albeit leaky) abstractions to start from, otherwise they will be unable to make decisions at all. This is why asking questions as a beginner is really important.
Trying to think of knowledge as a house of cards is stupid because it somehow implies that knowledge is inherently unstable, which is not true.
> Trying to think of knowledge as a house of cards is stupid because it somehow implies that knowledge is inherently unstable, which is not true.
Not to be a pedant, but actually to be a pedant: are you sure? I think the list of absolutely true things 200 years ago and the list of absolutely true things now are probably pretty divergent!
Nassim Taleb's Black Swan is also a great read on the instability (or in his words, fragility) of knowledge. A single observation is quite often enough to break what has been considered solid knowledge for years.
Eh, you also have to discern between individual bits of knowledge and systems knowledge when considering systems stability.
You can have complete knowledge of a component, but that may not be helpful when attempting to determine the effect of that component in a system where you have incomplete knowledge of other components.
Nonlinear systems that trigger at thresholds are a good example of this.
I guess I'd go the other way and say that all abstractions are leaky... except perhaps abstract math? At least more CS folks have started to realize that their libraries are not only useful but also questionable... and so are their floats, their compilers, their databases, their OSes, instruction caches, RAM, etc. Knowing the boundaries of your abstraction is crucial!
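The float one is the easiest of those leaks to demonstrate; this is just the classic example, nothing specific to this thread:

    /* IEEE 754 doubles are not the reals they stand in for. */
    #include <stdio.h>

    int main(void) {
        double a = 0.1 + 0.2;
        printf("0.1 + 0.2 = %.17g\n", a);                      /* 0.30000000000000004 */
        printf("equal to 0.3? %s\n", a == 0.3 ? "yes" : "no"); /* no */
        return 0;
    }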
The key is knowing where your abstraction is likely to break down and having some bounds checking and fault tolerance in place to make things robust. At the same time, having abstractions (or models, for engineers and physicists) that are all-encompassing tends to make them so complex that they aren't very useful or even comprehensible.
To the extent that knowledge is a model for how the world works that we can hold in our heads and reason about... the usefulness of the model is often inversely related to its accuracy outside of its bounds. Including general relativity or quantum effects in most earthly trajectories is so complex and useless that it's silly... and yet at the same time we "know" Newtonian mechanics is "wrong".
I'd be happier to say that I know how to build a robust house of cards for the situation. That I know there are gaps in the foundation, and I know when it's important to fill them in, and how much. At the same time, stress testing, realizing fundamental dependencies, and knowing how things can fail often just come from experience.
The point is, it is unstable if it's wrong. Almost all "knowledge" is a simplification of reality anyway, so the point still holds even if the knowledge is more right than wrong.
Philosophically, all knowledge runs into the Agrippa/Münchhausen trilemma: everything must ultimately rest on circular reasoning (relying on itself), infinite regress (which can't be fully enumerated), or assertions that are not further justified (dogmatism).
I think the house of cards is also potentially diamond shaped, where a lot of systems are built on top of other systems the organisations no longer understand, which are held up by a single engineer that has been there long enough and holds enough IP to know where to look when trouble strikes.
Basically, if a pulled card collapses a few levels you're maybe ok, but hit the widest part of the diamond and you better hope that engineer is around. Take the engineer away and you have a reeaally precarious, potentially very expensive house of cards. I have a hunch this is not uncommon, especially as the upper levels of cards fill with 'automation' and abstraction.
I put quite a lot of work into getting st and ct (optional) ligatures into the Linux Libertine typeface. And, turned them on by default in my copy.
And, I have my browser set to use Linux Libertine for all text, whether "serif", "sans", or whatever the page server said it ought to be. (Button wingdings sometimes look odd.)
So, I don't see anything odd on the page, but I do see "st" ligatures. Just not theirs.
On my phone, where even Firefox utterly refuses to use the fonts it is directed to, I see ligatures in the sans-serif font, which is just ugly. But they do not make the sans-serif any worse than it always is.
I read your comment and thought "STar Wars does it, what's the issue?" And then I opened tfa... The ligature doesn't even make sense.
PS: For those who don't see it, the ligature is a circumflex connecting the top of the t with a serif coming out the top end of the s, which is even weirder because it's a sans serif font.
OP applies a wrong model in their analysis of models. They say the JS code is the reality and you can have a perfect or an imperfect model of such reality.
I'd say the reality in this context is this triad: "input -> output". The input, the arrow, and the output. Thus, the code is not part of the reality, but just a model of that arrow. The code is an (inherently imperfect) description of how a real input is to be transformed into a real output, an artificial text in an artificial language that allows a human programmer to make any progress whatsoever.
What follows is that OP proposes that a model of the model (i.e. their understanding of their JS code) can be imperfect, or it had better be perfect. In the latter case they call it "a solid foundation", which introduces an extra mental category for no benefit. This can be said more simply: for the reality, use directly the model that fits in your head. If it doesn't fit, make more room for it by "reading the docs on type coercion and truthy values". Or drop it into the nearest trashcan and search for a smaller model that would fit comfortably.
But do not fall into the local minimum where you develop an imperfect model of an imperfect model of the reality and call it a day.
Firefox 100, Ubuntu. Something might be wrong with my system because it happens in Chrome and Edge too. I've never noticed this on any other site before.
Works fine in FF100 on Windows though. That's pretty wild.
Interesting, thanks for sharing. I've just set up the site, so I think it's more likely the error is on my end. Can't say I know the answer but I'll try to figure it out :) and again, thanks for sharing the screenshot!
Arguably, if this 'structure' is so flimsy, it is not knowledge. This house of cards is simply faux knowledge, built on assumptions and misunderstandings. Once you truly understand something, to the degree that you do, it is solid.
If we talk about programming, which stems from math and logic... it is as solid as it gets, perhaps a diamond castle. Unfortunately I've come to the realization that software companies don't really appreciate this fact, and invest no effort in making logically correct systems. Therefore engineers spend a lot of their time navigating this flimsy structure, which often falls apart because the engineers who came before had to make assumptions to finish things quickly, what with deadlines and all that.
I think a useful generalization of this is Mental Models [1]. As with all models, they might not be perfect, but some are useful.
Also, for the purpose of this article, I think it's ok to have simplified or imperfect mental models of things, until we need more details. For example, we might think about hardware in an abstracted, high-level way until we need to deal with low-level programming, high performance, weird hardware bugs, etc. Being aware of Mental Models helps you to find your blind spots and work on them as necessary.
A practice I've found very fruitful in my programming journey is deliberate practice, even when (especially when) I'm knee-deep in another problem. For me this looks like an explicit "learning" directory structure where I write short examples for myself, generally to explore an unfamiliar package/module, or to work up a simple but self-contained piece of code. I keep this separate from any other active project, and I often find myself referring to my own examples to refresh my memory, or using them as starter code for similar situations in the future.
> A so-called "senior" developer started screaming
I am amused by the "senior" in quotes. I think people don't all agree on what it's supposed to mean anymore, and as a title ornament it's probably way past time to retire it.
I hope it is hyperbole, but I’m also slightly baffled the focus of the example is the problematic mental model and not the screaming behavior.
I’ve witnessed more than my fair share of high level engineers actually screaming at each other. What was interesting: afterwards it was business as usual. No hard feelings for either party. I actually think maybe it’s healthy, instead of internalizing everything, but what do I know.
Better to yell when the car diverts west when it really needs to turn south immediately to get to the destination. Otherwise you end up all the way west and that’s the biggest waste of time of all. This is where being friendly is not optimal (or even correct).