> But why are people surprised, then, that in most software the process is not as thorough as in aerospace or elsewhere with physical things?
Outside of engineering, few people have ever even heard the phrase “risk management”, let alone understand the theory behind it. It’s really hard to argue that a developer can consider themselves an engineer when they don’t bother to understand what happens when the thing they “engineered” fails.
Yes, this applies to web applications too. If someone can’t access an application that gives them directions, it’s possible they will be stranded. If a healthcare website fails to load a list of available physicians, it’s possible the user gives up and doesn’t seek medical care.
The current prevailing attitude in web dev of “who cares” is really a testament to how little most web developers and real engineers have in common.
Software engineers working on safety-critical systems deserve that title because they actually have to understand software risks, design appropriate mitigations, and prove their effectiveness.
Thanks for this comment. Clearly, in my original comment I failed to articulate how useful failure analysis is even for highly mundane things. So if we take this as a post mortem, then I think we can see the failure mode was my language. The big question, though, is whether this is within reasonable bounds of noise, since over-emphasizing mundane examples may keep the concept from making an adequate impression of the topic's importance. I did have a large number of points and no responses prior to receiving misinterpretations, but that's a noisy metric itself, so now I'm asking a third party consultant if they have any thoughts on how I can improve my communication.
And that's not to say there aren't plenty of software engineers who do failure analysis. We do have exceptions, exit codes, traces, and even QA (though many hate them). But I'm not under the impression that the idea is as ingrained in the average software engineer as it is in the average engineer. Having taught CS, I've been surprised at how much less time is spent on this topic and how much less explicitly it's discussed compared to other engineering disciplines. The same goes for ethics, which I think go hand in hand here. My impression is that this is because when software harms it is often less tangible and less direct than in other engineering disciplines. It's obvious how a bridge failing causes substantial harm and economic impact; it's less obvious that failing to protect against injection on your non-mainstream platform also causes large damage, because people reuse passwords, and passing the buck to user error is unacceptable given that it's expected behavior from the user (design for expected behavior, account for unexpected behavior, and never rely on domain knowledge unless under special circumstances). We all know everyone reuses their Gmail and bank password on websites like Netflix or fucking Neopets.
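To make the failure-analysis point concrete, here's a minimal sketch (Python, with hypothetical names and exit codes invented purely for illustration) of the habit I mean: enumerate the expected failure modes by name, and still record a trace for the unexpected ones instead of swallowing them or blaming the user:

    import logging
    import sys

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("directions")

    # Hypothetical exit codes: one per anticipated failure mode, so callers
    # (and whoever is on call) can tell the failures apart after the fact.
    EXIT_OK, EXIT_BAD_INPUT, EXIT_BACKEND_DOWN, EXIT_UNEXPECTED = 0, 2, 3, 70

    def fetch_directions(origin, destination):
        """Hypothetical routing call; raises on bad input or backend trouble."""
        if not origin or not destination:
            raise ValueError("origin and destination are required")
        # ... a real implementation would call the routing backend here ...
        return "route from %s to %s" % (origin, destination)

    def main():
        try:
            print(fetch_directions("home", "clinic"))
            return EXIT_OK
        except ValueError as exc:
            # Expected failure: bad input. Name it and report it clearly.
            log.error("invalid request: %s", exc)
            return EXIT_BAD_INPUT
        except ConnectionError as exc:
            # Expected failure: the routing backend is unreachable.
            log.error("routing backend unreachable: %s", exc)
            return EXIT_BACKEND_DOWN
        except Exception:
            # Unexpected failure: keep the full trace for later analysis
            # rather than passing the buck to "user error".
            log.exception("unhandled failure in the directions flow")
            return EXIT_UNEXPECTED

    if __name__ == "__main__":
        sys.exit(main())

Nothing fancy, and the names are made up, but the point is the habit: the failure modes are written down up front and given consequences, not discovered in production.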
If software engineers planned better for failures in the system, we wouldn't see such frequent front-page posts about users getting their accounts unlocked only because they managed to go viral. Do Gmail devs really think deeply about what happens when someone loses access to their account? Whether there's a means of recovery if their devices are also taken or destroyed? Whether that solution works in practice, not just in theory? Do they audit and dogfood this process to ensure it's working properly, and do they understand how it fails and the consequences when it does? I'm assuming no, because we see people locked out of these systems ("these" because I'm generalizing beyond Gmail).
> My impression is that this is because when software harms it is often less tangible and less direct than in other engineering disciplines
You're certainly on to something here. It feels easier to disassociate yourself from an ephemeral codebase that vanishes entirely from your life as soon as you leave a company. Compare that with working on an airplane that a family member may rely on to travel safely.
That's what makes it even more sad that computer science majors are shielded from all those concerns in engineering. There really should be a "software engineering" minor that delves into what the rest of the engineering world contends with.
> so now I'm asking a third party consultant if they have any thoughts on how I can improve my communication.
Hope this isn't too blunt, but I find your writing style to be a bit "chatty". I think you must have got the same advice I did, which is "write like you speak".
For most people, the vast majority of verbal communication is highly informal and filled with slang. Verbal phrases and euphemisms sometimes don't translate well to writing.
If I were to offer advice, I'd say write shorter sentences and use simpler words. It should feel a bit uncomfortable to write like this. I feel slightly uncomfortable even writing this.
On forums I write like I speak, but I write differently for blogs, lectures, and papers. I appreciate the advice and know I'm often verbose. I love nuance, so I tend to write like a misattributed Mark Twain quote. I am working on it. I'm not as concerned with the verbiage, though: the reason I come to HN is the typically higher... quality. That's a noisy signal, but I'm okay if my verbosity only acts as a filter.