Is Uncle Bob serious? (dev.to)
155 points by rbanffy on Oct 10, 2017 | hide | past | favorite | 171 comments



The current state of the software safety discussion resembles the state of the medical safety discussion two or three decades ago (yeah, software is really, really behind the times).

Back then, too, thinking on medical safety was divided into 2 schools: the professionalism school and the process-oriented school. The former argued more or less what Uncle Bob argues: blame the damned and * who made the mistakes; be more careful, damn it.

But of course, that stupidity fell out of favor. After all, when mistakes kill, people are serious about it. After a while, serious people realize that blaming and clamoring for care backfires big time. That's when they applied, you know, science and statistics to safety.

So, tools were upgraded: better color-coded medicine boxes, for example, or checklists in surgery. But it's more than that. They figured out which kinds of training and which processes have high impact and applied them rigorously. Nurses are taught (I am not kidding you) how to question doctors when weird things happen; identity verification (ever notice why nurses ask your birthday like a thousand times a day?) got extremely serious; etc.

My take: give it a few more years, and software, too, probably will follow the same path. We need more data, though.


I don't think you can compare software to other disciplines like medical, air transportation or architecture. These areas are well understood and pretty mature and move pretty slowly. If we ran air transportation like software, somebody would already have self-flying airplanes in service. They would crash from time to time, though. I personally like the imperfection of software development and the freedom to imagine new things. If we want to be more mature we also have to accept much slower development cycles, and innovation will be slower.


> I don't think you can compare software to other disciplines like medical, air transportation or architecture. These areas are well understood and pretty mature and move pretty slowly.

Architecture maybe. Air transportation is barely a century old, though you could make the argument that it's an offshoot of other industrial engineering disciplines.

When it comes to medicine however, while modern medicine is older than computer science, it's not by much, and what was called medicine until the late 19th or early 20th century is about as similar to modern medicine as the computers of the early 20th century (i.e. rooms of people with rulers and mechanical calculators) are to the computers of the second half.

And medicine has in fact changed at a pretty frenetic pace, the "miasma theory" was only vanquished circa 1880~1890, and we can now sample and edit living beings at coffee-cup price levels.


I would still argue that medicine is moving much slower than software. I work in medical devices and simple experiments can take years to get done once humans are involved. In software we would get the result in a few days or weeks. I am not advocating easing the restrictions in human experimentation but these rules definitely slow down progress.


> I am not advocating easing the restrictions in human experimentation but these rules definitely slow down progress.

They surely slow down the work, but whether they slow down useful progress is a different question. Given the amount of time and money that depend on software systems today, as well as the more general effect of software on our quality of life, poor quality software costs society as a whole a great deal. If we built our software more slowly and carefully but also with higher reliability and fewer issues with security, privacy and so on, would we really be worse off?


"If we built our software more slowly and carefully but also with higher reliability and fewer issues with security, privacy and so on, would we really be worse off?"

Probably not. Personally I often enjoy the Wild West attitude of a lot of software development but on the other hand this industry has a really short memory and reinvents the wheel every few years. So yes, it would probably be a good idea if we held ourselves to higher standards and adopted best practices that often have been around for a long time.


> I don't think you can compare software to other disciplines like medical, air transportation or architecture.

Why not? After all, the safety of a pacemaker largely depends on the software it runs. In many cases, medical safety and software safety are nearly the same. And throughout our world, this is true in many industries. Driving. Banking. Our software today is our safety for nearly everything. It will only continue to grow this way.


I know a little about pacemaker software. They are super conservative and very reluctant to change. For a good reason! But it's certainly not an area you will get much innovation from. I bet there would be no AWS or cloud if they had to file every incident and environment change with an FDA equivalent.


Interesting. Let's discuss this further during your office hours?


> Back then, too, the thoughts on medical safety also were divided into 2 schools: the professionalism and the process oriented

The key difference is that in the medical world, safety has been a primary concern from day one.

I.e. There has always been a high level of professionalism.

That is not true in the software world.

Imagine a doctor saying it's 5pm on a Friday and I'm meeting a friend in an hour so I'll just do a rush job of this surgery and it will probably work out fine.

I've seen devs happily check in shoddy work just to be finished hundreds of times in my career.


> Imagine a doctor saying it's 5pm on a Friday and I'm meeting a friend in an hour so I'll just do a rush job of this surgery and it will probably work out fine.

Imagining right now. No problem. Humans are humans. Humans sometimes do rush jobs. Or are overworked. Or stressed. Maybe they have private problems, so their mind isn't 100% on the task.

Your suggestion that doctors are all professionals and wouldn't do such things is actually the exact opposite of the medical profession's conclusions: humans make errors. Doctors, software developers, all of us. And what helps fix these things is better tools and letting other people check your work (see above; training nurses to question doctors is exactly for this problem).


> Imagining right now. No problem.

Same here.

While some argue that the medical malpractice system has been abused, I don't think anyone argues that genuine malpractice doesn't exist. It happens all the time. As you say, doctors are human beings, and sometimes they get lazy, careless, or overworked.


> Imagining right now. No problem. Humans are humans

> Your suggestion that doctors are all professionals

I'm saying the level of professionalism displayed by doctors is way above the average level of professionalism in software development.

I've worked in QA and have had dozens of developers blatantly lie to me.

How many doctors routinely lie and deliberately mislead their coworkers or their patients? 20%? 30%?

I've been a hiring manager and have interviewed hundreds of software developers and a huge proportion have misrepresented their actual skill set.

How many surgeons will claim they are experts in X surgery when in truth they sat in on an X surgery once while in med school?

What proportion of doctors knowingly push suboptimal solutions because it looks good on their resume or because they personally find it interesting?


"I've seen devs happily check in shoddy work just to be finished hundreds of times in my career"

From my experience, a developer who does everything the correct way and takes his time to do so has no chance of surviving in most companies. There is a lot of pressure from leadership to get things done quickly even if quality is compromised.


I don't know about that... Doctors famously didn't even wash their hands between patients.

In fact many still don't. The best practices and process-oriented thinking seem to not be uniformly spread.

I'm also reading Normal Accidents by Perrow and what he says about nuclear safety up to the 80s (that's when the book was published) is scary.

My thoughts: our safest endeavours look and feel safe, but they are still more failure-prone than one would assume and making them safer is incredibly hard because of social, technical and human issues exhibiting hidden coupling - system issues.


In an article about medical safety I read, they mentioned the numerous (!) times that someone got the wrong leg amputated while at least one of the medical staff in the room knew it was the wrong leg but was scared of being targeted for abuse if they publicly corrected their superiors.

This doesn't fit my personal definition of "professionalism".


This is an interesting article about medical processes: https://www.newyorker.com/magazine/2007/12/10/the-checklist

The gist of it is that introducing a checklist for bloodstream infection prevention had an enormous impact on survival. It was still difficult to introduce the process.


You don't seem to know much about the history and evolution of medicine.


Your comment is correct but getting downvotes (including mine) because of its tone / manner of statement. Don't say "You don't know anything about X", say "You don't seem to know about X, here are examples A, B, C... etc."


I'm reminded of an article I read a while back about the rather sizable effectiveness of checklists in the health sector, and how they're still not 'standard practice'. I think it's this article, but I'm not sure:

http://www.nytimes.com/2010/01/24/books/review/Jauhar-t.html


Aviation safety is also a domain to look up to.


> My take: give it a few more years, and software, too, probably will follow the same path.

I doubt it. The stakes are much lower than people's lives and health.


The Atlantic article that Uncle Bob was writing in response to lists several specific examples of software failures that did result, or could have resulted, in loss of life: a failure of the 911 system; air traffic control systems; medical devices.


But then these scenarios already have the required incentives. Even software has liability issues if the stakes are this high.


Agreed. Let's not forget that a huge portion of software written is games.


I would bet that it's not a huge fraction of the software written that is for games. Games are a niche of software.


So basically work is safer when done by robots

Or if you spend a lot of resources in training people to robotize them

I prefer real robots


Except when something unexpected happens, in which case you'd love having had a human. The best is combined teams of man and machines. This also has its own problems, but nothing is ever perfect.


I was being sarcastic

The original post was about how professionalism was replaced by method in healthcare

But in my opinion, saying "you have to be professional" and "you have to follow this checklist" is the same thing

A doctor who amputated the wrong leg without checking was not making a mistake because of "tools", but because they were being unprofessional.

A paramedic who doesn't check all the vitals because it wasn't on the checklist is unprofessional

Not having a checklist in the first place is unprofessional!

It's the same thing, said in a different way, just to get more doctors on board, willing to be trained to perform procedures like robots that bring profits into private hospitals.

I think it's mainly a US thing though; I've worked in healthcare in Europe and it's not like that

Having said that, programmers' skills vary a lot based on the type of work they do, the industry they work in, the experience they have, the context, where they come from etc. etc. etc.

Uncle Bob might be wrong or not showing enough data, but the lack of tools is still a programmer's fault.

You can blame management if they are doing it poorly, but if the tools you're using are not the best for your job, you're just being less than professional.

Maybe it's enough most of the time, but that doesn't mean that being professional is something else.


This article by Uncle Bob has generated a lot of debate. I read it for the first time today and, I have to say, I am not impressed.

> too many programmers think that schedule pressure makes it OK to do a half-assed job

This is solely blaming the programmer. I have been on software projects where someone is basically told to deliver what they can by a certain date and then move on to something else, leaving someone else to fix the bugs. This despite the programmer working very unsociable hours and trying their absolute hardest. What does Bob want the developer to do? Refuse and be sacked, when they have a family to feed? Yes, there are lots of lazy software developers in our industry who shouldn't be employed, but this kind of generalisation by Uncle Bob isn't helpful.

> Better REPLs are not the answer. Model Driven Engineering is not the answer. Tools and platforms are not the answer. Better languages are not the answer

This is just nonsense. They are not the only answer, but of course they are part of the answer.

> I stood before a sea of programmers a few days ago. I asked them the question I always ask: “How many of you write unit tests on a regular basis?” Not one in twenty raised their hands.

I would like him to provide evidence of who these people were and whether they have any relevance to the article, as the article's gist is about serious and critical software. Are teams delivering such software really not writing automated tests? I really doubt it.

Automated testing is great. Test Driven Development is a useful technique which I have in my arsenal. But Uncle Bob's obsessive focus on this can cause issues too. I once worked with a developer who had to write everything using TDD and apply EVERYTHING based on Bob's videos. It was too much. APIs weren't designed, data models weren't sensible and there was too much code (methods like isOpen, isNotOpen everywhere). It was clean code taken to the extreme. And not a good extreme.

I am not defending software developers as a whole. The level of professionalism shown by some in our industry is at times scary. But a silly article like the one Uncle Bob originally wrote isn't helpful.


As the only experts on the code, programmers need to learn to say 'no' when appropriate. When they stay silent like code monkeys, they deserve all the blame Uncle Bob and I can heap on them.


That's fine if you're in a regulated profession like engineering or medicine or law, where management need appropriate professional sign-offs before going ahead, and it's a matter of professional ethics.

Unfortunately, in a field like software that isn't regulated (and IMHO isn't ready to be), you can say "no" up to a point, but if you continue to do that once management has determined that the answer should be "yes", you may simply get yourself fired and replaced by someone else who will give the desired answer. Unless it's a matter of moral judgement and you are willing to give up your job because you don't want to make what the boss wants you to make, software developers saying "no" rarely achieves much if management aren't listening.


Your rationale-fu is strong!

Look, by whatever accident of history, programmers are currently in demand. If your shop is suicidal/trigger-happy on firing their valuable programmers, then just leave that place before that happens and find happier digs. Job #1 for any professional programmer should be to sleep well at night and not work overtime.

Stop making excuses for sloppy, rushed programming. Use your power to enforce good practices. Ah, but you say, "all that is relative, no one agrees on what the practices should be." More rationalizing; no more weak excuses! Every team can write unambiguous house rules that form the social contract over what definitions and expectations are regarding ready-ness, quality, done-ness, etc. Until the whole rest of the enterprise is behind you and aligned with you, all that matters is your team.

A right attitude is key. You, the programmer: do the simplest thing possible and use the scientific method when writing code through rigorous testing. Meaningful, working tests are proof.

After your team learns to hustle, then you have power to hold your product owner's feet to the fire! Question every bit of the requirements they set. Your designs should sell the work, embody those requirements, and draw out all the questions you have to the business - when you do a great job, they will ask and then answer all your questions and more for you. Sloppy, imprecise, crooked requirements must burn and be tested in the crucible of your design. You have the power to talk and argue and you have the power to say 'no'.

Reject any slow or unproductive tools, libraries, practices. Put tremendous, outsized, decadent effort into automating the hell out of everything mundane or annoying in your sphere. Always protect your brand/product/team: all vanity or pet tools/patterns/languages/CQRS-ES must eventually be jettisoned out the airlock whenever team productivity is being impacted.


Such big words. In other words: nonsense.

I worked in 6 different companies in 4 different countries across 3 different continents during my 10 years in IT. And no-one gives a shit about delivering bug-free code or good practices.

There is such a thing called peer pressure; you have probably never heard about it. It works like this: while you are fiddling with your scientific method to test everything properly, the guy sitting right next to you will get the same feature done by going home and putting in a lot of overtime to make it look like he was working extremely fast and pulled it off in a matter of days, much faster than you.

He will also give a shiny presentation and show off his working demo application while you are still writing tests in your scientific method.

So after a few days your manager comes over to your desk and asks the guy sitting next to you if he could do the same thing, but now in a much more important project, where the deadline is super important. "Of course," he replies, and he gets onto a very interesting project and gets praise, both verbally and on his LinkedIn profile. He also gets a salary raise, since he is such a reliable guy who outperforms everyone. While you are still writing tests and wondering why nobody cares about your scientific method.

I hope it's clearer now.


Everything you described can be summarized pretty briefly: toxic work atmosphere.

Look, I get it -- I was working in such organizations in the past. There are quite a few lazy bums in there, but they specialize in a few important things: (1) taking credit for somebody else's work, (2) always having a pre-baked excuse or shifting the responsibility to somebody else if something in their job is not okay, and (3) having more years in there than you.

There's a lot of internal politics and intrigue in these places. If you try to beat this system, eventually you'll become a part of the problem -- you'll spend most of your time making sure nobody takes credit for your work, that you're blame-free etc.

There is no point. What you described does exist out there, yes. But I can bet my balls that the programmers I work with are many times better than the parasites you describe.

Plus, don't forget burnout. That guy you described can maintain that rhythm for no more than 2 years, 3-4 if he's a real masochist. But it eventually comes to an end. The solutions these toxic places opt for are always short-term (clarification: that works pretty well for many of them, by the way; they are perfectly aware of what they are doing and they just replace the burned-out youngsters with fresh youngsters, and the cycle resets).


6 companies, 4 countries over 10 years? Are you a consultant? Otherwise, that's a LOT of job-hopping.

So, I was a consultant for a "systems integrator" for 14 years. I did enterprise Java, web apps, and backend legacy integration work. If you are anything like me, then you know of course no one in your organization (or the client's organization) cares about quality! You guys are hired guns, the "tiger team". You fake it till you make it. You move around, you're like a bedouin trader, you're a nomad.

Your comments about the coworker putting in OT to show off and get ahead are just you wallowing in a bit of loathing. Never ever compare yourself with anyone else - we're all on our own tracks with our own tragedies and triumphs. If the guy wanted to bust his ass to make himself feel good, what do you care? God bless that dummy.

Anyhow, I think I can help with the loathing by getting a little philosophical with you. Two questions:

1. Are there bits of code or projects you can recall you've written over the last decade that you're really proud of? (Like, you can tell war stories about working on that code and you love those stories?)

2. Is ANY of the code you wrote going to be around in 5 or 3 or even 2 years from now?

The first question is to remind you of your craft, that it's really cool to make stuff work. The second question is to remind you that almost nothing you've done is built to last and is ultimately not going to leave any mark.

Now, don't despair. You can always take joy from your craft: let fly your creations to work in the world and do their little jobs and then move on to the next thing.

Reading your words, I can see that you're not taking much joy from the work right now. You don't have quality to hold up and you don't have good team dynamics where other people have your back.

In consulting, code quality will never be a priority nor of interest to anyone, especially in smaller firms. And, it'll be tough to build strong teams because you move around so much. You should really settle down in a city and get a regular salary job. You won't take much of a pay cut once you factor in all the benefits.

And there are plenty of big, established companies that practice TDD and have real Agile practices. Just search around and look at Fortune 1000 companies - the software teams working for those companies will signal quality to you through their blogs and open source offerings on Github.

As I said before, your #1 goal should be to sleep well at night and not work overtime. That means not being afraid of PagerDuty, not being afraid of deploying code at the last minute on a Friday before leaving for the weekend, not being afraid to relentlessly refactor the code to improve it, not feeling dirty about your code because you're not hacking anymore. When you're not afraid, you can have a team where members have each others' backs instead of backstabbing too. That goal of sleeping good at night is totally possible, I'm living it right now.

You need a new mindset. You need to improve your practices and start to claim your power as the developer. Read some code quality books, write some open source and teach yourself how to do TDD (I prefer "London" style TDD.) Listen to some of Uncle Bob's videos on YouTube or Safari Books online if you want to get into the groove. He's a goofball and a codger, but he's on the right track more than he is on the wrong track.

And, buck up dude ... I hope it's clearer now. :)


You don't know anything about me, yet you set off to give advice. That's not very wise, TBH.

But if you want to be philosophical, then I will be psychological with you in exchange. I think the fact that you write a three-paragraph-long reply to a guy who simply describes everyday development practices that are common around the world tells a lot about you. Namely, that you don't take it very well when people don't agree with you. Most people who are like this just want to make themselves look superior, which shows their inferiority complex.

I pointed out and gave very specific examples of why your ideas don't hold any water and how unrealistic they are, and then you come back and tell me even more unrealistic things.

Also I watched all Uncle Bob's videos, I bought the Clean Coder series and I follow TDD. Which again proves that you don't know anything about me. And no. It's not clearer. And for the life of me I can't understand why you are doing this (but I don't even care, to be very honest).


> And for the life of me I can't understand why you are doing this (but I don't even care, to be very honest).

Wow.

> Also I watched all Uncle Bob's videos, I bought the Clean Coder series and I follow TDD.

And everyone else on the team does TDD as well? Do you work alone? I want details about what's going wrong.


I am in favour of high standards in software, but Mr Martin always seems to come up short on specific measures to get there. He offers little beyond the "man up and show some personal responsibility" school of discipline.

This doesn't seem to me to be the right way to get there. I would prefer approaches that are driven by data, experiment and outcomes - i.e. what works. I would expect that ideas such as "blameless postmortems" after failures (which are inevitable), and encouraging openness and team safety, would have a better outcome than this clenched "just don't fuck up" stuff.

https://www.inc.com/leigh-buchanan/most-productive-teams-at-...

https://codeascraft.com/2012/05/22/blameless-postmortems/


My problem with Martin's article is that his solution isn't a catch all. In fact, testing is aided by the examples he gives.

The things he moans about actually reduce the runtime state set of a piece of code by moving it to compile time. It reduces the amount you need to test.

With testing, you write code that ensures a certain set of states and, importantly, ensures against the complement of that set.

If your compiler can reduce the set of potential states, that means a few things:

1. Less testing code required. This is better because testing code is still code.

2. Catching more errors since it's easier to cover a reduced set of states.

3. You can focus your discipline on higher level problems. Discipline is great. It's even better when not focused on crap a compiler can pick up.

Tools are not the 'answer'. Tools are tools. If a tool helps solve problems, it should be used.
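
To make points 1 and 2 concrete, here is a rough sketch in Haskell (the ConnectionState type and its values are invented for illustration, not from the article). Once the status is a closed sum type rather than a free-form string, the invalid states can't be constructed at all, so the tests guarding against them simply disappear:

    -- As a String, the runtime state set is every possible string, and tests
    -- must guard against the complement ("connnected", "", and so on).
    -- As a closed sum type, the compiler shrinks the state set to three values.
    data ConnectionState
      = Disconnected
      | Connecting
      | Connected
      deriving (Eq, Show)

    -- Total function: GHC warns if a case is missing, and no test for an
    -- "unknown status string" is needed, because that state cannot exist.
    describe :: ConnectionState -> String
    describe Disconnected = "no link"
    describe Connecting   = "handshake in progress"
    describe Connected    = "link established"

    main :: IO ()
    main = mapM_ (putStrLn . describe) [Disconnected, Connecting, Connected]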


Once again, because I never really get a clear answer for this question: why do we care? How is this not just another instance of "random dude wrong about something"? There are tens of millions of those.


This person wrote a series of books about clean code which are sometimes recommended to junior programmers. Some people have a high opinion of him.

It's a good thing that his claims and experience are being questioned.


I read his books and they're pretty good at showing junior developers better ways to structure their code.

This doesn't mean that Uncle Bob is right on every topic. Safety critical software is completely out of his realm (at least I don't know of any project he worked on that's SC).

From my own experience I've seen quite a few safety-critical codebases and they're often abhorrent in their lack of common-sense software structure, precisely because there are so many regulations that you can't "quickly refactor" something, and in embedded systems there are too many mechanical engineers writing software without caring about the craft.


What are some major projects he's worked on?


They're not; he doesn't back up his recommendations with data, and instead expects readers to believe that they are appropriate and correct.


Neither do any other coding guidelines I've seen. What data do you want to gather about whether a name could replace a comment? It's like structuring an essay: there are a lot of rules that you can break if you know what you're doing, but if you're just starting to write, they will help you be clearer in conveying your message.

It's all subjective.


Furthermore, some of his followers are very voluble and dogmatic in places like Stack Overflow/Exchange, driving out nuanced, reasoned discussions about how to improve software development.


Because he is not random but influential, simple as that.


Why is he influential? Because he wrote some books? That's the reason given downthread. Past a fairly low karma threshold, so could any commenter on HN.


>Why is he influential? Because he wrote some books?

Yes - which is something that distinguishes him from most HN commenters.

You started this thread with a question, you got valid answers, and you reply with bogus arguments like this. No wonder you don't understand.


Because the books he wrote got bought by a lot of people, so there are many people who know the author.


Beyond criticizing Uncle Bob, this article shows what's being done to actually address the problem and puts forward lots of interesting material to read.


Couldn't the same be said of yours or any other's comments here on HN?


Sure, but no-one I'm mentoring at work has yet suggested we should change our whole development process to something trendy but poor because they read something that Thomas wrote. The same cannot, unfortunately, be said of certain "personalities" in the industry, including Bob Martin.


Of course. I don't see a lot of stories written about me being wrong about stuff, and I'm like 96% sure that's not because I'm right all the time.



I think they're both right. You don't learn cryptography by coding, but at the same time you don't attack cryptographic implementations with mathematics. The design of the algorithms is a science, the construction of the programs which use them is engineering. And the even more modern crypto algorithms have started to take into account the engineering side through side-channel resistance and the like.


Because you, to some extent, are one of those "random dudes", and Uncle Bob is not? He is even family ;)


Anybody who calls himself Uncle <whatever> (outside of family context) can't expect to be taken seriously in my book. Maybe it's just me, but that seems like an appeal to authority right there in the name.


Uncle Bob and Silver Bullets:

https://www.hillelwayne.com/post/uncle-bob/

makes some of the same points. Discussed here:

https://news.ycombinator.com/item?id=15415278


Both better tools and better software discipline are needed.

Better tools could help a lot. It's hard to see how anyone disagrees with that. But, on the flip side, at my most pessimistic, I find that a massive % (say, 50%) of developers struggle to do the most basic things correctly. I can't fathom the types or scope of tools needed to solve the size of the problem (real AI that puts us all out of a job??).

So, while we wait for a thousand silver bullets, I agree with Uncle Bob: more automated tests, more pair programming, more code reviews. Whatever new tool they come up with, without basic competence, discipline and vigilance, things aren't going to change.


This is what I don't understand about AI proponents and/or fear mongers. Human level AI writing code? Wouldn't that just be a lot of the same human level bugs? Then people say Super AI will evolve more in a day than we have in a million years. Sure, maybe, it's not known if that's even possible, at this point.


I only mentioned AI to highlight (exaggerate even) the level new tools need to be at in order to solve this problem. I meant it in the self-aware/skynet/HAL sort of way, not tensorflow.


Well, when someone makes an AI that would really be able to understand customer requirements and then implement them, that would put us out of our jobs! :-D


We already have tools that let us do the 'right thing' if we spend enough time and money on it.

Another relevant quote from that paper, with regard to formal verification methods:

> Any new software technology in this field must address both the cost and time issues. The challenge here is daunting because a reduction of a few percent is not going to make much of an impact.

> Something like an order of magnitude is required.

I think that's pretty much spot on.

I think that's really the issue I see; we don't need more arbitrary tools that just add yet more complexity to already complex systems.


> ...if we spend enough time and money on it.

This is a much deeper thought than it's getting credit for.

The tools and discipline are all there. People just don't want to spend the money for them. Developers and their leaders don't get raises and promotions for meeting quality metrics. Product owners don't listen to people who produce exceptionally high-quality products (they listen to people who say 'yes' to unrealistic plans a lot, regardless of quality).

Why don't people want to spend the money? Sometimes they literally can't afford it. The suits are still iterating on their business plans and haven't factored in the true cost of developing the product they will ship (which, as all software professionals know, is different than the product they think they're making).

Sometimes people can afford the work necessary to produce a quality product but don't. Way too often I see developers give a reasonable estimate that gets stripped down because a budget or deadline (usually arbitrary) has already been set. Likewise, I see estimates get pared down because they seem too big. I also see (probably unintentional) budget shopping, where managers ask for estimates from a few places and then (surprise!) think the smallest estimate is probably the accurate one.

So we can go on and on about tools versus discipline, but it all boils down to incentives and selection bias. We need to figure out how to communicate quality to stakeholders. Or, if that's too technically difficult, we at least need to figure out how to impart reputation to people who can provide subjective (but better) evaluations of quality for experts to rely on.


I talked with a few mechanical engineers, a much older profession than software engineering. They have higher standards and whatnot, but I'm seriously happy that I don't work in that industry. Many I talked to even switched to SE because it's easier money and people in SE are more relaxed.


> They have higher standards and whatnot

Note that "higher standards" in civil engineering usually boils down to a) having standards at all instead of fuzzy process management frameworks b) these standards often boil down to "do this task/product with this regulatory mandated large margin of error" and c) building according to specification (i.e. having a reliable specification in the first place), which is exactly how reliable software is built (SIL, ASIL, aerospace level redundancy). There is nothing magical about it, but it needs to be driven by business and is not something that us lowly developers can just chose to do on a whim (because it increases costs by a factor 10 - 100).

Embracing agile (as his ThoughtWorks contract requires him to do) while lamenting quality and lack of professionalism, as Mr Martin does, is extremely dishonest.


> Embracing agile (as his ThoughtWorks contract requires him to do) while lamenting quality and lack of professionalism, as Mr Martin does, is extremely dishonest.

"It is difficult to get a man to understand something when his salary depends upon his not understanding it." -- Upton Sinclair


> Embracing agile (as his ThoughtWorks contract requires him to do) while lamenting quality and lack of professionalism, as Mr Martin does, is extremely dishonest.

Why exactly? I fail to see the connection between lack of quality and embracing agile.


In principle, 'agile' is so weakly defined that it is possible to follow a process that produces quality software and describe it as agile. In practice, insofar as agile stands for anything at all, one of its pillars is what might be called an 'active disinterest' in doing much thinking ahead, and in practice, it is very hard to produce quality software without doing some serious thinking ahead.


Is this really true?

I have the feeling "producing software" is a highly volatile process.

First, software can do almost everything. You have WhatsApp, DOOM, Photoshop, Ableton Live, Google, Linux etc., which are hugely different systems, and that was just the stuff that came to my mind in 10 seconds.

Second, the requirements change and change. One person thinking ahead and getting a brilliant idea may end up producing complete garbage, because many things have changed since they had the idea.

I think the reason it hasn't been formalized and regulated like, for example, building cars, is simply that it can't be.

Software isn't a car or a house or a ship.

Software is an abstraction layer above this. It's more like the accumulated orders needed to build these things.


> I think the reason it hasn't been formalized and regulated like, for example, building cars, is simply that it can't be.

NASA writes highly reliable software, as do the organizations producing the software for fly-by-wire airplanes, so your assertion is empirically false. You are extrapolating too far from your personal level of knowledge and experience.


> NASA writes highly reliable software, as do the organizations producing the software for fly-by-wire airplanes, so your assertion is empirically false.

What I find interesting is that both of your examples (and many others) are for software which controls physical systems, i.e. which is pretty well defined due to the nature of the machines it controls. Do you have comparable examples which aren't defined by the physical systems they interact with?


Operating systems/kernels as a whole largely deal with abstractions, and some are built to incredibly high, and in some cases mathematically proven, standards. seL4 is an example of a formally verified kernel.


So are you arguing that very well-defined and carefully thought-out requirements are, in practice, essential to the development of high-quality software?


That would be quite the no-brainer, wouldn't it?

All the dancing around waterfall, agile or whatever revolves around the problem that people usually don't know what their requirements are, are not able to formulate what their requirements are, but hopefully know, when they see something, whether it helps them or not. If we could just get people to define their requirements in all the detail needed, we would take a great step forward, but looking around... I don't see this happening in the near future.


I.e. Let's avoid the question of quality altogether by declaring it an unsolvable problem in the limit.


Sure they do, but how much is because of formalization?

I mean, just look at SQLite, which is highly reliable software. They bought this stability with a gigantic test suite.


In what way do you think this supports your claims? What argument are you making here?


For the reasons explained in the previous post. A source of quality in mechanical and civil engineering is that they have rigid standards, legal regulations, and thorough specifications. These cost a lot of money, sacrifice a lot of flexibility, and add a lot of time, but they work.

Agile development methods do create useful software quickly and cheaply, but are very dissimilar to this sort of strict framework.

As an employee of ThoughtWorks, he can't suggest increased regulation and specification, because he's paid to suggest less of those things. It's disingenuous, then, to complain about a lack of the results of those processes when he's advocating a method that lacks those processes.


Civil engineers don't, I suppose, agree to build a bridge before they've surveyed the land, and then begin construction before knowing the loads.

Contrast that to software engineering, where beginning before you have requirements is seen as a requirement of being agile.


> Contrast that to software engineering, where beginning before you have requirements is seen as a requirement of being agile.

Not that I agree with all the agile ideas, but if I remember correctly we had a process already which was supposed to start with "gather all requirements" and then the rest follows. It was called waterfall development and didn't work so great either, mostly due to the 'all' part in requirements being violated all the time in practice.


This is kind of precisely backwards. A survey is like a test; it asks 'will this work'?

A good civil engineer will do that as early as possible, rather than produce pages of engineering drawings blindly. A good civil engineer will calculate the stresses and load factors on a bridge as soon as possible, rather than assume things will work. A good architect will hand a sketch of a building to a client as early as possible, and ask 'do you like it'?

All of those are tests, and the engineering processes in those domains are put in place to ensure that that testing happens as early as possible, before mistakes are too costly to undo.


A test is only a test if it can fail.

All the projects I've seen, even the ones where there are "minimum viable products", have too much pressure from above (or have already been sold), so they aren't allowed to fail.

So I agree testing should happen as early as possible, but too often they're not real tests because failure isn't allowed.


Civil engineering still has its mishaps. I had to reverse engineer a program that analyses soil density to prove that we (a top-5 consulting engineering firm) were not responsible for a bridge falling off its supports.


The points made in the article linked here are interesting, but I now recommend anyone interested in improving the software industry simply steer clear of Bob Martin. Sometimes he says interesting or relevant things, but most of the time he just seems to be a professional troll these days. As far as I can tell he has a high profile but little relevant experience or qualification to make all these grand pronouncements or to justify insulting those of us who think other tools or techniques or processes might be better, so I suggest that for anyone interested in the field of high reliability software, time would be better spent studying books and papers written by people who have real experience and demonstrable results to support their arguments.


It is very important to have QA people the engineers respect. What makes a QA person respectable? Strong understanding of the business logic. Clear communication skills. Strong awareness that they cannot build the system, but a strong understanding of how systems operate. They work as hard testing as engineers work building, and deliver their findings efficiently. They can accurately evaluate the magnitude of an issue. The analogy I think of is like an offensive line to a quarterback. The engineers get the headlines, but without a solid QA team that doesn't happen. If QA is doing a good job, make sure everybody knows it.


Or, fire your QA team, who are insanely expensive manual test script running human robots.

Professional software developers have control of their code and don't shift the blame for their quality issues onto other teams.


Part of being a professional is listening to the experts and, even more importantly, paying attention to the data. This applies to criticism of the software industry as much as to the industry itself. I think what rubbed a lot of people the wrong way about Uncle Bob's piece is that even as it called for greater professionalism none was evident in the article's own construction, and that seems rather hypocritical.


My hunch is that tools like TLA+ and Lean[0] are bringing down the total cost of developing more reliable software.

And developing software that is reliable shouldn't be relegated to the safety-critical applications. There are ways that software failures can still cause significant harm even without putting human lives at risk: security errors allowing bad actors to steal personally identifying information, funds, or otherwise disrupt services that could harm people. This costs people their livelihoods, affects insurance rates for everyone, etc, etc.

I think the software industry needs to be more accountable for the influence we're having over the lives of the public and in order to write more secure, safer systems we need better tools to check our specifications, constrain our implementations, and help us manage complexity.

[0] https://leanprover.github.io/


"I know there are tons of programmers churning out low-quality code."

People repeat this kind of sentiment all the time, but is there anything to back it up? I know we've all run into random bits of code we considered "low quality". But that can often be attributed to code that was written a very long time ago when programming practices were much different. At the time it was written it was likely high-quality code. It's like saying that a gallbladder removal surgery from 20 years ago was "low quality" because it wasn't done laparoscopically, when that technology was barely used at the time.

I'm starting to think that "tons of low-quality coders" is an industry myth to explain a variety of unrelated phenomena. The modern-day equivalent of the boogeyman. But let me know if I'm wrong.


It is my continued theory that Bob Martin is a double agent, whose mission is to discredit TDD in the wider software community by using an infuriating, backlash-inducing writing style.


Unfortunately he failed at that and people started to believe him.


Is there anybody worth paying attention to out there pushing TDD?


Interesting that I was already downvoted but nobody bothered to reply. If anybody is wondering, that was an honest question. I don't believe in TDD (unless coding against a standard), and that is the main reason why I am interested in reading actually relevant arguments for it.


Bob Martin famously tried to write a Sudoku solver using TDD; he gave up after a few blog posts...


Do you have a link to that? This is too funny to pass up.


Just to clarify, it was Ron Jeffries who tried the TDD Sudoku solver, not Bob Martin.

You can find links to all the blog posts, plus some commentary, here:

http://ravimohan.blogspot.ca/2007/04/learning-from-sudoku-so...

I try not to be too hard on Jeffries in this case. Although he failed to write a Sudoku solver, expressing the requirements in the form of tests can be a decent way to start in cases where you know some valid outputs for a series of given inputs.

I tend to solve Leetcode/Hacker Rank problems this way, and it works pretty well. The catch is that the tests won't tell you what algorithm will best solve the problem. And there's of course no guarantee the tests you've thought up will cover all edge cases.
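
For a toy illustration of what that looks like (the run-length encoding function and the cases below are made up for illustration, not taken from Jeffries' posts): the known input/output pairs are written down first, and the implementation only has to satisfy them.

    -- Hypothetical sketch: when valid input/output pairs are known up front,
    -- the requirements can literally be written as checks before the code.
    import Control.Monad (unless)
    import Data.List (group)

    -- Known pairs, i.e. the requirements expressed as tests.
    cases :: [(String, [(Char, Int)])]
    cases =
      [ (""     , [])
      , ("a"    , [('a', 1)])
      , ("aaabb", [('a', 3), ('b', 2)])
      ]

    -- The implementation, written to satisfy the cases above.
    runLengthEncode :: String -> [(Char, Int)]
    runLengthEncode = map (\g -> (head g, length g)) . group

    check :: (String, [(Char, Int)]) -> IO ()
    check (input, expected) =
      unless (runLengthEncode input == expected) $
        error ("failed for input: " ++ show input)

    main :: IO ()
    main = do
      mapM_ check cases
      putStrLn "all cases pass"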


I largely agree with the article, with one caveat:

> ...we "just" have to specify the states and behaviors that are not safe and prevent the software from getting into those states.

In general, the number of ways things can go south tends to infinity, while the desired outcomes are easier to enumerate. I think it would be safer and more feasible in most cases to identify the states and behaviours that are valid rather than the other way round.
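
A small sketch of that idea in Haskell (the Dose type and its range are invented for illustration): instead of enumerating every unsafe value a device must reject, the one valid range is wrapped in a type whose smart constructor is the only way in, so downstream code can only ever see a value that is already in a valid state.

    -- In a real module, only Dose (abstract) and mkDose would be exported,
    -- so the raw constructor can't be used to smuggle in an unvalidated value.
    newtype Dose = Dose { doseMg :: Double }
      deriving (Eq, Show)

    -- The single place where validity is defined.
    mkDose :: Double -> Maybe Dose
    mkDose mg
      | mg > 0 && mg <= 50 = Just (Dose mg)
      | otherwise          = Nothing

    main :: IO ()
    main = print (map mkDose [-1, 25, 900])
    -- [Nothing,Just (Dose {doseMg = 25.0}),Nothing]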


Reading the paragraph about "illegal states" that should be avoided in software, I thought of "design by contract", which never really took off. Microsoft .NET has a really nice implementation called "Code Contracts", but it doesn't seem to be used that often. However, it would be a simple and powerful way to improve software quality.
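
As a very loose approximation, the contract style can at least be mimicked with runtime assertions (the withdraw function and its conditions below are invented for illustration; real contract systems such as .NET Code Contracts can also feed the conditions to a static checker):

    import Control.Exception (assert)

    -- Precondition: the amount is non-negative and covered by the balance.
    -- Postcondition: the resulting balance is never negative.
    -- (GHC drops asserts under -O unless -fno-ignore-asserts is given.)
    withdraw :: Double -> Double -> Double
    withdraw balance amount =
      assert (amount >= 0 && amount <= balance) $   -- precondition
        let balance' = balance - amount
        in assert (balance' >= 0) balance'          -- postcondition

    main :: IO ()
    main = print (withdraw 100 30)   -- 70.0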


Background. When I first heard of Uncle Bob, I watched a couple of conference videos.

Man, the guy pissed me off. He seemed quite strident with this "software craftsmanship" schtick.

As it turns out, I had written an article on all the horrible ways companies implement Agile. It's gotten to the point that I cringe whenever I hear the word "Agile", and I consider myself an Agile Technical Coach. Orgs just really suck at trying to do better. Most of the time it ends up in a micromanagement death march.

Bob read this. It pissed him off.

So Bob and I met online by pissing one another off. We commented on each other's works, and over the years, we've become friends. So I speak as a coder, a consumer, a friend, and a fellow talking head.

Bob means well, but with a large audience people tend to read into his work things that aren't there. This is the HN effect: with a large enough audience nothing you say or write will be immune from misunderstanding. He also tends to overstate his case from time-to-time, like we all do. Hyperbole is a useful rhetorical tool.

I restate his thesis in my own words as such: The number one problem in software today is hidden complexity, mutability, and state. When programmers enter a new domain, they naturally tend to "help" follow-on programmers by creating abstractions on top of whatever complexity they find. This increases, rather than decreases the hidden-state-mutability-complexity problem. It gives the appearance of being useful, but in fact can do more harm than good. Focusing on the tools instead of doing a good job gives this wonderful rosy picture of progress when in fact you're headed in the other direction.

It's not that tools are bad. It's that our natural inclination to add in abstractions easily leads to code where it's more important than ever to thoroughly test exactly what the code does. If we focused on the testing part first, the tools part wouldn't be an issue. But instead we focus on tools and schedule pressure, and this leads to total crap. We buy the tools/framework because we believe that schedule pressure forces us to work "at a higher level", but instead that same pressure, combined with the cognitive difficulties of adding yet more layers to the problems, leads to a worse state of affairs than if we had simply skipped the tools to begin with.

I'll never forget the shocking wakeup I got as a developer when I realized I am a market for people selling me stuff, and these people do not have the interests of my clients in mind. They only have to sell me, not provide value.

And yes, you can argue that there's a requirements problem, but whenever something goes wrong, isn't there always a requirements problem? Nobody ever asks for a system that's broken, so whenever a system is broken, you can say "But you never told me not to do X" and be correct. The fact that requirements are a problem is tautological.

> I stood before a sea of programmers a few days ago. I asked them the question I always ask: “How many of you write unit tests on a regular basis?” Not one in twenty raised their hands.

Bob's right. When you see results like this, stay away from as many tools as you can at all costs. You don't give hand grenades to infants, and programmers who aren't testing don't need faster ways to complexify the system in non-intuitive ways. The mistake this author makes is not realizing the reasons unit testing and TDD keep getting more important year-by-year. The mistake Bob makes is not diving down deep enough for some readers. "Craftsmanship" is a fine label, but there's a reason we need this stuff aside from just wanting to be professionals. If more folks understood the practical and pragmatic reasons for the zeal, there'd be less confusion.


What responsibilities does an "Agile Technical Coach" have and what did you do for the business? (I'm curious because in most of the industries I've worked in, the teams have been extremely skeptical of Agile coaches/consultants)


> in most of the industries I've worked in, the teams have been extremely skeptical of Agile coaches/consultants

For good reason. I don't blame them.

The way I see it, there are very few of us around, although tons of people advertise using this term.

I'm the last of the general contractors when it comes to IT consulting. From where I sit, it looks like a vanishing breed. I'm a full-stack technical lead who has had tons of experience in different industries and technologies and ended up, somehow-or-another, training organizations in how to use teams. That's where the Agile comes in.

The work is this huge mix of technical and organizational levels. A recent contract had two weeks of executive/director-level work setting up a roadmap for change, followed by a "dog and pony" show with the usual slides and games, followed by a deep-dive with a team of leads where we set up the entire production and CI/CD/DevOps pipeline/stack and learned TDD/ATDD and the rest of it while writing code for an upcoming project. (This was hands-on technical work along the lines of "how to automate cloud deployment using AWS/Ansible", "Ping-pong pair programming in Java/Javascript/Angular", and "Architecting a build pipeline for a team, program, and org")

I prefer the technical stuff, since so many coaches can't actually do the work (and I used to be a show-off). The fact that most coaches in many cases can't do the work sucks big time for the client. But it's not just technical. All of it is important. Unless you get the execs straightened out they'll screw up your org change without even meaning to. Unless somebody is running interference/coordinating with middle management nothing ever happens. And the place to start with the devs is with the leads/architects. Get them actually coding something. Oddly enough, there are a helluva lot of leads and architects out there that can't code. They need to get up to speed and to understand what success looks like. If you miss any of those levels, it's not going to work. So whether I want to or not, I end up working at all org levels, and the best title I've got for that is "Agile Technical Coach".

I've been bugging Bob about doing some Clean Coder material on backlogs, since they seem to be the thing that crosses all org levels and they're constantly a mess. Bob keeps hammering on doing things right when I think it's much more important to focus on doing the right things before you worry about what kind of craftsman you are. I've even pitched him on a couple of ideas. Who knows? You might see something there from me in the future.


Thanks for the response. Sounds like your work is kind of a development/efficiency/automation/process-improvement SME that works with all levels of employee, from the devs to the CTO, which requires understanding of both IT systems/dev and business.

I've seen co-workers with a similar set of responsibilities but they called themselves "Corporate Efficiency Analysts," with the better ones coming from tech backgrounds (sometimes having acquired an MBA after years of development), and the poorer ones only having "general business" experience.


Yep. There are way too many Power Point Rangers and Six Sigma Astronauts in that bunch, however. It's tough finding a good label.

The key here is that software is everything. It's not just an add-on to some orthogonal business model. Any more, software is the business. The tech skills are just as important as the rest of it. Most companies today are like a group of medieval princes whose job is to correspond with other princes -- but none of them can read or write. They kind of know what they want, but for whatever reason, they feel that it's beneath their dignity to actually make it happen. The way we treat development? It's as if we were to start hiring people to write our emails for us. And then complain at how correspondence is always so difficult.

Sorry for the rant. Thanks for the chat!


Lots of talk here about better tools and better coding practices and better discipline, but what about better languages? Some languages make it impossible to make the kind of errors that other languages allow. And I think the languages will just keep getting safer and safer.


I'd sum language up under tools for the purpose of this discussion.


Incentivization and transparency combined are also a good method, which should be applied when possible in addition to other methods. So where possible, safety-critical code should be open source, and there should be very high rewards for detecting serious issues.


>Safety-critical software systems, which are the topic of the Atlantic article, are held to shockingly high quality standards.

Doesn't appear to be the case here: http://www.safetyresearch.net/blog/articles/toyota-unintende...


I'm sure you can find specific cases where it isn't the case for critical systems, but for the most part, I've found it to be true (having worked in the nuclear power, power electronics, electric grid, and ICS/SCADA industries). Sure there are cases where software fails, but in my personal, and anecdotal, experiences, software serving critical infrastructure is held to a much higher standard.


Curiously, type systems are absent from the list of possible future tools for more reliability.

Looking at the authors bio: "I love solving tough problems with Python and PHP".

Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems (barring social ones). If you want a property to be upheld, formulate it as a type. The compiler verifies it.

If formulating the type is too laborious, make a weaker version and test against it instead.
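
To make that concrete, a minimal Haskell sketch (the names are made up): the property "a discount is between 0 and 100 percent" becomes a type with a single smart constructor, so the compiler enforces it at every use site.

    newtype Percent = Percent Int
      deriving (Show, Eq)

    -- The only way to build a Percent is through this smart constructor,
    -- so every Percent in the program already satisfies the property.
    mkPercent :: Int -> Maybe Percent
    mkPercent n
      | n >= 0 && n <= 100 = Just (Percent n)
      | otherwise          = Nothing

    applyDiscount :: Percent -> Int -> Int
    applyDiscount (Percent p) price = price * (100 - p) `div` 100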

Personally, I don't understand people arguing against type systems on principle. Sure, there are type systems that suck (e.g. Java) or are otherwise very verbose and thus straining. But that doesn't have to be the case; it's not an intrinsic property of type systems.

There has to be something I'm missing since there are so many people arguing against types, notably Bob Martin himself.


> Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems (barring social ones).

That's the thing though, the "social problems" are the important problems.

I think it's a given you should use strong type systems in safety critical software — Haskell, for instance. But look at the quote from Dr Nancy Leveson:

I've been doing this for thirty-six years. I've read hundreds of accident reports and many of them have software in them. And every someone (sic) that software was related, it was a requirements problem. It was not a coding problem. So that's the first really important thing. Everybody's working on coding and testing and they're not working on the requirements, which is the problem.

Lack of strong typing does not seem to be the issue. Lacking domain understanding/logical errors is the problem.


> That's the thing though, the "social problems" are the important problems.

And yet, we still seem bogged down with bugs and vulnerabilities caused by non-social problems. So use types to get rid of those problems, and then we can worry about the social problems.

> Lack of strong typing does not seem to be the issue. Lacking domain understanding/logical errors is the problem.

Types help with this too. Trying to formalize a property as a type naturally reveals when that property is inconsistent, underspecified, or over-constrained.


So formalizing things with type systems is great. In fact, I'm a huge proponent of it.

The problem still remains, you may not have a sufficiently well developed requirement set to be able to specify things in your type system.

I'll give an example: It's possible in Haskell (really many languages) to create a type SafeString which can be constructed from a standard String and does all the necessary escaping for the target (let's say it takes in a username and works to prevent sql injection attempts, or removes HTML tags, or whatever might be appropriate). And for all practical purposes it behaves exactly like a string otherwise. Any time you want to make a SQL query or generate some HTML you know that you can use a SafeString and it'll (barring errors in implementation) work without causing you either security risks or producing incorrect outputs.

It's fine that we can do this, but if it doesn't make it into your system design you won't know that you need to do it.

The type system itself doesn't tell you that your types are underspecified. It needs to come from the requirements analysis (in this case, requirements as viewed through a security-oriented perspective). But type systems allow you to encode that specification once it's been made.
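
Roughly what that SafeString idea looks like in Haskell (the escaping below is a stand-in, not real sanitisation; in practice you'd defer to the database driver or bound parameters):

    {-# LANGUAGE OverloadedStrings #-}
    import Data.Text (Text)
    import qualified Data.Text as T

    -- Keep the constructor unexported; mkSafeString is the only way in.
    newtype SafeString = SafeString Text

    -- Placeholder escaping, for illustration only.
    mkSafeString :: Text -> SafeString
    mkSafeString = SafeString . T.replace "'" "''"

    -- Query builders accept only SafeString, so raw user input can't
    -- reach them without going through mkSafeString first.
    userQuery :: SafeString -> Text
    userQuery (SafeString name) =
      "SELECT * FROM users WHERE name = '" <> name <> "'"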


> It's fine that we can do this, but if it doesn't make it into your system design you won't know that you need to do it. The type system itself doesn't tell you that your types are underspecified.

Indeed, type checking alone won't tell you that, but my point was more that type-driven design has a much better chance of revealing such underspecified requirements.

If you resist the urge to be lazy by assuming that your domain types map perfectly to built-in types, like String or Int32, and instead assume that your domain needs its own types, you centralize the validation logic to create domain types, which can then be easily audited and tested.

So you should create a distinct SafeString or SqlQuery type as a natural part of modelling your domain, even if the semantics of String appear sufficient.


I'm curious how OP would prevent the Therac incident through type systems...

Key quote: "Previous models had hardware interlocks in place to prevent this, but Therac-25 had removed them, depending instead on software interlocks for safety."

Or even better how type systems could work around data integrity errors in RAM without ECC.


> Or even better how type systems could work around data integrity errors in RAM without ECC.

You realize that you can do ECC in software too, right? You just need the statistical bound on a double fault to be similar to that of the hardware case.

Type systems can be used to do this by defining an interface for the data, letting you swap between normal datatypes and ECC ones seamlessly. (And in practice, you only need to define the base operators for your new type.) You can do this by duck-typing ECC versions of objects in, say, Python, but the type system helps ensure that you've built the full set of necessary components and that everything links up properly; in Python you'd have to do that with ad hoc IDE tools.
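
To be concrete, a toy sketch of the kind of interface I mean (this uses simple triplication with a majority vote rather than a real error-correcting code):

    -- Store each value three times; vote on read. Callers only ever
    -- see put/get, so this could be swapped for a real Hamming or
    -- Reed-Solomon implementation without touching the rest of the code.
    data Redundant a = Redundant a a a

    put :: a -> Redundant a
    put x = Redundant x x x

    get :: Eq a => Redundant a -> Maybe a
    get (Redundant x y z)
      | x == y || x == z = Just x
      | y == z           = Just y
      | otherwise        = Nothing  -- uncorrectable double fault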

Similarly....

> The defect was as follows: a one-byte counter in a testing routine frequently overflowed; if an operator provided manual input to the machine at the precise moment that this counter overflowed, the interlock would fail.

That actually sounds like exactly the situation that a type system would be used to prevent -- partly because from a type theoretic perspective, that whole situation sounds like an obvious defect.

Further, the result was because of an obscure combination of key strokes that went untested. The use of total functions (enforced by the type system) lets you verify (by computer) that no combination of input can transition to a bad state.
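
A tiny sketch of that last point in Haskell (the states and commands are invented for illustration; this is not the actual Therac logic):

    {-# OPTIONS_GHC -Wall -Werror=incomplete-patterns #-}

    data Mode    = Idle | XRaySetup | ElectronSetup
    data Command = SelectXRay | SelectElectron | Fire

    -- A total transition function: if any (Mode, Command) combination
    -- is left unhandled, the module fails to compile, so an untested
    -- input sequence can't silently fall into an undefined state.
    step :: Mode -> Command -> Either String Mode
    step Idle          SelectXRay     = Right XRaySetup
    step Idle          SelectElectron = Right ElectronSetup
    step XRaySetup     Fire           = Right Idle
    step ElectronSetup Fire           = Right Idle
    step Idle          Fire           = Left "interlock: fire without setup"
    step m             _              = Right m  -- ignore redundant selects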

So... Did you intentionally pick two cases where type systems are the obvious solution to the problem?


You need to read Leveson's full report. It details quite clearly what role software played.

If you're focusing on software and avoiding bugs, you will at best have good-quality software, when the goal is to have a safe system. Software is just part of the picture.


I'm not arguing against hardware interlocks.

I'm saying that's a weird example to pick of a case against type systems (eg, by asking how they'd have helped prevent that issue), since in that particular case, it actually sounds like a type system would have caught the software glitch that caused the issue.


What stops your program data from being modified or tampered with without ECC RAM, bypassing your checks?


I'm not sure I understand your question, but the one that most pertains to ECC would be preventing corruption of the routine performing the checks --

It varies greatly by the method you use to do ECC in software, since they all feature different mechanisms.

In the case of the one I had in mind, the kernel is relatively small (so errors are unlikely), and once loaded, it's stable against bit errors. There are probably better answers, I'm not an expert on software ECC.

(Tampering is a different issue -- if they can tamper with the program, ECC won't save you from them writing garbage to memory. So that a hack which can write memory can bypass memory safety isn't really interesting.)


I meant tampering via a side-channel attack (e.g. forcing a bit flip that ECC would protect against). It doesn't matter if your data types have referential integrity if a hacker can disable that protection.

What I am arguing against is saying a typesafe language obviates the need for ecc memory.


> What I am arguing against is saying a typesafe language obviates the need for ecc memory.

Which isn't what anyone said: I claimed you can do ECC in software and that type systems assist with that by making a clean interface and verifying the full implementation. (Since ECC is really just a lift of normal functions.)

There's little difference between using 9 bits in hardware and 9 bits in software, except that you need to cleanly load your ECC code onto CPU cache and you're using CPU instructions per byte loaded. The reason we do ECC in hardware is efficiency on an operation we're doing on literally every byte.

Lots of systems, e.g. hard drives, use ECC in their actual formats too, because you can do more complex ECC at the software level and not merely the extra bit. Sometimes this software ECC is more efficient than hardware-based ECC would be, because we're okay with less than 12.5% redundancy as long as we can still correct the errors we expect to find.


So because a tool can't prevent all possible faults or problems, that tool isn't worthwhile?

I see it like this: we have a number of different tools to minimise problems (various types of testing, theorem provers, type systems, etc). None of them catches every problem and many of the tools overlap in the types of problems they do catch, but if you use many of these techniques together, you minimise the surface area of issues that can slip through.

Very few things, in computers or "real life", are completely foolproof, but that doesn't mean that they are not worthwhile. That's a call that must be made on a case-by-case basis to weigh up the risk of faults, the damage done if a fault does occur, the cost of preventing said faults, your budget and time available, etc...


OP said "Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems (barring social ones)."

Pay attention, please!


OP hasn't said that it's the answer to the hardware problems as well.


One needs to consider the safety of the system as a whole in order to build a safe system.

Leaving aside social problems for instance is a bit like saying that one can build perfect software, with the small issue that it can't be used by humans.

Leaving aside hardware is even more suspicious, because the two are inseparable in a system.


Maybe I'm recalling things wrong (the incident is very old and I last read the details before some HN-ers here were born), but at the heart of the Therac-25 issue was flaky code, written in a language without any type enforcement (assembly language). What's worse, the code was juggling concurrent events with interrupts and whatnot and was laden with race conditions (probably a phrase not even in the developer's vocabulary).


Pages 42-end of Leveson's report explain the fixes put in place and the contributing causes.

Software bugs are a contributing factor, but the report warns against focusing myopically on the bugs.


Sounds like the Therac incident was caused by a concurrency error. Type systems can prevent this via type-level ownership systems (a la Rust) or safe concurrency types (a la Haskell’s STM).
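
For example, a minimal STM sketch (toy names): the interlock flag lives in a TVar, which the types only let you read or write inside an atomic transaction, so two threads can't interleave a check-then-act sequence.

    import Control.Concurrent.STM

    fireIfArmed :: TVar Bool -> STM Bool
    fireIfArmed flag = do
      armed <- readTVar flag
      if armed
        then writeTVar flag False >> pure True
        else pure False

    main :: IO ()
    main = do
      flag <- newTVarIO True
      ok   <- atomically (fireIfArmed flag)  -- runs as one atomic step
      print ok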


There is the school of thought though that type-driven development, by forcing domain understanding to be expressed in terms of types, helps to create common domain understanding among the different stakeholders.


I've been doing this for thirty-six years. I've read hundreds of accident reports and many of them have software in them. And every someone (sic) that software was related, it was a requirements problem. It was not a coding problem.

That part put me off, because while awareness of problems with the requirements themselves is much lower than IMHO it could beneficially be, the casual dismissal of coding errors as another cause of failures makes me question whether this "expert" has any idea what they're talking about at all, thirty-six years or not. There is a reason that Tony Hoare called inventing null references his billion-dollar mistake!

Edit: Having watched the original presentation by that speaker, in context and without any emphasis or commentary added by anyone else, the above criticism is unfair and I apologise for it. I think the subsequent discussions of it here may be putting more weight on that line than is justified, though, as you still have to have enough other mechanisms in place to avoid the coding errors making it into production before Dr Leveson's comments would apply.


>the casual dismissal of coding errors as another cause of failures makes me question whether this "expert" has any idea what they're talking about at all, thirty-six years or not

She doesn't dismiss them casually. She dismisses them because of actual experience: not having seen them matter much in 36 years, across all kinds of scenarios where she was explicitly looking for such things.

>There is a reason that Tony Hoare called inventing null references his billion-dollar mistake!

Which is neither here nor there, as in the domain where she works, they don't allow those mistakes to go through to begin with.


"casual dismissal of coding errors as another cause of failures"

It's not casual dismissal. What she says is that no matter how good a job developers do, not understanding the basic domain where the software operates will lead to failure in the production environment.

Which, really, makes sense. Requirements drive the test envelope. Generally, people try to test the product they are releasing - but if they do not foresee a testable scenario then that is not tested.

The best way to let bugs slip through is not to test against them.


> Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems (barring social ones). If you want a property to uphold, formulate it as a type. The compiler verifies.

I'm a massive Haskell proponent and I'm calling you out because you're doing typed programming a massive injustice with this kind of overinflated claim. There is no known type system that can get anywhere near solving "all software reliability problems". Flexible type systems like Haskell's are really useful. Don't overpromise.


So where does it fall down? Excluding the social ones I mentioned.


Can you prove the correctness of a sort function in the type system? I don't know any mainstream language where you can do that. You'd have to go to something like Agda, Idris, Coq or F* which are hardly practical languages (currently). Beyond that there's an entire universe of complexity that we have no idea how to prove (practically) in the type system. Haskell types encode simple properties. They're extremely useful properties in practice but they don't come anywhere near "everything" we would want to prove.


What do you think about clojure.spec approach?


Those aren't proofs at all, and the docs even say so.


> I don't understand people arguing against type systems on principle

I believe these are the people who relate type systems to languages like C++ and Java, and dismiss type systems as a possible solution because they've only seen the poor ones.

Back when I wrote a lot of Java code I mostly used TDD to write software that works. The type system didn't help that much. Robert "Uncle Bob" Martin dismisses type systems as "reducing the incidence of textual errors in a program" [0]. I doubt that Robert Martin is very familiar with very strong type systems such as in Idris, Coq, Haskell, etc.

Many programmers (most I know actually) don't learn these languages as they aren't useful to get a better job, earn promotions, etc.

[0] - http://blog.cleancoder.com/uncle-bob/2017/01/13/TypesAndTest...


Well if one can't get a job programming Idris, Coq or Haskell and they're not used in OSS infrastructure software either, what's the point of learning them?

Then they literally have zero direct effect on improving the safety of anything. One would be better off learning TLA+ and coding in C, or using Frama-C or MISRA, etc.


> what's the point of learning them

I personally enjoy learning new languages and I can always apply the concepts I learn in other languages that are used on the job. Not judging others who don't enjoy that, though.

It's more surprising that there's no real incentive from companies to write better software. People seem to accept a whole lot of shit when it comes to their phones and computers, and are happy to pay for it.


Type systems are not enough. I'm not sure they're even a good starting point for safety critical software, but they might be good for e.g browsers.

You're thinking at the code level; people like Leveson, Knight and others are thinking at the system level. The end goal is to have a safe system, not, e.g., to avoid null pointer exceptions through optional types.


I'm not sure you're familiar enough with what a sufficiently powerful type system can accomplish. A full dependent type system is sufficient to statically prove that code meets a given specification. We're not just talking about ints and strings and optionals when we talk about type systems for correct software. See Curry-Howard correspondence.


Do I need to know more than what you said - that they can prove that code meets a spec?

This can work great for small programs and clear algorithms, but as is hinted in the article, one has to first create a correct specification.

I gave an example in another post, here's another: unforeseen interactions between two components built to spec.


I completely agree - most often the spec is wrong. For example, it doesn't matter if you have a completely correct implementation of Euler angles for a control system. If gimbal lock wasn't considered in the spec, you are going to have issues.


> I'm not sure you're familiar enough with what a sufficiently powerful type system can accomplish.

A sufficiently powerful type system can compute any partial recursive function at type check time. That's pretty much the upper bound.


The upper bound is arbitrary code, and validating arbitrary proofs.


> I'm not sure you're familiar enough with what a sufficiently powerful type system can accomplish.

At what cost?

I'd estimate it to be at least 10x development cost.


Optional types are a good example of how types make a system safer.


I think they're great! I am pushing to use them in our C++ code base and they empirically do improve safety by transforming UB into a nice exception.

But they won't prevent a programmer from using MD5 for RSA signature verification.


But if all of these things are correctly typed, then it would be more likely that someone would visually notice that the algorithm doesn't return the type it says.

Also, C++ static types are rather weak. Its optional types are useful, but it's not like SML, where the compiler ensures that you handled all possible cases.

A good type system can't catch all kinds of errors though.


>Personally, I don't understand people arguing against type systems on principle.

Even Uncle Bob wasn't, I don't think. He just said that certain languages had type systems with a level of strictness that he considered too extreme.

>Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems

No, not all, only certain classes.

With haskell, for example, that comes at the expense of development velocity and performance (or at least, the ability to reason about performance).

>There has to be something I'm missing since there are so many people arguing against types, notably Bob Martin himself.

I think the key part people miss in the "more strictness" argument is that stricter types lengthen development, and if your biggest risk is errors in specification (the article makes this assertion, and in my domains that's been my experience too), you are risking a vast amount of wasted time spent perfecting the wrong thing.


> With haskell, for example, that comes at the expense of development velocity and performance (or at least, the ability to reason about performance).

The (depending on perspective) weaker ability to reason about performance in Haskell stems from laziness, which itself has nothing to do with type systems. Its clashes with side effects are solved _via_ the type system (the IO type).

Regarding development velocity: _the developers_ are choosing how much time they spend on proving things to the type checker.

Sure, writing a function that requires of its argument the type { n % 2 == 0 } or something is more labor intensive than just using Int and runtime panicking on invalid values. But you can choose to do just that instead!

There is no decree that one has to be as precise in their types as possible, no matter how much a given type system is able to express.


With haskell, for example, that comes at the expense of development velocity and performance (or at least, the ability to reason about performance).

That seems a little unfair. The biggest challenge with reasoning about performance in Haskell is usually its non-strict semantics. That issue is only incidentally related to Haskell's type system being significantly more capable than those of many other programming languages.


> If you want a property to uphold, formulate it as a type. The compiler verifies.

Consider two issues with this:

1. How do you verify that the property to uphold is indeed the same as formulated by the type? This is easier for simple types, but can get complicated real fast once you start getting into higher kinded types. Ideally you want the 'formulation' of the 'property to uphold' to be specified in the domain of the problem itself.

2. The compiler can only verify what it builds - but we don't build and deploy large systems all at once. We just build and deploy one subsystem - i.e. a single program at a time - into an already running system. So what about verification for coupling between different programs? (The Internet is dynamically typed).

To solve #2, we might need some kind of run time negotiation and verification (and once you have that, perhaps it can that be used within each 'program' as well?).


>Type systems to me seem to be (I'm not exaggerating) the answer to all software reliability problems (barring social ones).

Yes, but they increase delivery time problems and developer time problems.

In the end, the industry has decided getting something out cheaper and faster is worth more than getting it out correct code-wise but later or more expensively.

Which might make perfect sense -- what's correct depends on business needs -- and some layers (e.g. program design) might be totally irrelevant to that.

>Personally, I don't understand people arguing against type systems on principle. Sure, there are type systems that suck (e.g. Java) or are otherwise very verbose and thus straining. But that doesn't have to be the case; it's not an intrinsic property of type systems.

No, but not liking type systems might be an intrinsic property of some programmers.


>Yes, but they increase delivery time problems and developer time problems.

That's purely guesswork. Seriously! My observation is that as the size of the project grows, the type system switches from a hindrance to actually speeding up development and saving developer time. Ever try to grep a weakly typed system for a name definition? Yeah, you don't have that problem in a strongly typed system. Lots of other reasoning is easier to do with types than without.


> Ever try to grep a weakly typed system for a name definition?

For what purpose?


To understand a codebase?


I haven't had any problems jumping around large codebases in Smalltalk, Clojure, or JavaScript.

I've had a lot more trouble understanding large Java codebases.

I thought they might be talking about difficulty refactoring.


> Yes, but they increase delivery time problems and developer time problems.

Only in the first day or so. In my experience, that time is gained again several times over in the first medium refactoring.


> No, but not liking type systems might be an intrinsic property of some programmers

I wish they'd stop writing shitty, ad hoc type systems into our production code, then.


Division by zero is not a type problem, though it could be solved by creating an 'integer but not zero' type.
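
Something like this sketch, in Haskell, with made-up names:

    newtype NonZero = NonZero Int

    mkNonZero :: Int -> Maybe NonZero
    mkNonZero 0 = Nothing
    mkNonZero n = Just (NonZero n)

    -- Division can no longer blow up at runtime: the caller has to
    -- produce a NonZero, i.e. handle the zero case, before dividing.
    safeDiv :: Int -> NonZero -> Int
    safeDiv x (NonZero d) = x `div` d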

I'm sure if you spent the day creating types your software would be very reliable. It would also probably not do much

"Relying on the power of type checking" is just wishful thinking that breaks down in the face of any moderately complex problem

The problem, of course, is not the type system; the problem is people creating types that contain only one int/string when a boolean would suffice, or 10 types of almost-similar containers because for some people "ifs are forbidden".

Also, reality likes to mess with well-ordered hierarchies.


That's a social issue: people with diverse points of view, even if myopic, will talk and create together because they can share and communicate about the issue at hand.

Uncle Bob and others don't see what's good in having a bit of a logic prover look at a codebase... well, too bad. $10 says they will in time realize it's pretty neat and pretty useful.


> There has to be something I'm missing...

Some of your replies have already touched on a few points, so I'll add a new one: type systems can be more than a lot of work, they can be wrong. You do allude to "formulating the type is too laborious", but it's more than that.

For example, let's take the perennial question of object-relational mapping frameworks. "Object" isn't a great name for it because the real goal is usually a specific syntax (method calls) and type-safety features (make getID return an Integer, not a String). But there's a problem. "ID" is not the name of an attribute. It's a token in a SQL query. In the database, "ID" can be an Integer today, a nullable String tomorrow, and deleted next week. The type of the underlying data comes from the schema, not the application code. So there is inevitably a mismatch. Unless you understand there is some dynamism there and model the code that way.

The same thing goes for strongly typed HTTP or SOAP clients. Also anything that parses input on the fly.

There are absolutely places where the application can decide something has been validated and then apply that information to some data with a strong type, but there's a tradeoff there, too. You end up requiring that sort of validation-and-tagging in all sorts of places where it's irrelevant. Compare creating an XML DOM with using xpath, xquery, etc. You could create a giant graph of all the nodes in your XML document. Or you could just provide an expression that does pattern matching. There are places where either is handy, but the latter is almost always more flexible.

Now, there are limits. Generally, well-typed data is easier for compilers to understand and optimize. That is, strongly typed data structures are important. But, honestly, most code isn't brand new data structures.

Anyway, there's certainly balance to everything, but people need to keep in mind that type systems are a hammer like anything else. They can be misapplied harmfully, too.

Caveat: It's possible to use generic programming and higher kinded types to work around many of these limitations, but I'm writing this blurb to people who are "application developers" or otherwise are big on object-oriented programming. If people are interested in how type systems can be more flexible, we can talk about that too. My point is that they overwhelmingly aren't used that way.


Uncle Bob's product is Uncle Bob. If any software gets written, that's entirely coincidental.


Thank you. He is all about creating a cult of personality. The opening of Clean Code, with its pictures of other respected people and quotes and then the “what about Me” is all about putting him on the same level as all those folks. So, appeal to authority, and then run with “be a human linter”.

Also: can we stop calling him “Uncle Bob”, because it’s frankly kinda creepy and paternalistic.


I took several deep dives into a project that he apparently wrote a good bunch of code for, the test automation framework FitNesse. Besides having major conceptual and architectural shortcomings, this piece of software was also easily among the biggest piles of crap I have ever seen, code-quality-wise. Granted, this was 5 or 6 years ago, and it seems to have improved since then, but I still consider it sub-par.

So whatever's coming from this guy, I now immediately see this ugly, unreadable code in front of my mental eye when I just hear his name. Quite a fail, considering he called his company "Clean Coders".


The other day I stumbled across a video of his, explaining monads in Clojure, and I completely agree with you. He's a good salesman, he sells himself and his consulting, and that's about it.



