Most of his stuff is excellent. ComputingFailure.com is a rip-off: it is a compilation of articles (not by Glass), mostly very shallow newspaper pieces about dot-com business failures. The only thing about computers in the book is that the businesses used them.
InclinedPlane's comment already says it, but I think it's worth putting more strongly. The OP is nothing but a synopsis of Glass' book Facts and Fallacies of Software Engineering, which is remarkable precisely because it provides supporting references for every one of its claims. Glass' meticulousness in this respect is nearly unique in the space (Steve McConnell also comes to mind), and he deserves to be better known for it.
That being said, the effect of all those references (on me anyway) was more to highlight the weakness of the research literature than to make a convincing case. But that's not Glass' fault. He does a great job of reporting what the literature says.
It's concise (224 pages) but chock-full of excellent advice. Each one of these points (and many others) is fleshed out in a separate chapter that gives a good deal of background, clarification, and supporting evidence.
RES1. Many software researchers advocate rather than investigate. As a result, (a) some advocated concepts are worth less than their advocates believe and (b) there is a shortage of evaluative research to help determine the actual value of new tools and techniques.
I want to quote that to a researcher friend of mine every time he advocates the use of Lisp in commercial projects, claiming that such a great language will inevitably bring more productivity to the team than a poor language like Java (or any other Blub).
There have been a few studies of actual software productivity (too few, which backs up your point). And I am one of the guilty ones (not that I am a researcher; I am a practitioner). One that I recall found more variance by programmer than by language.
And look at who Y Combinator chooses: it is more about the people than anything else.
Wow. I feel like I need to go smoke a cigarette after reading that list. I also want to print off 10,000 copies and throw them like confetti around several of my previous jobs.
"in a room full of expert software designers, if any two agree, that's a majority!"
"P2. Good programmers are up to 30 times better than mediocre programmers, according to "individual differences" research. Given that their pay is never commensurate, they are the biggest bargains in the software field."
P3. Good programmers are defined as those programmers that produce good products. If you focus only on coding speed or analytical skill, you may miss out on the best programmers.
P4. If you are paying your good programmer less than you know s/he is worth, count on them leaving soon. You will deserve the loss.
"Good programmers are up to 30 times better than mediocre programmers, according to "individual differences" research. Given that their pay is never commensurate ..."
Why is this a given? What are the forces that prevent pay from becoming commensurate with ability? Is this unique to the programming profession, or is it widely observed in other fields too?
How many run-of-the-mill, $50k average-or-less developers or programmers do you know?
Now, how many $500,000 programmers do you know? Outside finserv quants & algos, not many. But THEY can make millions and tens of millions.
Unfortunately, our industry tends to grant incremental compensation increases against geometric increases in performance and value. That is true in a lot of engineering fields, and the same can probably be said of design fields as well.
RE4. Even if 100-percent test coverage (see RE3) were possible, that criterion would be insufficient for testing. Roughly 35 percent of software defects emerge from missing logic paths, and another 40 percent from the execution of a unique combination of logic paths. They will not be caught by 100-percent coverage (100-percent coverage can, therefore, potentially detect only about 25 percent of the errors!).
Given the fanaticism for TDD today, this is a fact well worth remembering.
TDD is not about finding defects; it's about minimizing work (you code to pass tests and no more; as a side effect, you should always have 100% coverage) and about preventing working code from being broken by bug fixes, refactoring, or extension, because your existing tests will then fail. It's a guarantee that what works now will still work later, or a test will fail; it is not a claim that the tests will find bugs.
I wish all TDD proponents understood this. Coverage != Tested. 100% coverage does not mean you have exercised all possible execution paths through the code (see the sketch below).
Also, passing all tests does not mean there are no bugs. Assuming you have tests for all your requirements, passing the tests means only that the software can perform those requirements. It does not mean it will be bug-free, especially if users try to do things the requirements didn't anticipate.
TDD has the great benefits you mention but it is not the magic bullet so many of its advocates make it out to be.
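To make the coverage point concrete, here is a minimal sketch (a hypothetical shipping_cost function with pytest-style tests; none of this is from the book) where every line and branch is executed, yet a defect from a missing logic path survives:

```python
# Hypothetical example: the defect is a logic path that was never
# written, so covering the existing lines cannot reveal it.

def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 5:
        return 7.00
    return 7.00 + (weight_kg - 5) * 1.50

def test_light_package():
    assert shipping_cost(2) == 7.00       # exercises the `if` branch

def test_heavy_package():
    assert shipping_cost(10) == 14.50     # exercises the fall-through

# Both branches run: statement and branch coverage are 100%.
# Yet shipping_cost(-3) quietly returns 7.00 for an impossible
# package, because the validation path simply doesn't exist.
```

This is exactly the "missing logic paths" category from RE4: no coverage tool can flag code that was never written.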
From the context he seems to just be talking about code coverage as a metric, not TDD (though maybe he goes into more detail in the book?).
TDD claims to lead to higher coverage, but I guess its proponents would claim it helps you avoid missing logic paths too, since you start with tests and then write code, so you've always got high coverage. If you've not written a test for it, you can assume it doesn't work. The alternative is to write a bunch of code, then add just enough tests to keep a coverage metric happy.
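For contrast, a minimal sketch of that test-first rhythm (a hypothetical slugify example, not from the thread): the test exists before the code, so an untested path is a path you haven't written yet.

```python
# Step 1: write a failing test for the behavior you want.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2: write only enough code to make it pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")

# Step 3: each new behavior (punctuation, unicode, ...) starts as
# another failing test, so code without a test never gets written.
```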
I agree with the general thrust, but can we avoid pseudo-numeric language like "for every 10% increase in $NEBULOUS_THING_PROGRAMMERS_FIND_ANNOYING, $HARD_COST_NUMBER goes up by 100%"? Really, we can do better.
There was an article about this on HN a while back, but I can't find it right now.
He gives a summary of good points here. He has often given very practical advice about software engineering.
However, a minor quibble with "REU2. Reuse-in-the-large (components) remains largely unsolved, even though everyone agrees it is important and desirable": not everyone agrees that reuse-in-the-large is desirable. In Coders at Work, Knuth goes so far as to suggest taking apart other "reusable" libraries and rewriting them. Rumbaugh (of the Three Amigos) has said that reuse-in-the-large is overrated as a goal.
Yep, I'm going to pay attention to the ACM's stuff rather than the IEEE's. Look at the obsession with percentages and pseudo-mathematical formulas like this: "User satisfaction = quality product + meets requirements + delivered when needed + appropriate cost."
I get my first 0-point comment in nearly 2 years for that? You really think "frequently forgotten fundamental facts" was unintentional/necessary? It could just as easily have been:
"Fundamental facts" is a stupid phrase anyways since "fundamentals" is basically the same thing.
This was obviously a case where the author was thinking, "Ooh, I will look clever and poetic by starting off my blog post with a pointless tongue twister."
"I don't expect you to agree with all these facts; "
Well, not unless you provide links to references.
In fact, here's one fact that was left out: merely asserting something in a technical article does not make it a fact. It's opinion until you back it up with data.
I wish he had a higher profile (similar, say, to Spolsky and Atwood)...