Logical fallacies in software engineering (artur-martsinkovskyi.github.io)
242 points by gentleterror on May 26, 2019 | 145 comments



> Appeal to authority (argument from authority, argumentum ad verecundiam) – an assertion is deemed true because of the position or authority of the person asserting it.

You have to be careful with this one. When someone has enough credibility, sometimes you need to question yourself if you disagree. If you are playing chess and Garry Kasparov comes along and tells you what your next move should be, you can disagree with him, and you might even be right, but given his expertise in chess, his move is most likely going to be better than yours and it's worth considering (unless, of course, you are a chess master at his level). The same thing is true with software. Some people are really strong engineers with really strong expertise in a specific domain. If you follow their advice within their domain of expertise, you might not come to the optimal solution, but you'll probably come to a good enough solution. Of course, there are exceptions, but in my personal experience, I've seen more people mess up by not following experts' advice than by following it.


Good points. I feel software engineering as a field is growing a lot slower than it could, for a couple of reasons.

The first is what you mention: everyone ignores lessons from the past and tries to relearn everything from scratch, reinventing the wheel and best practices over and over and over, making the same mistakes again and again. Some communities also heavily encourage ideas like "experience doesn't matter", "best practices aren't a thing", etc.

The second (and part of the reason the first is a thing) is the number of people acting as experts who really aren't. Even big names and pseudo-celebrities in the field frequently have no idea what they're talking about. Once people realize this, they start questioning everything.

And thus, we're stuck in the software engineering stone age. Being unable to separate the good stuff from the bullshit is holding us back a lot.


May I add a third: the difficulty of creating a strong system of mentoring?

Many companies have a formal mentorship program but it is often in name only (at least in my experience at 3 companies).

In a formalized apprentice program, a worker can feel okay about not knowing things and asking the person for whom they are apprenticing. Despite formal mentors, I believe many junior devs do not have a person with whom they feel comfortable not knowing things. Moreover, plenty of senior engineers do not make good mentors (either they don't want to, or they don't have the strongest communication and mentoring skills).

You'll have a crop of engineers with 0-2 years of experience who don't feel comfortable not knowing something because they feel it can be a sign of technical weakness. Perfectly valid questions go unasked, and a young engineer may spend 3-4 days trying to answer something that their mentor could help them with in 15 minutes.

Great knowledge isn't transferred optimally in this scenario.


Oh man, I relate to this feeling tremendously. I am a junior dev on an apprentice program (just shy of one year's experience to date) and the times I have struggled with something because I let my pride, or fear of bothering people with inane questions, get in the way have been the lowest points of the last year. I always get to the other side wishing I had just talked to a senior dev - because more likely than not they could have saved me a stressful week.


A few random thoughts, as someone who has mentored many junior engineers:

- Don't worry about your pride. You're here to learn, so do everything you can to learn. Imagine even the worst possible scenario - people get fed up with you - you'll go and find a different job (benefits of being in a booming industry). Your old coworkers won't follow you, but the knowledge will.

- Don't think of all the time you spent looking for answers yourself as wasted. Some of it was, but you also learned tons of other things that may not be relevant now but will be in the future.

- Ask your mentors not only for direct answers to your questions but also for their reasoning and/or how they came to that conclusion. The most important skill for you to learn is how to think about problems and how to find solutions, not the actual solutions to the few problems you came across.


For whatever it might be worth, here is my advice to juniors about being mentored:

It mostly is true that the only stupid question is the one you didn’t ask.

Please don’t ask the exact same question twice. Take notes if you need to. Asking for clarification or following up for more detail is fine.

If you have many questions and your mentor has other responsibilities, try not to keep interrupting them all the time. Discuss the best way to balance your needs and theirs, perhaps finding a time/place/medium where you can have a larger discussion and deal with several questions at once.

Assuming your mentor has accepted that responsibility voluntarily, they’ll probably be happy to help even with the basic stuff — we all had to start somewhere — but will appreciate it not taking over their whole day.


Also - make sure they understand your goals instead of your hyperspecific question - a lot of time can be saved by avoiding the XY problem.

http://xyproblem.info/


In my experience people with great knowledge generally have an urge to pass it on and they appreciate self initiative taken by the person learning. I have mentored a few people during my job and generally feel that my time was wasted only if they don't take the initiative to learn themselves. Many a time I leave a crucial thing out of the answer and ask them to try to find it for themselves, as teaching is not about giving a ride but showing the way.

Over the years I have found that it's not helpful to go into too much detail when teaching, as people who are just starting won't be able to appreciate it and it may actually overwhelm them.


> In my experience people with great knowledge generally have an urge to pass it on and they appreciate self initiative taken by the person learning.

I wish my personal experience was the same, but it has been the polar opposite. In my current and previous roles, there is always too much work, which causes management to pressure development for deliverables, which results in juniors getting pushed aside. It's a terrible cycle and I have no idea how to improve it.


This is not necessarily the opposite. It is perfectly possible that the knowledgeable people you've worked with actually wanted to pass it on. They just weren't given the time to do it.

You should be angry at overtight deadlines and the companies that live by them, not at the actual people. This also points to a possible solution for you: switch companies.


It's much worse than 3-4 days of wasted effort. A bad design can get so baked in it's nearly impossible to correct.


As a counterpoint to that, in some cases technology is changing so rapidly that things which were best practices no longer are and vice versa. Things that we take for granted now, like CI/CD, were not even technically possible 20 years ago. If people weren't constantly challenging best practices as new technology came out then we'd still be pushing our CRs through change acceptance boards.

Or think about "the cloud," "infrastructure as code," "immutable infrastructure," all that stuff that's considered best practices now couldn't have even been conceived of when we were spinning up new servers by driving to the data center and dropping a new 1U in the rack.


Good point, absolutely! The line is really thin between industry changing stuff (CI, cloud, distributed version control, type systems that don't suck, etc) and reinventing the wheel through arrogance or just not knowing any better (yet another testing framework, screwing up some well studied patterns :cough Java Optionals cough:)


The graybeards among us might chuckle at the thought of calling it "distributed version control" when the CI system requires a centralized copy of the repository acting as a source of truth in order to work.


CI systems only need the "centralized" copy as a reference point for what the current master is. Everyone gets their own entire copy of that repository though. If the truth suddenly changes, it's known. I also know that if the copy in the "centralized" repository matches my own, then the CI is also running exactly my code. If the centralized machine fails I can fail it over to any copy on any machine. That seems pretty distributed to me as compared to the original "distributed" version: e-mailing source code around. Version control systems like SVN and CVS were centralized in that me having a copy of the source and you having a copy of the source indicated nothing about whose version is correct. Moreover, a failure of the central spot in the system is fatal unless you had backups as a form of pseudo-distribution, whereas in today's version control a failure on any one machine is irrelevant to the preservation of the system.


I'd add to that the predilection for hiring young engineers and overlooking older ones. While it's true that older ones can get stuck in past practices, they also recognize things that have been tried before, seem like good ideas, but just don't work for non-obvious reasons.

For example, implicit declaration of variables in programming languages.

A strong team will have a mix of older and younger engineers.


Language design is its own thing with its own body of knowledge and experience. Of course, oftentimes new languages were designed by those who could engineer their compilers and interpreters, leading to lots of mistakes that would be more apparent to a seasoned language designer.


Mentorship at the highest level (principal to senior and below) is desperately lacking. I vow that when I reach that level I'll pass it along as best I can; it's something I desperately searched for when I was starting.

I pair programmed for 3 months (which I consider apprenticeship / live action mentorship) with someone who was a way better engineer than me at my first "career job" out of college. I learned more in those 3 months about software engineering than in my previous two.


I'm writing the book on senior+/lead to junior/intermediate mentoring with my company. I've been floating my notes and mentoring guidelines around the department heads and other leads. I love it, personally. Department heads and managers see the value but are reluctant when they see that I'm willing to spend months with a junior, or spend hours a week with an intermediate. If I come across a particularly interesting performance problem, or something similarly challenging that requires a certain type of experience to get into, I'll schedule time to pair with 1-3 of my team members on it so that they can hopefully benefit from the experience. The real challenge, though, is that not everyone wants to mentor. I personally think that it's critical and a core part of my responsibility, but most of the other senior+/leads where I work are not engaging in much meaningful mentorship with their teams.


> I vow that when I reach that level I'll pass it along as best I can; it's something I desperately searched for when I was starting.

Mentoring is one of my favorite things. Finding people who want to be mentored is...challenging though. Probably a chicken-and-egg problem (one of the other posters mentions how people are put in a situation where they can't be open about not knowing things), but so many entry level engineers just pretend they know everything.

Completely agree with you though that this kind of dynamic would drastically improve the industry.


This is a salient point. I've personally found that most junior/intermediate people want to be mentored but it should always be done carefully and with consideration. I've had a couple of junior/intermediates, though, who, like you are saying, just did not want it. I think that most of them have moved on from the company but there is 1 person who is still around and who is now the type of person who has 5 years of experience but it's all the same year.


> I learned more in those 3 months about software engineering than in my previous two.

Do you mean you learned more than you did in 2 months of school? Did you drop "years" as the last word? Did you mean it wasn't a great experience because you could have learned what you did in 2 months instead of 3?


Part of the Stone Age is that the good stuff is already out there in books nobody reads, rather than fancy Medium posts.


Do you happen to have a list of such books off the top of your head?



Do you have any book recommendations?


Yes.

This is only one of those "must-read book" lists; what you can do is search for several of them and see which ones are quoted by almost everyone!

About "everything" I would rate the following books very highly, for their sheer usefulness:

- Thinking, Fast and Slow (really do the exercises!)

- then Neuroleadership, which describes the purpose of brain structures; it's somewhat broader than Kahneman

- A Mind for Numbers (includes very useful data points for knowledge workers)

- Distinction: A Social Critique of the Judgement of Taste

- Cutting Through Spiritual Materialism

About programming I would rate the following books highly, but at the same time it seems a bit fruitless to me to read about programming, rather than spending more time reading about psychology or sociology:

- The Economics of Software Quality

- Peopleware

- The Pragmatic Programmer

- The Mythical Man-Month

^you'll see that these books are hardly kept secret, it's just that people quote them but don't actually read them, or so it seems.


Part of the problem is the concept that programming is "engineering". It explicitly is not: it doesn't have a defined set of criteria for evaluating design or ensuring results.

"Doing Agile" doesn't qualify as "engineering", CMMI level <5 isn't engineering. Even CMMI level 5 is only engineering in a very small domain. Most shops claiming CMMI level 5 are only qualified in one particular aspect of software development.

So we're still in the "master/apprentice" and "journeyman guild" days of our profession. But universities and colleges are desperate to sell their 4-year "software engineering" degrees, so they cover up the lack of rigor with a mountain of bullshit.

So the first step to really establish software development as an engineering discipline is to stop fooling ourselves. Being able to "refactor my FactoryManagerFactory class" is not engineering.

When a programmer has to sign off and take liability for any failures like a PE does on a bridge is when we'll be able to say we are "engineers" with a straight face.


I would argue that we have people who already are software engineers. People who know the machine their code is running on very well, and use this knowledge to fully optimize for that specific machine (embedded, console development).

They take what they are given, and engineer a solution that fits that piece of hardware as well as possible. No bullshit FactoryManagerFactory.


This is a bit of the "fallacy fallacy" issue. Fallacies are mental shortcuts. Fallacies are wrong when you say "X is true because Y" where Y is something like an appeal to authority. But their use does not imply "X is false because Y is a fallacy". The move isn't good because Kasparov said it's good; Kasparov said it's good because it's a good move. Which means that if you asked him, "Why?" he would give you some reasoning, which is the real Y for why X is true. Granted he might appeal to authority by saying "Because my intuition likes it", but that's a good reason to think deeper on the move in order to come up with a grounded argument


> The move isn't good because Kasparov said it's good; Kasparov said it's good because it's a good move. Which means that if you asked him, "Why?" he would give you some reasoning, which is the real Y for why X is true.

Yes, if you have access to Kasparov, you can ask his reasoning and that will help you make a decision. Some moves in chess are only good moves if you get the next five moves exactly right, so they aren't suited to beginners.

But in real life, we don't usually get access to experts. I can look up Kasparov's games but I can't ask him his reasoning. I can look at code that Dennis Ritchie writes and see how he did something, but I don't have access to him to question him on his reasoning. But he's a very good programmer with deep knowledge and a body of work to show for it, so chances are that if he's doing something some way, it's probably not a bad way. On the other hand, if I'm writing something in C--and I'm no expert in C--I ought to give it serious thought if I'm torn between two options and I'm not choosing the one that Dennis Ritchie went with. There's a good chance that I'm not seeing something that he did.

And yes, even the best people make mistakes. But unless you have good reasoning to the contrary, you probably are not going to do better than experts in most decisions.


> Some moves in chess are only good moves if you get the next five moves exactly right, so they aren't suited to beginners.

This is important. A similar issue exists in StarCraft: good builds for pro players aren't necessarily good builds for regular players. If you can't macro behind double pronged harass, don't do it. If you can't watch your ramp to block adepts shading in, don't sit a probe at the top of your ramp.

Don't say "I do this because experts do this"; instead, study it to understand why they do. Otherwise you're going to end up cargo culting.

Better to do suboptimal well & with understanding than to try to do optimal poorly & be confused when it doesn't pay off.


I come across this one all the time, but it's usually in the form of "I know you're wrong, because person X made an argument that I don't understand, but this misunderstood summary of that argument says you're wrong".

Happens all the time with the old Donald Knuth chestnut about "premature optimization". Half the time I hear it, it's from people who don't want to optimize at all. http://www.joshbarczak.com/blog/?p=580


That Knuth quote has to be one of the quotes that is most often taken out of context to suit a current narrative.


I think this fallacy is the most widely misunderstood fallacy of all. In an argument between experts, appeal-to-authority is indeed a fallacy. If both people are indeed experts, they should be able to judge something purely on its merits. This is similar to the scientific method where debates are settled by logic and evidence, not seniority.

But if either party in the debate isn't an expert, then appealing to authority is the rational thing to do. If you yourself aren't an expert, then you lack the expertise needed to evaluate any arguments purely based on merit. You'd be far better off relying on expert advice, even if the person you're debating with has suggested something different and is more knowledgeable than you.

It takes a lot of objectivity and humility to know which fields you are and aren't an expert in. Overestimating your skills may feel good, but will only slow your progress and lead to bad decisions. The vast majority of the time, you're better off following expert advice.


>But if either party in the debate isn't an expert, then appealing to authority is the rational thing to do.

It might be the "rational thing to do" but it's not a logical argument -- which is what makes it a fallacy.

Fallacies can often be the rational thing to do. E.g. an "Ad Hominem Fallacy" might save your ass when the other person puts forward a perfectly logical argument, but to serve their own selfish reasons to your detriment. An "Appeal to hypocrisy" can serve to highlight when the person you debate with wants you to stop doing something (because e.g. they say it's bad) but keeps the benefits of continuing to do it for themselves (and thus gets the upper hand on you). And so on.

Still, like those things, "Appeal to authority" is not a logical proof based on the content under discussion (and axioms, logical steps, etc.), but a "because X says so" based on the status of person X (e.g. their expertise).

That's what makes it a fallacy. Whether what that person says is true or not, or even whether it's more likely true than not, is irrelevant.

In either case, it's not a logical proof (in the formal sense) nor a definite proof (the person with the expertise could for example be lying to further their interests, or mislead for some other reason).


We're arguing over semantics here. Most people interpret a "fallacy" as something one should refrain from. Ie, "appeal-to-authority is a fallacy, therefore we should refrain from it during the course of any discussion".

I'm making the point that often times, appealing to authority is most likely to produce the correct answer.


>I'm making the point that often times, appealing to authority is most likely to produce the correct answer.

That's true for most so-called fallacies. Instead, they are evolutionary shortcuts to getting the correct answer (at least in the kinds of situations we evolved for).


This is not true. There is a difference between logical fallacies and judgement based on probability. Circular reasoning, strawman arguments and actual appeals to authority are not the same as taking into account the likelihood that an expert is correct.


How does committing the strawman fallacy help in producing the correct answer?


The word "fallacy" as used in the OP seems to me to be a shorthand for "belief or opinion not backed by solid observational evidence and/or reasoning". It doesn't seem to apply to the case of constructing an argument in the logical or philosophical sense.


"Appeal to authority" is a fallacy in the case where someone asserts authority where they have none, e.g. taking a scientist's advice on a subject because they're a scientist, even though their specialisation is completely unrelated to the subject at hand.

Deferring to relevant expert opinion is really the only thing you can do for the vast majority of things, since you can only be an expert in so many things. Unfortunately, "appeal to authority" is sometimes interpreted in a way so as to discredit authority in general.


That’s not what it means at all. Of course relying on false authorities is bad.

Like all "fallacies" it's not that they are useless, it's just that you have to be aware of the traps. Even the best experts in an area often disagree, so you can't logically deduce that "an expert says X" means X is true.


You're right that the actual fallacy is using an appeal to authority as proof for a statement - that is wrong.

However, it's perfectly fine to defer to an authority as a working hypothesis when you're not proving something - unless that authority does not have relevant expertise.

I think my point is probably better worded by https://yourlogicalfallacyis.com/appeal-to-authority

> nor is it reasonable to disregard the claims of experts who have a demonstrated depth of knowledge unless one has a similar level of understanding and/or access to empirical evidence.

But that doesn't take away from the fact that you're entirely right that

> it is entirely possible that the opinion of a person or institution of authority is wrong; therefore the authority that such a person or institution holds does not have any intrinsic bearing upon whether their claims are true or not.


> > Appeal to authority (argument from authority, argumentum ad verecundiam) – an assertion is deemed true because of the position or authority of the person asserting it.

> You have to be careful with this one. When someone has enough credibility, sometimes you need to question yourself if you disagree.

This is true in all disciplines, and yet the fallacy exists—because there's a difference between "an assertion is deemed true because of the … person asserting it" (emphasis mine; a fallacy) versus "an assertion is given more weight because of the person asserting it" (a perfectly reasonable shortcut, as long as the assertion is still given some vetting).


This is a bad proposition and I'm afraid a wrong one. When you disagree with an expert, the expert should be able to convince you why s/he is right or at least able to articulate the reasoning. It's not a chess game where you are an opponent to the expert and the next move is supposed to be secret. The relation between you and the expert is of collaboration, training and dissemination of knowledge. If an expert fails to articulate the reasoning, I take that as a sign that advice is merely because "it would work because I saw it worked before". That often means the expert failed to analyze and learn from his/her success and is just proposing the status quo. It is often better to question this kind of expert advice and that's how progress has almost always been made throughout human history.


You also have to be careful about who the authority is and how they got to be an authority. Garry Kasparov obviously got to his place via a metric which is hard to argue against; you don't know how a senior dev got promoted to be a senior dev. I know a senior "engineer" - but really a dev - who's rewritten this one piece of software, first in Go, then in C++, then in Go, over the course of a year. There are no unit tests and no integration tests. Documentation is spotty. A second senior who has a PhD in engineering and no Go experience started work and she immediately found several race conditions and complained that the code was written in a fashion that was provably incompatible with concurrency.


>If you are playing chess and Garry Kasparov comes along and tells you what your next move should be, you can disagree with him, and you might even be right, but given his expertise in chess, his move is most likely going to be better than yours and it's worth considering

If you already had a plan to win the game, but have no idea how to continue Kasparov’s line, then it doesn’t matter how strong his move is, you’re just going to be shooting yourself in the foot by making it.

In technology though, things are different. You can precisely quantify the strength of a chess move, you usually can’t even imprecisely quantify the strength of most advice about technology. In my experience, experts tend to be no better or worse at separating their own personal opinions from more objective truths.

In any case, the appeal to authority shouldn’t be necessary. If an expert is right about something, they should be able to explain to a non-expert why they’re right.


yes and no. chess is a very old, well studied game where one can improve w/ work over time. The rules don't change and people normally agree on what are good/strong moves vs weak moves.

now, if the size of the board, how pieces move, and the goal of the game changed every 3 months, and Kasparov came in and told you that your next move is wrong, it would be a pretty hard sell. Also imagine this Kasparov fellow is into manufacturing a certain type of piece that he wants you to aggressively use to "develop" your game.


No. Absolutely not. If the person is actually an incredible authority they will have no trouble making a compelling argument or providing strong evidence.

This is never confusing unless you lack objectivity.


> If the person is actually an incredible authority they will have no trouble making a compelling argument or providing strong evidence.

This is false in many contexts.

For example with Chess, a super GM may tell you your move is bad because they have calculated a ton of possible continuations from your move and you end up in a losing endgame. If you can't follow their calculations and/or don't know enough endgame theory, you won't be able to tell if what they are saying is correct.

Similarly with, say, cryptography: if you don't understand enough about the field and the state of the art, you may not be able to tell how some encryption scheme might be broken.


That is called bias. Again, this matter is simple when objective. People often give into appeals of authority and other fallacies because they lack the critical nature to make original decisions and instead defer to name recognition.


If this were true, then developers would have no problem selling things like unit testing or reduction of technical debt to business folk. Sometimes it's hard to explain something to someone who isn't on your level with a specific skill.


If you lack the ability to craft a persuasive argument then provide strong evidence. To an objective outsider giving into logical fallacies would make your argument weaker, not stronger.


It's just a logical fallacy because it can't be used as a concrete building block to advance a logical argument forward. Doesn't mean it's not useful, it just means that you're no longer logically air tight.


it depends greatly on how Kasparov tells you what to do: whether it's "do this because I'm Kasparov", or "do this, because your next move could be this and it gives you this opportunity". same with software. someone telling you to do something "because I'm a senior engineer" is an appeal to authority.


If I were playing chess in the park and Garry Kasparov walked by, saw a move that I was about to make and said "that's a mistake, I'd recommend against that", then I'd be an idiot not to strongly reconsider my move, whether he explains himself or not. Sure, he makes mistakes, but Kasparov is unimaginably better at chess than me and the chances that I make a mistake and don't see something that he does are much, much greater. With chess though, it's hard to overrate yourself because you can see exactly where you stand. With software engineering though, it's very easy to overestimate your abilities.


The "nirvana fallacy" is definitely one I commonly see on Hacker News. The ease with which people jump from "thing x is imperfect" to "thing x is worthless" is mind-boggling. This is usually coupled with familiarity bias, where the flaws of any existing technology are discounted vs. those of a new technology.


Also frequently combined as "Thing X is known to be imperfect, so thing Y which we've never tried before must be better" D:


This touches on another thing, which is “Thing X is known to be imperfect, Pat says thing Y is better. Nobody else on the team has tried Y but we have all these aphorisms like ‘no silver bullet’ that lead us to think Y can’t be that good.”


When somebody presents me with a new shiny solution to a hard problem describing it as a silver bullet, I immediately become suspicious. So, it depends a lot on how good Pat says that Y is.


Do these people actually say, "this is a silver bullet", or is that your interpretation? Does this mean that if Pat undersells you on Y you might be more receptive?


It's not about underselling, it's about showing awareness that there are always tradeoffs. I'm much more receptive when they did their due diligence about those and the risks for the project.


Y is not necessarily better than X, but perhaps you should look to see whether Y is actually better. (Or maybe in some ways it is better and in other ways not so much. Or maybe it is better for what they are doing but not for you.)

Also, sometimes something that is not perfect can be improved without breaking it or making it too messy, but sometimes not.


A fallacy I've noticed often in sw engineering, and which might fall under the nirvana fallacy definition, is the assumption that there is one best tool for doing things - the one that supports the largest number of use cases.

A metaphor I like to use is that of the software engineer buying a tank instead of a car: it fits more people, drives on all terrain, is basically indestructible, and in case an enemy is blocking the road you can always shoot it from the turret. Of course, it's horrendously expensive, wildly impractical, and you have to demolish your garage and rebuild it to fit it in, but hey, that was the best choice.


I don't really take it this way. For me the 'nirvana fallacy' is that if we design a bespoke system, in the limit we will save work and end up with a better product than if we found a tank and a car, welded them together, and fixed them up.

What's not mentioned in the article is the 'never coming up for air' problem. Maybe building a custom ruggedized car is worth the time investment - but the problem is that once your people get into that mode they always find a new and interesting direction to take it, and never end up installing the seats and the taillights.

I still believe that an 80% implementation of some idealized system generally wins over 'just mash together some stuff to make a demo and we'll deal with the fallout later'... but since no one else does anymore it's kind of moot.


>A fallacy I've noticed often in sw engineering, and might fall under the nirvana fallacy definition, is the assumption that there is one best tool for doing things

AKA "We must use git." :-)



But a tank is so much more powerful.


Not to mention the regulatory issues.


If everyone just buys tanks, eventually they'll have no choice but to change the regulations.


I'm CONSTANTLY lamenting this exact tendency https://twitter.com/ardave2002/status/1128827634375249920 and now if you don't mind, I'm going to tweet-quote the way you've presented it here.


I had a few double takes where "flawed" was used as a takedown for entire concepts and their backers. What in this world isn't flawed?


Sometimes a helpful antidote to the nirvana fallacy is to choose a solution which doesn't preclude perfection.

It's one thing to make a decision which blocks you off from ever solving the problem the really correct way. It's another thing to not achieve perfection now but leave open a path to going back and improving certain parts later so you eventually get there.

For example, maybe one component of your code uses an inefficient algorithm, but you isolate that so that later you can create a drop-in replacement that is optimal.
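
To make that concrete, here is a minimal C++ sketch (the names and the choice of a sorting component are made up for illustration): callers depend only on a small interface, so the naive implementation can later be swapped for an optimal drop-in replacement without touching any call sites.

    // Hypothetical example: the rest of the code talks only to the
    // Sorter interface, never to a concrete implementation.
    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    struct Sorter {
        virtual void sort(std::vector<int>& v) const = 0;
        virtual ~Sorter() = default;
    };

    // Good enough for now: O(n^2), but trivially correct.
    struct InsertionSorter : Sorter {
        void sort(std::vector<int>& v) const override {
            for (std::size_t i = 1; i < v.size(); ++i)
                for (std::size_t j = i; j > 0 && v[j] < v[j - 1]; --j)
                    std::swap(v[j], v[j - 1]);
        }
    };

    // The future drop-in replacement lives behind the same interface.
    struct FastSorter : Sorter {
        void sort(std::vector<int>& v) const override {
            std::sort(v.begin(), v.end());
        }
    };

    // Call sites never change when the implementation is upgraded.
    void process(const Sorter& s, std::vector<int>& data) { s.sort(data); }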

For me, at least, it's mentally freeing to just take a little time to verify I haven't painted myself into a corner. Once I know that, I'm satisfied and can more easily move on to build the imperfect system I need to build now.


1000% agree. My team is in the middle of building a new system, and it's gotten to the point where I've found myself saying the phrase "let's not paint ourselves into a corner" 2-3 times a day. An additional benefit is that you get to test your solution in the real world sooner, shaping your perception of perfection (and therefore the list of needed improvements) in production rather than on paper.


The concept of "fallacies" is somewhat overdone, to the point where I'm tempted to coin the "fallacy fallacy".

They are used far too often to shut down a debate: Oh, you want to do this yourself? Not-Invented-Here Syndrome! Oh, you want to buy from a reputable vendor? Appealing to authority!

I believe the "appeal to authority" fallacy is especially misguided: it's impossible to verify every statement we rely on from first principle. At some point, when the New York Times has an article highlighting the benefits of vaccinations, while @TheyAreTryingToKillYou241234 on twitter says the MMR vaccine will cause your kids to grow a second head, it is perfectly fine, or even necessary, to consider the source of some information as evidence in evaluating it.

This mechanism, of evaluating sources over time and establishing trust, is so pervasive we tend not to even notice it. You are far more likely to hand your car keys to your spouse when asked than to a random stranger.


Fallacy fallacy exists: assuming that because a reasoning contains a fallacy, its conclusions must be wrong. They're simply unproven.

https://en.m.wikipedia.org/wiki/Argument_from_fallacy

https://existentialcomics.com/comic/9


Appeal to authority is a fallacy exactly because you can't trust the New York Times just as you can't trust @TheyAreTryingToKillYou241234, but you do. An authoritative opinion is not evidence.


Trust is not a binary thing in this context. It is OK to believe that one source is more credible than another, and as such is more useful for guiding an opinion, while still being aware that it might be wrong or misleading. This is inevitable in a society that deals with too many complex issues to be an expert on everything yourself.

An extreme example: I do not, as far as I am aware, know any of the contributors to the most recent IPCC report, nor am I qualified to independently assess its correctness or able to reproduce its underlying experimental evidence. Does this mean I should not be concerned about the effects of global warming? No, because in the absence of any better sources, the IPCC report is the most credible material I have available.

The key thing here is that I am trusting the authors to be authorities in their own field of expertise. The appeal to authority fallacy is more like trusting Prof Sir Cumulus McCloud, who is a Nobel laureate for his research into a mechanism for global warming and the lead author of a large section of the IPCC report, on the best way to make cupcakes.

(That is, while it is fallacious to assume absolute truth based on authority alone, granting some credibility in proportion is not unreasonable. A different type of fallacy, sometimes called appeal to false or undeserved authority to distinguish it, is relying on authority in one field to establish credibility in another.)


Setting impossible standards only keeps you believing what you want to believe and what you already believe. It's logically equivalent to blind faith, you believe what you want to believe rather than what has most (even if imperfect) evidence.

You are forgetting that your status quo belief also has to come from somewhere, and its quality of evidence is probably worse than those alternatives you are attacking for not being good enough to change your belief. If it was better you would present it.


So how are you supposed to make an informed choice on vaccination? You can't trust the NYT, apparently, and probably not the New England Journal of Medicine, either?

Do you just count the number of sources supporting the different views? That would seem to be some other, equally bad, form of fallacy.

Is there any sort of evidence that is valid in your critical eyes that can be transmitted electronically? Or does anyone faced with the choice to vaccinate their children have to run their own randomised clinical trial? That would seem prohibitively cumbersome, and also logically inconsistent because you would have to find a few thousand people willing to participate in your trial who shouldn't trust you not to poison them, by your logic.


In simple terms I'd say there are authoritative opinions and there is evidence presented by authorities. You may trust some authorities to provide you with all the necessary evidence to form an opinion, seek evidence from multiple independent authorities, etc. and form an informed opinion. But you cannot trust an authoritative opinion in a newspaper. Newspapers exist to make people trust them, to appeal to authority so they can influence people's opinions arbitrarily any way they see fit and sell that influence, pretty much guaranteeing you are not exposed to an informed opinion.


I don't quite follow the distinction you're making. A paper being published in a reputable journal is almost by definition relying on its authority. Even if they allow you to download the underlying data, it won't contain any personal identifying information of trial participants that you could verify.

> Newspapers exist to make people trust them, to appeal to authority so they can influence people's opinions arbitrarily any way they see fit and sell that influence

That just reads like some conspiracy theory. Newspapers get people to trust them by being reliable. I don't need to evaluate every single story they run when I have done so with some subset and usually found them to be correct.


> Even if they allow you to download the underlying data, it won't contain any personal identifying information of trial participants that you could verify.

Yes, that's a problem. That's why you need multiple independent authorities to get to a usable level of trustworthiness of information.

> Newspapers get people to trust them by being reliable.

If you are an expert in any subject a newspaper reports on, you should be able to find plenty of evidence that it doesn't report anything the way it actually is, from your expert point of view. In the best case it reports one tiny thing correctly and ignores all the other facts important to forming an informed opinion.


> ...solutions to problems are rejected because they are not perfect...

I have seen the flip side as well... dismissing an idea as an instance of "too academic" or "perfectionist" problem solving. It's actually pretty hard to take the time to understand someone's thoughts well enough to differentiate between a complex but real flaw and an aesthetic preference.


> Some engineers may use such fallacy, exposing all the technicalities of their job to cover incompetence and justify failure in rolling out feature in time.

This is certainly common, however I see more often that engineers are simply not convinced by logically sound arguments that are in fact correct and true, so you’re forced to use other methods such as authority since they’re not operating completely on reason. Despite the stereotype of engineers being logical, it’s more they’d like you to see them that way, but are really maybe just 2% more logical in their reasoning than anyone else.


>This is certainly common, however I see more often that engineers are simply not convinced by logically sound arguments that are in fact correct and true, so you’re forced to use other methods such as authority since they’re not operating completely on reason.

Can't speak to your case, but the reality is that most SW engineering principles are opinions and not facts - even simple ones like whether we should expend effort on making code more readable. As such, there can exist multiple logically sound arguments that give opposing conclusions. Then they start to appeal to authority.

In my experience, people don't appeal to authority for factual things like "your loop could potentially never terminate under conditions X." (fact) They appeal to authority for statements like "In C++, you should never inherit from a class that does not have a virtual destructor." (opinion)


> Can't speak to your case, but the reality is that most SW engineering principles are opinions and not facts - even simple ones like whether we should expend effort on making code more readable.

Winner winner, chicken dinner.

Some of these things COULD one day be facts, but essentially no one has done the research to determine the fact form of the opinion.

"In code of type X maintained like Y, structure Z has been identified to cause a higher defect rate and increased lifetime maintenance costs compared to ZZ" could be a true fact, but seldom do we have any results from rigorous study.

Consistency certainly appears very useful, so appealing to some authority's opinion just to get a consistent outcome is probably a reasonable thing to do in many cases. Unfortunately, many people argue for these opinions by first confusing them for facts and defending them as such.

We'd often be better off -- in matters of code style or similar -- if teams were more ready to use and accept arguments of the form "as far as anyone knows, option X is at least as good, objectively speaking, as any other option, and for consistency's sake we might as well adopt it since {our industry, our team's most prolific reviewer, our existing codebase, etc.} prefers it".


Good example, was that from Effective C++? Widely repeated, yet wrong.


>Widely repeated, yet wrong.

Not right, either.

Personally, I wouldn't inherit from a class that didn't have a virtual destructor unless I had a good reason. The point isn't whether it is right or wrong. Stating it as wrong is as problematic as stating it as right. The point is that whether it is OK to do it or not is an opinion, not a fact, and should be discussed as such.


You are right; I was trying to say the "never" part is wrong, but of course there is a good reason behind the advice; it's not pure bullshit.


I am not following. Sure you can, sure it may work now, but it seems like something that is a recipe for future bugs being created?


It only causes problems if you try to delete it via a pointer to the base class. If your base class doesn't have any virtual functions then it's very unlikely that anyone would ever write code which tried to do that anyway, so it's not an issue.

A base class with virtual functions but a non-virtual destructor is what'll get you into trouble down the road.
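
To illustrate the failure mode, a minimal sketch (the class names are made up):

    #include <string>

    struct Plain {                // no virtual functions, non-virtual destructor
        ~Plain() {}
    };

    struct Derived : Plain {
        std::string payload;      // has a destructor that really needs to run
    };

    int main() {
        Derived d;                // fine: destroyed as a Derived

        Plain* p = new Derived{};
        delete p;                 // undefined behaviour; in practice ~Derived()
                                  // and the std::string destructor never run
    }

If you do intend to delete through the base-class pointer, declaring the destructor as "virtual ~Plain() = default;" makes the delete well-defined again.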


What you described is called cognitive conservatism, which is a common defense of Dunning-Kruger rockstars.


"both at perception, dexterity and mind work"

Off-by-one error ;-)


Isn't this correct in English? The equivalent on my first language (Portuguese) is very clearly wrong, but I've seen the English version in more than one book.


"Equally" would be a better instead of "both" if they are, indeed, weighed equally, or just skip "both" altogether and maybe add an "as well as" before the last item on the list: https://english.stackexchange.com/questions/22357/equivalent...


"Both" refers to two. If you're referring to the the lack of a serial comma, it isn't "incorrect" in American English but should be, and most style guides recommend using the serial comma.


> Isn't this correct in English?

I, as a non-native English speaker, would write

"thrice at perception, dexterity and mind work"

(instead of "both at perception, dexterity and mind work"), but I am not sure whether this is overly formal (it is, in my experience, often the case that when you try to translate how you would express something in German into English, you land at something overly formal, since in German the commonly used language register is more formal than in English).


Thrice doesn’t really fit the sentiment of the sentence there. It would be best to replace “both” with nothing in that sentence when moving from two to three items.

“Both” itself isn’t necessary when dealing with two items (not that it hurts, IMO). It could serve a rhetorical purpose if you particularly wanted to emphasize the equal relationship between the two items. E.g., if one item might normally be perceived as significantly more important than the other, “both” is a way to address the surprise or skepticism the reader may feel seeing them placed together equally.


An American native speaker would certainly understand and not raise an eyebrow at using “both” in this context, even though technically it means two not three.

Using “thrice” is sort of archaic and formal (like you mentioned) and people would maybe make a good natured joke about being “fancy” if you used the word “thrice” often in conversation.

Oddly though twice has none of the baggage of thrice, and is used all the time. My four year old knows and uses twice, but doesn’t know what thrice means.


It's definitely wrong. But it is blasphemous to even believe that there can be right or wrong in such matters.


It's definitely not technically correct. "Both" in English unambiguously refers to two things.


One dictionary says "two or more" and even cites a Samuel Taylor Coleridge example with three things:

From https://www.merriam-webster.com/dictionary/both :

> both conjunction —used as a function word to indicate and stress the inclusion of each of two or more things specified by coordinated words, phrases, or clauses prized both for its beauty and for its utility he … who loveth well both man and bird and beast — S. T. Coleridge


I interpret this as using the binary operation "both" in a reduction over a list of three items. Kind of like how you can add a list of numbers through e.g. (PLUS 1 2 3 4) in LISP, although addition is actually a binary operation.


Needs my favorite, "age = wisdom & knowledge".

"Bob's been here for fifteen years and if he says java applets are still the way to go then they must be."


Appeal to authority - mentioned in the article.


It's not strictly identical, as it introduces ageism.


True. But age is used as a good proxy for determining authority.


I'm constantly fighting the sunk cost fallacy with managers, I'm surprised it hasn't been mentioned, or maybe it has been but in a different form?


"Individuals commit the sunk cost fallacy when they continue a behavior or endeavor as a result of previously invested resources (time, money or effort)"

WRT software projects, what's the opposite? When I bring up sunk cost, the assumption is that I'm lobbying for wholesale rewrite. In some (not all) cases I am - take the lessons learned, maybe some algorithms, data, whatever, and rebuild.

The standard rejoinder to that is "what makes you think you're so much smarter/better than the previous team?" Well... sometimes I maybe am. Or... the strength of the current team is just... more demonstrable? Or... there's no one left from the previous team that even understands the previous system, so every day spent is often wasted in just trying to learn what they did, only to find out... hey - they really just had no clue what they were doing. We've all seen this (and, early on, I likely was in the position of the offending party a few times).

But sunk cost fallacy is a real thing. How do you combat it?


Wholesale rewrites often have huge future costs, so I don’t think dismissing them is a sunk cost fallacy.


That's not what I was suggesting. I was suggesting that dismissing any calls for any change (whether it's a whole rebuild or any degree of aggressive change) is often justified with "but we've already paid $x for this project". Couple that with then also complaining about further costs, and it just grinds my gears.

Probably 1/3rd of the projects I've worked on over the last 20 years were software systems that were objectively/measurably poor, and probably half of those warranted a rebuild, and genuinely not for 'flavor of the month' reasons. You can't cheaply and safely, for example, run a business on FoxPro/DB2, relying on one person in a different country with all the source code, running in a virtualized Windows XP environment on Windows Server 2000, while also wanting to modify your business processes and grow your business. I'm not saying it can't be done, but the costs and risks outweigh whatever benefit you've already bought and paid for over the last 20 years.


You clearly lay out the future costs of both options in a way that demonstrates you aren't just unaware of how much work will go into the change.

That said, rewriting a system you don't understand is pretty hard to do successfully.


the problems aren't usually technical as much as political or 'business' generally (though sometimes technical stuff presents challenges, certainly).

There's almost always an assumption, I've found, when coming in to existing projects, that they "really" work a certain way and maybe there's just "a bug here" or "some small fix needed over there". Sometimes that's the case, but more often than not, even after those things are 'fixed', nothing actually does what the people in charge thought it was doing. They've sometimes fundamentally misunderstood what they'd required/specced out with a previous team, or, in some cases, the previous team really didn't know how much they didn't know, or in a couple of cases, they were simply lying for months on end, cashed checks, then left.

"rewriting a system you don't understand is pretty hard" getting a team to successfully spec out what they need is pretty hard, imo, regardless of whether it's greenfield, rebuild, or clarifying expected behaviour in a current system. The big challenge I've come across with "incremental changes to existing systems" happens when it really doesn't do (and never did) what people thought was happening.

"No, this isn't calculating daily compound interest - it's calculating monthly."

"No, the reports aren't able to be regenerated from previous points in time because historical information was never stored anywhere - you simply now don't have 4 years of data you thought you had - the previous reports you were told were being regenerated were just being pulled from a cache."

Fixing problems like above may mean battling through lots of bad code and many many contentious discovery/planning/testing meetings, and... the result is now someone has a 'functioning' system running on codeigniter 2 with hard dependencies on php 5.2.

Years ago I faced a situation with a Rails app which "just needed a few modules finished and a few tweaks". It was done in Rails 1 and had insane amounts of runtime monkey patching. 3 different Rails shops I went to indicated it was not worth the time/effort to try to 'upgrade' what was there. They'd rebuild greenfield and salvage what bits they could from the previous project.

Many of my experiences have been with smaller organizations (less than 50 people - sometimes less than 10), and there's rarely a full team of competent IT people who can review/vet/verify. "Never rebuild from scratch" - it's not bad advice, but it has a lot of assumptions that aren't always true.


> But sunk cost fallacy is a real thing.

The fallacy is a real thing, but that doesn't mean that it is always applicable. Decisions are based on many factors and most of the time, "We've invested a lot of money into this" isn't the only one on the table, even if it is the only one they are saying out loud.


>"what makes you think you're so much smarter/better than the previous team?"

You hopefully learned from the problems they encountered, which they likely didn't know about when they started.


The problem is that most fallacies are rooted in cognitive biases. And even when we are aware of them, they are unconscious. So we can't avoid them. But we can try to mitigate them[1].

[1]https://en.wikipedia.org/wiki/Cognitive_bias_mitigation


I'm late to the conversation, but one thing I see causing a lot of unnecessary complexity is applying the same ideas and concepts at different levels of abstraction because it "intuitively" feels right. I'd like to call this "appeal to symmetry" or "appeal to completionism". Example:

* You have a CI/CD pipeline deploying an application artefact though test->staging->prod.

* You update your pipeline to treat the deployment script itself as an artefact that gets promoted as well, because hey — shouldn't our test code also be tested? And it's just a couple of configuration changes.

* You say "well everything else is an artefact so how come our pipelines aren't?" and start implementing CI/CD for your CI/CD for your CI/CD.


https://en.wikipedia.org/wiki/Argument_from_fallacy

Increasingly, I've seen blind dismissiveness, driven by awareness of usually helpful but sometimes harmful cognitive heuristics, become a source of mistakes in reasoning.

Human reasoning has a multitude of biases and approximations that evolution has preserved in our behavior. They are not always in our best interest, but nor is our alternative behavior when we try to act to contradict them.


Many logical fallacies I encounter as a developer are actively exercised so that a developer can save face or justify their existence through appeals to the ignorance of business leaders.

Some I have noticed as a JavaScript developer:

* Appeal to Popularity, a modification of appeal to authority. Essentially this says that if a tool or approach is popular it is correct. The common assertion is that popular approaches must be correct as many eyes would have found and reported any problems. This fallacy also explains why horrendous, highly visible bugs can persist in exceedingly popular applications for long periods of time, because developers simply believe the strength and wisdom of the community are enough to identify, report, and resolve any issue in a timely manner. Software isn't magic and isn't magically updated or repaired because it's popular.

* Invented Here. This is the opposite of the more well known Not Invented Here, which asserts that developers are irrationally afraid to write original code. The common assertion is that any original code whether from you or your coworkers is a horrid mistake, because somebody else has done it better. The common defense is that developers are standing on the shoulders of giants.

* Toolsmith Fallacy. This is an extension of the above mentioned invented here fallacy in that use of tools is good and the use of more tools is likely better. This line of thinking is how we end up with stupidity like left-pad and how a simple 30 line tool could have hundreds of dependencies whose total weight exceeds hundreds of megabytes. This fallacy is also dangerous when it asserts that dependencies don't need to be tested since the assumption is that they are better than original code.

* Student Fallacy. This asserts that the approach you learned first is the most correct. It serves as a bias against different ways of solving a problem, so that a developer can maintain confidence and save face when criticized or when they encounter something they don't understand.

* Big Fallacy. This asserts that when something becomes large or poorly understood it should be dismissed out of hand. It is essentially a bias against reading code, which is the required first step before refactoring any given approach into something more comprehensible. This fallacy completely ignores the fact that a given artifact may do the job perfectly well, with great performance.

* Framework Fallacy. This is an extension of the toolsmith fallacy: it says a given result cannot be achieved without a particular tool, so long as some tool claims to address the problem.

* Assembly Fallacy. I have not seen this one in a while, but when jQuery was really popular any attempt to write code without use of jQuery was immediately compared to writing assembly.

* DOM is slow. Most people making this claim don't really seem to know what the DOM is. Here is the DOM that modern web browsers use: https://dom.spec.whatwg.org/ This one is highly technical and makes many people sad, so I won't go into this here.

* Open Source Is My Personal Gift. This fallacy asserts that free software is a gift provided just for you. Open source software, like any software, takes the effort of a person writing it. If you want a new feature, submit a pull request. It is astonishing to see the hostility that arises when this is mentioned.
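To illustrate the toolsmith point, here is roughly what a left-pad-style utility amounts to, as a hedged TypeScript sketch (the leftPad function is invented for illustration and is not the actual left-pad source). Modern JavaScript even ships String.prototype.padStart, so such a dependency buys essentially nothing:

    // Roughly what a left-pad-style dependency does: a few lines of string padding.
    function leftPad(input: string, targetLength: number, padChar: string = " "): string {
      if (input.length >= targetLength) {
        return input;
      }
      return padChar.repeat(targetLength - input.length) + input;
    }

    console.log(leftPad("42", 5, "0"));  // "00042"
    // The built-in equivalent, available since ES2017:
    console.log("42".padStart(5, "0"));  // "00042"

Pulling in a dependency tree for something this small is the fallacy in action.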


Student fallacy is also called Baby Duck fallacy, because you imprint on the first thing you see. To this day I have no idea whether Git actually sucks or I just saw Mercurial first.


Git sucks in many different ways. It's also great (much better than Hg) in many other ways.

So you'll see people fully convinced that one or the other is superior, and both camps are justified.


Git actually sucks - but it's got the right kind of brain damage to really appeal to me. E.g. most VCSes - including SVN, which I started with - treat branches as just folders. This is a clean, elegant abstraction with low mental overhead. But I still prefer the mess that is git's branches/tags/annotated tags/stashes; it works much better with my ex post facto treatment of branches, and doesn't require me to constantly reopen things as I swap around between branches. (EDIT: This is in fact the main reason I standardized on git when I decided SVN wasn't cutting it anymore!)

The command line interface is an incoherent mess by any objective standard, and I still have to routinely read the docs for it. The GUIs are even worse, and scale even worse than the vanilla git command line. Partial workarounds for its poor scaling, like git lfs or git annex, are awkward to use, made worse by GitHub's LFS quotas/caps.


> Misleading vividness – involves describing an occurrence in vivid detail, even if it is an exceptional occurrence, to convince someone that it is a problem; this also relies on the appeal to emotion fallacy. Some engineers may use such fallacy, exposing all the technicalities of their job to cover incompetence and justify failure in rolling out feature in time.

I think experienced co-workers can tell whether you are exaggerating or telling the truth. I've found that some people have a tendency to elaborate on things in an emotional way. This is especially common in junior devs. In my experience, most senior devs tend to be more reserved and careful about their words.

What do you guys think?


I think too much experience can also cause this. I see it from time to time when we design new systems or products: everyone is optimistic about the idea, but the more senior people you have in the room, the greater the chance that someone paints a vivid picture of how everything could fall horribly to pieces if we hit the one corner case they saw in a slightly related project two decades and five companies ago.


No true Scotsman:

- We followed agile methodology and failed miserably.

- Then you weren't using real agile.


I come across the reverse version, too. I mean, the fallacy of labels - situations just like your example, but where (say) Agile methodology was INDEED not properly followed, despite being branded as such.


Agile is a set of values, not a methodology. You're thinking of scrum or extreme programming or kanban or something more specific.


I really hope this was an attempt at deadpan humor given the grandparent of this comment.


Nope. The Agile Manifesto is a list of values. There's no method to misapply.

The real irony is people claiming to "value individuals and interactions over processes and tools" turning around and telling engineers that the problem is their processes and tools.



Chesterton’s fence is one I come across a lot. People see some confusing or less-than-ideal code and immediately want to remove it, without understanding its purpose. I like stating it as “don’t assume the previous developers were idiots.”

https://en.m.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fen...


The Historian's fallacy cuts both ways. Often the legacy code is convoluted because there's a subtlety that isn't immediately obvious.


The real world is far too complex to reject an idea outright simply because the way it's presented checks a logical fallacy box. Unless the person presenting it to you has some ulterior motive, please consider that they are taking the time to explain something to you because they genuinely believe you might benefit from it.


If you're interested in logical fallacies in general, have a look at my "Logical fallacy of the day" (via RSS, HTML or JSON): https://tips.darekkay.com/html/logical-fallacies-en.html


I’m not sure how useful fallacy theory is when applied thus. We’re all quite happy to accept induction (well, software engineers probably should be), and so the use of formally incorrect heuristics is not necessarily a bad idea.


Not specific to software engineering, but there's a very interesting book about logical fallacies and the tricks our brains play on us: Thinking, Fast and Slow, by Nobel Prize winner Daniel Kahneman.


A millennium lasts for 1000 years. Early versions of brains came about around 600 million years ago.


404?



> Appeal to Authority .... When explaining practices or opinions on some subjects of sofware development, project management, operations, e.t.c, people tend to use somebody elses saying, blogpost, conference talk or other claim as a foundation for justification of their own decision.

I consciously do this these days because other people are gullible.

I consciously know "well, this isn't balanced", but there is no need to put a disclaimer on that; it just wastes your time and confuses the recipient.

Appeal to Authority works.


Define "works". Appeal to authority destroys engineering, scientific and critical thinking, and decision making. You can't appeal-to-authority your way past subject-matter experts; they might even openly criticize you for your bullshit.


>Define "works". Appeal to authority destroys engineering, scientific and critical thinking, and decision making.

Appeal to authority is a tool. Whether it improves or worsens things depends on how it is wielded.


> You can't appeal-to-authority your way past subject-matter experts; they might even openly criticize you for your bullshit.

They might, and they might also ignore you if you come without a source, because they are so focused on discrediting everyone that's not themselves.

> Define works.

It gets people to agree with your line of reasoning more quickly, by presenting that reasoning as something more than just your own independent thought: you simply find something on Google that agrees with you.

I know that search engines mostly just agree with you. Other people don't.



