Your statement is correct but misses the point (nibblestew.blogspot.com)
442 points by notRobot on April 13, 2020 | 257 comments



That's the problem with being pedantic. Pedants are correct, but they're not curious, and they don't care about learning something because they lack humility (humility is difficult to learn and rarely encouraged).

A discussion, an argument or a debate is not meant to be some sort of confrontation quiz, where people who demonstrate they're right can win an audience. A real debate, argument or discussion is when an audience can listen, and ask questions to better understand what everybody is trying to say. A debate is a distributed way of thinking. Human thought only exists through language, un-worded ideas always die.

This mentality stems from how people view how democracy works. The Socratic method thoroughly shows that having an argument is about helping each other communicate their points of view. And this is exactly how states like Russia are able to undermine democracy: by sowing confrontation and polarization.

What is horrible is when this sort of polarization happens in your daily life, or in science or at work.


The trouble is often the listener thinks the speaker is pedantic when the speaker thinks there's a very legitimate problem the listener is ignoring. Since we're using C(/C++) as an example, I'll borrow from the other page that just got re-posted today. [1] Is it "pedantic" to point out that &*p is undefined behavior if p is NULL? Or is it "pedantic" to point out that uint8_t isn't a substitute for unsigned char? Lots of people who see C as something resembling high-level assembly view arguments like this as pointless pedantry, often because they haven't really come across cases where the distinction has mattered. Meanwhile those on the other side of the debate feel like they're not being heard and pull their hair out trying to get people to stick to the spec of the abstract machine rather than that of their hardware, because they think (rightly or wrongly) the next compiler version will likely break their code if the existing ones haven't already.

[1] https://news.ycombinator.com/item?id=22848423
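To make the "spec of the abstract machine vs. spec of the hardware" split concrete, here is a minimal C sketch (my own illustration, not from the linked thread): an overflow check that "works" on typical two's-complement hardware but that the standard allows a compiler to delete.

```c
#include <limits.h>

/* Looks fine if you think of C as high-level assembly: on two's
 * complement hardware, INT_MAX + 1 wraps to INT_MIN.  But signed
 * overflow is undefined behavior, so an optimizer is allowed to
 * assume x + 1 > x always holds and compile this to `return 0;`. */
int naive_will_overflow(int x) {
    return x + 1 < x;              /* UB when x == INT_MAX */
}

/* The abstract-machine-respecting version never overflows. */
int portable_will_overflow(int x) {
    return x == INT_MAX;
}
```

Both functions agree under a naive compiler; only the second is guaranteed to keep agreeing after an upgrade.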


You’re on the other side of the table from the pedant in OP. You’d be responding to someone saying “It’s easy to avoid UB in C.” Meanwhile, you grep a couple popular projects and show trivially that accidental UB is extremely common in most codebases. The pedant in OP is saying, “you don’t need the guarantees provided in Rust because a careful C programmer can avoid UB, memory errors, etc.”


Ya.

Your example makes me think of error tolerances, uncertainty, fuzz factors. Sometimes precision really matters.

FWIW, If you were reviewing my code, I'd readily adopt your method, if only to expedite convergence on a common understanding, and to signal I appreciate the input.


Is it possible they would be served by a different standard where C is a high-level assembly? Although admittedly, "&*p" to me reads as 'load p' and therefore undefined behaviour


The literal sequence "&*E" is actually equivalent to E even when E is a null pointer. The spec explicitly points this out.
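A short sketch of that rule (C99 6.5.3.2 states that in &*E neither operator is evaluated):

```c
#include <stddef.h>

/* &*E is defined to be equivalent to E itself, so neither the *
 * nor the & is evaluated and no null dereference ever happens. */
int *addr_of_deref(int *p) {
    return &*p;        /* well-defined even when p == NULL */
}
```

By contrast, a bare `*p` that actually produces a value from a null pointer remains undefined.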


Why does the spec mandate support for useless bad code instead of banning it?


1. Probably backwards-compatibility.

2. While you wouldn't write &*E by hand, it can easily arise from a macro expansion.


`E` is an lvalue --- a location that only becomes a load/store when it is converted to an rvalue or assigned to. `&` takes an lvalue and gives a pointer to that location, so `E` is eliminated without being converted to an rvalue or assigned to, so no harm done.

The real question is why have "lvalues" at all. If assignment always had to have a pointer on the left-hand side:

    &a := ..., b := ...
rather than:

    a = ..., *b = ...
then this problem goes away.
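For comparison, today's C already lets you write every assignment in roughly that pointer-on-the-left style, since assigning to an lvalue is the same as storing through the lvalue's address (a toy illustration of the equivalence, not a proposal):

```c
/* Each assignment below stores through an explicit address,
 * mirroring the hypothetical `&a := ...` / `b := ...` notation. */
int demo(void) {
    int a = 0;
    int *b = &a;
    *(&a) = 1;   /* the proposal's `&a := 1`; identical to `a = 1;` */
    *b = 2;      /* the proposal's `b := 2`;  identical to `a = 2;` */
    return a;
}
```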


Oh man, I wrote that as a shorthand for something like int& v = *p; foo(&v)... is there a difference in that case (C++ I guess)? I thought it was always illegal to dereference null pointers regardless of the specific token sequence or whether it's C or C++.


Yeah, I thought that's what you meant. C++ doesn't special-case &*. C didn't before C99 either, I think. It's not the exact token sequence, actually; &(*E) does the same thing.


I believe C++ does special-case that: *foo when foo is a null pointer is an empty lvalue, and it's converting that lvalue to an rvalue that is undefined.


I love the way this would be just as good an example for the original article as those actually given.


To me, this demonstrates how one can completely understand something at the abstract level, but so easily "forget" when working at the object level, even when the object level discussions are related to an article about the abstract concept.

The various forms of communication between humans are chock full of opportunities for error, and I doubt there are many people who would deny it, but when a conversation is occurring at the object/instance level and an instance of one of these abstract errors occurs, enthusiasm for acknowledging it generally seems to diminish significantly.

I think it would be incredibly useful if we could find a way to overcome this issue, kind of like error correction in TCP, but for human communication, or even discuss the notion.

Might it be interesting and fun, and maybe even yield practically useful ideas, if now and then HN was to experiment with taking an instance of one of these abstract problems we discuss every day (this one, nuclear power, climate change, wealth disparity, political polarization, etc), and rather than having our normal casual conversations (the contents of which often appear in thread after thread), instead approach it from an engineering perspective, crowdsourcing various ideas for both the diagnosis/analysis of the problem, as well as solutions (including implementation, where possible)?


@dang -2, again.

Is this not getting a little bit ridiculous? Pray tell, what guidelines have I violated this time, while everyone else is behaving perfectly fine?

Shall we continue to ignore the elephant in the room? It's fine by me, I actually get a kick out of it.


"The various forms of communication between humans are chock full of opportunities for error." Don't let it get you down.


Oh, it doesn't get me down at all, quite the opposite. It's not myself I'm worried about, it's broader mankind.

Take my suggestion at the end - does it not seem odd in the slightest that one of the highest concentrations of powerful minds on the internet seems to be more interested in complaining and pointing fingers, little different than any other social media site, rather than trying something new?

How bizarre do things have to get on this planet before a little curiosity starts to form about how we got where we are today, and if perhaps there's something HN could do to get us out?


I think this comes from a fundamental issue: the participant isn't ready and willing to understand the other participant in conversation. The goal of the participant is to smash and belittle the other person instead of trying to achieve consensus.

For example, observe more consensus-oriented cultures: You'll see people talking to each other over and over until they chip away at the other party's position and reach a middling consensus to move ahead. It's a very slow process but generally achieves good results in practice. This is also how well-functioning multi-party governments work, where parties slowly move towards an acceptable middle ground when building policy.

This breaks down if one of the parties isn't interested in building consensus - in a hyper-individualist viewpoint, the goal is to attack, belittle and insult the other person to build your own following or ego. In that case, there's no useful resolution of that conversation - the goal of the pedant is to find something to attack and attack it with no wish to reach a productive objective. In that case, the only winning move is not to play. At least until that attitude reaches your organization/government and makes the environment intolerable.


Such consensus seems to be motivated by necessity. A lot of partisanship hides under the guise of "we have the voting majority".


Idealistic argumentation in favor of being pedantic:

Pedantry is about being correct. There are too many misconceptions and misunderstandings around.* Being pedantic is an effort to reduce them.

So if people are pedantic out of idealism, I do not mind it. Here on Hacker News it often comes in the form of a side comment, "nitpick: this and this is actually Y", which I like.

It is only annoying if people are pedantic "to win the argument", which never helps the discussion.

* Even in our language. Take the word atom, for example: it means "indivisible", but today we clearly know atoms are divisible. And there are millions of other examples. So a discussion never leads to anything if all the people involved are overly pedantic ...


>Pedantic is about being correct. [...] Being pedantic is an effort to reduce them.

At the risk of my comment being pedantic about "pedantic", the context of how "pedantic" is typically thrown around is to alert readers of a hyper-correction that doesn't matter to the discussion.[0]

For example, if I say that some smarthome intelligent light switches sometimes have a 1/2 second lag instead of being "instant on" which is jarring to the user experience, a debater in pedantic mode would then say something like "even a regular dumb light switch isn't truly instant because of General Relativity, it _still_ takes picoseconds to propagate the flick of the switch action to the state change in the light radiation, blah blah blah"

That so-called "correction" is _true_ but not material to the actual type of discussion. The discussion was consumer UI and not physics. If the context is UI, it's actually ok to be a little sloppy with what "instant" means.

The underlying issue is that each commenter cannot write <billions of qualifiers> before each statement. Therefore, the qualifier words we omit let others pounce on a seemingly glaring hole in the argument (i.e. not reading with charity) and reply with something pedantic.

[0] Excerpt from MW dictionary https://www.merriam-webster.com/dictionary/pedantic#faqs:

>Is pedantic an insult? Pedantic is an insulting word used to describe someone who annoys others by correcting small errors, caring too much about minor details, ...


Ok, so let's be pedantic about being pedantic about the argument "pedantic" ...

I know. That's why I said:

" it comes often in the form of a SIDE COMMENT "nitpick this and this is actually Y" Which I like."

And "So a discussion never leads to anything, if all the people involved are overly pedantic ..."


I see what you did there.


Made an argument, a bit in the form of "devil's advocate"?


I've seen an ironic trend of calling someone else's assertion "pedantic" as another way to polarize and shut down debate. That "confrontation quiz" attitude is very real, and it's often a way to avoid actually listening to someone else so you get to be correct.


It's almost like there is a handbook out there somewhere on "how to win an argument on the internet" where people are using manipulative tricks to try to shut down debate and appear as the "victor".


Schopenhauer wrote a good one, his 38 Stratagems from The Art of Controversy, all of them still very relevant. e.g. I see #32 all the time:

If you are confronted with an assertion, there is a short way of getting rid of it, or, at any rate, of throwing suspicion on it, by putting it into some odious category; even though the connection is only apparent, or else of a loose character. You can say, for instance, "That is Manichaeism," or "It is Arianism," or "Pelagianism," or "Idealism," or "Spinozism," or "Pantheism," or "Brownianism," or "Naturalism," or "Atheism," or "Rationalism," "Spiritualism," "Mysticism," and so on. In making an objection of this kind, you take it for granted (1) that the assertion in question is identical with, or is at least contained in, the category cited—that is to say, you cry out, "Oh, I have heard that before"; and (2) that the system referred to has been entirely refuted, and does not contain a word of truth.

The Stratagems summarized in point form http://www.mnei.nl/schopenhauer/38-stratagems.htm

Unabridged https://www.gutenberg.org/files/10731/10731-h/10731-h.htm#li...


Oh fantastic- thanks for these. I see # 6 a lot too, especially generalizing what somebody says (eg: universal health care) into a much larger category (eg: universal socialism) that includes a lot of negative components (eg: deadbeats, corrupt governments, unremovable presidents/generals).


Welcome. :-) That could be #32 also: "I like the idea of universal health care, it's.." "That is Socialism!"


This trick works particularly well when your opponent doesn't know what that category is, but doesn't want to look stupid by admitting it.


The interesting aspect of the situation to me is: how many people have actually read such handbooks, versus how many people behave in these ways?

If hardly anyone has read these books, yet the behaviors can be easily observed in massive quantities, perhaps there is some other underlying cause.


Indeed, this submission is a textbook example, perhaps involuntarily, but still:

1) Pick an uncontroversial good cause: "Excessive pedantry is bad."

2) Associate all the arguments you want to make with the good side and the counterarguments with pedantry.

3) In the reader's mind, subconsciously all your arguments are now associated with the good side.

This is how advertising works, but the method seems to be infiltrating (again, perhaps unintentionally) more and more technical articles.


I had a similar thought while reading this article.

A case that the article itself is "100% correct but [missing] the point" can also be made. Perhaps the speaker nitpicking garbage collectors works in a problem space prone to dangling references, which would make a "pedantic" point actually very relevant, at least to that speaker. Shutting that person down isn't helpful to that person, though perhaps in context a large number of other people would rather stay on topic.

Context and empathy are helpful. In most cases, clarification of context, especially through follow-up questions, can keep conversation healthy.


You're casting too broad a net here. I don't think this article is about pedantry. There is a time and place to be pedantic, just arguably in none of the given examples. The place to be pedantic is to point out sloppy thinking. Sometimes you just need precision, and it's not enough to be "mostly right".


I don't think "pedant" as a personality trait encompasses people who are only pedantic when the context calls for it. Your answer thus might be a bit... pedantic? ;)


So I could point out that you started your comment with "You're". This statement is 100% correct, and it's pedantic. I bet you can't argue with that (but it's moot! What does it have to do with anything, whatever the context!?)

I agree with you: the problem is not being able to synchronize with the goal or hierarchical chain of logic behind the original statements, which leads to irrelevant remarks - and by the looks of it, that's what is merely being called "pedantry" here.


Correct vs curious. Brilliant. Thank you.

What's the opposite of pedantic? "Colloquial"? Please tell me.

Whatever we end up calling it, both Adam Gordon Bell and Ezra Klein are superior.

While listening to Abstraction and Learning with Runar Bjarnason https://corecursive.com/027-abstraction-with-runar-bjarnason... I was struck by how AGB is able to adopt Bjarnason's worldview and explore it. While I was busily compiling a list of all the ways I disagreed.

I'd like to become more like AGB.

Ezra Klein routinely does this. He's so cordial, respectful of his guests. He genuinely enjoys wearing other people's beliefs for the purpose of discussion. I got so much out of his interviews with Grover Norquist and George Will. I'm still surprised.

Klein's recent interview by Ben Shapiro (for his book Why We Are Polarized) is a master class in not being baited, staying on message, using measured humor to push back, and somehow engaging one's debate partner in a positive constructive enjoyable manner. https://www.youtube.com/watch?v=pMOUiWCjkn4

I have no idea how Ezra Klein does it. Practice, temperament, experience?

Maybe the key is to just be curious, like you suggest.


The opposite of pedantic would be "not afraid of being wrong", and also aware that truth can be subjective and never absolute.

Science and knowledge are dynamic, not static. It's hard to make science and knowledge more accurate, but it's counter-productive to not challenge old ideas, even if those ideas work well.

Pedants, in my view, are conservative. The unpedantic are more progressive.


> What's the opposite of pedantic? "Colloquial"? Please tell me.

It's surely not the answer you want, but 'unpedantic'.

If we can extend to more than one word you could describe someone as not being too concerned with the finer detail or nit-picking, or as 'glossing over' minor errors or trivialities.


Bringing up Russia in the context of this discussion would be an example of this phenomenon methinks


> A debate is a distributed way of thinking.

This is a beautiful way of putting it, but IME actual debates rarely rise to this level. Typically, they just amp up our confirmation biases and cause people to dig in.

I've had a few beautiful debates/discussions with friends in my life where we took sides on some issue and by the end, had both switched sides. These are great, because we're both cracking up by the end, but they've been incredibly rare. They require a way of exploring topics in a way that doesn't require/trigger ego investment and some adherence to the principle of charity[1].

Hell, I'm practically demonstrating the wrong way to do it in this very post. ;)

[1] https://en.wikipedia.org/wiki/Principle_of_charity


I think you can explain the problems with "pedantic" arguments in a much more rigorous way.

The "technically correct" arguments ignore certain information that the OP argues is extremely important: how likely a certain situation is to appear in practice, and how expensive or resource-consuming a certain user action is.

Therefore, while the facts they state (dangling pointers may happen, people can protect against buffer overruns in C) are technically not wrong, their conclusion "therefore, garbage collectors / buffer protection make no difference" is plain wrong because it ignores the difference that likelihood and cost would make.

I think this can often be traced back to inexperience: the "X can happen / people could do Y" arguments can be made simply with book knowledge - in contrast, you typically gain knowledge about likelihoods and costs from actual experience working on projects.


> Pedants are correct, but they're not curious, and they don't care about learning something because they lack humility (humility is difficult to learn and rarely encouraged).

Those are some very broad claims. Apply whatever interpretation of "broad" you feel is appropriate.


> Pedants are correct, but they're not curious

Pedants are often more curious than you are, as they were presumably curious enough at some point to look up the thing they’re being pedantic about.


This presumes that the person making the first statement was unaware of the pedantic part... that's the point of the pedant "missing the point"


> states like Russia

I wouldn't say Russia has a monopoly on this


The Socratic method is not about two equals having a discussion where both have different ideas. It is about one person (the teacher) questioning another person's ideas. It is a fundamentally unequal methodology.


It can be executed by a teacher, but the method and Socratic circles are based around peer-level interlocutors. An initial questioner may exist and that is not regarded as an elevated role, merely a way to gain experience in starting off a Socratic circle.


It can be but it seems to have a strong negative effect.

I've tried it numerous times when I see something as clearly being A but someone else sees it as clearly being B. Once my initial attempts to explain why I see it as A fail, I start asking why they see it as B to understand their view point. The goal is one of three outcomes.

* Find an error in my logic. Oh, it clearly is B.

* Find an error in their logic. See, it clearly is A.

* Find a point of disagreement other than A vs B that explains the outcome. I think X, which is why I see A, but if you think Y, then I can understand why you see it as B.

But in all cases, this sort of questioning of a person's logic is viewed as a strong negative, as simply refusing to drop a subject. In some instances it may be better to drop it, but when it comes to the specs of what I'm building for work, just dropping a disagreement about the intention of the widget can lead to coding the wrong widget. There seems to be a certain charisma needed to keep someone engaged, and what deeply irks me is that part of this charisma seems driven by factors that we cannot control and which we condemn discriminating upon (meaning there is active subconscious discrimination that shouldn't be happening, but because people don't realize things like how taller people are treated with more authority, or how we assume better of more attractive people, the discrimination is overlooked).


You cannot draw that conclusion from your method. While B has to explain every step and fact in detail, A does not. It is assumed that if no error was found in B, A must be true. Therefore, B must be flawless to pass, but A can have pretty much any amount of logical or factual errors in it - as long as they are not directly visible.

> There seems to be a needed charisma to keeping someone engaged

Of course people stop being engaged the moment they realize what is going on, whether they are able to argue back and move the needle or not. And even when they are able to clearly explain what is going on, they may decide that the most rational move in that moment is to disengage without accusing you or furthering the conflict.


>You can not make that conclusion from your method.

I think you are missing the general idea and getting stuck on details which I highly simplified. The step after identifying the difference in logic is to work back from there and establish a full chain of reasoning we both agree upon, but I cut that out of my half-sentence description.

>Of course people stop to be engaged the moment they realize what is going on

So the people who have an interest in convincing me (as I'm the one building the application) have no desire to do the convincing once they realize I'm trying to understand their rationale? Why? I have learned to document it well when it happens, so that when the project does fail I can show their manager exactly where I built what was asked for, noted my concerns with the design, and showed where they disengaged from addressing those concerns and wanted me to just do as they asked.


1.) The issue I was pointing out was that one of the ideas has to pass a much different standard than the other. You are looking for flaws only in one place, not in the other.

B being faulty does not imply A is correct. B having a small problem does not imply A does not have an even bigger problem.

2.) What I mean is that people conclude that you are playing politics with the rules. You don't put your own ideas under the same rigorous test. That is where disengagement typically starts.

Then it makes sense to go either your way or my way, depending on the politics of the situation (who is going to be responsible, who is stronger politically). How your idea would fare had it gone through the same test should be a factor, but that part runs without you. It does not make sense to play a one-sided questioning game with the person you are negotiating with.

Your idea can make the project fail too, and from the point of view of the other person, it was never really tested. And then, if you are accountable, I drop my idea and let you do your thing. If I am accountable, you have done nothing to convince me you are right.

Though it is also unlikely that the project would fail for a single reason, so I don't think the documentation would be much of a threat. We write minutes from non-conflictual, non-hostile meetings all the time, so it does not sound like much would change. Not that it would often be used the way you suggest. Unless the other person works in some kind of toxic company, they are fine explaining their reasoning to a manager. It is quite likely they discussed your questions even before they decided to disengage.


Well, the Socratic method frequently results in aporia, or puzzlement for both parties, and doesn't necessarily resolve conflicts in business. When you need to discover the next step or a new direction among cooperating people it's food for thought, but for negotiating the hidden conflicts of client work a more robust method is needed.

Discrimination is a tool: being seen as ugly or short while in a position of success implies you overcame difficulty, and played right, it can enhance a specific image or personality aspect you want to use as a technique for influencing clients. That game will always coexist with a high-level abstracted method like the Socratic.


On a related subject, I do feel like there's been real, tangible moral harm done to our society by reinforcing the idea of debate as a competition in children. When we reinforce certain behavior, we shouldn't be surprised when that reinforcement extends to the rest of their lives.

When we turn debates into something that is "won" rather than something that seeks a refinement of what we know as truth, we encourage people to use deceptive tactics: to appeal to straw men if they can get away with it, to distract from the issue if they can get away with it, to do whatever they can do to win. That's how you end up with blowhards like Ben Shapiro: people who are more interested in winning a debate by any means necessary and reinforcing their existing belief systems than in discovering truth. You get a thrill out of "victory," but it is less than hollow when it's won by appealing to fallacious arguments: it actively works against our ability as a society to understand the concept of truth. And it is so easily abused by people with malicious intent.


As someone who grew up in different debate-club-esque environments, I agree wholeheartedly. I had to unlearn quite a few patterns of thinking and behavior—patterns that had been positively reinforced by most adults in my life—when I grew up.

For me, the most damaging effect was the separation of debate and decision making. In real life, if I advocate for a position—be it where we eat dinner or where my child goes to school—and I'm convincing, things actually happen. We have to go to that restaurant and my kid has to attend that school. Debates happen to inform a decision all parties are trying to make together.

As a kid, the opposite was true. Debate was about proving my intelligence in exchange for praise from my teachers, parents, and peers. It was an athletic competition in which our positions were our jerseys, in that we took them off after and went home.

I was reflexively argumentative for a decent period of time as a young adult before I realized the damage it caused to my relationships and how unproductive and dishonest it was.


I think it's Richard Dawkins who uses the word discussion to contrast with the word debate (used in the combative sense).

In a discussion, several parties with different perspectives work together to make progress toward truth, whatever that means in the given context. If one or more parties has their opinion changed, that's seen as productive.

In a debate, several parties play a points-based game (perhaps even literally), and if someone is seen to be changing their mind, they lose.


In my anglophile European country, it pains me that more and more politicians are copying this English-style two-sided debating, while our own tradition of a less flashy but plural and, when necessary, detail-oriented style is slowly being replaced.

A huge pity.


An obvious thing I’d barely noticed the full impact of. Thanks.


I can see your point but I disagree with it. There is nothing wrong with having a "winner" in a debate competition so long as you have good judging standards towards what constitutes winning. Winning should be about having a well thought out and defended argument. It should be about being able to address the points someone makes with relevant counter points. It should be about being able to articulate your position well enough to convince someone else that it's the best choice. Competitive debate also serves to make you look at both sides of an issue and be able to understand them well enough to make arguments for either side. You have to argue successfully on both sides to win a competition. I think teaching children to be able to weigh both sides of an issue is something we could use more of, not less. I think a well structured debate competition can serve as an educational opportunity to steer kids away from the sort of behavior that you are advocating against.


I think the very fact that you want winners and losers is the problem. That shouldn't be the goal in having a discussion with another person.

Focusing on the mechanics and methods has left you blind to the outcome, as demonstrated:

> You have to argue successfully on both sides to win a competition

Who, in the real world, would use a skill like this? Lawyers who defend corporations who poison populations? Murderers? Sleazy politicians?

It might just be that you have demonstrated the very thing being discussed: Talking past the issue at hand. The OP isn't about the mechanics of debate. It's about talking past the issue.


Being too argumentative can be as dangerous as being too agreeable; both excesses fail to cultivate an open-mindedness to learning.


Debate clubs barely exist in the UK. We still treat discussions as competitions here.


Lots of programmers forget that they should also be engineers, and not only computer scientists (or wannabe computer scientists).

I can still remember how during the first programming class I took at uni 20 years ago the teacher told us that "engineering is all about compromises, and especially about which compromises you choose to make or not" (in that context she was talking about memory vs. processor usage), and while I barely managed to pass her class, that piece of advice has stuck with me ever since. Looking at things through the compromise lens (for lack of a better phrase) would eliminate lots of the misunderstandings rightfully pointed out by the OP in this article.


That's a good perspective. What's really lacking from so many of the "bad" comments is the context of the original comment. If you drop the context of someone using Python to process a moderate amount of text on a large, non-real-time system, then project your own context where it doesn't work, then sure python is not a good choice. If you can't agree on a basic context, or choose to force a context on them, then there's no compromise. Like in your teacher's example, you can't compromise between memory and cpu if you have no idea which you care about more.

They're mostly of the format "I like spoons because they can pick up liquids!", then the reply is "no forks are much better because they have tines to stab with!", without anyone saying if they want to eat soup or spaghetti.


>If you drop the context of someone using Python to process a moderate amount of text on a large, non-real-time system, then project your own context where it doesn't work, then sure python is not a good choice. If you can't agree on a basic context, or choose to force a context on them, then there's no compromise.

There may also be past traumatic experiences that are sought to be avoided. How many times does it take for some management to grab your local working python solution, slam it into production, and then complain when it falls apart? Even with a single such instance, you begin to look at things with the view that the context may be shifted away from the intention.

In such a situation, you are fully aware the context is locally with 1,000 files and not in some production environment with some 1,000,000,000, but you are rejecting the notion that those contexts can be so completely separated because a past instance showed such separations can be undone. Once bitten twice shy.


That's the eternal struggle in software engineering, I'm afraid. In my experience, there's no product that performs better, with a more complete feature-set, than the MVP from two years ago that exists only in the CEO's imagination and was never really used by anyone. It takes a lot of knowledge and experience to understand why the very good solution that I run off my laptop once every week or two isn't at all the same as one that could handle hundreds of thousands of non-experts running it every minute.


This is spot on. As the old saying goes, "you can't have an argument about euthanasia with someone who thinks euthanasia is a musical group."


Every reviewer of their performances says, "They're killing it!"


The failure of engineering in the presented hypothetical examples is a failure to understand the use case.

I say this because I remember a class I took in college where I built a chat application and the TA chided me for using a hashed message ID instead of using random number generation. I was concerned that eventually there would be collisions and that would lead a message to either be undelivered or delivered to two parties (depending on implementation). The TA said "yeah but that's an infinitesimal risk."

The problem with his view was that the intended user group would eventually see this failure occur. It's not an avoidable edge case, it's a bug that will impact the general use case. The failure could occur on an important message ("I'm gonna be at work late, can you pick up the kids") or a sensitive one ("I can't believe you cheated on me with our babysitter"), and the user would have no idea what happened.

The article mentions NSA hacking and aggressive computation on a small microprocessor, but those issues are both easily avoidable by the atypical user. Are you a spy? Don't rely on HTTPS. Are you building an embedded system? Learn how to use C++ instead of Python.

There is plenty of room within engineering to have sound practices that use computer science to mitigate edge cases. A problem arises when those edge cases don't affect the intended user, or are easily and obviously mitigated by the user choosing a better tool.


If the chance of a random number collision is low enough, it becomes likelier that a message will be undelivered or double-delivered due to hardware error (a "cosmic ray", though more likely thermal noise or radioactivity). Quantum computing in particular hugely relies on the observation that an infinitesimal chance of error is as good as a zero chance.

And of course, hashes can collide as well.


So would you accept that your solution is sound, from an engineering perspective, but could be considered overkill for a college class?


A sufficiently large and properly generated random number is as unlikely or less likely to collide than a sufficiently large hash.

However, using the random number is strictly simpler to implement, therefore it wins out from an engineering standpoint, unless you do not have access to proper RNG.
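As a back-of-the-envelope illustration (my own sketch, not from the thread), the standard birthday bound shows just how negligible the collision risk is for properly sized random IDs:

```python
import math

def collision_probability(n_ids: int, bits: int) -> float:
    """Birthday-bound approximation of the chance that at least two of
    n_ids uniformly random `bits`-bit IDs collide:
    p ~ 1 - exp(-n^2 / 2^(bits+1))."""
    # expm1 keeps precision when the probability is far below float epsilon.
    return -math.expm1(-n_ids**2 / 2 ** (bits + 1))

# A billion messages with 128-bit random IDs:
print(collision_probability(10**9, 128))  # ~1.5e-21
```

The same bound applies to a well-distributed 128-bit hash, with the practical caveat that hashing identical inputs (the same message content sent twice) collides with probability 1, which is one concrete reason to prefer the properly seeded RNG.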


Depends where you go to school.


If the chance of a collision is literally infinitesimal then the failure will not occur, period.

In practice the chance may be just extremely low but then so is the chance that your hash algorithm causes a collision. In fact, that should be more likely for the same number of bits.


Absolutely. In software engineering (a somewhat different field from computer science), the answer is almost always "it depends".


Hence the classic joke:

----------

A man in a hot air balloon realized he was lost.

He reduced altitude and spotted a woman below. He descended a bit more and shouted, “Excuse me, can you help me? I promised a friend I would meet him an hour ago, but I don’t know where I am.”

The woman below replied, “You are in a hot air balloon hovering approximately 30 feet above the ground. You are between 40 and 41 degrees north latitude and between 59 and 60 degrees west longitude.”

“You must be an engineer,” said the balloonist.

“I am,” replied the woman, “How did you know?”

“Well,” answered the balloonist, “everything you told me is technically correct, but I have no idea what to make of your information, and the fact is I am still lost. Frankly, you’ve not been much help so far.”

The woman below responded, “You must be in Management.”

“I am,” replied the balloonist, “but how did you know?”

“Well,” said the woman, “you don’t know where you are or where you are going. You have risen to where you are due to a large quantity of hot air. You made a promise which you have no idea how to keep, and you expect people beneath you to solve your problems. The fact is you are in exactly the same position you were in before we met, but now, somehow, it’s my fault!”


I had a real-life example of this joke years ago when I was hiking in the Carpathian Mountains with, among others, a mathematician:

"There's no landmark in sight. Where are we even?"

"In Ukraine"

"Yes, I know that - what kind of reply is this?"

"It's the most precise answer I could come up with."


Gotta admit, I only knew the joke in a variant where it stops after "you must be an engineer" to make fun of the exact kind of attitude the OP bemoans.

Looks as if, instead of taking the advice to heart, the suspects tried to make the joke into a "no u" instead.

I think there could have been better uses of their time.


I prefer the "Microsoft" version...

"Now I know where we are, we are right over the Microsoft campus."

"How did you know?"

"Everything they said was completely true and totally useless"


Aka "You are in a helicopter."


By the way, those coordinates are in the Atlantic Ocean, with no land in sight.


This comment will play extremely well on this website, which engineers frequent. And it's admittedly a great joke. But only tangentially relevant to the author's point.

The author's point is closer to the old adage of not shooting yourself in the foot. That is, however true your statement is, if it denies further thought it becomes a useless point.


So it's 100% a good joke, but misses the entire point.


The joke is making fun of the "engineer" as well as the "manager". The engineer's pedantic, irrelevant response that does not invite further discussion is exactly why the parent article reminded me of this joke!


This comment makes me think of Theranos [1]. The founder was a master of manipulation and business-speak, and created a multi-billion-dollar (valued) start-up based on an impossible and non-existent product, and so strongly blamed the engineers/scientists beneath her for not delivering on her unrealistic promises that one of them committed suicide.

[1] https://www.imdb.com/title/tt8488126/

Fake-it-'till-you-make-it form of management (FiTYMi?) is becoming more and more common.


How many Theranos employees died by suicide? There were up to 800 at peak, so one suicide might be within the normal base rate.


You might want to do a little basic homework before you post a comment like that.

https://en.wikipedia.org/wiki/Suicide_in_the_United_States

from which we learn that:

"The annual age-adjusted suicide rate is 13.42 per 100,000 individuals."

Doing the math to figure out how plausible it is that "one suicide in 800 might be within the normal base rate" is left as an exercise.


You quote the annual age-adjusted rate for regular people. What's the rate among startup employees? Surely in startups where the future is particularly highly uncertain, employees feel a lot more pressured than those in boring companies. Ceteris paribus, why would the suicide rate be the same as that of the regular folks?

I think you would need to do a subtler analysis before commenting in the condescending way you did.


You misunderstand. I'm not taking a position on whether or not the OP's hypothesis was reasonable or not. All I'm saying is that before you take a position you ought to at least do some basic homework.


Considering Theranos operated for over 10 years and most of their employees were white men of a certain age, it seems entirely possible to me.


A fair point. But did they have 800 employees for all of those 10 years? (I don't actually know the answer to that. I'm not taking a position on whether or not the OP's hypothesis is reasonable or not. I don't know. The only thing I'm saying is that before you take a position on a question like this you ought to at least do some basic homework.)


For anyone curious, the base rate of at least one person committing suicide in a year based on these numbers is `1-((1-(13.42/100,000))^800) = 10%`. I suspect the suicide rate for engineers (who tend to be white, middle-to-upper-class men) is somewhat different from the national average, so take these numbers with a grain of salt.
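The arithmetic is easy to check; a minimal sketch, assuming the annual rate applies uniformly over a single year and that events among the 800 people are independent:

```python
# Chance of at least one event among 800 people in one year,
# given an annual rate of 13.42 per 100,000.
rate = 13.42 / 100_000
employees = 800
p_at_least_one = 1 - (1 - rate) ** employees
print(round(p_at_least_one, 3))  # 0.102, i.e. about 10%
```

Over the company's roughly ten years of operation the cumulative probability would of course be considerably higher.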


You’re right; upper middle class white males are the most vilified and attacked and hated class of people in the world for whatever reason; I wonder if this makes that demographic more prone to suicide.


Men are nearly 4x more likely to commit suicide than women, and "white" is 1.5-2x higher than other races [0], so while I'm not seeing combined stats it's pretty certain that that combination is higher than the general population.

[0] https://afsp.org/suicide-statistics/


I know; I wasn’t disagreeing. I was guessing the reason.


You might want to read again the numbers you just posted and change your tone to a less self assured one, because you're very obviously wrong. Theranos operated for more than ten years with hundreds of employees, so the numbers you posted confirm the gp's point.


Maybe it's just the way I look at the world, but I find it ~beautiful in a way when instances of the very phenomenon discussed in an article can be found within discussions on the article.


> you're very obviously wrong

About what? (And you might want to go back and re-read what I wrote before you answer that.)


Case in point for OP. Possibly technically correct, misses point entirely.


The one suicide was the former chief engineer that was called in to be a witness in litigation directly related to the company's failure to produce the product and lies about it.

...so it's not some random data point.


I see this incredibly often in medicine -- particularly from nurses, midlevels and patients when they don't have a great relationship with the physician. Myself included (although I try to have a good relationship and communication, I don't bat 1.000). People tend to latch on to what they know, or think they know, and it's very hard to get them focusing on the more important issue that may be at odds with that. As an example: I trained at a place where a patient disagreed with his doctor's assessment (something about lung fluid analysis). The patient cited a well-respected journal article that guided standard of care. But the patient didn't realize something. The doc treating him authored that article.

We see stuff like that routinely-- evidence gets misinterpreted and people get angry because their sources (but not their interpretation) may be excellent.


> I trained at a place where a patient disagreed with his doctor's assessment (something about lung fluid analysis). The patient cited a well-respected journal article that guided standard of care. But, the patient didn't realize something. The doc treating him authored that article.

Sounds like the article needed revising.


At the same time, it goes the other way too.

I cannot believe the number of doctors that I visited when I first got GERD, and all my questions were shot down while they prescribed PPI upon PPI. It only made my condition worse by the year.

Finally, I decided enough is enough, and did my own research, happened upon naturopathic materials, and experimented in a logical fashion (eliminating one trigger at a time, trying a new natural cure at a time) till I chanced upon probiotics - specifically Yakult which works best for me, and have been living a normal life since then.

The kicker is that if I had taken this approach from the start, my stomach and quality of life would have been much better from the start.

Sure, it's anecdotal, but it's a counter-example to your assertion which, frankly, smacks of derision for the thinking patient. You have to remember that doctors, like those in any other profession, come in a spectrum of qualifications, capabilities, and empathy.


Physicians are human and subject to their own biases and mistakes. I could go on about how helpful patient/nursing/midlevel input is and has been to me personally in my career. But the point I'm trying to make here is that the shallow vs deep thinking shows up in medicine at obvious points and causes problems.


We also have our own biases. That physician has probably seen 100s of patients who have self-diagnosed something that turned out to be wrong, whereas the physician is usually correct about diagnoses. In this instance, the self-diagnosis was correct and the physician was wrong, but I would bet this is the exception, rather than the rule.


The rationalist community has a kind of meta-rule that contributions to a debate should be at least two of {true, necessary, kind}. For necessary you can often also read useful.

Once you're out of high school debate club, it's a good principle that if the only thing in favour of a statement is "it's true", err on the side of not saying it.


This is terrific. Any chance you have a link to such a meta-rule?

Is LessWrong one of these rationalist communities? Here's their relevant FAQ: https://wiki.lesswrong.com/wiki/FAQ#Site_Etiquette_and_Socia...

The reason I ask: My local political party recently adopted a geek-style Code of Conduct. Huge help. But as you can imagine, our policy debates remain suboptimal, sometimes getting rather acrimonious. Neither our bylaws nor Robert's Rules of Order say much about playing nicely. Maybe some experience and advice from rationalists would help.

(HN's guidelines would also be a good start for my purposes. https://news.ycombinator.com/newsguidelines.html)


https://slatestarcodex.com/2014/03/02/the-comment-policy-is-...

LessWrong is one of those sites, but for sheer elegance of writing (in my very personal opinion) you can't go much better than Scott Alexander.



That is true, but I suppose the issue arises from people who cannot make an accurate assessment on the "necessary" front.


Interesting that truth isn't a requirement, especially given we're talking about rationalists. In what situations do they welcome false contributions?


I think the idea is that it's forgivable to make mistakes, as long as you aren't also being unpleasant.


Edit: Sorry, I just realized I misread something. What I was pondering was what "necessary" meant in this case, but I think my brain is tired right now!


From https://slatestarcodex.com/2014/03/02/the-comment-policy-is-... :

{necessary, kind}:

> Recognizing that nobody can be totally sure what is or isn’t true, if you want to say something that might not be true – anything controversial, speculative, or highly opinionated – then you had better make sure it is both kind and necessary. Kind, in that you don’t rush to insult people who disagree with you. Necessary in that it’s on topic, and not only contributes something to the discussion but contributes more to the discussion than it’s likely to take away through starting a fight.

{true, kind}:

> Annnnnnd sometimes you might want to share something that’s not especially relevant, not the most important thing in the world – but if you do that it had better be both true and kind. No random interjection of toxic opinions that are going to cause World War III. No unprovoked attacks.

(personal comment - the recent HN thread on castle Guedelon, where a bunch of experimental archaeologists are building a medieval-style castle, was perhaps not "necessary" or "relevant" on a tech forum, but it was kind in that I very much enjoyed the half hour I was reading about it. Since there were no false claims about castles on that page, I'd count that as "true and kind".)

For good measure, {true, necessary}:

> Nobody can be kind all the time, but if you are going to be angry or sarcastic, what you say had better be both true and necessary. You had better be delivering a very well-deserved smackdown against someone who is uncontroversially and obviously wrong, in a way you can back up with universally agreed-upon statistics.


I'm not sure what you mean by 'mistaken unpleasantness', but getting at the truth, however unpleasant, is a far higher and more widespread norm in the rationalist community.


A statement like "I am happy to see you!" should fit the description of necessary, kind and untrue.


> In what situations do they welcome false contributions?

I think the point was more that being quiet is an option.


Quoting from the SSC policy:

>Recognizing that nobody can be totally sure what is or isn’t true, if you want to say something that might not be true – anything controversial, speculative, or highly opinionated – then you had better make sure it is both kind and necessary. Kind, in that you don’t rush to insult people who disagree with you. Necessary in that it’s on topic, and not only contributes something to the discussion but contributes more to the discussion than it’s likely to take away through starting a fight.


My guess would be when you are stating opinions.


Just a guess, but -

1. Good-faith efforts at contribution where a person happens to be wrong? (happens to the most conscientious of us)

2. Instances where the speaker has made their confidence level explicitly known? ("Just a guess...", "I am not a lawyer, but...", "I'm not certain, but my understanding is that..." etc)


Well because I can imagine a lot of things that are true but not necessary nor kind. Likewise, I don't know how you would classify opinions as true or false.


True!


> at least two of {true, necessary, kind}

Are "necessary" and "kind" lies acceptable?


Since they are necessary, there's no option to avoid them anyway (otherwise they wouldn't be necessary), so you'd rather accept them.


Not an Engineer? Because I used to find myself in meetings, trying to express why it's a colossal mistake to introduce application runtime libraries into a kernel, driver or critical engine. I was on the other side of this argument at times like that - expressing the obvious and true, perhaps unkindly. And getting nowhere, as I could be dismissed as a pedant or as naive (by newly graduated 20-somethings).


That would be "true" and "necessary", no?


Thanks, yes, but it was also thankless. You get marked as a 'pedant' and 'uncooperative'.

Being patronized by the OP was neither true, kind nor necessary. It is another misdirection of the sort they describe, this one attacking the (imaginary) opponent with a strawman or false dichotomy? Anyway, I'll shut up now.


"Not a team player"


This is great. I would extend the things discussed in the article.

1. Not everyone participates in a conversation for the same reasons. Some are here for rational purposes, others for emotional purposes, e.g. getting an answer vs. feeling accepted. Neither of these is wrong, but being aware of that makes empathizing and filtering easier. We are social creatures and nothing we do happens in a vacuum.

2. Being right and useful is a core part of many people's identity. If we can't contribute in a relevant way and be right, we'll often contribute in an irrelevant way so that we can be right. Our reasons for participation are largely emotional.

3. Communicating purpose is hard. Giving people the information and context required to decide when and how to act appropriately is hard. We often skip it; e.g., in the discussion covered in the article, at no time was the purpose stated. Why are we stating these things?

It's not culturally standard and perhaps next to impossible to do but starting a conversation by:

  - Clearly stating the purpose.
  - Clearly stating your view points.
  - Considering what you want to get out of the conversation.
  - Considering what information, arguments or data would be needed to change your mind (if you are willing to change your mind).
would help make discussions more effective.

I wrote something about this a few years ago if anyone is interested to read: https://medium.com/@ioverthoughtthis/how-to-have-an-effectiv...


This article could be titled "Will people less smart than me shut the fuck up already? Part N of Infinity". I haven't been a fan of previous ones and this is no different.

People will always be wrong. And it's easy to say it's because they're not thinking about something clearly enough, because very often they aren't. I'm sure it's cathartic to complain about them on the internet, but it won't change anything. So either ignore them or help them.

Having productive conversations rather than just satisfying the human urges around disagreement is a life skill. Understanding how to identify when conversation is destined to go nowhere and exit it effectively is a life skill. This article could have chosen how to build and exercise those skills in the face of this kind of wrongness. But it didn't and neither have most the comments. It chose to pontificate on a root cause.

It doesn't do anyone a service to write hypothetical stories about how and why every person wrong in a particular way came to be wrong. Yes, it's 100% correct that those assessments could be useful in finding a way to conversationally bridge the gap, but in practice they're just used to make someone feel superior to the other person in addition to being right. It's a first step down the path that ends in political memes of the form "$OTHER_PARTY only believes in $BAD_IDEA because $INSULTING_BULLSHIT."


The problem is some of these debates have consequences that you would be forced to deal with (choice of language, framework, architecture etc.) and needless pedantry can be a huge blocker to adopting a better solution to the requirement at hand. At that point, either you can "exit" the debate and deal with the choices, or you can "exit" the situation in search of better co-workers. Both of them have real costs to you (which are not your doing).

I don't know what kind of life skills would help there. If you have any thoughts, I am all ears.


The kind of situation you describe happens... sometimes. Most of the time when you run into this kind of behavior, though, it's not in a situation with real consequences. It's somewhere like HN. (Have I seen this here? Yes. Have I done it here? Let's just let that question pass, shall we?)

What do you do then? Call them on it, for the sake of the rest of the people in the discussion? Ignore it (don't feed the troll)? Call them on it once, then just walk away?

What do you do when you are the one doing it, and someone calls you on it? Do you double down, or do you realize that you were being a pedantic jerk, and stop? Maybe even apologize?

And how do you tell where the line is between "valid nuance" and "pedantic irrelevance"?

There's plenty of room for life skills here. They may not help you in every instance of this kind of behavior, but they can help in many of them.


I don't think it's really about pontificating about why, just why, other people who aren't me are stupid. It's about identifying patterns and maybe coming up with a way (like you are talking about) to combat (possibly by teaching) these things. Without putting it into words, you just see the pedantic correction fucking up your conversation and don't really have the words or a coherent understanding in the moment to explain why it was a useless thing to say. Now, with this article, some people will know to identify those situations and correct them, and some people will identify the behavior in themselves. That's all good, and isn't "ugh, dumb people exist".


I believe the difference here is between categorical and empirical thinking.

In each case the person replying is saying, "the principle is violated by this counter-example, so the principle does not hold".

The author of this article should be saying, "this isn't a principle, it's a heuristic whose utility depends on context".

Ironically, the author of this article misunderstands the point these replies are making and straw-mans them as much as he believes he has been straw-manned.

They're not failing to capture relevance. They're taking a statement phrased categorically as a categorical principle, when it was an heuristic one.

This comes down to how people reason -- more mathematically-inclined people are often categorical thinkers; trying to reason in terms of universal principles & iterating on those until they are universal.

The goal of the person replying isn't (necessarily) to undermine the OP but to helpfully offer a counter-example to evolve the principle until it holds.

It is a good strategy for arriving at general criteria for when, e.g., GC is useful. In fact, this article is an exercise in that, because it replies to these counter-examples.


The author makes the point at the end that it's important to grasp the wider context.

The wider context is that people aren't expected to care so much about the specificity of their phrasing all the time.

The mathematically-minded would serve themselves better by ascertaining the category of the conversation and context at hand.


I've always assumed that somebody must have gone through classifying bugs like this in faulty debate (e.g. based on ancient Greek debating theory, or law, or logic as taught in philosophy?). By using well-defined terms to refer to the bugs, it would be way easier to call them out and mitigate them - as the amount of time wasted in FOSS collaboration with stuff like this is unbelievable.

Some egregious examples that spring to mind are:

* Trying to disprove a point by finding an edge case where it's not true - even if that edge case is entirely irrelevant (a bit like the complaints in the original post)

* Trying to disprove a point by exaggerating it to an extreme and saying that because it doesn't work in the extreme case, it must be false.

* Taking a point out of context and proudly proving it to be wrong... when taken out of its wider context.

* Making an entirely unrelated point which happens to be true, as a way of trying to prove an earlier point.

I guess the second one is a form of hyperbole (but is that the right term?). The third one could be called out as "taken out of context", and the fourth is "irrelevant"... but it still feels like life could move on faster if there were a clearer taxonomy for these 'bugs'. (One could even provide the taxonomy as tags/reactions in your communication system of choice - like a more nuanced version of "+5 Insightful" or similar.)

Edit: perhaps https://en.wikipedia.org/wiki/List_of_fallacies is what i'm looking for, although it feels... cumbersome.


Yes, of course: "Fallacies".

But note there's also something I like to call the "Fallacy Fallacy": People read the wikipedia list and consider them some sort of universal natural law.

Case in point: "appeal to authority". This can indeed be fallacious within the context where these were formulated, which is probably academic publishing. If someone says they've created an HIV vaccine, it would be very wrong to start using it without clinical trials just because the inventor is some famous researcher.

If you try to apply the same to daily life and online debate, you're going to wither and die within a month.

Example: you're experiencing chest pain. What do you do? You see a doctor! Do you subject them to a quiz on heart disease, and do you re-run every trial that contributed to your treatment? Of course not; it would cost trillions and take many lifetimes. Instead, you trust your country's certification scheme, and you trust the scientific establishment. Both are authorities.

When you return to your favourite restaurant, you are extrapolating expected future performance based on past experience. in other words: you trust them. Or in even otherer words: you consider them the authority on Fried Chicken and rely on that to guide your decisions.

There are only two ways to gain the understanding of the world around you needed to do anything: direct experience, and what others tell you. The fraction of required knowledge that you can individually verify from first principles is probably somewhere below 1%. For everything else, you rely on others' direct experience, continually fine-tuning individual measures of authority by comparing against direct experience where possible, and integrating with other sources where not.


Appeal to authority is not deductively valid, and thus it's a logical fallacy.

On the other hand, very little can be learned through deductive reasoning alone. Deductive chains in debate are very short, and shortcuts must be used.

Authority can be used to weigh or prioritize evidence. If you are not doing serious research into the subject, authority is one heuristic that works extremely well.

For example, when two relatively ignorant laymen debate on the internet, they have limited interest or effort to invest in getting at the truth. Finding out that something is the mainstream view among authorities in the field should end the argument for the time being, for lack of more information. Unfortunately the discussion usually ends with questioning that particular field, then the progress of society as a whole. Scientists have been wrong before, so... they can also be corrupt...


Logic means two things: “Reasoned thinking” and “a strict set of rules for proving things”

When someone wants to disagree, it’s easy to just sub out the definitions and say something isn’t logical because it doesn’t follow formal methods.

Believing what someone says because they’re well-respected about a topic isn’t strict, but it’s really useful.


right, yup, I guess I was grasping for Fallacies. However, reading through Wikipedia's list of them, I'm finding it fairly hard to find the right ones to apply to my examples - and if I used this nomenclature to try to keep FOSS collaboration productive, I think I'd lose the audience pretty rapidly (or get lost in a meta-argument about fallacies). Hence wondering if there could be a simpler taxonomy for the most common types of bug which pedants use to derail discussion (unintentionally or otherwise).


Most of these issues (yours, the OP's, and others here in the comments) are all covered under the Principle of Charity [1]. Unfortunately, this is the internet, and people would rather feel like they won something (unsure exactly what) than have a real discussion where their or someone else's mind can be changed. While not perfect, I do find HN to be a better place for discussion on most topics.

[1] https://en.wikipedia.org/wiki/Principle_of_charity


Reminds me of one of my all-time favorite bits of movie dialog, which I keep in my mental back pocket as a sort of reminder:

“Am I wrong? Am I wrong?”

“You’re not wrong, Walter, you’re just an asshole.”


I think part of the problem is that school and college teach us to say "no" to something that's not always true. We're told a statement that's mostly true, and asked, "Is this true?" When we say yes, the professor says "No!", while a normal person, one with common sense, would say "Yes". This creates a habit of unnecessary attention to detail, nitpicking, and grasping at straws to come up with some reason why the other side is wrong.


Very true. Two very frequent argumentative errors in online comments are:

1) refuting argument with counterexample.

2) refuting causal argument with alternative cause.

Very few things in the world are universally true without exceptions. Words like "all" or "everyone" do not imply universal quantification (∀) in normal use of language.
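A toy sketch of that distinction (hypothetical numbers, purely for illustration): a single counterexample falsifies the strict ∀ reading of a claim, but not the "most/usually" claim people normally intend.

```python
# One pathological outlier among otherwise ordinary measurements
# (made-up numbers; only the logic matters here).
startup_times_ms = [12, 14, 11, 13, 950, 12, 15]

# Strict universal reading: a single counterexample falsifies it.
strict_claim = all(t < 100 for t in startup_times_ms)

# Colloquial "usually" reading: holds if the vast majority comply.
share_fast = sum(t < 100 for t in startup_times_ms) / len(startup_times_ms)
usual_claim = share_fast > 0.8

print(strict_claim)  # False: the outlier defeats the ∀ reading
print(usual_claim)   # True: 6 of 7 measurements are under the limit
```

The "refutation by counterexample" pattern attacks `strict_claim` while the speaker was asserting something closer to `usual_claim`.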


Sorry, I can’t resist being pedantic on a post about pedantry. The symbol you used translates loosely as “there exists an x”; I’m pretty sure you meant the upside-down A (“for all/every x”)


Maybe the poster meant that the words don't imply ¬∃¬


True. Corrected.


> grasping at straws to come up with some reason why the other side is wrong.

It's an almost combative approach to discussion, where there is a victor and a loser. Someone being right is almost a full stop to the discussion: they've won, so what more is there to say?


Dunno. I went to two EE schools, and in many classes teachers stressed that to get results we need to make assumptions that are only partially true, to approximate, to skip over [irrelevant] details, etc. And this was very deliberate and stated upfront.

And, I still like to nitpick. :) Hmm. Maybe it's because you have to be aware of the details/limits to be able to figure out what approximations and simplifications are reasonable.

Maybe that's where the value of a nitpicky frame of mind lies.


>I think part of the problem is that school and college teach us to say "no" to something that's not always true

It's mostly that we fail to frame the context of the statement when it's true and when it isn't.

In security we often talk about the threat model. The article mentions anonymity via HTTPS, which is never the case. You can remain anonymous from someone by using a one-time pseudonym in Telegram. You can't remain anonymous from the Telegram server without anonymizing your IP and phone number. You can remain anonymous from the server with Tor, a throwaway phone and a prepaid SIM, but to remain anonymous from the NSA you need more steps, like buying the SIM and the phone with cash.

Things are complicated, and we often assume the conditions without explicitly stating them. Some people think from the perspective of the worst case (the next Snowden on the run) and some from their own (often privileged) position. Some argue from the viewpoint of a Chinese dissident, some from that of a Western crypto-anarchist.

Risks are more often than not invisible, so assessing them is extremely difficult. Security is often about margins, and I see a LOT of incompetence, e.g. the encryption must be 256-bit, but then key management (especially authentication) is hand-waving.

IMO the fault lies with upvoting culture. Back in the day, when we used to have separate forums, someone who posted a question could immediately be asked a follow-up question, e.g. "what is your threat model?". This helped set the context for all readers.

Once we moved to discussion via comments, we lost the story part, and people started posting "right", i.e. popular, answers for upvotes without really focusing on relevance, as the author so eloquently puts it.

What of course doesn't help is that the conversation is often only part of a larger puzzle. Maybe the NSA is part of the OP's threat model and they didn't know it, e.g. if they're a sysadmin at a large company. They're not a terrorist; they just manage systems of interest. That's where the conversation forks. And the more nuances the original discussion has, the faster the story is lost.

Furthermore, there's the endless amount of forum trolling that tries to drown insight amidst noise, memes, pop-culture references and other thought-terminating clichés that make you immediately turn on your heels and look for insightful discussion under the next news story.

Then there's grass-roots marketing, disinformation, shilling and other propaganda, all of which is specifically designed to have the plausible deniability that makes it impossible to weed out.


I think there is a name for this: it's called sophistry. These arguers have no interest in the scope of the discussion; they just want to win the argument. I think it helps when you set the scope of the discussion clearly at the beginning. edit: typo


"They just want to win the argument" is a good summary of what I understood from the article. There are people I regularly encounter who have to inject their opposition or correction to anything anyone else does or says, which feels like they are attempting to prove their superiority or correctness (but instead proving their failure to recognize the context of the original statement). Couching their point in the style "What about X?" instead of making a counter statement would be a more useful communication method IMO. (Edited to fix grammar)


From the article:

"The one thing that school trains you for is that being right is what matters. If you get the answers right in your test, then you get a good grade. Get them wrong and you don't. Maybe this frame of mind "sticks on" once you leave school, especially given that most people who post these kinds of comments seem to be from the "smarter" end of the spectrum (personal opinion, not based on any actual research). "

A similar sentiment by Paul Graham here: http://www.paulgraham.com/lesson.html


This is true, but as a counterpoint, most math and science exams in high school give you partial credit for trying to reason through the problem even if the final answer is wrong. I had one physics teacher who would sometimes give full credit even if the final answer was wrong due to an arithmetic error. She was awesome; I still think of her as an inspiring figure.


It appears that https://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good is a very relevant concept here, and is the principle behind the rebuttals to the arguments in each section.

I feel that arguing for perfection is (arguably) part of a broader problem with internet discussion - and discussion in general. I've personally termed it "sniping" but there may be an existing agreed-upon term.

Essentially, this is when a commenter takes one flaw or issue in a larger and reasonably sound argument - and runs with it. In context, focusing on this flaw is "missing the point" as the rest of the argument does not rest on the assertion being "sniped". But - if you make enough noise about it - you may be able to confuse people into thinking otherwise, and making the argument seem fundamentally flawed.


I love snipers too.

I have some broad idea about a topic I'm not experienced in. I expose it, and I immediately get 25 different objections.

I examine the objections, and there are 5 really good ones I had never thought about.

That is amazing: the fastest feedback loop in the world. The only problem is getting affected by the feelings people expose when they reply. But with text it's easy.

I do that in real life with mastermind groups. You have to train people to be respectful of other people's ideas (they aren't naturally). If you do, it is one of the most powerful tools in the world.


Although it gets tiring at some point, I very much appreciate all the "snipers" out there: they taught me a lot.

I often do training for my customers (cloud technologies), and they have expressed appreciation for my exposing the limitations of certain technologies and techniques.


The Recurse Center (formerly Hacker School) has among their social rules one that discourages "Well-Actually's". Quite pleasant.

> A well-actually happens when someone says something that's almost - but not entirely - correct, and you say, "well, actually…" and then give a minor correction. This is especially annoying when the correction has no bearing on the actual conversation. This doesn't mean the Recurse Center isn't about truth-seeking or that we don't care about being precise. Almost all well-actually's in our experience are about grandstanding, not truth-seeking. (Thanks to Miguel de Icaza for originally coining the term "well-actually.")

https://www.recurse.com/manual


Interestingly, this particular flaw in reasoning (dare I say "fallacy"?) is far more prevalent in the tech community. Being guided by Vulcan-like logic applied to completely neutral facts is considered not just ideal but achievable. Emotions, OTOH, are considered a weakness, as are any product considerations that cannot be expressed in SI units. Design/UI/UX are confused with "making it look pretty" in the same way that what's considered art is usually interior decorating, and any actual art is considered pointless at best and another postmodern abomination at worst.

So it's no wonder that they'll frequently believe "I found this one remote outlier" to be a valid refutation of some argument. Because, in principle, it is: If you don't caveat for black sheep, Taleb is going to smack you over the head. And if you list black sheep, I'll spray-paint a sheep red and consider myself extremely smart. Soon, we'll have names for 16.7 million colours so you'll be on the losing side of this argument forever.

The "liberal arts" have their own weaknesses. Engineering can put a man on the moon or a nuclear bomb on Japanese soil and it's pretty well obvious that someone got done with their todo list this week. That other culture doesn't lack in achievements of similar magnitude, such as the rule of law or democracy. But these are hard to measure, are achieved gradually. It's near impossible to assign individual credit and therefore harder to implement a meritocracy.

Yet because they continually live in this sort of murky dialectic discourse, the humanities are far more comfortable with complexity. The tech community, even more than the natural sciences, has a fundamental problem with getting even close to considering the best possible opposing argument. Considering how great their belief is in both the power of "rationality" and their ability to use it, it's not even funny to see how lawyers, for example, tend to wipe the floor with them. Witness any discussion of law here or among tech-twitter, where dozens of ideas will crop up about how to outsmart the law by being "clever" and owning a dictionary ("the contract said "Dollars" but didn't specify USD; I'll just pay in CAD").


I recently received an email from my father with a link to a shady article about COVID-19. It was fairly long and contained lots of extremely technical medical facts. The article was difficult to immediately disprove, because it contained so much factual, yet irrelevant information.

I was able to explain to my dad why it was logically flawed, but it was impossible to convince the author that the article should be retracted. When I identified the source of the author's factual medical information, it was clear they had simply omitted the opening statements of a research paper, which established that the hypothesis was likely not applicable to the problem at hand. When I found the researcher, they were in the awkward position of defending their research from people attacking the factual parts of the derived article.


I think I'm a great programmer because I know the difference between 100% correct and 99% correct, and refuse to call something 100% correct when it's not.

I think I'm a great senior developer because I know when to compromise.

Notice that I can do both of those things at once without conflict. It's important to understand potential problems with a solution, as well as problems with alternative solutions, and pick one that seems the best at the time.

Many people on the internet are unable to do one or the other, and also feel the need to weigh in on a conversation for their own needs.

But not everyone weighing in is wrong. When someone says "You should always X", you will inevitably get someone saying that's false because it is false, and it's important to understand when to break the rules.


Neither statement has an agreed "context" from which the discussion is taking place, and flipping the statements may make the other seem irrelevant.


Not sure if you were serious or not but your comment seems like a nice example of what the article talks about


I don't think he's agreeing or disagreeing with the article, just providing a possible explanation for this behaviour. The two people arguing are ignoring the context, which is common in many debates. Especially with politicians - shift the context a bit and you're right even if what you're saying is unrelated to the original point.


The article chose pretty good examples, they are all of the form, "generally true statement" with a "specific, or even completely unrealistic, edge case disagreement". They all require no context to understand the exchange.

Debates, especially political debates, are purely about winning the debate. Context in a debate is particularly irrelevant.


I'm not sure I agree with "edge case disagreement". TLS, for example, has shown its limits (e.g. CA hijacking) and was improved (e.g. CT). However, someone had to find those edge cases and discover the limitations of stated assertions: "2010 TLS was safe under the assumption that the CAs are not hijacked."


Ok. So, are you saying that we should stop using HTTPS and use HTTP instead? If not, what is your point? Doesn't it fall under the "100% correct but missing the point" label this article is talking about?


Well, you could use HTTP over an IPSec tunnel with a pre-shared key (obviously distributed face-to-face), and that would have been resistant to a CA being hijacked.

However, nowadays, I believe that with CT, HTTPS is really safe. But again, someone had to nitpick about the security limitations of HTTPS for CT to be invented.


Could you enlighten me as to why my comment isn't relevant to this post or how I've missed the point?

I've read the post again in the best faith that I can give, and I think I understand the point; I just thought it would be interesting to discuss the idea of "context" in conversation.

Instead of seeing the repliers as pedantic seekers of logical truth, they may just be misunderstood individuals who are talking from a different context. If we take my comment as the first comment in his examples, I am equally at fault for not communicating my context clearly.

Unlike the author, I don't find such interactions frustrating, and hopefully I've effectively explained why in this comment.


Yes, this is certainly part of the problem; the "blind men discussing the elephant". We all have a different set of experiences of programming, but not all of us have a good sense for where the boundaries of our experience are. Something you think is ubiquitous may not be.


What the author is saying sadly also applies to his own arguments.

And this is even worse:

"Dangling reference out of memory errors are rare (maybe 1 in 10 programs?) whereas regular memory bugs like use-after-free, double free, off by one errors, etc are very common (100-1000 in every program)."

In my opinion, this is a weak statement; it does not match my experience at all, and I've been programming at a professional level for about twenty years.


How have you seen dangling reference OOM situations in the real world?

Speaking for a Python/Django stack, there's a very concrete reason these are rarely seen in my experience: code never lives long enough for it to matter. In any practical situation, workers are recycled at a furious rate; uWSGI will recycle a worker for many reasons, or for no reason at all.

There are plenty of apps out there where, if you changed the settings to make the processes fairly persistent, they would OOM on a regular basis. This might be the vast majority of Django apps out there.

But that's just it: no one trusts Python enough to keep it running in the background forever. So OOM like this is a non-issue.


When code lives long enough. If your process is short-lived, you don't need any form of memory management at all: just allocate, and terminate the process when done.


Please tell me your comment is satire, yes?

That you're making the author's point for them -- that you're responding not to the point of the article, that you're 100% technically correct, but "missing the entire point" of the article? :)


What is the point of the article? To insert absolutist "truths" about a whole range of issues while disallowing anyone to disagree, lest he be called a pedant?

Calling out outrageous statements is not pedantry.


When someone makes a pretty soft general statement, pointing out small details that are on the other side doesn’t disprove them and doesn’t help anything.

I see it all the time online. Someone makes a statement, and then someone implicitly or explicitly disagrees and offers up a detail that doesn’t actually refute anything. When I get hit by these I ask “what did I say that you disagree with?”

Take the first example. The main point is GCs save time. A bad refutation is pointing out the downsides that exist. Of course there are downsides.

A good disagreement is something like “Garbage collected languages don’t end up saving time in the long run. They’re so widely used now that most projects start with them, even the ones that grow into huge applications. And since the developers aren’t used to handling memory directly, relatively rare bugs can spiral out of control and slow down a whole company.”

That’s not an argument I would make, but it’s a decent one that refutes the actual point of the statement.


I think the case would have been much stronger if the author didn't mix his argument with his opinions about GC, Python, etc.


It is more immaturity than education, though. Age makes most people appreciate diversity and act collaboratively, instead of antagonising.


I think you picked up some downvotes for saying it's age, rather than experience, that gets people to appreciate diversity and act collaboratively.

I hope that the irony of that (and my own comment), in regards to this article, is not lost on those who downvoted you.


Thanks for this, it seems a lot of discussions miss the point.

"You can achieve the exact same thing in C, you just have to be careful."

Of course you can. You can also drive at 160 km/h safely, or use an axe to chop onions in your kitchen; "you just need to be careful" (with the subtle implication of "I can be careful, can you?". Narrator: no, you can't; you aren't as careful as you think).

And the other favourite of mine "HTTPS self-signed certs are useless, you're not authenticating the origin" while forgetting about the whole encryption thing. Yeah.


We're at a point in the diversity of tooling in software that subjectiveness is a genuine reason to choose one thing over another. Nitpicking fine points of performance is a very common way to miss the point. I could totally achieve the same thing in C, I just don't want to.


>> "HTTPS self-signed certs are useless, you're not authenticating the origin" while forgetting about the whole encryption thing.

The problem is that not validating the origin can make you vulnerable to a MitM attack. If the attacker can place themselves between you and the server, the attacker could intercept the real certificate from the origin server (containing the real public key) and then instead of forwarding you the origin server's certificate, the attacker could forward their own malicious certificate to you (containing the attacker's own public key) but pretending to be the origin server. Because the attacker's certificate would be self-signed, the browser cannot automatically authenticate that the certificate and public key provided actually belong to the (also self-signed) origin server and not some random attacker.

Then when the encryption begins, there will be encryption (that's true) but the encryption will happen between you and the attacker's machine and this is why it defeats the point.
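The client-side difference can be sketched with Python's `ssl` module (no network involved; this only shows the two verification postures — a minimal sketch, not a complete TLS client):

```python
import ssl

# A browser-like client: verifies the certificate chain against trusted
# CAs and checks the hostname, so an attacker's substituted self-signed
# cert is rejected during the handshake.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname is True

# A client configured to tolerate self-signed certs: it will complete a
# handshake against *any* key pair, including the attacker's. The
# traffic is still encrypted -- possibly to the wrong party.
permissive = ssl.create_default_context()
permissive.check_hostname = False   # must be disabled before CERT_NONE
permissive.verify_mode = ssl.CERT_NONE
```

The encryption machinery is identical in both cases; only the second context lets a MitM substitute their own certificate unnoticed.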


It does not defeat the point if your attack vector is closer to your connection than to your server (e.g. an untrusted connection). And in this case you can be your own CA and verify your own certificate.

It does not defeat the point, because your regular HTTP connection can be MitM'd in the same way, so rejecting self-signed HTTPS on those grounds is moot.

So again, you're missing the point.


>> And in this case you can be your own CA and verify your own certificate

If you want encryption for yourself only when accessing your own website, sure, you can add yourself as a CA in your own browser, and then you also get the benefit of authentication... That is a very niche use case though, not a typical/general one. Also, where the attacker is located doesn't matter. If your browser doesn't have the origin server's CA in its certificate list, then the attack can also be done in close proximity to your machine (e.g. at a WiFi endpoint). So while not always incorrect, your argument is generally incorrect.

>> It does not defeat the point because your regular HTTP server can be MitM in the same way so not having self-signed HTTPS because of that is moot.

For a typical use case, you add almost no security by having a self-signed certificate compared to using plain HTTP. The only difference between an attacker who can carry out a MitM attack over HTTP and one who can do the same over self-signed HTTPS is a few additional lines of code in the attacker's software to proxy the fake certs; they can break it just as easily simply by having better software. It doesn't matter at all whether the attacker is located closer to the server (like an employee of an ISP) or to the user's machine (like near a WiFi endpoint).


The second statement isn't 100% correct because it's not 100% precise. Truth doesn't extend beyond precision. This is actually taught in school physics classes, where your answer is treated as entirely incorrect if it doesn't include an error estimate.


There are two ways we can interpret these examples [0]. The first way is to interpret them as two propositions understood in the general context. I believe this is the interpretation that the author had intended. In this case, the objection is just crappy because it fallaciously attempts to refute "usually p" with "occasionally not-p". It is a failure to reason correctly.

So what's the cause of the fallacious inference? Well, in this particular post, the author chooses to focus on the case where the objector fails to appreciate (either through inability or through malice) the general context in which the original statement is being made. If I were to offer a diagnosis for what the author is aiming at, it is a lack of prudence [1]. Prudence, of course, presupposes humility [2], which some of you have mentioned.

[0] The second way is to understand these examples in a specialized context that hasn't been articulated. Take the Python example: let's say we've been given an excerpt from a conversation that took place at a company that specializes in embedded microcontrollers. A charitable reading would actually lend credence to the objection.

[1] Prudence in the classical sense as cardinal virtue, not its modern corruption.

[2] Humility also in the classical sense, not as some corruption of modesty or lowliness.


This is genuinely one of the best posts I have ever read on here. Completely correct, very direct.

> In the real world being right is not a merit by itself

A lesson a lot of programmers could learn.


This reminds me of a Rate My Professor hall of fame nomination I read years back. I've forgotten the phrasing so the below is heavily editorialized except the punchline:

> <Professor> could present everything there is to know about convection, thermal transfer through metals, phase changes in liquids, and at the end of the day you would still have absolutely no idea how to boil a pot of water.


In the spirit of dialectic (thesis/antithesis/synthesis) it is precisely context which determines the "truth", "relevance" and "correctness" of any claim - to assume it a-priori is a framing tactic.

The premise "Garbage collectors are nice and save a lot of work." is true precisely for the set of problem-domains (read: contexts) for which GC is the solution. Outside of those problem-domains, "GC is not always nice and can create a whole lot of work." is also a true statement.

If you are arguing for any one of those two view-points without acknowledging the validity of the opposing view, then that tells us way more about your problem-domain than it does about the utility of GC.

When you are using a ruler to measure a table, you are also using the table to measure the ruler...


> If you are arguing for any one of those two view-points without acknowledging the validity of the opposing view, then that tells us way more about your problem-domain than it does about the utility of GC.

Your argument sort of reminds me of the "no free lunch" theorem in optimization: "any two optimization algorithms are equivalent when their performance is averaged across all possible problems".

It's true, but it's largely irrelevant. Not all problem domains are created equal: some are commonplace, some are obscure, and some are so convoluted as to be completely inapplicable in the real world.

I think a constructive rule of argument would be to state upfront when you happen to be standing in the "obscure" or "completely artificial" camps.


> I think a constructive rule of argument would be to state upfront when you happen to be standing in the "obscure" or "completely artificial" camps.

From my perspective, the job that feeds me and my family doesn't become any less important to me just because some troll on the internet "believes" it is "completely artificial".


I am standing in the camp of my own actual problem-domain.

The "obscurity" and "artificiality" of which is largely irrelevant to me having to tackle it.

If you are arguing a view-point outside of your own problem domain, then perhaps you should be upfront about holding a theoretical/academic view?


I assume you're back to garbage collection rather than dialectic. The key phrase I'd take away from your post is "to me".


I was addressing this point:

> I think a constructive rule of argument would be to state upfront when you happen to be standing in the "obscure" or "completely artificial" camps.

Obviously it is "to me". On whose behalf do you speak?


I think you're assuming too much good faith on the part of the person making the argument. Sometimes people do find themselves legitimately standing in the 'obscure' camp (or even, say in academia, the 'completely artificial' camp). But it's often the case that a debater adopts a stance in one of those camps without acknowledging where they're standing.

Sometimes it happens because of pedantry and sometimes because of incompetence, but it happens all too often.


All the same - "obscurity" is determined by context.

If you happen to be in the "obscure" camp (without knowing it), in which GC is a problem rather than a solution, it sucks for you to have listened to the OP.


>I think you're assuming too much good faith on the part of the person making the argument.

Naturally. I subscribe to the principle of charity and I practice it.

If you are just using me as a pawn in your argument, shame on you.


Yes, this is the ego. It's not about the information or the knowledge, but about being right and your opinion.

I'm sure we're all guilty of this sometimes, but it still makes comment sections a bit useless at times, because the discussion isn't about the subject itself.

This is why I love upcoming new technologies that everybody hates: they still have to prove themselves using rational logic, and there isn't much arrogance/ego surrounding them yet.

It's not good to think too much about yourself and how things reflect to groups of people and yourself, a lot of projection.

Anyway, iPhone rules! Android sucks. Happy Easter.

PS: I teach languages, and a lot of people get stuck worrying about looking bad. The ego is your enemy. Most good language learners just practice a lot and don't worry too much about looking bad.


When I see people do this, I just think of the scene from the big Lebowski:

Walter Sobchak: Am I wrong? (Belligerently arguing a bowling rules infraction at league night)

The Dude: No you're not wrong.

Walter Sobchak: Am I wrong?

The Dude: You're not wrong Walter. You're just an asshole.

Walter Sobchak: All right then.


I really felt this article.

Even when you take the time to outline the context, people are still eager to point out the extremely unlikely but possible situation where your statement may be false.

And then 99% of the meeting is debating how their objection is irrelevant.

So infuriating.


Well, that happens for some of the comments, but not for all of them. You just have to write your comments well. If we all just wrote our comments well, it would show the author how pointless his post is /s


Being correct is really important sometimes. The blog's last example really hit home to me. For my last job I had to process six million records a minute on a shitty embedded CPU and Python (normally my go-to) completely barfed when I got to the domain-specific logic.

While pedants on the Internet can be frustrating, we shouldn't generalise: there are many situations where we should be both precise and relevant. HN encourages curious discussion, and I think that's an apt description of the discourse I enjoy most here. Precision can generate curious conversation.


Pedantic point: a dangling reference is the same thing as a premature free. If we free(p) but then the pointer hangs around in the program and we use it, we have used a dangling pointer.

What this seems to be talking about is "semantic memory leaks": objects that are not reclaimed because they are reachable, and this reachability is unintentional.

For instance, suppose we process a stream of real time data which exhibits locality. Some items appear in it and are likely to appear again, but eventually disappear forever. We can put items into a cache to speed up processing. But if we don't expire the cache, it could grow without bound; the items that will never be seen persist in it, and new items keep being added.

That's a bug in the algorithm, not in the memory management. Memory management has nothing to say here; it has no idea about the parameters that drive the cache size or replacement policy.

Implementing cache expiry in this scenario has little or nothing to do with manual memory management. Manual memory management cannot just blow away any object it feels like based on its age or recency of access.
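A minimal sketch of that "semantic leak" and one algorithmic fix, an LRU bound (toy code with made-up keys): no memory manager, manual or GC, can make this decision for you, because only the algorithm knows which entries are still wanted.

```python
from collections import OrderedDict

class LRUCache:
    """Cache bounded by an eviction policy, not by reachability."""
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)  # mark as recently used
            return self.data[key]
        return None

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)  # expire least-recently-used

unbounded = {}               # everything stays reachable: a semantic leak
cache = LRUCache(maxsize=3)  # bounded by policy
for item in ["a", "b", "c", "d", "a", "e"]:
    unbounded[item] = item.upper()
    cache.put(item, item.upper())

print(len(unbounded))   # 5 distinct keys; grows without bound over time
print(len(cache.data))  # 3; old entries were expired by the algorithm
```

Both structures are perfectly "correct" from the memory manager's point of view; only the second encodes the knowledge that stale items should go.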


The article alludes to an interesting insight that developers tend to quickly disregard messy reality in favor of idealized theory. Developers are particularly bad when the argument necessarily involves aspects of human psychology.

There is a very big difference between what SHOULD happen in theory and what WILL happen in practice - Arguments related to human psychology should never be discarded outright.

Saying that a particular tool is not suitable because the tool is too complex and there are too many junior developers in the company can be a very strong argument.

Or saying that a tool adds complexity to the project and that this adds unknown unknowns which adds risk is also a strong argument.

Or a common one I use when discussing statically typed languages is that it's incorrect to assume that people will use types correctly; statically typed languages prevent developers from mixing invalid types based on their definitions, but they don't stop developers from using the wrong type definitions (wrong abstractions) to begin with!
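A hypothetical sketch of that last point (all names invented for illustration): the checker enforces consistency with the declared types, but it cannot tell that the declaration itself encodes the wrong concept.

```python
from typing import NewType

UserId = NewType("UserId", int)
OrderId = NewType("OrderId", int)

# The developer *meant* to take an OrderId here but declared UserId.
# Every call site that consistently passes a UserId type-checks
# cleanly, yet is wrong in the domain: the abstraction is the bug.
def fetch_order(order: UserId) -> str:
    return f"order #{order}"

print(fetch_order(UserId(42)))  # satisfies the type checker regardless
```

The type system faithfully enforces whatever model the developer wrote down; it has no opinion on whether that model matches reality.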


I think the real issue is communicating the correct point in a diplomatic way that wins people over. Lots of people make decisions on a feeling rather than on precise facts. This article reaches a bit too broadly from specific examples. There are lots of counterexamples where you need to be correct, e.g. the corona response, airplane design, etc.


I personally don’t find some of these examples problematic at all. A discussion doesn’t have to be so directed (relevant in a defined context) to be useful or interesting.

The first example might just end right there with no further discussion if the context is not expanded.

Discussions are beautiful like that, they start somewhere and can end up at a completely different place.


People are naturally competitive and often antagonistic. But what are those discussions really about? In order to decide on a suitable programming language for a project, more methodic approaches are useful, like a list of requirements with ratings for each candidate. Debates usually just let the more stubborn and irrational participant win.
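The requirements-with-ratings approach the parent suggests can be sketched as a simple weighted score. Every weight, candidate, and rating below is a made-up placeholder:

```python
# Hypothetical requirements, weighted by how much the project cares:
weights = {"ecosystem": 3, "performance": 2, "team_familiarity": 5}

# Hypothetical 1-5 ratings for each candidate language:
candidates = {
    "Python": {"ecosystem": 5, "performance": 3, "team_familiarity": 5},
    "C++":    {"ecosystem": 4, "performance": 5, "team_familiarity": 2},
}

def score(ratings):
    # Weighted sum across all requirements.
    return sum(weights[req] * ratings[req] for req in weights)

ranked = sorted(candidates, key=lambda name: score(candidates[name]),
                reverse=True)
```

The numbers are debatable, of course, but arguing about a weight is a far more tractable disagreement than a free-form language flame war.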


I was somewhat disappointed to find a 2020 time stamp on this article. It could've been written pretty much word for word 20 years ago, and it still would've been rehashing mostly resolved arguments. To be clear, I think the problem is with the people still making the tired arguments, not this article.


A discussion that started with the first statement and response, and then went on to the statements they listed under "ignores all the larger context of the issue", looks pretty decent to me.

It might turn out that the second person thinks one or more of the bullet points is false, and both sides might learn something.


I agree! I love playing devil's advocate. People tend to focus too much on "who is right" and get annoyed. For me, it's about learning. It's about challenging my and my peer's assumptions and finding what are the limits of the assertion.

The article is a good invitation to reflection, but an alternative title could be "let's discover the relevant context together".


It is much easier to play devil's advocate than to have an opinion you advocate for. And it is all too easy to use the devil's-advocate tactic to derail the original topic, to force people into lengthy tangents, or to otherwise prevent them from actually making the statement they were aiming for.

If I intend to discuss a complex enough real-world problem in good faith, especially one with unclear parameters, a devil's advocate basically makes it impossible.


I agree, but I would put it differently. It is a lot easier to advocate a conclusion with full control over the assumptions, than discovering the boundary assumptions where the opinion no longer holds.

For example, it is easy to argue that "HTTPS is more secure than HTTP". But what I am often interested in -- as a curious and skilled IT person, not an average user -- is which attack models HTTPS actually defends against. Remove assumptions one by one: when does HTTPS offer only equivalent security to HTTP?

This is not just nitpicking, but extremely relevant for our profession. TLS has come under quite a lot of fire lately, so sooner or later, someone somewhere will discover those hidden assumptions. I'd rather it happen on a public forum.


Reading this part about garbage collection:

Being able to create things on a whim and just drop them to the floor makes programmers a lot more productive than forcing them to micromanage the complete life cycle of every single resource.

I realized... It would be REALLY nice to have a garbage collector for the physical things in my house. Even some sort of tried-and-true algorithm to direct me would be nice.

EDIT: (this actually started as a sincere though off-topic comment on my part)

I have observed during this "shelter in place" that one way to be organized around the house is to be around the house (in this case 24/7).

And now I have 3 algorithms to think on. I think it might be nice to be nagged periodically with a certain request from a certain direction. Part scheduled cleaning, part LRU, part where does that live, etc.


It is possible! Unlike in the programming world, true parallel garbage collection has been mastered for centuries in the physical world. However, there is of course no free lunch. You have to pay people to come over and clean your house.

Failing that, you can "write your own" by simply allocating a block of time every day to cleaning up. Doesn't get more "tried-and-true algorithm" than that.


Imagine though (all we can do in the current situation) that the garbage collector comes in at seemingly random points during the day and, just as you're about to use the restroom, goes in and locks the door behind them to clean it.


At first I thought you were talking about your mom/spouse but then I realized it's a computer analogy :)


Even some sort of tried-and-true algorithm to direct me would be nice.

I actually heard about this recently -- you throw out everything you haven't used in the last year.


A physical garbage collector? I've actually got one. It's my dog. (Only works in the kitchen, though. Maybe it needs debugging?)


A GIGO queue? :)


There is an algorithm for this in the form of an old saying: "a place for everything, and everything in its place".


This reminds me of times I've left thorough comments on HN, only to have a single point of it debated. As if picking at that single point defeats all others. Seems to happen a lot to others I've noticed. I wonder if this has a name...


I find posts like this miss one of the most important aspects of coding: technical leadership. It is the role of whoever is providing technical leadership to drive to clarity among all the participants in the conversation. This is not easy, as the logic put forth by both sides is often slightly misaligned - hence this post and many others like it.

If nobody takes responsibility for making sure this alignment is achieved, yeah, you'll get lots of 'missing the points.'

Sometimes this is called 'management,' and if you get really good at it you can even make it to 'upper management...'


The article falls into the same error of misrepresenting opposing views.


Explain? To my mind, the article explains a problem: two opposing views can both be technically correct, but one view is often useful while the second, disagreeing opinion is just contrary.

Your comment seems to fall into the second category: just contrary disagreement.


Explain? To my mind, the article explains a problem...

The article seems to explain a problem, but what it's really doing is expressing some opinions, misrepresenting opposing opinions as strawmen, and going meta to further attack them.

The criticism of the contrary opinions is not wrong; it's technically correct. It just completely misses the point, since it doesn't respond to the strongest version of the objections, only to the weakest ones.


These are not straw man arguments. I have actually dealt with all three of these examples in my career. The Python one is particularly relevant to me, but I've also had pushback on running our API over HTTPS "because if they want to hack you they will anyway, so why pay the extra cost".


Not three, four. No idea about the Python text, and I don't know what the https fuss is about either. But the other two things are not how I've seen them discussed, and I've seen them many times.

I'm in favor of GCs and guards. What I'm against is that some languages removed the option to manage memory manually when it makes sense, or to disable guards once the code is debugged. Those were the terms, and in that context the objections make sense: there are cases where it's important to know in advance that the GC won't get in the way.

Java was the start of all that crap, with all sorts of restrictions: every function must be a method, you can't precompile, no pointers, everything GC'd...


Not the OP, but I agree with them: the appropriate response to

> Processing text files with Python is really nice and simple.

is not something silly about C, it is "It is, but the packaging system is an effing nightmare, have you tried Ruby?"


I will almost always get at least one developer making exactly the argument in the example whenever I suggest Python, so their point stands. When I propose Django, getting a Rails counter-proposal doesn't fall into the "100% correct but misses the point entirely" category; it is entirely on the nose.

So your counterpoint is ironically similar to the article's point.


>This is sadly common on Internet debates

Internet debates can be really good in very small, focused, or closed forums where people get to know each other over time. Some mailing lists and tiny groups are phenomenal.

Debates that happen between strangers at a single point in time don't meet the minimum requirements for a good debate. Without shared context, history, and commitment to the debate, it's hard to get any results even among people who try hard.


The thing with the “correct” example posts listed is that they’re not completely right. Stating an opinion and then a fact right after only makes them partially factual, and it always leaves room for more debate. “Python is trash, it can’t handle the processing of big files like other languages can” doesn’t actually prove the opinion that Python is trash.
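On the big-files claim specifically: Python file objects iterate lazily, one line at a time, so file size by itself is not the obstacle. A minimal sketch (the function name and search string are made up for illustration):

```python
def count_matching_lines(path, needle):
    """Count lines containing `needle`; memory use stays flat
    regardless of file size, because lines are read lazily."""
    count = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:          # one line in memory at a time
            if needle in line:
                count += 1
    return count
```

Whether this is fast *enough* for a given workload is a separate, measurable question, which is exactly the kind of context the article says the "correct" retorts ignore.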


Yes, sometimes it is simply a matter of scope. If you exclude or minimize some factors, you will rightfully come to a different conclusion. So you most certainly have to agree on which factors are relevant, and to what extent. Otherwise, every participant in a discussion might find a local optimum where he (rightfully) thinks that his viewpoint makes sense.


Maybe it’s just me, but it seems that all the problems cited in the article come down to people with only a hammer seeing only nails.


Are there any synonyms or similar terminology for this type of behavior? I generally know it as "talking past each other".

https://en.wikipedia.org/wiki/Talking_past_each_other


The article focuses on an issue that is very widespread on technical forums, and one that I find irritating. (It is a bit better here thanks to the good moderation, but you can still find examples among the comments for almost every article.)

To be specific, I mean comments a) that point out a flaw in an argument, often on narrow technical grounds, b) where a charitable reading of the original argument indicates that the writer would very likely have been aware of this flaw, c) that do not provide a revised version that would be correct.

To me, such comments come off as dismissive, even insulting, and because of this they almost never result in a good discussion.

Most often, such comments are also facile: It is easy to point out facts that hold in edge cases, but in software, as in every engineering discipline, any real world decision involves trade-offs. And as a practitioner, I find it very useful to learn how others have made trade offs on real world problems, exactly because edge cases are often not informative. Whoever comes forward to discuss their solution on such problems deserves praise, because they know that their decision is necessarily not perfect, and can be criticized. A good critique then amends the original statement to provide a better solution instead of dismissing it because it is not universally applicable.

This is not about pedantry (which is itself a highly subjective insult that never does any good): a good comment on a minor point can be enlightening with the right context, and when it respects the original context.

I do not think this kind of destructive behavior is currently well covered by the HN guidelines, so here is a proposal of what I would add (@dang)

"If you want to point out an incorrectness in a statement, do not assume whoever made the statement was not aware of it - this often comes off as dismissive. Instead, attempt to provide a synthesis of the original statement and your criticism that would be correct from your point of view."

Using this guideline, here is how I would rephrase the examples from the article:

- "Garbage collectors work in many cases, but I have spent many hours dealing with dangling memory reference issues, so in this and that case I recommend instead..."
- "Note that TLS does not provide sufficient protection against nation state adversaries because... If you have to deal with this threat scenario, I recommend..."
- "In performance critical scenarios, you can avoid paying the cost of bounds checking as long as you..."


On HN you can click [-] to simply hide any statement that's beside the point ;)


Anyone knows a non-technical version which conveys the same point as this article?


To the https one... If the NSA stealing your server is a legitimate business risk, I suggest launching your secured server into space, they'll have a hard time sneaking onto your satellite without anyone noticing.


They'll have an easier time sneaking onto your satellite than you will sending out field service.

;-)


Something on the periphery of this article’s thesis on pedantry that comes to mind is the notion that everything anyone states has to cite some peer-reviewed paper published in a prestigious journal; otherwise it’s useless.

“Source?” “Citation?” “Papers, please?”

Nobody is allowed to think aloud, lest they risk spreading disinformation, despite the utter and complete failure of occidental peer review systems (80% non-reproducible rate, etc.).

IMHO, this pedantic behavior also creates a Royal Society-esque gatekeeping environment that is hostile to individual a posteriori contributions, relegating them to “anecdata” status at best. That environment turns out to be also hostile to a priori knowledge creation in general.


The statements the article objects to look like informal fallacies: hasty generalization, straw man, availability bias.


The conclusion of the post gets at Vervaeke's idea of relevance as the basis of cognition:

https://ieeexplore.ieee.org/abstract/document/8132809/


"You're not wrong, Walter, you're just an asshole."


This blog post is 100% wrong. It's not about GC vs non-GC in today's universe where we have Rust that solves all these problems and more!


Ah this reminds me of a fellow at work. He's always correct, and the worst type of correct: technically correct.


Cool blog post. Let's go build some stuff.


What is the point of this blog post? It appears to be about arguments in the Internet in general, but its implied point seems to be a rant against low-level programming.


I don't think it had a focus on low-level programming. It merely used people missing the difference between the domains each approach is best suited for as an example of missing the point. Often the low-level side was cast as the one missing the point, but I don't think that was intentional.


Its arguments are perfectly correct; however, it exposes the state of mind that has led us to JavaScript "apps" eating gigabytes of RAM because they instantiate Chrome 3 times.

Yep, there's a price to pay when you're afraid of low level programming. Sadly you make us all pay that price.


That's quite a slippery slope though, from garbage collection straight to 3 instances of chrome.


I took it as an observation on internet arguments and not about the programming concepts that were presented as examples.


Must be more than one kind of programmer.

With a little discipline, the "100-1000" memory issues in 'every program' are a vast exaggeration. I write programs routinely with 0 memory issues. Just match every allocation with a deallocation, in the same module. Be responsible. As a habit, it pretty much cures the problem, in those programming paradigms where it's possible.

So, the author and I inhabit different programming paradigms. I knew that already. Because they rely on a GC and don't mind its weighty issues, I could guess what space they are in.
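The pairing discipline described above, every acquire matched with a release in the same place, can be sketched even in a GC'd language. A hedged Python illustration using a context manager (all names are invented; the log stands in for real acquire/release side effects):

```python
from contextlib import contextmanager

@contextmanager
def acquired(resource_name, log):
    # Acquisition and release live side by side in one function,
    # so they cannot drift apart as the codebase evolves.
    log.append(f"alloc {resource_name}")
    try:
        yield resource_name
    finally:
        log.append(f"free {resource_name}")  # runs even on exceptions

log = []
with acquired("buffer", log) as buf:
    log.append(f"use {buf}")
```

The `with` block plays the role of the matched `malloc`/`free` pair: release is guaranteed at scope exit rather than left to a collector's discretion.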


Yes, I already know there exist folks who depend utterly on GC, never giving a thought to important issues of scalability or speed. They write all that laggy, unperformant code requiring some huge programming paradigm so they can 'scale' to 100 nodes, because their code only supports 10-20 clients per mighty processor, instead of the 1000-10000 that straight code could eke out of a single machine.

What does my son spend his time on at the average Silicon Valley startup? (He's been at half a dozen.) Chasing down issues in the hodge-podge pile of mismatched services that somebody threw at their solution, rather than actually understanding their problem.


These two comments are a great example of statements that are (probably) correct but miss the point entirely. Thanks!


Maybe the title? But not the point of the article, which was basically a rant against an imaginary 'fool' that the author tried pinioning with their mighty intellect and specious arguments. Which were largely the same arguments (pedantic, true, missing the point) being 'lampooned'.


Your summary of the article is not consistent with its content.


Wow we read different articles.

The exaggerations, name-calling and chest-beating were the only common thread?



