Hacker News
The Silent Majority of Experts (dadgum.com)
131 points by the_mat on Jan 7, 2014 | 37 comments



But without any way of correcting for this silent majority, or any sort of predictable systematic tendency, so what? We don't know everything? There's a lot of things we don't know, and on most topics we hear from only a small fraction of people, both expert and otherwise. Why is this worth pointing out?

(The point of knowing about things like publication biases in science is that they are systematic: once you know about publication bias, you know that estimates are, on net, higher than they should be, and this is something you can apply when evaluating science that you read.)
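The systematic direction of publication bias is easy to demonstrate with a toy simulation (my own illustrative sketch, not from the article or the comment; all numbers are made up for illustration): if only "impressive" results get published, the mean of published estimates overstates the true effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2   # the real underlying effect size
NOISE = 0.5         # per-study sampling noise
N_STUDIES = 10_000

# Each "study" observes the true effect plus random noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_STUDIES)]

# Publication filter: only large, "interesting" estimates get published.
published = [e for e in estimates if e > 0.5]

print(f"mean of all studies:       {statistics.mean(estimates):.3f}")
print(f"mean of published studies: {statistics.mean(published):.3f}")
```

The published mean lands well above the true effect of 0.2, in a predictable direction — which is exactly the kind of systematic, correctable skew the comment is describing.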


>so what?

I think his target audience here is the {PHP,Java,C++,etc}-sucks crowd. Don't believe you know the global consensus on any technology just because the vocal minority (places like HN) seem to have a consensus. He even closes the article with the following:

> Your time may be better spent getting in there and trying things rather than reading about what other people think.


> He even closes the article with the following:
>
>> Your time may be better spent getting in there and trying things rather than reading about what other people think.

But collectively, those people have spent a lot more time on the technologies, in many more situations, than I have used them or am likely to use them. How am I better off ignoring them and throwing away data? How does listening to them make me worse off?

When I was looking at statistics languages, I didn't spend a year trying Stata, a year trying SAS, a year trying Julia, a year trying Matlab, a year trying Pandas+Python, a year trying R; I just looked at what people were using and blogging about and opinions on them, and picked R. How would I have been better off ignoring all of the community discussions and picking on my own? What systematic tendency causes the discussions to be literally worse than random noise?


The passage you quote doesn't say "worse than random noise", it says "worse than trying it out". I'm not sure that's true but it is a substantially weaker claim.


>But without any way of correcting for this silent majority, or any sort of predictable systematic tendency, so what? We don't know everything? There's a lot of things we don't know, and on most topics we hear from only a small fraction of people, both expert and otherwise. Why is this worth pointing out?

Because it tells you that the more vocal people are usually full of BS, and you should take notice of what the silent experts and doers-of-stuff practice. Who said there's no way of "correcting for this silent majority"?


> Because it tells you that the more vocal people are usually full of BS

How's that? All the more vocal people tell you is that they're... more vocal. Where is the evidence that the more vocal people are correlated with systematically being wrong in a predictable and correctable way?

> you should take notice of what the silent experts and doers-of-stuff practice.

And how do you do that when they are silent?

> Who said there's no way of "correcting for this silent majority"?

You still haven't given any way.


>All the more vocal people tell you is that they're... more vocal. Where is the evidence that the more vocal people are correlated with systematically being wrong in a predictable and correctable way?

I'm sorry, if you want a verifiable proof with papers and references I don't have one for you. A lot of things in life don't have such proofs.

But my empirical experience has been that tons of people with no idea what they are doing are ranting on blogs and tutorials and such, whereas hardcore programmers I know are silently getting far more impressive shit done.

Let's put it this way: how many people on the iOS, Android, or Windows Phone teams blog, versus those developing for the respective platforms?

>And how do you do that when they are silent?

Watch them work, work with them or fucking talk to them. That they don't write blog posts like superstar primadonnas and that they don't give talks at conferences and rant on HN doesn't mean that they are impossible to find or unable to speak when found.


> I'm sorry, if you want a verifiable proof with papers and references I don't have one for you. A lot of things in life don't have such proofs. But my empirical experience has been that tons of people with no idea what they are doing are ranting on blogs and tutorials and such, whereas hardcore programmers I know are silently getting far more impressive shit done.

I hope the irony here is not being lost on you.


More like self-knowledge -- those "silent" people are getting far more impressive shit done than _I_ am, too.


It's a common argument - "Lots of people use this but don't talk about it! It still matters!". Common Lispers sometimes say that (but we talk a lot. ;-) )

On the other hand, you can't make decisions without data - without discernable activity, it's reasonable to assume a community is dead.


But the non-expert bias online is systematic. Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise.

There are real, pragmatic ways to correct for this bias and get more useful information out of online forums, too. For example:

1.) Look for people who have a solid, independently-verifiable track record who are now just starting ventures that need publicity. For example, Marc Andreessen had an absolutely awesome weblog in the ~1 year prior to founding Andreessen Horowitz, but now his comments are largely limited to snarky one-liners and occasionally insightful one-paragraphers. Why? There's no incentive for him to spread his knowledge around the general public; his firm already has enough of a reputation to draw the top potential founders. The entrepreneurs he funds get his advice, but everybody else has to make do with occasional soundbites.

2.) Look for people who post only brief, offhand comments, but then follow up on those comments and do the research yourself. Many "silent experts" may have time in between compile breaks to throw in a throwaway comment or correction, but they don't have time to write a long missive. However, if you follow up yourself and do a bit of Googling, you can take their clues and learn what they were talking about. This is how I found out about Haskell, it's how I found out about writing scalable event-driven servers, and it's how I found out about writing multi-language systems where a scripting language is embedded inside a larger program.

3.) Look for people who can see & acknowledge both sides of an issue. Practical experience teaches you about trade-offs, it teaches you about alternatives, and it teaches you that there are often multiple solutions and oftentimes you need to give up some desirable properties to get others. Blog posts by Internet Fanboyz teach you that there is One True Way Of Doing Everything that will solve all your problems, because that is the only way they've ever encountered.

4.) Similarly, look for people who stay out of flamewars. Folks with real jobs who care about their craft don't have time for that shit, because becoming an expert takes a lot of time. So the folks who do have time for that shit are generally either folks without jobs or folks who blow off their jobs to score points on the Internet.

5.) And perhaps most effectively - work directly with an expert. Start contributing to open-source and understand why the maintainers make the choices they do. Take a job at a well-respected company. Work with the gruff neckbeard at your employer. When experienced programmers have to clean up the messes you make, they have a very strong vested interest in not letting you make any messes.


"Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise."

Anecdotally, this is absolutely wrong. I can think of almost no one who participates online who isn't better than most people who don't participate online.

Maybe it's an issue of averages, and at the edges this is true - that the average "superstar" spends less time online, but that the average "OK" person does spend time online.

But I can certainly say, the silent majority of programmers, the ones who don't take part in anything except just focusing on their work, are almost always worse. I've seen this time and time and time again.


Availability heuristic.

I think your perspective may be warped by two things: not having worked with excellent engineers in person, and only considering relatively famous devs in your set of people who "participate online" - e.g. you're excluding the denizens of the last 80% of pages on StackOverflow.

One of the most capable engineers I've ever worked with is a guy called Yooichi Tagawa. The guy has an incredible appetite for complexity, as well as for spooling up on old codebases and new technologies. But you'll find very little by him online, both because he's Japanese and doesn't use English often, and because he's squirrelled away inside Embarcadero, working on the Delphi compiler as he's been doing for the past 15 years or so.


"Availability heuristic."

It's certainly possible. And there are definitely a great many devs who don't participate online but are brilliant, don't get me wrong.

But I still think the average person who does participate online is better than the average person who doesn't.

In retrospect, what bothered me was the implication that participating online is detrimental, when I think it's just the opposite, if for no other reason than that people who do participate online tend to care more about programming than people who don't, and are therefore better.

Also, availability cuts both ways. Don't think about the amazing devs you've worked with, think of the averages. Trust me, I've worked with people of wildly, wildly varying ability, I know what's out there. Both on the amazing side, and on the "can't code worth a damn" side.


The problem is that you're missing a huge swath of deep (as in, learned over decades of practical work) knowledge that exists within the community of engineers -- not just programmers, but all types of engineers -- in the 40+ age range, who have been working and learning and applying in industry since before the social/online boom, and may well be uncomfortable participating in the same way as the current generation tends to.

Generally speaking, and I know this is an unfair generalization, that generation tended to have an approach to problems of "let's reduce this to its core pieces and see what we know, then experiment until we understand it well enough to solve the macro-problem." In other words, the scientific method. My experience with folks who've entered the market in the past 5-10 years is that, when they first receive a problem, their inclination is to search the web for possible solutions and then hack other people's code to address the 80% easy part of the issue, while often completely ignoring or potentially just misunderstanding the more difficult 20%.

This kind of behavior is one of the reasons company culture is critical, as is employing senior, experienced engineers & technical managers who have at least a partial focus on engendering critical thinking and problem-solving skills in their younger teammates.

Point of reference: I manage a team of about 100 programmers, analysts and DBAs in India, Mexico, Brazil, Scotland, and the US.


I think I'd probably agree that the average person who participates online is more skilled than the average person who doesn't. Participating online exposes you to other perspectives and lets you test your experiences against others, after all.

And I wouldn't say that participating online is detrimental. However, I stand by my contention that participating online is not the most effective use of your time if you want to become a better programmer. Programming is. It is a more effective use of time than, say, watching TV (which a fair number of average devs would do), but the density of what you learn from grappling with a real problem and trying to solve it with software far exceeds the information density available in blogs and Hacker News comments.


I can think of such people. In fact, I don't think any of the really excellent people I've worked with blog, or spend much time in forums, or use Twitter for technical stuff. Some are involved with local meetups. The rest are just getting on with things.

On the other hand, I routinely come across blog posts or HN comments by people who clearly have much less expertise than them.


> Time spent participating online is time not spent honing your craft, and so there should be a strong inverse correlation between posting frequency and expertise.

I disagree. Many people (including myself) have observed that qui docet discit (he who teaches, learns), and there is little better way to teach yourself than to discuss and debate and work with other people. This eliminates your claimed strong inverse correlation, and the rest of your suggestions simply become ways of finding additional information, not corrections for any systematic bias in available experts.


> there is little better way to teach yourself than to discuss and debate and work with other people

Not the OP. But I know a decent number of people of the sort he mentions. These people do actually discuss and debate stuff with other people, but often in person or in rather specialized forums. They don't have a solid presence online and almost never show up in general forums.


We see this in the diving community too, there are a bunch of forums where people have prolific arguments, usually about which agency or brand is best, stuff that has been done to death and barely matters anyway. Whereas if you get off the Internet and go to dive sites you will find a ton of friendly people who have a thousand dives and are happy to share real knowledge, who never bother to go on the forums.


One place in which this problem was particularly apparent was (is? I haven't kept up with it for a while) the PHP community. PHP gets a bad rap as a terrible language, but a large part of this was the ease of beginning the learning process with PHP: download [X|W|M]AMPP, spin up an Apache instance, and just dive in. What this led to was a lot of sub-par instruction material from people who didn't have a solid grounding in design principles, or "tutorials" created by people who had little experience.

Despite this, there were clearly experts who could "finely craft" PHP applications. Although Facebook may not be the best example, it is the first one that comes to mind.


I've worked on huge web applications that were super maintainable and fun to work on, solving pretty interesting problems for multi-billion dollar companies... In PHP. I sometimes want to defend the language, but it does have some terrible parts to it, and a lot of the people that use it aren't software engineers. But a lot of them are, and so I just choose to work with them instead of complaining about it all!

On top of that, the latest developments in the language and community have seen some big changes for the better around tooling, best practices, and the like. So I'm firmly of the belief that with the RFC, PSR and Composer/Packagist trio, as well as things like HHVM, we will see things begin to change for the better on this front in the future. Call me Mr Optimistic ;)


There's some sort of point about experts in general which I am not sure exactly how to articulate but I think is relevant. I'm going to try to klutz through it anyway.

There are lots of fields where fundamental theories are relatively weak or have poor predictive powers. Macroeconomics, climatology, nutrition etc. Basically, we don't have real Knowledge. We have a bunch of data and a bunch of theories. Some of the theories that aren't very general or aren't very applicable to real life scenarios are predictive but relatively useless. We know that certain nutrients have some importance. We know that restricting caloric intake leads to weight loss. We know that money supply, inflation and other things are linked together in various ways. The theories don't answer the questions we want them to with any kind of certainty. Still, we sometimes need to make decisions and some knowledge is better than none, so we go and find experts anyway. There are people who are experts. They're experts in the study and they are aware of our knowledge in the field such as it is. But they don't have real answers because there just aren't useful answers to be had at this point. All of medicine was like this until pretty recently.

When Darwin published "The Origin of Species", evolutionary biology came into being as a different kind of field. One where the knowledge was real and the theory predictive. The theory was fundamental and strong. Darwin could make claims & predictions with a lot of confidence. Subsequent biologists could keep making predictions, and when new discoveries (like genetics) were made, they were found to be consistent with the predictions of evolutionary biology. In fact, if they hadn't been, a careful researcher would probably assume that the mistake was in their own conclusions, not in Darwin's. So if Chimpanzees are closer to Humans than to Gorillas, we share common ancestors not shared with Gorillas, and the distinction between Humans and Apes (if we want to keep Apes as a category) is morphological (which is allowed by Darwin) rather than one of proximity on a family tree.

Darwin was careful. He didn't publish until he was sure. If he hadn't been sure, he wouldn't have published. There are lots of Darwins in every one of the former type of field. They haven't discovered real Knowledge that can tell them what to do in an economic recession or what people should eat, so they keep their mouths shut and keep looking. They are still experts, but their expert opinion is "I dunno." That doesn't register as expertise, so we go on to find someone who will explain about aggregate demand, anti-inflammatory diets, carbohydrates or something like that.

Development methodologies, executive compensation, distributed companies etc. are in the category of things that we don't have real "scientific" knowledge about. Most business-ey knowledge is like that.

Now it sounds like I'm bashing people who talk about this stuff and I don't mean to be (hence my disclaimer at the start). I'm just pointing out that our knowledge in different fields is different. Joel Spolsky is very insightful in his essays about the software industry, for example. But there are certain people who are comfortable with anecdote and generalities and assertions that may turn out to be untrue. There are certain people (like Darwin) who are not. If we're talking about development methodologies, the people we hear from are self-selected. They are comfortable making grand statements, manifestos and such, even though they may be wrong.

That's still useful and certainly interesting, but there is a big category of people we aren't hearing from and they are relevant to the discussion.


This comment is sufficiently interesting that I want to grab a coffee with you and talk about it IRL, but you don't have any contact information in your profile so I don't know where in the world you are.

The next time you roll through Austin make sure to look me up.


Dublin, Ireland. If you're over digging up ancestors, give me a shout.

And thanks for the compliment :)


I think you are referring (at least partially) to something called the Dunning-Kruger effect[1], where people who know less about something tend to express more certainty about what they know (and thus tend to get more attention as well) than people who actually know more about it.

[1] http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect


I think the grandparent poster is referring to something subtly different. It's not that people making the bold pronouncements are incompetent, it's that they may be competent but are not quite as competent as other folks who stay silent, and they're okay with that. Oftentimes an "expert" knows more than the person they are explaining things to - it's just that they may know less than someone else who doesn't bother to explain things.

Somewhat relatedly - there's another phenomenon where people may be quite aware of the limits of their knowledge, but deliberately choose to hide it to achieve some objective (usually personal gain, but it could also be in service of some organization or mission), because they know that their audience is more likely to believe confident people. You pretty much have to do this to found a startup, because all startups are inherently risky and uncertain, and yet few people will follow you if you seem uncertain.


I think risk is extremely pertinent to this situation.

The (relatively) silent expert, compared to those expressing and pronouncing more audibly, is also considering the outcomes of releasing information that is highly likely to be correct, and what may come of that information becoming public.

They also take into account the scenario where they may be wrong or only partly correct, and if so, what long-term problems may result from being wrong (or even from being right - take the heliocentric model for example). Therefore they may deem the risk too high.

We're still finding out what a search engine monetised via advertising does to the world.


Aristotle said that the rhetorician is a rhetorician not because he absolutely can convince someone of something, but because he is aware of all the means of persuasion.

Likewise, there's no guarantee that a doctor can heal /you/. But you go to the doctor because he is aware of the various means of achieving health. This is different from evolutionary biology, where firm predictions can be relied on.

The same goes for lawyers; they can't guarantee a victory, but they are aware of all the methods in the courtroom. These are professions and fields of study whose variables are humans. I agree with the parent commenter that the distinction is worth investigating.


That's why I tried to stop caring about the [current] web and embraced the future by reading the mailing lists or IRC channels of actual projects. Going close to where things actually happen, raw.


It's true. I'd label it "indirect knowledge hoarding." But the excuses/reasons are usually that most people are busy or don't feel they'd make good teachers.

There has to be an incentive / prioritization by the person to do something about it. Also, some people don't even think they're experienced enough.

Maybe encourage useful knowledge transfer via teaching, or at least via reviewing teaching material.


The point that resonated the most with me here was that people need to be given big opportunities to fail without prejudgement, otherwise they'll never build their skillset significantly. Obviously nobody is ever hired in a vacuum, but the takeaway for me was:

remember to judge others on their past behaviors, and nothing else.


This seems wrong to me, and fairly obviously wrong.

If you take a group X, a certain percentage Y hang out in newsgroups and blog. He himself was part of that Y for Forth.

Why would the percentage Y be significantly different for any given language?

The number of bloggers/newsgroup users is like a survey, it's not complete, but it's a very good indicator of how popular a language is. The popularity of a language increases its ease of use by a considerable factor (libraries, help, documentation, etc.).

Hence Forth being a language no one uses today, which actually contradicts his point rather than illustrating it as he seems to think. Elizabeth Rather was right: there were people doing interesting things with Forth, but there weren't many.

And for most of us, that's an important thing to consider when using any tool.

That there is a huge number of bloggers looking down their noses at Perl & C++ simply shows there's something wrong with both those languages. It doesn't mean that they're not still useful, though, nor does it mean something else has solved the problem, at least not until a lot of 'I used Go/Rust/Haskell/Whatever to make a non-trivial program' posts appear.


> If you take a group X, a certain percentage Y hang out in newsgroups and blog. He himself was part of that Y for Forth. Why would the percentage Y be significantly different for any given language?

There are lots of reasons I can imagine: people trying out newer languages have more reason to talk publicly about what they're doing, since they usually have a vested interest in the growth of that language. Organizations with tight rules on secrecy are often risk-averse in general and less likely to try out new languages. So new languages wind up with users that are more open, and they appear even more active. (I don't know if this is true -- it's just a guess. But your assertion that the percentages would likely be uniform across languages feels unlikely to me.)

I'm not sure the argument is that the percentage who hang out in newsgroups and blog is different, anyway, but rather that looking at newsgroup activity and blogs specifically selects out many of the actual experts. This has absolutely been my experience, and one of the most damaging assumptions I see is that surveying blogs and twitter feeds is enough to get a complete understanding of how people use a piece of software and what they think of it.


I don't find this true at all. Lisp, for example, has tons of people blogging about it for the size of its user base. In other areas, like firmware and DSP, almost no experts are blogging. There are hobbyists, and that's about it. Or maybe they hang out in completely different circles.


As the adage says, empty vessels make the most noise.

Or: Those who can, do. Those who can't, talk about it.


Everything popular is wrong.



