What you know matters more than what you do (calnewport.com)
138 points by ridruejo on June 13, 2012 | 34 comments



I think this point is definitely true for an academic environment, which Cal is in.

For the average workplace, the ultimate determinant of your success is what other people believe you have done and can do. This is certainly correlated to what you've done, what you can do, and your ability to promote yourself.

In my personal experience, what you choose to do is more important than what you know. I've mentioned a few times that I saved previous employer X $2MM/year, and I did this by finding several problems that seemed like they would be possible to automate, then learning how to automate them, not the other way around.


In my experience, the "learning how to automate them" step is the analogue of the "technique"-learning being discussed in the article.

E.g., to automate a health care provider's intake and records system well, you really do have to learn a surprising amount about how those people currently do their jobs. If you understand the job of the intake nurse, the needs of the physicians' assistants, the way records are processed for insurance reasons, the way doctors search and browse, etc., you can save vastly more time and money (to say nothing of improving accuracy and reducing redundancy) than you would with a straightforward records-digitization process.
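
To make that concrete, here is a hypothetical sketch (my illustration, not the commenter's system) of what that domain knowledge buys you: if you know that insurance processing needs procedure codes and that doctors search by chief complaint, you model those as structured fields instead of digitizing a paper form as one opaque blob. All names below are invented.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class IntakeRecord:
        patient_id: str
        visit_date: date
        chief_complaint: str                    # recorded by the intake nurse; doctors search on it
        vitals: dict = field(default_factory=dict)           # nurse's measurements
        procedure_codes: list = field(default_factory=list)  # what insurance processing needs

    def search_by_complaint(records, term):
        """The kind of query a doctor actually runs; impossible on scanned paper."""
        return [r for r in records if term.lower() in r.chief_complaint.lower()]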


Interesting that the title of the blog post is "What you know matters more than what you do". I would say this thesis is incomplete. The author talks about how he and his colleagues were trying to figure out the secret sauce of a high-achieving researcher.

FTA:

I hypothesize two things. First, ultra-learning is difficult but it can be cultivated. Second, it might be one of the most important skills for consistently generating impact. Those who are able and willing to continually master hard new knowledge and techniques are playing on a different field than those who are wary of anything that can’t be picked up from a blog post

I would possibly agree with his hypothesis at the end of the post, but not the title. I think it's likely that this person is a high achiever not just because he is learning hard topics quickly and thoroughly. He is also applying the requisite work and determination required to be a success.

So IMO what you do has to be at least as important as the things you know, because that is the only basis we have for judging your success.


I would tweak his hypothesis:

1. It is now easier than ever to achieve "ultra-learning" because the vast majority of knowledge is available everywhere for free online. The "learn to code the hard way" crowd will disagree, but I would say working knowledge of almost anything can be gained in between a week and a month, depending on how specialized the topic is (e.g., creating websites: a week; creating a kernel: a month). You will NOT be an expert at either, but you will be someone who can imitate. Imitation and application of existing knowledge are extremely quick once you learn how to learn. I would say the researcher from the article and Steve Jobs are masters of the "working knowledge and apply" school, but the low barrier to entry explains why every day we all read about the "LinkedIn for this" or "Instagram for that."

2. I would define "deep knowledge" as knowledge that doesn't already exist or isn't already accessible. This is true innovation or discovery. This is the internet, nuclear fission, DNA sequencing. This kind of knowledge is rare because it takes time, persistence, and capability. One has to gain working knowledge and keep digging until hitting the bottom of all knowledge about the topic... and then break out a drill bit and push on, or go to the tallest building in the world and add a story on top.

Unfortunately, our society rewards the first category far more than the second.


See my other comments in the thread to find out about me.

1. This is true. Weeks to a month is usually enough to learn enough to be roughly aware of the kinds of problems you can solve with a tool. Finding the right and interesting problem is not quite as easy, though. Usually after I know enough to have a feeling for the kinds of problems, I stop learning the new tool. It's not interesting to learn more just for the sake of learning. Once I find a good problem (or problems), I put months or years into learning the tool and solving problems with it. One of the reasons I moved away from medicine is that you can't learn while solving problems. You have to learn a lot upfront before you are allowed into the "field". This is really justified, but I lost interest.

2. This is not always the case. There are fields (medicine, for example) where true innovation is not always the result of "deep knowledge": antibiotics, vaccines, etc. In those cases true innovation happened through trying to solve a problem (and quite accidentally), not because someone with really deep knowledge predicted these phenomena.

I wouldn't try to judge which way is "better". And I wouldn't use the word "unfortunately" for these. We probably very much need both in order to progress.


Just riffing on the title, which some below say is a horrible representation of the article: "who you know is more important than what you know."

As an old-timer in the IT industry, I'd simply like to share that the connections you make with people are much more important than what you know or what you do for your advancement within almost any company. Now, you can't usually be totally incompetent, but the stronger your connections are with people above and around you in your organization, the more allowance you are given for mistakes etc.

What am I really trying to say? Get to know people. Don't stay completely head down in your cube knocking out the best code. Get up every once in a while and build relationships with the people around you.


And it's often easier than it sounds. I observed a few popular people and decided to make mental notes of what they talked about, and I was surprised it was mostly trivial nonsense (who won the game, the car chase on the news last night, etc.). It doesn't matter; people appreciate that you are just talking to them.


My take on productivity (it is about productivity after all, right?) is that what matters is: a) what you choose to do, and b) how well you do it. The second is affected by the domain knowledge you have. Most people don't leave their domain for their whole working life (they might have secondary domains because of hobbies), and should become experts no matter what (and therefore do well on point b)). Multi-domain geniuses like Goethe, Da Vinci (, Jobs, Feynman?) have to constantly learn hard to gain enough knowledge in many domains to be successful there. Also, I am convinced that such people can do better in a domain they have mastered because of the cross-domain analogies they can apply.

Still, what you choose to do (point a)) is just as important for productivity. I can be a multi-domain genius scientist and still choose to spend all my time recapitulating the achievements of other people, instead of developing a new physics model. Or I can spend years making one app completely bug-free instead of writing 10 new ones.

UPDATE: One thing I am really wondering about is: how deep does the knowledge within one domain reach for such personalities? Did Jobs know everything about OSes, or just as much as he needed? My hypothesis is that they all learn just as much as they need to achieve the goal at hand NOW (not hoarding knowledge for the sake of it).


> Multi-domain geniuses like Goethe, Da Vinci (, Jobs, Feynman?)

I'm sorry but I just can't let this go without comment.

Steve Jobs was not a "multi-domain genius." He had a single, very specific genius, namely gathering lots and lots of ideas and picking the very best one. (IMO the fact that he was an asshole is probably related but not essential.) His entire career was applying this single ability, which he really was a genius of, to multiple domains. He wasn't a genius at anything about those domains per se, just at what he did once he was working in them. I mean, the fact that he had any technical knowledge at all about operating systems or computer graphics already put him in the top tier of "business guys" but that's not the same thing as a deep understanding.

For that matter, let's keep going on your list. Richard Feynman was a smart guy, but all his important work (except superfluidity in helium, I guess) came from exploring a single theme, namely how to put quantum mechanics in a Lagrangian rather than a Hamiltonian formulation. He told the story once of the event that sparked his first interest in physics as a career: his high school physics teacher taking him aside and explaining the principle of least action (the basis of the Lagrangian formulation of classical mechanics). He basically organized the rest of his career around understanding this principle very, very deeply. The reason he's famous, in addition to being smart, is that he found a new domain to apply it to, namely quantum mechanics.

My point is that while mastering a domain of application was important for these people, probably even crucial, it wasn't really the engine of their success. They first developed a single talent or interest, and then looked for a domain to apply it to.


"IMO the fact that he was an asshole is probably related but not essential."

I disagree. He did not only gather ideas; he made sure that they were good ones, too. Doing that requires probing deep, and probing deep hurts. I think he also worked on his efficiency by dismissing people who didn't have high ratios of good ideas, and he wasn't afraid to ditch a good or even an excellent idea for a better one. That must have hurt those who had those good ideas.

Can one do that without coming across as an asshole? Probably, but not efficiently.

That does not mean he was an all-out asshole, though. For example, from the little I read, Jobs cared for his family. Because of that, I would call him ruthless, not an asshole.

It is a bit like being a world-class athlete. They will do whatever is necessary to improve their sporting ability. You simply cannot become the best in the world by thinking "that is her only chance at the Olympics. Let her go; my chance will come", or without sacrificing something or someone in your social life. Does that make them assholes? IMO: no.


Yeah, I agree with you that it's questionable to apply that label to Jobs and Feynman - Goethe and Da Vinci deserve it much more (it's a bit unfair, though, because humanity's knowledge was a joke at that time compared to today's). I included them with a question mark because the article mentions both of them as being able to dive into multiple domains very quickly.


For perspective, here is my story: http://news.ycombinator.com/item?id=4106825

I am no "genius", but I am multi-domain for sure. Read the above link. As I already explained there... the depth is quite like "master" versus "grand-master", at least with me. And I don't know for sure, but it's possible that I'll become a grand-master in one of my domains at some point (I am still young ;)).

I will try to make a comparison with Da Vinci. He was probably a "master" of medicine (anatomy), and I am sure that really helped him become a "grand-master" of painting. In my case I am a master of medicine, and especially things like genetics, immunology and physiology. Believe it or not, these things have a whole lot in common with programming, at least regarding the way of thinking - especially with how Erlang programs are structured (the actor model). Today, Erlang is the programming language I consider my best. I hope that one day I might become a grand-master at programming. Time will tell.
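
For readers who haven't met the actor model, here is a rough sketch of its shape - in Python rather than Erlang only because it's more widely readable here, and the "cell" framing is just my gloss on the analogy above. Each actor keeps private state and reacts only to messages in its mailbox.

    import threading, queue, time

    class CellActor:
        """A toy 'cell': isolated state, influenced only via messages."""
        def __init__(self, name):
            self.name = name
            self.mailbox = queue.Queue()
            self.signals_seen = 0                # private state, never shared
            threading.Thread(target=self._loop, daemon=True).start()

        def send(self, message):
            self.mailbox.put(message)            # the only way to reach the actor

        def _loop(self):
            while True:
                message = self.mailbox.get()     # handle one message at a time
                self.signals_seen += 1
                print(self.name, "received", message, "| total:", self.signals_seen)

    cell = CellActor("t-cell")
    cell.send("cytokine")                        # fire-and-forget, like Erlang's !
    time.sleep(0.1)                              # let the daemon thread run before exit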

And there is another "secret" if you want to go this way. Try to pick relatively "obscure" domains. Being merely a "master" of Erlang might not be a drawback for my career, but things are probably not the same with Java (just a wild guess).


I agree with your hypothesis that they learn just enough to accomplish x. That has always been how I do anything. I didn't learn how to write websites that interact with a db; I had an idea that I wanted to accomplish and I learnt enough to do it. This has helped me learn how to tie into a db, create interactive services, and perform FFTs on data from thousands of devices to identify bad cables based on their pre-equalization data.
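
For the curious, a minimal sketch of what that last trick might look like (my reconstruction, not the commenter's actual pipeline; the threshold and data format are invented): take each device's pre-equalization taps, FFT them into a frequency response, and flag devices with excessive in-band ripple.

    import numpy as np

    RIPPLE_THRESHOLD_DB = 3.0  # hypothetical cutoff for flagging a cable

    def in_band_ripple_db(taps):
        """Peak-to-peak magnitude ripple of the equalizer's frequency response."""
        response = np.fft.fft(taps, n=256)             # frequency response of the taps
        magnitude_db = 20 * np.log10(np.abs(response) + 1e-12)
        return magnitude_db.max() - magnitude_db.min()

    def find_bad_cables(devices):
        """devices: dict of device_id -> array of complex pre-equalization taps."""
        return [dev for dev, taps in devices.items()
                if in_band_ripple_db(taps) > RIPPLE_THRESHOLD_DB]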


The commenters here seem to have missed what I thought was Cal's point: What you know matters more than what you do, because what you can do is limited by what you know. Learning new techniques is important because it expands the arena of what you can do.


I think that is a given: Once you've done something, you intrinsically know how to do it.

More interestingly, to what degree does our knowledge play on our ability to see new ideas? Do our brains ignore the seemingly impossible because it seems impossible? Could someone have imagined Facebook before computers and the internet were invented, for example? Do the ideas come first, and then one sets out to figure out how to make them work, as seems to happen in science fiction? Or are those ideas already based on working knowledge of what is reasonably possible with not-so-far-off technology?


The entire point seems to be "you must know how to do something before you do it". Academics seem to revel in stating the obvious in a formal tone with big words.


That's really a chicken-egg problem, no? We learn by doing (as children), and knowledge is better stored in our brains if it's supplemented with "doing".


> According to my colleagues, this star researcher tends to begin with techniques, not problems. He first masters a technique that seems promising (and when I say “master,” I mean it — he really goes deep in building his understanding). He then uses this new technique to seek out problems that were once hard but now yield easily. He’s restless in this quest, often mastering several new techniques each year.

The Feynman algorithm for looking like a genius!


I am also a computer science assistant professor like Cal Newport so I can relate to what he is trying to say here.

An academic paper in computer science usually consists of applying a well-defined "technique" to a well-defined "problem": for example, applying "Support Vector Machines" to "Text Classification", applying "Expert Systems" to "Medical Diagnosis", or applying "Particle Filters" to "Robot Localization".
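
For readers outside the field, a minimal sketch of the first pairing using scikit-learn (my toy example, not from the article; the corpus and labels are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    docs = ["the patient shows acute symptoms",     # two-document toy corpus
            "the robot turned left at the wall"]
    labels = ["medical", "robotics"]

    # TF-IDF features fed into a linear SVM: the classic text-classification recipe.
    clf = make_pipeline(TfidfVectorizer(), LinearSVC())
    clf.fit(docs, labels)
    print(clf.predict(["sensor readings from the robot"]))   # -> ['robotics']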

The crux of what Newport is referring to is that in practice, many (most?) academic researchers don't deeply learn new techniques that are outside of their immediate research agenda after leaving grad school. They will certainly be AWARE of new techniques and might learn their high-level ideas, but they won't really learn them deeply enough to be able to improve them or use them in a non-trivial way. It is much, much easier to continue exploiting and building upon the techniques they have already mastered than to invest several painful months (years?) in mastering completely new techniques.

So what Newport is suggesting is that you should stop applying your current methods that are easy and very productive for you, and make a substantial investment of time and energy to master the latest techniques - even if you don't exactly know how you're going to apply them.

A programming analogy: if you're a C++ programmer, stop doing C++ projects and spend at least 3 months until you're an expert in Python. If you're a Python programmer, stop writing Python and invest several months in becoming an expert in Go. (The analogy is not perfect because mastering a new language is probably a bit easier and more fun than the kind of things Newport is talking about.)

Here are a couple of thoughts I had on this:

(1) How applicable is this idea outside of academic research? For example, in academia there is a big reward for being the FIRST person to apply a given technique to a certain problem, but outside of research being the 2nd or 3rd person to do something can be just fine. (See: Friendster, MySpace, Facebook). So maybe you can afford to wait until someone has shown a great application of a new technique and only then jump in and try to exploit it.

(2) An opposing but also convincing idea is that it is better to focus your efforts in one area to avoid spreading yourself too thin. Such focus allows you to gain "comparative advantage" and to easily do things that are difficult for people who don't have your deep experience. In other words, it is better to spend your efforts trying to become the world's greatest Python hacker than to jump on the bandwagon of every new programming language that comes out and end up as a "Jack of all trades, master of none."

My conclusion: I do think it is worthwhile to challenge yourself not to just stick to what you already know (which in my personal experience is VERY easy to do especially if you find that you're very productive using what you know). But you also have to be selective. Life is too short to try to be a master of everything. And there is great value in gaining a very deep expertise in a particular topic or technique.


Switching languages every 3 months does not sound like a way for most mortals to get a lot of concrete things done, although perhaps you will have a payoff of knowledge after several years, assuming you do not already know several languages.

Most mortals will end up with some half-baked newbie projects and a huge list of languages they have used, then apply for jobs and find that any given job is really only interested in one or at most two blub languages.

Indeed, academia rewards this kind of technique focus, in large part because academia is not so much about truth as about making a name with sexy stuff, connections, writing big grants, etc.


I do not agree with you. See my comment (sibling to yours).


By your account, you took 5-6 years to learn CouchDB, XMPP and Erlang. That is a very far cry from learning a new programming language every 3 months.


And also Python, Riak, Redis. And these are only the tools that I used with great success. Others are PHP, MySQL (and other *SQL), Java, C#. Of course, once I found out that they were "no good" for finding new exciting problems to solve, I dropped them, so I didn't become a "master" of them - with the exception of MySQL, which I think I am a "master" of.

EDIT: also, I never claimed that 3 months is the magic number. It was a (wild) guess by someone else.

Second edit: Also, for quite some time I have been studying C, Haskell, Lisp and, to a lesser extent, some other stuff. I am already quite familiar with the type of problems they shine at. Once such a problem emerges, I will readily jump on it and intensify my learning of the tool in question.


It is good to know multiple languages, but I think at some point you are only marginally making your job easier by learning yet another. The article, at least as I read it, speaks to learning something new that makes solving a problem that was previously out of reach significantly easier.

Maybe a different programming analogy would be that you know C++, and then you learn some electronics and realize that you can now build a small wristwatch computer using a basic microcontroller – something that would have been impossible with the full-fledged PCs you are used to programming on.


I can confirm that this works for a programmer. I didn't even know that I function like this, before reading the article. I will go even further, and state that it works for everything.

And it's very often better to be a "master" of several trades, than a "grand-master" of one. Of course, if you are the grand-master of a trade, things are completely different, as we all know.

I was once a medical student. But before that, I attended not a high school specializing in natural sciences but an "English Language School" (I'm from Bulgaria, Eastern Europe). So knowing English made me a better medical student. One day I discovered the general-purpose computer, and I became obsessed with it. Being good at computers made me even more outstanding amongst my colleagues. I learned Linux, server administration, and network administration, and I was a god.

I decided to switch careers (computers are so much fun), and I became a programmer. Lacking the education, I started at the lowest possible position - a junior Visual Basic programmer in a company close to the end of the world :). And guess what, I was WAY better than my colleagues. I had almost no experience, but I was already a scientist. You won't believe me, but there are so many programmers out there who are not scientists. And I knew Linux, I knew networking. I was learning Python in my spare time, and I started discovering problems that were so easy to solve with Python. I was already a god at that company from day one. Not to mention that I was already learning "web": HTML, CSS, JavaScript, web servers, reverse proxies, "http accelerators", databases, etc.

I was very fast to land a Python web programming job after a few months of "experience". And guess what? I was already learning "NoSQL" databases in my spare time. I knew how to solve problems with CouchDB. We started implementing stuff in CouchDB. I also became obsessed with XMPP, and started using it to solve more problems. That's how I found Erlang (ejabberd).

I learned Erlang, and I was now able to solve even more interesting problems. The new job offer followed...

Now I write Erlang services (and I utilize XMPP), and I find new problems to solve with my new tools - Redis, Riak, riak_core. And I am exploring more tools. I wonder what will be the next big thing. Haskell? Ocaml? Lisp? C? (yes, I have yet to utilize the power of C). But I am certain it will follow. I can't help being myself.

During all this time (the roughly 5-6 years I have been programming), I never really became a "grand-master", but knowing all these tools is really a new experience of its own.

The single tool that I left behind is Visual Basic (I never really liked it anyway). All the others are still of great use to me - even medicine. It was the best one to teach me how to deal with complex systems.


Could you elaborate on what you mean when you say you are a scientist and that there are many programmers who are not scientists?


> Could you elaborate on what you mean when you say you are a scientist and that there are many programmers who are not scientists?

I presume he's talking about people who probably shouldn't be programming (those that routinely write code the likes of which shows up on thedailywtf.com; you know, stuff that assigns variables multiple times in a row). Apart from that, it's also telling that even most "computer scientists" don't know much outside of programming, even in the domains they work in. I count myself lucky that I get to work with people who have PhDs in basic sciences on a daily basis; just keeping up in work discussions requires delving into some really interesting science papers and books. Granted, I couldn't write you Dijkstra's algorithm right away, but that's why I keep the CLR book around.
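
(In the spirit of the thread, the heap-based version is short enough to sketch from memory; a rough Python version, with the graph format chosen by me:)

    import heapq

    def dijkstra(graph, source):
        """graph: dict node -> list of (neighbor, weight); returns shortest distances."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                         # stale heap entry; already improved
            for neighbor, weight in graph.get(node, []):
                nd = d + weight
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor))
        return dist

    print(dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)]}, "a"))
    # -> {'a': 0, 'b': 1, 'c': 3}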


Correct. I am talking about people who are not very good at logic and scientific reasoning. People who know how things "are" (because someone taught them so in the university) and don't care why. People who blindly follow "design patterns" and use their "hammer", regardless of the screw they are trying to deal with.

People who say "I only learned Pascal at school. Let's use that for the problem at hand" (No offence to Pascal and people using it).

People who know the Java standard libraries by heart (and have an excellent diploma because of that), but can't produce a simple working program, because they lack understanding of the nature of computing.

The above are real examples of people I have worked with.


> (2) An opposing but also convincing idea is that it is better to focus your efforts in one area to avoid spreading yourself too thin. Such focus allows you to gain "comparative advantage" and to easily do things that are difficult for people who don't have your deep experience. In other words, it is better to spend your efforts trying to become the world's greatest Python hacker than to jump on the bandwagon of every new programming language that comes out and end up as a "Jack of all trades, master of none."

> My conclusion: I do think it is worthwhile to challenge yourself not to just stick to what you already know (which in my personal experience is VERY easy to do especially if you find that you're very productive using what you know). But you also have to be selective. Life is too short to try to be a master of everything. And there is great value in gaining a very deep expertise in a particular topic or technique.

I've come to the conclusion that, if possible, it's best to be a "jack of all trades, master of one". It may take longer, or you may never truly attain mastery, but being flexible and not just willing, but eager to push yourself to learn new and different things is always a good thing. If nothing else, find something you enjoy enough to get enough skill in to pay the bills and then play with other things in your spare time.

This is particularly interesting to me because, while I find I could become obsessed with something enough to attain deep mastery, I find there are so many awesome things to learn and play with that I don't want to miss out on them. This could explain why I didn't go to grad school (and go on to become a professor, once a dream of mine) and haven't founded my own startup, but instead prefer to languish at a 7-4 day job with the government and sample everything from robotics to music to cooking to search and rescue in my spare time. To be sure, I still push myself at the job (it's funny you specifically mention C++ and Python: I'm finishing up Coplien's Advanced C++ while delving back into Python for robotics as well as smartphone/tablet programming, and Go has also piqued my interest). The one limiting factor to try to keep me focused is that I try to keep my learning to tools that are open source, so as not to suffer from lock-in and platform obsolescence. Plus there's the technical superiority of open source software in general.


You might have difficulty focusing (or a preference not to), but with a day job and so many other interests, you would get a lot of benefit from focusing your programming efforts, because the leftover time is so limited.

Which one is the most viable way out of 'languishing'?


> You might have difficulty focusing (or a preference not to), but with a day job and so many other interests, you would get a lot of benefit from focusing your programming efforts, because the leftover time is so limited.

Well, I do have a bit of a problem focusing; I've got dozens of half-finished projects scattered in my home directory. The discipline just isn't there.

> Which one is the most viable way out of 'languishing'?

I get back to my side software projects every once in a great while; once I make something worth releasing, I'll put it out there. Maybe I'll just find some OSS projects to help with; I'm good at cleaning up other people's code. I'm not willing to sacrifice one of my hobbies to go back to grad school or found a startup (both of which would also require moving), but I sometimes fancy I could quit my day job and freelance.


Terrible title* (not chosen by OP, mind you) that almost put me off - but a very good article. (*It should be more along the lines of: "what you can learn matters more than what you can do".)


Yep, had the exact same reaction.


For the workplace, I'd say what you do is infinitely more important than what you know, although there is a clear relationship between the two. At the end of the day, people can quantify and measure what you do. Trying to externally measure what someone knows is pretty much impossible to the same extent. Whether you're known as the guy that broke the build or the guy that automated the build process, you're known by what you do, not what you know, you know?




