I see a lot of people do stupid shit because they don't seem to think it will make a big difference. One mistake isn't a big deal, is it? The problem is that it becomes a habit, and then you're making plenty of these mistakes and thinking it's okay, because by then they've come to feel normal.
I think the most profound thing about the post was that it showed a striking difference between determined practice and directed practice. Just being determined and putting in the hours will _not_ be sufficient to pass a plateau of learning. Sometimes you need _directed_ learning to push you past that plateau.
Applying that to code, I think this is the difference between just programming a lot and thinking you'll get better, and actually reading texts, reading code and talking to other programmers to see how other people do things better.
For example, you can start using more anonymous functions in your code because all the cool kids are doing it, but unless you really understand how to deal with higher-order functions and what a map and a fold are, you are just going to be doing stupid shit that doesn't really help your code at all.
Simply using map and fold instead of for-loops is just more stupid shit that doesn't really help your code at all. It's not really any shorter or less bug-prone, and it makes your code harder to follow for people who don't understand map & fold.
The real benefit comes from when you understand the concepts behind map and fold. If you realize that fold is nothing but replacing each n-ary constructor of an algebraic data type with an n-ary function, then you recognize that you can do the same thing for data structures besides sequences, like binary trees, graphs, and ASTs. That's where you get some real benefit, because there's no built-in language syntax for iterating over those.
data [a] = a : [a] | []    -- pseudo-syntax for the built-in list type
[1, 2, 3, 4] = (1 : (2 : (3 : (4 : []))))
foldr :: (a -> b -> b) -> b -> [a] -> b
foldr (+) 0 [1, 2, 3, 4] = (1 + (2 + (3 + (4 + 0))))
foldr (*) 1 [1, 2, 3, 4] = (1 * (2 * (3 * (4 * 1))))
...by extension ...
data Tree a = Leaf a | Branch (Tree a) (Tree a)
foldTree :: (a -> b) -> (b -> b -> b) -> Tree a -> b
foldTree f g (Leaf x) = f x
foldTree f g (Branch x y) = g (foldTree f g x) (foldTree f g y)
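A couple of toy uses of that foldTree (my own quick sketch, assuming the Tree type above), just to make the parallel with foldr concrete:

-- replace Leaf with one function, Branch with another
sumLeaves :: Num a => Tree a -> a
sumLeaves = foldTree id (+)

depth :: Tree a -> Int
depth = foldTree (const 1) (\l r -> 1 + max l r)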
Then you recognize that the GoF book calls this the Visitor pattern, because certain stupid languages don't have higher-order functions or algebraic data types, and so you need to draw the equivalences concrete subclass => ADT constructor, Visitor => function, and object state => return value. Suddenly a lot of modularized compiler libraries (e.g. LLVM) make a lot more sense.
Then you realize that a map is nothing but a list-fold where the binary operation is constrained to be the composition of some arbitrary function of the element together with a cons operator:
map :: (a -> b) -> [a] -> [b]
map f = foldr ((:) . f) []
The cool thing about this formalism is that it makes it explicit that f depends only upon the single element of the sequence, and that the ordering of the resulting list is independent of the actions of f. In other words, map can be parallelized. And that lays the groundwork for MapReduce, which lays the groundwork for massive-scale parallel data processing.
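Here's a rough sketch of that order-independence in plain Haskell (chunking by hand rather than using any real parallel library; mapChunked is just an illustrative name of mine):

-- Because f only ever sees a single element, the list can be split into chunks,
-- each chunk mapped independently, and the results concatenated without
-- changing the answer. Each chunk could be handed to a separate worker.
mapChunked :: Int -> (a -> b) -> [a] -> [b]
mapChunked n f = concat . map (map f) . chunksOf n
  where
    chunksOf _ [] = []
    chunksOf k xs = take k xs : chunksOf k (drop k xs)
-- for any chunk size n > 0, mapChunked n f xs == map f xs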
What lays the groundwork for any massive-scale parallel data processing is a lot of good engineering: being aware of the common execution scenarios and relations to the environment (incl. HW), plus tinkering with little devilish "details".
Pretty much the opposite of the (true but trivial) concept that ordering of the result is independent of the map function.
TL;DR What you wrote at the end sounded a bit analogous to saying startups succeed because of their idea, when they actually succeed for a number of reasons -- mostly the execution and how much other people like the idea in the real world.
Well, any real-world engineering problem is going to have a lot of success factors, far more than can be enumerated in a post on an Internet message board.
The general point is that you aren't going to succeed unless you understand those building blocks in enough detail that you can take them apart and put them back together again in new ways. Map & reduce are concepts from functional programming, usually explained in Scheme or Haskell. MapReduce is a C++ framework. To go from one to another, you not only need to understand the nitty-gritty engineering details, but you also need to understand the fundamental computing concepts well enough to translate them into languages and use-cases that they weren't originally intended for.
(Side note: the MapReduce framework actually bears less resemblance to map & reduce than most people think. The "map" phase is a combination map & unfold, because you're not only iterating over the input, but you can output multiple times for a given input element. Think of running a word count over web pages: you have to parse the page and output multiple words per page. And the "reduce" phase is only a reduce within keys: it's really a map between keys, because the output is required to have the same keys as the input to the phase. I've often wished for a separate "re-key" phase, where the values output from the reduce phase could be reshuffled under a different key.)
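To make that side note concrete, here's roughly what word count looks like in those terms -- a toy Haskell sketch, not anything from the actual C++ framework, and the phase names are mine:

import qualified Data.Map as Map

-- "Map" phase: each page can emit many (word, 1) pairs -- a concatMap, not a plain map.
mapPhase :: [String] -> [(String, Int)]
mapPhase = concatMap (\page -> [ (w, 1) | w <- words page ])

-- Shuffle: group the emitted values by key.
shuffle :: [(String, Int)] -> Map.Map String [Int]
shuffle = Map.fromListWith (++) . map (\(k, v) -> (k, [v]))

-- "Reduce" phase: a fold within each key; across keys it behaves like a map,
-- because the output keys are the same as the input keys.
reducePhase :: Map.Map String [Int] -> Map.Map String Int
reducePhase = Map.map sum

wordCount :: [String] -> Map.Map String Int
wordCount = reducePhase . shuffle . mapPhase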
I don't like the preachy tone but I like the way you disassembled your own argument in the side note there :-)
To avoid misunderstanding, I'm a fan of functional programming too. But the intersection of the MapReduce framework and map&reduce of Scheme and Haskell is mostly in the name and in some core math concepts that are ageless (predate programming and lambda calculus).
> I think the most profound thing about the post was that it showed a striking difference between determined practice and directed practice. Just being determined and putting in the hours will _not_ be sufficient to pass a plateau of learning. Sometimes you need _directed_ learning to push you past that plateau.
This is exactly right. Ericsson and the related expertise psychology literature calls your directed learning 'deliberate practice'. Have you read the _Cambridge Handbook_ or his 1993 paper http://www.gwern.net/docs/1993-ericsson-deliberatepractice.p... ?
> Applying that to code, I think this is the difference between just programming a lot and thinking you'll get better, and actually reading texts, reading code and talking to other programmers to see how other people do things better.
I don't think this follows from the article. "Not doing stupid shit" in the article's terms is getting better at the basics. What you're describing is directed study, but I can't see how it is directed at "the basics of programming", which would have to be something like not making off-by-one errors, not using objects before they're initialized, or whatever else is "really simple".
The problem of overcoming bad habits is a really tough one.
In a performance-based skill like Chess or music, you can drill simple stuff to make them perfect. It's hard to see a simple equivalent in programming.
I'm also not sure if there is an equivalent to perfection in programming. All the things that slow me down do look "stupid" on some level, but they're stupid on different levels, from design to variable names to how functions are created to understanding and avoiding syntax errors.
If there is an awareness drill to educate oneself against bad programming habits, I'd love to find it.
Regarding performance tests: maybe just try to write code that works the first time, see how big a chunk of code you can write at once and still have it work the first time. And when it doesn't work, don't just fix it, analyze what happened and learn from it specifically to avoid making similar mistakes in the future. You'll acquire (I'm guessing) a list of things to avoid that eventually become good habits, enabling you to get stuff done faster and concentrate on more important things. I guess it's mainly a matter of avoiding the avoidable mistakes.
You've described, very roughly, the basis of the SEI's "Personal Software Process". Keep records of your errors, study what you do wrong, change your personal process to drive them out, repeat.
Have a look at the Joel Test. It's outdated, and might be a little controversial, but it's a list of ways to fix "stupid shit".
I don't have a copy of "Code Complete", but from what I've heard it's a similar deal. XP arguably eliminates a lot of stupid shit, but arguably introduces a lot of stupid shit at the same time (as do many dogmatic processes).
While "best practice" lists are always going to be controversial (in both programming and music, but not chess), they do have a lot of value if you don't overuse them, and take them with a grain of salt.
I think you're also missing the idea of "fundamentals" as the essential thing the article means by "stupid shit" - fundamentals meaning habits and not just instructions.
Not all "bad stuff" is "stupid shit" in this approach. The Joel Test really doesn't relate.
Just about all chess players know to avoid hanging pieces - the article described going from there to making that understanding habitual. Essentially, the article, if it were consistent, would be looking for some equivalent of musical scales that a software engineer could practice before actually programming. It is not a matter of knowing best practices but a way of systematically developing the habits to put them into practice.
It is the difference between someone telling you not to make a mistake and going over things again and again to actually get in the habit of not making mistakes... A person can learn the theory of how to play music in a week, and if producing keystrokes at the proper time didn't matter, people wouldn't spend more than a week learning the skill. As it is, a lot more practice is required.
But really, I'm pretty sure there isn't a software equivalent of musical scale because software isn't as specific a skill as reading or performing music. Software involves a large variety of logical skills which most adults already have to some degree. It is so complicated on some levels that mistakes are sort-of inevitable on other levels.
It's very interesting to get ahold of a "best practice" book or an organization's SOP manual just to see what battle-tested advice is given for various situations... you may disagree but they embody wisdom. At the very least you can find out where they're coming from.
The Joel Test isn't really intended for individuals, though. It's more of a "how to fix stupid shit" on an institutional level. I think what people are looking for here are ways to fix stupid shit on an individual level.
I think there is a simple equivalent in programming. A lot of programmers I know can rattle off the benefits of 3 or more frameworks. They can program in half a dozen programming languages. But they couldn't tell you the basics.
They couldn't say what the difference between functional and object-oriented programming is. They couldn't tell you what a good name for a variable is (maybe they have somewhat good variable names, but they don't really know why they are good or bad). They couldn't properly define an architecture for a new project.
There are basics in programming that should be known before all the programming languages, frameworks and whatnot. And I think from time to time everyone should try to learn more about the basics instead of a new language. From time to time, think really hard about why this variable should be called that, why we need a new package for that, why a method is built the way it is.
Hmm, it seems so on the surface. I'm on my third trip through, and it's going to take me less than a month and a half. It really gets substantially easier once you "grok" it. The first time is AWFUL though. This isn't to say of course that I don't get very much out of subsequent readings. SICP is very deep. It's just that after a first reading (and doing all the exercises) you're free to focus on concepts more than "tactics."
|Sometimes you need _directed_ learning to push you past that plateau.
That and practice. My chess improved noticeably with timed play. Putting a time limit on a game forces you to do 'something', if only to stop losing. In my case, this forced me to eventually develop a strategy and focus my game on actually attacking particular areas of the board. This really improved my game.
On a side note, I just watched Crockford on Javascript and he said lambda functions are the greatest innovation in computer science. Could you point me toward a resource that will explain why?
Does he really call it an "innovation"? It would be more precise to say that the lambda function is an essential concept in computer science. Lambda functions -- which is just another name for anonymous functions -- are a basic element of the lambda calculus, which is one of the foundations of computer science (predating electronic computers).
In more practical terms, you need lambdas to have closures, currying and higher-order functions in general. Without that you don't have Lisp, ML, Haskell, etc.
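A tiny sketch of all three in one place (Haskell here, but the same shapes exist in any language with lambdas):

-- A lambda that captures a variable from its environment: a closure.
addN :: Int -> (Int -> Int)
addN n = \x -> x + n              -- the returned function closes over n

-- Currying: a "two-argument" function is really a function returning a function.
add :: Int -> Int -> Int
add = \x -> \y -> x + y           -- add 3 is itself a function (it's addN 3)

-- A higher-order function: one that takes a function as an argument.
twice :: (a -> a) -> a -> a
twice f = f . f                   -- twice (addN 3) 10 == 16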
A lot of what that particular paper talks about is applicable to other languages that have lambda expressions in them.
Some of the examples in it do rely on lazy evaluation, though. Laziness is usually encodable in most languages with lambdas, but most aren't lazy by default. Keep that in mind.
In regard to what you said about determined and directed practice, it's essentially a case of it being more about the quality, not quantity, of time spent on something. I realised this a number of years ago and have been utilising it ever since. Most people tend to keep going down one particular route when they're trying to achieve something and, when they hit a roadblock, they keep pushing forward. This is working harder. The alternative is to go in a completely different direction where, rather counter-intuitively, we have a chance of picking up information/experience which will get rid of the previously encountered roadblock. This is working smarter. If something seems too difficult, it probably is, and the reason for that is that one is missing key pieces. Doing the same thing will not find these pieces but looking in non-obvious places will (since if it was obvious, it wouldn't be difficult in the first place).
I think that the mechanics behind this consist of what is basically, to borrow a term from comp sci, an impedance mismatch between our internal maps of the world and the shared maps we call physics or math or piano technique. We tend to organise things in a particular way so that we can communicate about them with each other, but this shared set of symbols is never the same as an individual's internal representation. I think that the reason all internal maps are different is because the subject (the human) is an integral part of them. In other words, one's understanding of gravity will have shared symbolism with completely subjective experiences that have nothing to do with gravity. The process of learning is essentially a mapping of one set of symbols to another. More accurately, there seem to be three levels: the experience itself, the internal symbols relating to the experience and the external shared symbols we use to communicate with each other. Based on this, I think that the categorisations in the shared map rarely make sense internally and that, as a layman, following the lines drawn by the categorisations doesn't make sense, hence the roadblocks that people encounter.
To put it another way, the categorisations we have in our shared knowledge are primarily for communication purposes and are, objectively speaking, irrational. Separating math from physics from music does not make sense in real terms since they are different perspectives on the same system. Or, to put it yet another way, the map is not the territory and those who understand that are much more adept at navigating with maps.
People do stupid shit because most of the time they don't know "what" the stupid shit is in whatever they are doing. It is hard, and it only looks easy in hindsight. This is the part where a mentor or good teacher can make a HUGE difference.
Same thing applies in job interviews and dating. The people opposite the table from you want you to succeed because no one likes giving interviews or going on first dates. They just want to fill the vacancy.
All you have to do is not disqualify yourself by being stupid or obnoxious.
This is such an important point but people just don't seem to grasp it.
The idea that you have to be the most dazzling job candidate or the most amazing romantic partner to succeed in these areas probably holds people back a lot. In my experience, if you don't do anything overtly stupid, the other party will subconsciously fill in the blanks so that you match "what they were looking for".
That's something you can always suss out (the wise dater/interviewer will always say they do, but subtle cues can clue you into the veracity of that claim).
Regardless, it's still like a sport - if you don't make unforced errors, you will be more likely to succeed.
Consequently practice does help. You're better at dating if you go on more dates. You're better at interviewing if you get more interviews. Practice allows you to "not make silly mistakes" unconsciously, which frees up your more creative side to show the best you.
I want to accept this advice, but it does go against the basics of risk taking. Also, not to pick on you, the article seems to fall for this too.
I'd say 90% of the stupid things I do are because I wanted to do something new or risky, and as such I have a neophyte's mind. Sure, when I start to hit an intermediate level I can then self-analyze and make that jump from average to 'sometimes good', but I can't do it all the time and I sure as hell can't do it in the beginning.
So let's look at interviews. I've been on a few. I've played the ultra-conservative role of being super-careful and treating them like I'm on trial. That approach doesn't seem to work. I've recently played the role of someone who is very sociable and even tells a joke or two and other "stupid" behaviors.
Hey, you know what? It turns out most humans are far, far from being rational, logical creatures. They aren't thinking "Let's fill this position"; they're thinking "Let's hire someone I can work with who doesn't seem like an overserious ass or a nut and has at least the basic competency to understand the job and learn more." Geeks of the world need to understand the great importance of social skills, risk taking, socialization, understanding social culture, etc. The idea that we can just solve everything by being super careful and super critical of ourselves is not a smart strategy.
This seems very defensive. Wouldn't this make you very careful, self-conscious and, well, boring? I don't know if this will work in a job interview, but it has to be the _worst_ thing you can do on a date ;)
Sounds like you enjoy first dates, but if you're looking for a stable relationship, it's much easier when you've adopted a sustainable manner early on.
> The people opposite the table from you want you to succeed.
> All you have to do is not disqualify yourself by being stupid or obnoxious.
I dunno, man. I graduated last month. 9 interviews in the last 2 months. Finally landed a job, so I guess I can talk about how the other 8 went. Honestly it's not as simple as you describe. There is definitely a supply-demand dynamic at work. Too much supply. Literally too many sailors and too few ships.

Every university of repute (Courant, UCB, MIT, Princeton, Stanford, UChicago, CMU, Columbia, Cornell) and the 2nd-tiers (UMich, Rutgers, UToronto, Baruch, Boston) together graduate some 1500+ solid quants each year. That's 1500 people with a Financial Mathematics Masters, in addition to C++ programming, whether via BSCS, MSCS, or PhD (several PhDs in my program), with typically another 1-2 graduate degrees thrown into the cocktail! I used to think with 3 Masters I was in a good spot, but I realized I was actually in the middle/bottom tranche. There are many people (some 25% of the class) seeking a Masters in Financial Math who already have either a Physics PhD or a Statistics PhD or of course a pure Math PhD (aka God). So they get this science/math PhD, realize academia sucks, then head for a financial math degree, then get a C++ certificate (there's a mini industry minting money off of "C++ certification for quant") and then show up at the interview. A first-timer (i.e. one with just Financial Math & C++) has zero chance. These are not the only ones you are competing with. You also have insiders (people with MBAs and CFAs and 5 years at an IB/PE) who may not do very well on the C++ or the math, but their experience at a trading desk gives them a huge edge. IBs surely cannot absorb 1500 bodies year after year.

So the interviews are super-technical and nobody wants you to succeed. I wouldn't say I was stupid/obnoxious at 8 of the 9. 2 of them said I was "clearly overqualified and would quit in 3 months out of boredom"! Some others were "not a right fit" type (too much programming, not enough math, or vice versa), but there were a few hedge funds where I felt "Boy, this is my dream job" but they were technically off-the-charts. The questions were goddamn hard. And the math was hard. Write C++ code for pricing a barrier option on an instrument with some specified parameters where the volatility is stochastic (i.e. use Heston's volatility model to price a barrier call). Now that's not the sort of stuff you look forward to without a stiff one. A classmate was asked to walk through a SABR and a GARCH and to explain in detail which one would be a better fit for a certain class of fx bonds and why.

While the programs give you a good overview of everything, that is also their flaw - they give you detailed knowledge of nothing. So side projects are effectively mandatory if you want to succeed. I had written a few thousand lines of Java to price condors & written an optimizer over the holidays, so that was what got me my job. But even that wasn't sufficient. Walk me through your code. Did your model actually make you money? How much money? Did you make money because of your model or because of the view of the instrument? I mean, that stupid side project suddenly took on a life of its own and became like a real production system which should jump through all sorts of hoops and make money from day 1! There was much more relief than happiness when it was all over and I got the job.
Bottom line, right now there are simply too many good candidates and too few positions. You have to be really, really smart and lucky. Stupidity and obnoxiousness are a very trivial filter - most will get past that pretty soon. What happens after that is what counts.
This article echoes one of Nassim Taleb's central themes: that what doesn't happen (or what we don't do, or actively avoid) is often as valuable as, or more valuable than, what happens/what we do/etc. But we tend to discount the value of things that we can't see or that didn't happen, thinking everything of value must be 'actionable', and that's a mistake.
It's a pretty common idea among traders as well. One of the fundamental teaching examples is why it's better to avoid a loss than make a gain:
Say you start with $100 and take a 50% loss down to $50. Now, what percentage gain will it take to get you back to $100? Many people new to the game will unthinkingly answer a symmetrical 50% gain, but of course that's wrong. To get from $50 back to $100, you need a 100% gain.
For every loss, getting back to even requires asymmetrically more % gain. I imagine poker players are very aware of this harsh fact as well.
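The general relationship behind that example, restating the arithmetic as a one-liner (Haskell just because it's handy):

-- lose a fraction L of your capital and you need a gain of L / (1 - L) to get back to even
recoveryGain :: Double -> Double
recoveryGain loss = loss / (1 - loss)
-- recoveryGain 0.5 == 1.0    (a 50% loss needs a 100% gain)
-- recoveryGain 0.1 ~= 0.111  (a 10% loss needs only about an 11% gain)
-- recoveryGain 0.9 == 9.0    (a 90% loss needs a 900% gain)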
There's even an old misquoted Japanese/Chinese proverb that hints at it - If you sit by the river long enough, you'll see the body of your enemy float by. Implying the power of doing nothing and letting your adversary shoot themselves in the foot. (http://news.ycombinator.com/item?id=1225153)
The possibility of making game-ending blunders is one of the main reasons I don't play competitive chess any more.
Of course, it's very frustrating to make a "simple" error in a game of chess that might have already taken up a couple of hours of time and which may lose the match for your team, which represents a large geographic region.
But after a lot of reflection it became apparent to me that this kind of frustration is very much an inherent property of the game of chess.
It turns out (once you play enough games and get pretty damn good at it) that it's actually quite a limited and simple game. Once you're down half a pawn or so there aren't many possibilities to generate a counterattack on another part of the board to make it possible to win without relying on your opponent making a subsequent simple mistake.
I did play quite a bit of Go at university and for a while I was confident that I could get good. If I'd stuck at it I would certainly be a low-ranking amateur dan by now but I decided to spend the majority of my spare time learning more about IT.
One reason for my early enthusiasm in the game of Go was because it's such a complex game it's not really clear when a "simple" strategic error has been made. Perhaps more importantly, because both players experience much more difficulty choosing moves than in chess, there's usually a reason to play on when you've made a mistake even when playing a highly competent player.
This has been a long reply. I'm trying to illustrate that the secret to seeming good at everything is to participate in activities which are intrinsically difficult and where your intelligence will have the chance to shine through rather than to participate in activities where one foolish blunder can allow a person of much lesser ability to beat you.
"It turns out (once you play enough games and get pretty damn good at it) that it's actually quite a limited and simple game. Once you're down half a pawn or so there aren't many possibilities to generate a counterattack on another part of the board to make it possible to win without relying on your opponent making a subsequent simple mistake."
I do not find this to be the case. Even with games involving the top few players in the world (rated 2700+ FIDE) you will often find that players come back from a computer evaluation significantly below -0.50.
Also, you can be at a significant disadvantage and still be able to draw the game. It is really rare that one mistake can decide the game. -0.50 is well within the drawing range.
"One reason for my early enthusiasm in the game of Go was because it's such a complex game it's not really clear when a "simple" strategic error has been made. Perhaps more importantly, because both players experience much more difficulty choosing moves than in chess, there's usually a reason to play on when you've made a mistake even when playing a highly competent player."
On the other hand, chess endgames can often stay up in the air for a really long time, with two or even three results possible. Go endgames (almost by definition) consist of both players grubbing out a point or two here and a point or two there, with all group life and death already decided, so it is rare that the game is decided late.
(If it helps ground any of my assertions, my USCF chess rating is 1870 and my AGA Go rating is 4 kyu. I don't know whether I would certainly be a low-ranking amateur dan by now if I had stuck with it.)
By the way, I note that Josh Infiesto does not appear in the USCF rating list, so either he did all his playing before 1991 when they started keeping computerized records, or his games in which he wiped the floor with 1500-level players were informal non-tournament games.
dfan, you are correct. I've never played in any formal tournaments. However, I do have lots of friends that do, and many are rated ~ 1500. I know that beating people in informal games isn't really a great measurement of skill, but it's enough to illustrate my point. Additionally, I was born in 1991. I certainly wasn't playing chess before then.
Edit: For the record, I haven't engaged in any serious study of chess endgames. I'm not really qualified to comment on how endgames might evolve.
Well, your main point is certainly right that a large part of playing chess well is simply avoiding blunders; it is surprising how many games even at the 2000+ level are determined because one player blundered, rather than because the other player made any spectacular moves. Of course, those "blunders" get a bit more subtle as you improve. But it is true that the fastest way to improve (below 2000 or so) is to learn how to stop making stupid mistakes rather than learn new tricky moves.
The precision of decimal computer evaluations for chess is mildly interesting. It would be much more interesting if the authors depicted and broke down at least some of the major components inherent in their evaluations. Example: points for material, points for king safety, points for pawn structure, points for piece activity, etc. This list could go on and on, but at least some of the major ones could be interesting. Has there been a program that has attempted to do that?
I'd argue that if you're able (with perfect play) to force a draw from a position P with fewer pieces on the board, then a proper evaluation of the game at position P would be that it's even.
Oh, absolutely; the proper (omnipotent) evaluation of any position should just be win, draw or loss. But as fallible humans (and computers!) we use terms like your "down half a pawn or so" to denote situations like "White can draw but he's going to have to play pretty well to do it." My point was that the "drawing margin" in chess (how bad a position a player can find himself in and still be able to draw with best play) is surprisingly large, even for grandmasters. Here's a striking example from last year, involving two of the top five players in the world: http://www.chessgames.com/perl/chessgame?gid=1602565
This is true. Chess punishes you mercilessly for blundering. I've noticed that on the whole, chess games seem to evolve pretty predictably. Obviously this makes it difficult to recover a lost strategic advantage as so much of chess play is really just systematically advancing a position while avoiding blunders.
On another note, I've never really grokked Go. The rules seem so simple I'm never really sure if I'm playing right. Additionally, it's really difficult for me to evaluate a Go board and see who's winning. On the other hand, I've never invested much effort in understanding Go.
> The possibility of making game-ending blunders is one of the main reasons I don't play competitive chess any more.
This is the point of TFA... if you want to get really good, stop blundering. Until then you're making excuses. You've chosen other pursuits "rather than to participate in activities where one foolish blunder can allow a person of much lesser ability to beat you."
You've selectively defined ability to not include things you don't think are important-- but they do matter objectively.
My reply is structured so as to initially acknowledge the main point of the article and then offer an alternative, i.e. choosing different situations/environments where minor mistakes do not have such a large effect. This is a valid strategy for "how to seem good at everything".
I don't think Go is substantially different from chess in that respect. Sometimes you make a blunder and fail to save a big group in midgame. You lose. At this point a weaker player might keep on playing, hoping that the opponent makes a bigger blunder along the way, but that's considered bad form.
I have used a similar technique for teaching competitive croquet. Instead of this one simple rule, I offered three. The first two rules pertain to strict technique and croquet theory (similar to the specifics about not leaving pieces hanging, and structuring your opening moves). The third rule is always: don't F up your shot.
In the chess example, I reconfigured 'don't do stupid shit' into something like: a) open with an intent, b) don't lose pieces without purpose, and c) do everything for a purpose. I wanted to add this information to the thread because I think the 'don't F up your shot' croquet analog includes a more explicit sense of limiting your actions based on your ability. In croquet, all the theory and mental prowess in the world is sacrificed if you can't execute the ideal shot under the circumstances. This acknowledgement of your current ability is what I think applies to much of programming (and life).
Note that I am NOT suggesting that you avoid moving your abilities forward. However, focus a lot of effort on knowing what you know, and learn how to use it well. The more you use it, the more you learn and expand on that skill set. The more carefully you use what is within the realms of that skill set, the fewer mistakes (and the more successes) you will have.
Not doing stupid shit is important, but what about not being afraid to make mistakes? Making mistakes is healthy and arguably one of the best ways to learn; if your motto is 'don't do stupid shit', then I fear that you'll miss out on a lot of opportunities.
Also, your definition of 'stupid' is going to change throughout the years. You shouldn't be afraid of trying something new now because you might realize in a couple years that what you were trying to do was stupid; doing it is what helped you become a smarter and more experienced person.
I know this wasn't actually the point of the article, but it seems like the 'don't do stupid shit' motto could easily be taken too literally into this interpretation.
A better phrase would be: "Don't do stupid shit twice".
Still better, in my opinion, is: "work on your weakness".
My hobby is Olympic weightlifting. In Oly lifting brute strength can take you only so far, to succeed and progress you need to constantly work on technique.
And the only way to improve technique is to identify something you're doing wrong at the moment and to break that habit. It takes time, effort and buckets of hard, horrible work; but at the end your performance has been consistently improved for good.
If on the other hand you stick only to what you're good at, you will inevitably peak. I am very good at back squats. If I'm not consciously trying to work my weaknesses, I will do buckets of back squats and get strong at them.
But what I ought to do instead is work on getting under the bar faster, or work on my second pull, or work on foot placement during the jerk, and so on. If I put the bulk of my effort into these, I will progress much faster and further overall than any amount of squatting will grant me.
I'm fortunate to have a brilliant Muay Thai teacher and accomplished fighter who preaches this kind of philosophy every class. No matter what you do, you should do it mindfully and in the case of fighting, be aware of trade-offs and price you pay for technical inadequacy. Far too many people have the misconception that fighting or training is about hitting hard, when it's really more about hitting correctly. Parallel to the chess example: if you're getting hit and you don't know why or how, you're not training correctly. You're not studying and identifying your mistakes and then improving on them.
Having an excellent teacher to see and explain those things puts you at such an advantage over someone stuck with a less experienced or educated teacher. I like to think that these lessons can carry over into the startup world. That is (in sum), technical execution and winning are the only things that matter.
Oh absolutely. Having a great teacher really accelerates the process. I've been lucky enough to have studied with many great teachers over the years, and this is a common thread I've noticed. I've also noticed that really high-level skills tend to run in "family lines." For instance, my piano teacher can trace her own lineage through Artur Schnabel (one of the greatest pianists of all time), Theodor Leschetizky, and all the way back to Bach.
It's unfortunate (or perhaps a blessing in disguise) that there's no real "Teacher/Student" analogue for CS. On the other hand, the Programming/CS community is far more open about sharing information than some of the other communities I've been in. The piano community for instance is in general somewhat secretive. Ideas about piano playing seem to be passed down from generation to generation. Near the top, the techniques for obtaining "excellent playing" seem to be well agreed upon. As we get closer to the bottom, lots of misconceptions and disagreements seem to surface.
In the programming community, while there are obvious camps, since information is available so much more freely it's vastly easier to make informed decisions about what's stupid and what's not.
There may not be a strict "Teacher/Student" analogue for CS, but it is such a young discipline that I would imagine it would be pretty easy to trace a "lineage" for everyone back to certain branching events or influencers.
What you are really talking about is making a decision to raise your standards. The human mind does not process negatives very well, so telling someone to stop doing stupid shit is a hit-or-miss message. Really, the title of this post should have been "How to be good at everything: raise your standards." But that isn't as catchy or sticky as "Stop doing stupid shit."
Raising your standards means setting a new bar for what you are going to expect from yourself in a given task or skill. It comes with the presupposition that you can meet the standard, thus the added confidence. Many people do "stupid shit" but don't know why. And frankly, the 'why' doesn't matter. You have to believe that you can improve, and setting standards is a concrete way of telling your mind you won't accept certain behavior anymore.
Of course this sounds easy but it's not.
Making the decision is the hardest thing anyone can do. It comes with consequences, both good and bad: consequences like potentially eliminating the fulfillment of needs you may have, or coping mechanisms you have grown into. When you make a true decision you cut off possibilities.
Raising your standards necessitates making a "real" decision and cutting off options. Sometimes doing stupid shit is the only way some people know how to wrap their head around the f*cked up things they have come to understand.
Scumbag article:
Highly provocative title; zero practical information.
I know negative comments are frowned upon but I just wish these knock-off quasi-zen bullshit articles didn't bubble up to the HN twitter bot so often. I'm irritated because I took the time to read this assuming that the author was going to deliver something useful and that wasn't the case, it was just a bunch of hot air. "Succeed by not failing." Very good.
I'd like to give the author full marks for marketing. Provocative title, slick design, little substance... he ought to get a job at Wired!
In the foreword to a book called "In Search of Stupidity", Joel Spolsky makes the case that the success of Microsoft owes much to the fact that each of their competitors did a lot of stupid shit while M$, mostly, didn't.
The article is a bit dated today, but I think he's still got a point:
> For every other software company that once had market leadership and saw it go down the drain, you can point to one or two giant blunders that steered the boat into an iceberg. Micropro fiddled around rewriting the printer architecture instead of upgrading their flagship product, WordStar. Lotus wasted a year and a half shoehorning 123 to run on 640KB machines; by the time they were done Excel was shipping and 640KB machines were a dim memory.
I liked this post but am not quite sure where to start in applying this to software development. I created a follow-up "Ask HN: What are the stupid things Rails developers do?" here:
Or get a job at any software shop that does a lot of work with abandoned projects (e.g., "Help, my previous developer left me in the lurch, can you finish up this PHP e-commerce store customer management system database driven enterprise solution?")
I'm not much of a rails developer. I refrained from describing what constitutes "stupid" software development since everyone's heard best practices lectures...
Poker players understand this implicitly. They refer to marginal but cumulatively significant errors as "leaks" and good players invest enormous effort in identifying and plugging their leaks. While at the highest level poker requires some very sophisticated skills, the journey from novice to competent professional is mainly one of diligently plugging leaks.
Poker is unusual in being so strongly a game of incomplete information. A top professional may only have a few percent advantage over a complete novice, so skill is rarely evident in the short term. Identifying leaks is painstaking in poker because even in hindsight you are rarely sure of the right way to play a hand. Even over a long session at the tables, a player can do everything right but lose, or play terribly but walk away with bulging pockets.
I think that poker theory has a great deal of relevance to entrepreneurs. The mental fortitude required to invest money in an uncertain outcome based on partial knowledge is an overwhelmingly important skill in both.
But isn't this also a statement that the skill of a poker player and the skill of a programmer are utterly different?
Two skill types:
A. Doing something fairly simple perfectly, plugging even the smallest mistakes.
B. Doing something that is very, very difficult as well as you can - ie, doing it at all. Doing your best to simplify situations that have a potential to become even more complex while still managing great complexity and accepting that you will make mistakes (and guarding against and finding those mistakes).
Notice, the difference between a very good programmer and a very bad programmer is absolutely not a few percentage points. Poker and music are each more like skill A, whereas programming is more like skill B (though these classifications are of course quite rough).
It seems like if we aren't distinguishing the skill type here, we will wind up just tossing out vague cliches. "Don't make mistakes, dude."
This is BS. If you don't allow yourself to do stupid shit you're not being creative or explorative enough in the pursuit. You may be a technical master but you will never be a game-changing luminary.
In my opinion, in this context, the appropriate time to "do stupid shit" is after you've become a technical master and have the ability to "do stupid shit" with a high level of artistry.
This great post should be really encouraging: you can achieve a terrific competitive advantage right off the bat just by launching a business that manages to avoid doing stupid shit.
A music analogy. I know a woman who is a wonderful harpist. She's a freaking musical genius who comes up with awesome and highly imaginative stuff, and has the guts and skill to execute it. However, she often does this thing where she accelerates near the end of the number. I think she's just being sheepishly modest, but it makes the end feel like a cartoony little apology. Really, instead of speeding up, she might just as well segue into "Shave and a Haircut... Two Bits!"
There's an analogy here for both authors and software developers.
I agree with the overall point here, but isn't this just the same as saying "practice" and/or "keep practicing, especially with those that are better than you"?
That must be a favorite band director phrase. My director in high school always said it as "Practice doesn't make perfect - perfect practice makes perfect."
I see this more as an alternative way of saying "master the basics".
I can't count the number of times that I've seen developers (or classmates) try really complex things without understanding what's going on underneath, and then not be able to understand why it doesn't work.
Completely agree. One of the main things I learned in my 20 or so years of classical piano playing was to focus on basics.
In my haste I think I missed the thrust of the OP and also neglected to mention it in my response.
Yes and no, the point is to practice smart. Not everything has equal pedagogical value and learning difficult techniques is worthless if you can't avoid idiocy.
To me, it seems closer to "be mindful" or "be deliberate". You can keep practicing with those better than you, but if you don't know what you're doing wrong, or how to fix it, you won't get better.
You have to know what you're doing wrong, so you can stop doing it.
I'm currently obsessed with golf (and for those who write it off as a ridiculous waste of time, I encourage you to give it a go; I used to be of the same mindset as you and I was pleasantly surprised to be very wrong. It's a great thinking man's/woman's game, gives you lots of time with quality friends, and puts you in some of the most beautiful environments you can find).
The post speaks to me as 9 out of 10 amateurs are out there doing stupid shit almost every swing - trying to hit the crap out of the ball at the expense of basic fundamentals (primarily balance), attempting miraculous shots when in trouble only to put themselves in more trouble etc. I was one of those dudes not too long ago (and still am sometimes), but recently tried to 'stop doing stupid shit'. It is unbelievable how quickly your scores can drop (that's a good thing for those who don't play) by accepting the predicament you have put yourself in and playing the next shot with intent, focus, and generally being smart about it.
I never thought of consciously applying this thinking to startups/programming but it's completely doable and probably effective.
If you don't try to hit the ball too hard, just get it back in-play -- you'll beat everyone who's worse than you, everyone who's as good as you, and half the guys that are better than you.
I think that the word 'learn' may mean two entirely different things, and that's why the OP may be confusing for some people. We can think of learning as improving your skills in one area, or we can think of it as exploring possibilities. For example, you could learn some programming language (as your first) like C++ and then stop doing stupid shit and become an expert at it, or you could do stupid shit like ingesting news about every new shiny technology that comes out and settle for something else later, like RoR. Successful learning is for me a bit about balancing between 'breadth search' and 'depth search'.
However, this is not exactly what the OP is about - I think the author's point is to cut off things that don't work and stick with those that do; it's about optimizing your learning methodology.
Also, you can do really stupid shit, like browsing reddit all day long (which was my first thought when I looked at the title), but that's a different thing, and it's quite obvious that it won't lead you to success.
First, don't be afraid to fail and make stupid mistakes. Then, after you've made those mistakes, examine them and eliminate them. I think the author is talking about the latter part, and that often this process of removing mistakes is all it takes to be good at something. This idea could be reassuring to you if you tend to think that you need a certain genius that you do not have in order to excel. All you need is the ability to recognize your mistakes and the will to get past them.
I'm fortunate to have http://www.math.uic.edu/~kauffman/ (Louis Kauffman) as my math professor. Instead of taking regular classes I just kept on taking independent study with him to get rid of all the stupid things I did in mathematics. I asked him for a research paper within my abilities and I am going to write one in the fall. I am reading up on the research beforehand.
I wrote a whole book about how to stop doing stupid shit. Check it out: http://whyprojectsfailbook.com
It's about project management but the principles could be applied to startups, business, software, etc.
Avoiding the most common pitfalls and not making stupid mistakes will get you really far.
The point he seems to make is that one failing is worse than many successes, and, sadly, he's probably right. It reminds me of college football. The team that schedules a third of its games against patsies and goes undefeated will go to the championship game. The team that plays 12 equals, but has one loss, will usually not be in the top 5.
Why is this sad? You should be glad, because it makes life easy. Once you reach a critical threshold of skill there are diminishing marginal returns for investing time in deliberate practice. If you want to pursue some esoteric, arbitrary skill in pursuit of personal excellence because you find it rewarding, then good for you.
This also seems relevant to a general takeaway from "Zen and the Art of Motorcycle Maintenance": embrace quality. Always strive for the best you can do in the situation you are given - whether it is chess, piano, coding, or mowing the lawn.
I read this as: don't follow the convention if there is a better way to do things, don't follow the party line if the party line is wrong, and always push for the right thing, because it is better to aim high than to concede to a lesser solution.
I think about how much this applies to middle management sometimes. Are you going to do the stupid shit that others ask you to do even when it is wrong, or are you going to demand the right approach and build a reputation for at least trying to always do the right thing?
E.g. saying yes to a ridiculous deadline that you know your development team can't possibly hit is stupid shit. Say no.
I think the hard part about not doing stupid shit is knowing what the stupid shit is. This advice is great for people with intermediate-to-advanced knowledge of the field in which they want to "stop doing stupid shit".
Though it is mentioned in the article, I think it doesn't get as much attention as it deserves: the most important part of this little piece of advice for someone learning something new is to learn what you should not be doing as well as what you should. (People usually focus on the latter and not the former.) ... A great tip for educators too.
"We tend to seek easy, single-factor explanations of success. For most important things, though, success actually requires avoiding many separate causes of failure." -- Jared Diamond
Do you think finding mentor(s) is the quickest way to identify the stupid things? I recently started going to Chicago Python users group meetups, and just listening to people talk around me and noting down stuff to google later has inspired me more than anything (& also the mailing list)... Finding stupid things "alone" is more like doing your own proofreading... it's pretty difficult to find your own non-obvious bugs.
I think really what he wants to say is to drill the fundamentals. Getting the basics right is important.
"Don't do stupid shit" means nothing. Why would anyone want to do stupid shit? This is kind of the same meaningless wording that everyone thinks applies to them in horoscopes. How many times have you read "doesn't suffer fools gladly"?
Corollary: "Ninety percent of life is just showing up." -- Woody Allen
It's stupid to be absent, either physically or mentally, from your own life. Whether this be not being "awake" or late during a meeting/class/work/date or not preparing places to show up to.
"Looking back on some piano competitions, it seems like the vast majority of the time, winners were chosen simply because they didn't do anything that was stupid enough to be easily criticised."
What would that mean in terms of programming or say investing? I want to learn both...with 0 knowledge of either but I'm not sure how to go about it, if I were to use this method of learning how would I do it?
With respect to investing it means "Stop losing money." One big mistake is to risk too much of your capital on any one position. To avoid that, you might check out the Kelly criterion for optimal bet sizing: https://secure.wikimedia.org/wikipedia/en/wiki/Kelly_criteri... .
Another mistake is to put on a position and then watch it drop without limit, insisting that you're "right" all along and the market just hasn't seen it yet. Better to have a stop-loss and bug out of the position the minute the market judges you wrong. You can always put on the trade again later.
I really didn't even have to mention that second point, since it's already implied by the Kelly criterion. If you're trading without a stop loss, then quite simply you're risking the entire amount of that position, and you can calculate Kelly accordingly.
"Stop losing money" is rule number one. Remember, if you lose 50% of your capital, you have to double your money just to break even again. Better to lose only a percent here and a percent there, and occasionally hit a 10% gain.
You can also consider trailing stop losses, which also fit under the Kelly criterion if you count unrealized gains as part of your overall capital.
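For what it's worth, the simple-bet form of the Kelly fraction is a one-liner (a sketch only; real position sizing has many more inputs than this):

-- Kelly fraction for a simple bet: win probability p, net odds b
-- (you win b units per unit staked). A negative result means don't bet at all.
kellyFraction :: Double -> Double -> Double
kellyFraction p b = (b * p - q) / b
  where q = 1 - p
-- kellyFraction 0.6 1.0 == 0.2   (a 60% win rate at even odds: risk 20% of capital)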
Sure, this sounds like good advice. But I'm relatively new at developing software, and I can't really identify "stupid shit" just yet. So what are some dumb things developers constantly do?
> So what are some dumb things developers constantly do?
People will give tons of various anecdotes on this, but they apply to them, not you. The stories they tell are just the scorched remains of a unique learning experience that can rarely be communicated verbally.
Keep doing things that seem interesting and hard. Don't be afraid to do stupid things. Everyone has to do a stupid thing at least once to learn from it. Identifying and eliminating your own stupidity afterwards is what will make you good.
Whether you identify the stupidity yourself, or have it pointed out by some helpful master, is up to you. Both work.
Not commenting code. Not updating a variable's name when its purpose changes. Not refactoring long blocks of code into functions. Skimping on documentation. Making changes on master that should have been made on a branch. The list goes on..
The definition of stupid shit is relative, though; what's stupid in the codebase today may not have been so stupid 3 years ago. What might look stupid on first visit might actually be the only reasonable way to solve the problem.
But then there's other stupid shit, hacks for quickly solving a certain edge case. For example abusing Java type erasure and collections.
Other stupid shit from experience is avoiding the risk of "new" technologies (still on CVS).
The way I like to think of this is: stop failing in obvious ways. Failing in novel ways is fine, as you learn things; all you learn from failing in an obvious way is something you could have learned from some other poor sap's example.
How to stop doing stupid shit: minimize your unknown unknowns. How to minimize your unknown unknowns: learn the correct way to do things. Without guidance and mentorship, which is largely non-existent in software, you're stuck flailing around until you attain enlightenment (either by discovery or invention) on each thing you do. When you start the next task, you start the process all over again. Over time, you gain wisdom, at the cost of neurotic self-doubt.
There's a reason StackOverflow is so popular - it makes it easier to find the 'well trodden paths' ("correct way") along with someone pointing out caveats.
By taking the leap forward to being fully transparent with your skill execution, by being courageous and taking the risk to let yourself be vulnerable, you can see massive improvements.
I recommend video taping yourself during skill execution and then asking an expert to review it for you. You will be amazed at how many mistakes you are making and also how easy and straightforward they are to correct once you are aware of them.
This is helpful for everything from sports to dating to programming.
And because I'm always scheming, many savvy internet marketers are allowing people to upload videos of themselves practicing niche skill X and then hiring a coach to review them. Some cool HTML5/Flex software on a CRUD database could be a big hit here.