Hacker News
Why Startups Fail (infographic based on Startup Genome data) (visual.ly)
76 points by danberger on Sept 1, 2011 | 14 comments



"Inconsistent startups write 3.4 times more lines of code in the Discovery stage and 2.25 times more lines of code in the Efficiency stage."

This is interesting. Either inconsistent startups (startups that are more likely to fail) overdo development, or they tackle problems that are more complex and thus require more code. The implication of the latter point is that startups working on difficult problems are more likely to make mistakes (like premature scaling) and fail.


I always find a very poor correlation between complexity, quality, or efficiency and lines of code written. Therefore, I find the conclusion that they overdo development or tackle more complex problems highly questionable. It might just be that they are not the most thoughtful programmers, which would square much better with inconsistency. Instead of developing software deliberately, you set your mind on scaling fast, write a bunch of code, and fall into the inconsistent startup category.


I'd imagine it's due in large part to a lack of focus on the problem they're actually aiming to solve. "Even though we're solving Problem X, why don't we add Features Y and Z?"

Lesson to learn? Focus on your solution to the problem people are having, and listen to your customers. Don't worry about the features you could add three months from now; your startup may not even last that long. Worry predominantly about getting initial traction, sustaining it, and developing a positive relationship with your customers.


This stat really stood out to me.

I think it's simply because a lot of start-ups don't realize that what they are making isn't that unique.

They can leverage many components that already exist in the market.


I'm sceptical about the usefulness of the dataset. One of the first things we learn as startup founders (or interns for Dr House) is that if you ask a person to analyse or predict their own behaviour, chances are the answer is way off. Even upon careful reflection and introspection, too many biases are at play.

I first learned this doing customer development for our current startup. We surveyed potential customers until almost being arrested at a private conference. We thought "this time is different" because we planned to validate the concept until we went numb. But we relied too much on others' self-assessments.

I'm not suggesting self-assessment is pointless (clearly it underpins our personal development), but rather in fleeting engagements with people who lack vested interest (e.g. surveys), it can do more harm than good.

Additionally, I found the Startup Genome survey so long-winded that my answers ended up being rubbish. It would have taken all day to get past my own biases and really think through that many questions. I understand there's more to the project than the survey, but that's the part I'm particularly sceptical about.


*But we relied too much on others' self-assessments.* Can you explain? Do you mean customer development brings everyone's biases into the picture and not the truth?

What do you think is a better way to do customer development?


Not all customer development, but the flavour we practiced, which mostly involved just asking people how/what/why they did what they did.

Imagine a scale that represents the strength of positive signals you get from data collected during customer development. At one extreme, you have weak signals from potential customers who just say they'll use your product. At the other extreme, you have strong signals from potential customers who part with cold hard cash and sign multi-year contracts (despite not having a product yet).

When we first tested our concept, we focussed on the following:

  * Just doing customer development (it was a big step and an exciting new world)
  * Building a big sample set ("if we're going to do it, let's do it right")
But, in retrospect,

  * We were oblivious to (the lack of) positive signal strength
  * We too quickly dismissed negative signals as "people outside our market"
So the intention was there, and man we worked hard at it, but the data was useless and our analysis was heavily biased. Sounds harsh to say it was useless, but it really was. We learned very little from talking to hundreds of people. Of course we learned how to better do customer development next time, which was/is invaluable.

I think when you ask people about their behaviour and whether they'd use your product, they're likely to err on the side of politeness. That's a serious problem for concept validation. But ask them for cash, and politeness takes a back seat.

Also, just asking if someone will use your product introduces bias already. It's a leading question.

You could instead ask the following:

  * "What are your top 5 interests?"
  * If they mention your industry or niche, continue with...
  * "What are your top 5 problems relating to X-interest?"
  * If they mention a problem your concept solves, then record a weak positive signal
  * Describe your concept/solution/product... 
  * If they're willing to provide an email address, increase signal strength
  * If they're willing to sign up for a trial, increase...
  * If they're willing to pay a deposit, increase... etc.
Long story short, I think signal strength and honest analysis are the key to validating a concept. It takes some real hustle to get financial commitments for non-existent products, but hey, that's what differentiates founders.
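
The ladder above could be modeled as a simple weighted scoring scheme. Here's a minimal sketch in Python; all the signal names, weights, and the threshold are illustrative assumptions, not anything from the Startup Genome data:

```python
# Hypothetical "signal strength" ladder, ordered weakest to strongest.
# Weights are made up for illustration; tune them to your own ladder.
SIGNAL_WEIGHTS = {
    "says_they_would_use_it": 1,      # weakest: polite affirmation
    "mentions_problem_unprompted": 2,
    "gives_email_address": 3,
    "signs_up_for_trial": 5,
    "pays_deposit": 8,
    "signs_multi_year_contract": 13,  # strongest: cold hard cash
}

def score_interview(signals):
    """Sum the weights of the signals observed in one customer interview."""
    return sum(SIGNAL_WEIGHTS[s] for s in signals)

def validated(interviews, threshold=20):
    """Crude concept-validation check: total signal strength across all
    interviews must clear an (arbitrary) threshold."""
    return sum(score_interview(i) for i in interviews) >= threshold
```

The point of the weighting isn't the exact numbers; it's that one deposit should outweigh a hundred polite "sure, I'd use that" responses when you tally the results.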


Thank you for the awesome and detailed response. Those are pitfalls I can definitely try to avoid for my venture.

A lot of what I am learning from entrepreneurship consists of things I've read over and over again, but you don't own the advice until you experience it.


Great content; fair-to-poor presentation. The designers should re-read Tufte.


Love the Startup Genome Project report... great idea well executed by some ambitious folks.

Re: the infographic... I guess I'm kinda old-school, but I'm in the camp of data lovers who see the utility of infographics as enriching viewers by easily bringing to light otherwise difficult-to-intuit metrics and comparisons.

This is a really pretty picture, but don't treat it as a TL;DR version of the actual thing, which I find much more informative: http://startupgenome.cc/


Ouch... that infographic hurts! Please don't call it an infographic. Thank you.


I'm left unsure how they define "premature scale." If they simply mean "started paying more for customers than they're worth, and doing so on a massive scale," then, well, duh.

It would be more interesting to know what successful startups do at Stage 3. I doubt it's just wait longer.


We just wrote up a more detailed post here http://news.ycombinator.com/item?id=2952799

We elaborate on what to do in the efficiency stage both in the report and in the benchmarking tool https://beta.startupgenome.cc/


Wow, impressively thorough. Kudos.



