I'm sceptical about the usefulness of the dataset. One of the first things we learn as startup founders (or interns for Dr House) is that if you ask a person to analyse or predict their own behaviour, chances are the answer is way off. Even upon careful reflection and introspection, too many biases are at play.
I first learned this doing customer development for our current startup. We surveyed potential customers until we were almost arrested at a private conference. We thought "this time is different" because we planned to validate the concept until we went numb. But we relied too much on others' self-assessments.
I'm not suggesting self-assessment is pointless (clearly it underpins our personal development), but rather in fleeting engagements with people who lack vested interest (e.g. surveys), it can do more harm than good.
Additionally, I found the Startup Genome survey so long-winded that my answers ended up being rubbish. It would have taken all day to get past my own biases and really think through that many questions. I understand there's more to the project than the survey, but that's the part I'm particularly sceptical about.
*But we relied too much on others' self-assessments.*
Can you explain? Do you mean customer development brings in everyone's biases into the picture and not the truth?
What do you think is a better way to do customer development?
Not all customer development, but the flavour we practiced, which mostly involved just asking people how/what/why they did what they did.
Imagine a scale that represents the strength of positive signals you get from data collected during customer development. At one extreme, you have weak signals from potential customers who just say they'll use your product. At the other extreme, you have strong signals from potential customers who part with cold hard cash and sign multi-year contracts (despite not having a product yet).
When we first tested our concept, we focussed on the following:
* Just doing customer development (it was a big step and an exciting new world)
* Building a big sample set ("if we're going to do it, let's do it right")
But, in retrospect,
* We were oblivious to (the lack of) positive signal strength
* We too quickly dismissed negative signals as "people outside our market"
So the intention was there, and man we worked hard at it, but the data was useless and our analysis was heavily biased. Sounds harsh to say it was useless, but it really was. We learned very little from talking to hundreds of people. Of course we learned how to better do customer development next time, which was/is invaluable.
I think when you ask people about their behaviour and whether they'd use your product, they're likely to err on the side of politeness. That's a serious problem for concept validation. But ask them for cash, and politeness takes a back seat.
Also, just asking if someone will use your product introduces bias already. It's a leading question.
You could instead ask the following:
* "What are your top 5 interests?"
* If they mention your industry or niche, continue with...
* "What are your top 5 problems relating to X-interest?"
* If they mention a problem your concept solves, then record a weak positive signal
* Describe your concept/solution/product...
* If they're willing to provide an email address, increase signal strength
* If they're willing to sign up for a trial, increase...
* If they're willing to pay a deposit, increase... and so on (a rough scoring sketch follows this list)
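To make the signal-strength ladder concrete, here's a minimal Python sketch of how you might score an interview along it. Everything here (the `Signal` levels, the `score_interview` name, and its parameters) is my own illustration, not something from the thread; the point is just that each escalating commitment bumps an ordered score instead of producing a yes/no answer.

```python
from enum import IntEnum

class Signal(IntEnum):
    """Ordered signal strengths, weakest to strongest (labels are illustrative)."""
    NONE = 0           # never mentioned the problem space
    PROBLEM_MATCH = 1  # named a problem the concept solves (weak positive)
    EMAIL = 2          # handed over an email address
    TRIAL_SIGNUP = 3   # signed up for a trial
    DEPOSIT = 4        # paid a deposit / signed a contract (strongest)

def score_interview(interests, problems, gave_email=False, signed_up=False,
                    paid_deposit=False, niche="X-interest", solved_problems=()):
    """Walk the question ladder and return the strongest signal reached.

    `interests` and `problems` are the interviewee's top-5 answers;
    `solved_problems` is the set of problems the concept actually solves.
    """
    if niche not in interests:
        return Signal.NONE              # never got past the interests question
    if not any(p in solved_problems for p in problems):
        return Signal.NONE              # interested, but no matching problem
    signal = Signal.PROBLEM_MATCH       # record a weak positive signal
    if gave_email:
        signal = Signal.EMAIL
    if signed_up:
        signal = Signal.TRIAL_SIGNUP
    if paid_deposit:
        signal = Signal.DEPOSIT         # cold hard cash beats polite interest
    return signal

# Example: mentions the niche, names a problem we solve, hands over an email.
result = score_interview(
    interests=["X-interest", "cycling"],
    problems=["too much manual reporting"],
    gave_email=True,
    solved_problems={"too much manual reporting"},
)
print(result.name)  # EMAIL
```

The design choice worth keeping, whatever labels you use, is that the scale is ordered: "said they'd use it" never outranks "paid for it".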
Long story short, I think signal strength and honest analysis are the key to validating a concept. It takes some real hustle to get financial commitments for non-existent products, but hey, that's what differentiates founders.