
Don't use this if you are going to spend money based on the outcome (or do anything else important with it).

People do not have a vested interest in answering the questions honestly or in the outcome. Their motivation is to get to the premium content, and although some will be honest, most are just trying to get "free" content and are motivated to give back as little value (money, or answers that reveal their private thoughts) as possible.

My day job is primarily based on analysing survey results, and I can tell you that the only way to get decent, usable results when "paying" people is to make sure they have a vested interest in the results of the survey, so that the payment is a (nice) side-effect of completing the survey and not the goal.

Edit: Of course, this type of survey can be useful if you need to "validate" something that you otherwise can't (i.e. to falsify results). Simply choose multiple-choice questions and stack the answers in the right order (people tend to click in certain patterns when answering at "random"). Keep repeating or extending your survey until you have the right ratio of participants in favour of your proposition, then call it a day and publish!




Can you give me an example of how people having a vested interest in the results of a survey would improve the results?

I always thought that if you have a question (like "should the government fund more particle physics") and you ask people with a vested interest (particle physicists) you'll get an answer aligned with their interest ("absolutely, extremely important") in which case why do a survey at all when you know what answers they're going to give?


You do indeed get an answer aligned with their interest, which is often useful (not perhaps in your example), as that is generally the point of the survey (to find out what your respondents think).

You need to pick your respondent cohort carefully when asking a question of opinion (even more so than when asking a question of fact) in order to eliminate bias in your cohort. So in your example, rather than surveying particle physicists, you would look for another group of interested parties (e.g. a range of taxpayers or non-specialist funding bodies) who have a vested interest (in this case in how their money is spent).

My work, for instance, revolves around surveying doctors on their experiences of medical training and career needs/aspirations. The vested interest of our participants is hopefully clear (influence on their training/career/work environment). We ask a wide range of factual and opinion-based questions, eliminate biases (or segment responses on biases) due to various factors (e.g. location, age, gender, etc.) and report our findings in the context of our cohort, i.e. "these are the experiences and opinions of doctors".

In short, you are right: you need to consider the makeup of your cohort and also how you report your findings in respect of that. Having a vested interest doesn't necessarily mean being biased, and sometimes that bias is what the survey aims to report on. In a survey like the Google ones, that bias is money/free content, and that's all you will discover.


I think he means they will actually pay attention to the survey, and that you should ask questions that aren't as black and white as the one you posed.

Asking particle physicists whether they would rather have X or Y, when both are relevant to their interests, would yield better results than asking a butcher particle-physics questions and dangling a carrot in front of him for clicking through.


GCS regularly conducts validation surveys which measure the incidence rates of common health issues like asthma, smoking and disability, along with consumption of various media, to make sure its sample is representative of the population. It's pretty accurate. You can read more about it here: http://www.google.com/insights/consumersurveys/static/consum...


I've only skim-read that as I'm at work, but it looks like it only compares their results with traditional internet surveys, is littered with caveats about the accuracy depending on the subject matter (type) of the survey, and doesn't fully show the methodology (full question wording and presentation, context, full details of respondents, etc.).

It appears to only show the relative accuracy of certain types of questions in relation to other particular methods of internet surveys. It's not something that I would use to back significant spending decisions.


> Simply choose multiple choice questions and stack the answers in the right order (people usually click in certain pattern when doing it at "random").

Ideally, you could randomize the order of the answers for each person, and later divide them up by order presented to detect any bias. This is something that online surveys make much easier than paper surveys.
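A minimal sketch of what that could look like in Python, assuming a single multiple-choice question; the option labels, the simulated "careless" respondents, and the 70% click-the-first-option rate are made up for illustration, not taken from any real survey tool:

    import random
    from collections import Counter, defaultdict

    # Hypothetical example: shuffle answer options per respondent and log
    # which position the chosen answer was shown at, so position bias
    # (e.g. always clicking the first option) can be spotted later.
    OPTIONS = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

    rng = random.Random(42)
    log = []

    # Simulate 1000 careless respondents who mostly click whatever is shown first.
    for _ in range(1000):
        presented = OPTIONS[:]
        rng.shuffle(presented)  # fresh random order for each respondent
        idx = 0 if rng.random() < 0.7 else rng.randrange(len(presented))
        log.append({"option": presented[idx], "position": idx})

    # Group responses by the position they were presented at. If answers were
    # honest, option counts should look similar across positions; a heavy skew
    # toward position 0 signals click-through behaviour rather than real opinions.
    by_position = defaultdict(Counter)
    for entry in log:
        by_position[entry["position"]][entry["option"]] += 1

    print("Choices by presented position:",
          dict(sorted(Counter(e["position"] for e in log).items())))
    print("Options chosen when shown first:", by_position[0])

The randomization also helps the aggregate numbers themselves: averaging answers over all presented orders cancels out position effects even when you don't bother to inspect them.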



