
There is much more to running experiments properly than it seems. I'm not an expert on the statistics side, but a few things I've learned over the years come to mind...

1) Run the experiment in whole business cycles (for us, one cycle = one week), based on a sample size you've calculated upfront (I use http://www.evanmiller.org/ab-testing/sample-size.html). Accept that some changes are just not testable in any sensible amount of time (imagine how long it would take to detect the effect of a font change on e-commerce conversion rate).
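
For reference, the calculation that calculator does can be sketched in Python with statsmodels. The baseline rate, minimum detectable effect, and weekly traffic below are made-up illustrative numbers, not recommendations:

    import math
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    baseline = 0.05   # current conversion rate (hypothetical)
    mde_rel = 0.10    # smallest relative lift worth detecting (hypothetical)
    target = baseline * (1 + mde_rel)

    # Cohen's h effect size for two proportions, then solve for n per variant
    effect = proportion_effectsize(baseline, target)
    n_per_variant = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.8, ratio=1.0
    )

    # Round the duration up to whole business cycles (weeks, in our case)
    weekly_visitors_per_variant = 20_000   # hypothetical traffic
    weeks = math.ceil(n_per_variant / weekly_visitors_per_variant)
    print(f"{n_per_variant:,.0f} visitors per variant -> run {weeks} whole week(s)")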

2) Use more than one set of metrics in the analysis to discover unexpected effects. We use the Optimizely results screen as a general steer, but do the final analysis in either Google Analytics or our own databases. Sometimes a test positively affects the primary metric while negatively affecting another.
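
As an illustration of why we look at several metrics, here is a rough sketch of checking two metrics from the same test with a two-proportion z-test. The counts are invented for the example; in practice they would come from GA or your own database:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical raw counts per variant: [control, treatment]
    visitors = [20000, 20000]
    metrics = {
        "purchase (primary)": [1000, 1100],
        "newsletter signup":  [1600, 1450],
    }

    for name, conversions in metrics.items():
        stat, p = proportions_ztest(conversions, visitors)
        lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]
        print(f"{name}: lift={lift:+.2%}, p={p:.3f}")

    # Caveat: checking several metrics inflates false positives, so treat
    # secondary metrics as a sanity check rather than formal hypotheses.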

3) Get qualitative feedback either before or during the test. We use a combination of user testing (remote or moderated) and session recording (we use Hotjar, and send tags so we can pull up the sessions of users who were in a given experiment).



