How We Increased Conversion Actions 218% And Increased Sales 0% (georgesaines.com)
60 points by gsaines on Sept 29, 2010 | 18 comments



Point is make sure you're optimizing around a QUALIFIED action.

E.g. - in the Skritter conversion funnel - you're not disclosing that it's a paid service until checkout.

That's doing two things:

a) Creating a jarring experience for users who are genuinely interested and didn't see any $$$ requirement before investing time in watching a tutorial.

b) Sending lots of non-qualified people through the funnel.

You’ve got some great content here: http://www.skritter.com/pricing - maybe incorporate a little of that?

Alternatively use a testimonial to disclose $$$ like: "I learned more during the past 5 months than the past 3 years – best $9.95 I ever spent!"


Good idea aresant. As I mentioned, we're actually testing a lot of different variables right now, and one idea is to include testimonials to that effect. It's not in the testing queue yet, however, due to limited traffic and an abundance of tests!


Thank you for this. I especially appreciated that you brought the concept of "publication bias" (though not by name) into the discussion of A/B testing. In medicine there is a tendency to publish positive results and not negative ones, which biases the literature toward positive findings. Most likely, this is an equally large or larger problem for bloggers.


Yep. I have an A/B results page which automatically collects closed tests (http://abingo.org/results). 75% or so of tests get a null result (A may or may not be better than B). Lots of the remainder tell me to what degree of statistical significance my great new idea for improvement sucked. But you can still improve over time.


Hey Patio, thanks for sharing that link, I didn't even know that page existed!


Oh, that is awesome! Do you adjust for multiple testing? You could do so and then test an overarching hypothesis (say, that there is at least one test on that page that differs from what would be expected by chance when looking at N tests).
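For anyone curious, here's a minimal sketch (in Python) of what that adjustment could look like using a Bonferroni correction - the p-values are made up and this is just one common way to do it, not how A/Bingo actually works:

```python
# Hypothetical p-values from N closed A/B tests collected on a results page.
p_values = [0.04, 0.20, 0.51, 0.007, 0.33]
alpha = 0.05  # desired family-wise error rate across all N tests

# Bonferroni: each individual test must clear alpha / N before we reject the
# overarching null that every test on the page is just noise.
threshold = alpha / len(p_values)
survivors = [p for p in p_values if p < threshold]

print(f"per-test threshold: {threshold:.4f}")
print(f"tests surviving correction: {survivors}")
# A non-empty list means at least one test differs from chance, even after
# accounting for the fact that we looked at N of them.
```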


Hey Carbocation, yeah, the file-drawer effect can be particularly strong with A/B and multivariate testing results. Partly I wanted to make sure people knew it was hard to get right, and also to spread the word about our folly in the hopes it might save others time! :)


The greatest issue, as you acknowledge in your post, was that you weren't testing the point closest to the sale. The first thing to test is the final registration page. I often spend a fair amount of money to drive traffic to single-page landing pages that I can optimize against. If you can't make enough money through Google Ads or Facebook ads (i.e. your cost per sale > revenue per sale), then you either have a bad business or a shitty landing page.

Rather than just optimizing existing traffic, buy traffic and find out how far off you are. The best part of these ad systems is that they are relatively efficient, which means you should be able to figure that out quickly.

While I'm sure "learn Japanese" has already been taken, there are probably plenty of long-tail keywords as well as people on Facebook who haven't been targeted. Your key is going to be to find out who those people are and drive them to your site.

I tend to be pretty straightforward with my own investments in projects: if there's a huge gap between cost of acquisition and revenue per customer (or the present value of the customer's LTV), you probably need to build something better or change your product altogether. Your own internal cost per user will end up being just as high as Google ads (if not higher), except Google provides an instant vehicle for testing. In other words, it will help you fail quicker ... something that's extremely valuable in the world of startups.
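To make that arithmetic concrete, here's a rough sketch with placeholder numbers (the price, churn, and ad spend are invented, not anyone's real figures):

```python
# Present value of customer LTV vs. cost of acquisition, with invented numbers.
monthly_price = 9.95      # revenue per customer per month (placeholder)
monthly_churn = 0.08      # fraction of customers lost each month (placeholder)
monthly_discount = 0.01   # discount rate used for present value (placeholder)

# Treat LTV as a geometric series of monthly payments that survive churn
# and are discounted over time.
retention = (1 - monthly_churn) / (1 + monthly_discount)
ltv = monthly_price / (1 - retention)

ad_spend = 500.00         # what you paid Google/Facebook for the test traffic
customers = 12            # customers acquired from that spend
cac = ad_spend / customers

print(f"PV of LTV ~= ${ltv:.2f}, CAC ~= ${cac:.2f}")
# If CAC sits well above LTV, the fix is a better product or landing page,
# not more optimization of existing traffic.
```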


I think that many people who post about A/B tests fail to mention that your expectations have to align with the kinds of results A/B testing can actually give you.

A/B testing isn't meant to be used across the entire conversion funnel. That is, you can't measure an optimization on your homepage by sales. Sales reflect your entire conversion funnel. Each page of every site should have a single goal, and you should A/B test toward that goal. In your case I would say that your A/B test was successful; however, you didn't go far enough. Once you were getting increased traffic to your signup page, it's time to A/B test the signup page and improve that conversion.

The statistics behind A/B testing are often lost in the hype. The tests used to calculate statistical significance require random sampling and isolated variables. The moment you try to attribute the success or failure of your A/B test to anything beyond the primary measurable goal, you're introducing outside variables that will skew your results.
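For concreteness, here's a minimal sketch of one common such test - a two-proportion z-test on conversion rates - with invented visitor and conversion counts:

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                   # two-sided tail probability
    return z, p_value

# Invented numbers: 1,000 visitors per variant, measured against that page's goal.
z, p = two_proportion_z_test(conv_a=50, n_a=1000, conv_b=68, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # only meaningful if assignment was truly random
```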

All of that being said, I would argue that true A/B testing is not cost effective for most startups. It takes a lot of time and energy to test every part of your conversion funnel.


> "In your case I would say that your A/B test was successful; however, you didn't go far enough. Once you were getting increased traffic to your signup page, it's time to A/B test the signup page and improve that conversion."

This was my thought at first, but thinking about it now, their AB test actually caused a decrease in conversion on the signup page. And that's the point they're making: they drove traffic to the signup page more effectively, but that traffic was no more interested in actually signing up. You can do a lot of things to massage the stats if you're only optimizing for a single step. You can make your homepage promise people a golden pony if they'll only click on the link to your pricing page, but they're not going to get there and suddenly decide they don't care about the pony and want to give you money instead.


IMO that screams for more AB testing as there's obviously a disconnect between the homepage and signup pages that should be tested.


> "That is, you can't measure an optimization on your homepage by sales."

Sure you can. If you are holding all other things constant and randomizing people to A or B on the homepage, there's nothing to stop you from using "sales" as your outcome variable in an AB test. It's probably what you should be measuring, in fact.


> "A/B testing isn't meant to be used across the entire conversion funnel."

Wrong.

Not using A/B testing through the entire conversion funnel is a shortcut to get results faster. You sometimes get the wrong result, but you get there faster. However, if you have the traffic, you ALWAYS want to measure through the entire conversion funnel.


That's my point: in most cases the amount of traffic isn't there to support such an A/B test.

It is also very difficult to isolate all other variables in the conversion funnel. For the case of an online software startup with an inside sales team like we have at Central Desktop, it's nearly impossible to say that every signup gets the same sales attention and is pitched in the same way.

Of course every case is different, but I'd argue that most startups don't have the traffic or the ability to isolate ALL potential variables in the conversion funnel.


If you don't have traffic, you don't have traffic.

You don't have to isolate other variables. Just make them randomized, and they become just another unknown factor that gets handled by the statistics.
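Here's a toy simulation of that idea - the conversion rates and the "sales rep" factor are invented, but they show how an uncontrolled variable washes out of the comparison once assignment is random:

```python
import random

random.seed(0)
BASE = {"A": 0.05, "B": 0.06}                    # assumed true conversion rates
SALES_BOOST = {"good_rep": 0.02, "ok_rep": 0.0}  # uncontrolled follow-up quality

def simulate(n=200_000):
    visits = {"A": 0, "B": 0}
    conversions = {"A": 0, "B": 0}
    for _ in range(n):
        variant = random.choice(["A", "B"])         # randomized assignment
        rep = random.choice(list(SALES_BOOST))      # not isolated, just random
        visits[variant] += 1
        conversions[variant] += random.random() < BASE[variant] + SALES_BOOST[rep]
    return {v: conversions[v] / visits[v] for v in visits}

# Both arms absorb the same average boost from the rep factor, so B still
# comes out roughly one point ahead, as the underlying rates dictate.
print(simulate())
```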


In this situation I would recommend A/B testing to signup, because that is the first hard decision that people have to make.

If you're A/B testing to the actual sale, I have two points to make. The first is that you should be aware that it is OK to run multiple A/B tests at the same time. The second is that you should pay close attention to whether there is a possibility that your change will compress the sales cycle. If it does, conversions for that variant simply show up earlier in the test window, so an apparently positive result can be meaningless.
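On the first point, here's a minimal sketch of one common way to run several tests at once - hash the user id together with an experiment name, so each experiment's assignment is deterministic per user but effectively independent across experiments (the experiment names below are just illustrative):

```python
import hashlib

def variant(user_id: str, experiment: str, arms=("A", "B")) -> str:
    """Deterministic per-user bucket for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

user = "visitor-42"  # hypothetical visitor id
for exp in ("homepage_headline", "signup_copy", "pricing_testimonial"):
    print(exp, "->", variant(user, exp))
```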


Yeah, we're actually running 3 tests currently, although you probably didn't notice them because they are fairly subtle. We were doing multivariate stuff before, but I think we're going to stick with simple A/B tests for now due to limited traffic.


This is a great article for demonstrating that testing doesn't automatically guarantee a boost in conversions.

On the homepage, you optimized for conversions to the signup page, so that's what you got! If, on the signup page, you optimize for signups, you will get that (hopefully). The only thing testing does is replace flying blind with a set of gauges. The act of coming up with test objectives and good variations is much more vital for increasing sales and conversions.



