26th May 2017
If you’ve looked at your site with a view to improving conversion levels, either on your own or with the help of an agency, you’ve completed a lot of work already. You’ve studied crucial pages, analysed the relevant data, and spotted the areas that need to be changed.
Chances are that the following thought has also crossed your mind:
“It’s clear to me where we’re going wrong, and the new ideas look great. Why should I worry about the hassle of testing the changes? An A/B test will slow us down. It’s better to get the developers to make the changes, and reap the rewards as soon as possible!”
Unfortunately, if you listen to that voice inside your head, you may end up in a worse position than when you started – and here’s why:
If you’re right, you need to prove it.
Let’s say you decide against A/B testing and make the changes to the site. One month later you check the data and it’s good news – sales are up by 50 per cent. After a brief celebration at your desk, you proudly email the rest of the company about your success.
Five minutes later, you get a reply from Bob in marketing asking if the half-price sale of the last four weeks may have also helped. Ah. You forgot about that, and a silent tear rolls down your cheek.
Your site changes aren’t the only thing that has an impact on sales and conversion rate. Seasonality of the business, promotions, and changes in product lines are just a few of the many factors to consider. Making changes and hoping for the best means you’ll never know if your changes worked.
However, if your changes are part of an A/B test, these other factors can't skew your results. No matter what promotions are running elsewhere, half the traffic sees the original version of your page, and the other half sees the new version. Both groups are equally subject to outside influences. The only difference between them is the changes you've made. Assessing the effects of your changes in isolation means you can state exactly how much of an impact they've had on the bottom line.
Take that, Bob in marketing.
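To make the 50/50 split concrete, here's a minimal sketch of how a testing tool might assign visitors to the two versions. The function name and the hash-based approach are illustrative assumptions, not any particular tool's API; the point is that assignment is random across visitors but stable for each individual, so a returning visitor always sees the same version.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "homepage-test") -> str:
    """Assign a visitor to 'A' (original page) or 'B' (changed page).

    Hashing the user id together with the experiment name gives a
    roughly 50/50 split, and the same visitor always lands in the
    same group for this experiment.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 2
    # Both buckets experience the same promotions and seasonality;
    # the only systematic difference between them is the page change.
    return "A" if bucket == 0 else "B"
```

Because outside factors hit both buckets equally, any difference in conversion rate between group A and group B can be attributed to the change itself.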
Despite everything, you might still be wrong.
No matter how thoroughly you research your changes, there’s no guarantee that they will be a success. For example, your test could disorientate existing users who don’t like change, or you might end up removing a piece of text that was crucial for conversion rates. Or your audience may, for reasons you’ll never know, have really liked that bright pink button you decided to turn green.
Don’t let these worries stop you from trying something new. There’s always a risk that what you try won’t be an improvement. But if you fear anything, you should fear inactivity. If you never make changes to your site, then nothing will improve. You’ll be left with an inevitable slow decline, as competitors that try new things overtake you.
A/B testing means you can make changes to your site with little risk of it going wrong. If your changes cause the conversion rate to take a nosedive, you just stop the test. The traffic that was seeing your unsuccessful experiment will go back to seeing the better-performing original version.
If you made the changes without an A/B test, you’ll need to go to your site developers and ask them to undo everything, which may not be a simple task. The developers may need to roll back other work they’ve been doing or unpick your changes from the code. Either way, an awkward conversation is ahead.
In summary, A/B testing means that when your changes are successful, you can prove it, and when they aren’t successful, you can easily end the experiment.
If you’ve found yourself in a difficult position regarding changes you’ve made to your site, or if you want to find out whether A/B testing can improve your website, Further would love to help. Get in touch by clicking here.