Now, let’s look at the five most common mistakes that you’ll want to avoid when performing A/B testing on your online shop.
Mistake #1: Focusing only on sales conversion rate
You should be careful not to judge the success of your A/B test simply in terms of sales figures. Here are some other goals that are also relevant when optimising the overall performance of your online shop:
Increase in order volume
Reduction in bounce rate
Increased traffic
Visits to the shopping cart
Visits to checkout
Higher revisit rate
Longer time spent on pages
Tracking these goals can help you identify further areas for optimisation, for example when two related criteria increase at very different rates.
Example: The A/B test results in a 50% increase in the number of visits to checkout, but only a 30% increase in the number of orders completed. This might indicate that your checkout process needs to be optimised further to increase the conversion rate.
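If your testing tool only reports the sales conversion rate, a side-by-side comparison of the other goals is easy to script yourself. Here’s a rough sketch in Python; the metric names and figures are purely illustrative placeholders:

    # Compare version A and version B across several goals at once.
    # All metric names and figures below are made-up example data.
    metrics_a = {"orders": 200, "checkout_visits": 900, "cart_visits": 1500, "bounces": 4200}
    metrics_b = {"orders": 260, "checkout_visits": 1350, "cart_visits": 1800, "bounces": 3900}

    for name, value_a in metrics_a.items():
        lift = (metrics_b[name] - value_a) / value_a
        print(f"{name}: {lift:+.0%}")

    # Output such as "checkout_visits: +50%" next to "orders: +30%" points to
    # the checkout step as a candidate for further optimisation.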
Mistake #2: Analysing results too early
If an A/B testing tool reports a 75% probability that version A is more effective than version B, most eCommerce retailers would assume that the A/B test was a success and the results are clear. However, it’s important to keep the test running until it includes an adequate number of users, spread out over a longer period of time. A number like 75% is only meaningful if the test includes enough visitors. Other factors also influence user behaviour, such as:
Season of the year
Day of the week
Time of day
Holidays
Events
Check whether factors like these may be influencing the results of your A/B testing. When in doubt, continue testing for a longer period. For a test result to be valid, it should point to a 90 to 95% likelihood that one version is more effective than the other. This is the only way to be sure that it will be worthwhile to apply changes to your entire website.
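Whether a result actually clears that 90 to 95% bar can be estimated with a standard two-proportion z-test. Here’s a minimal sketch in Python; the visitor and conversion counts are made-up example figures, not a recommended sample size:

    from math import sqrt, erf

    def ab_confidence(conversions_a, visitors_a, conversions_b, visitors_b):
        """Two-proportion z-test: returns the probability (0-1) that the
        observed difference between A and B is not just random noise."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        # Pooled rate under the assumption that both versions perform equally
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / se
        return erf(abs(z) / sqrt(2))  # two-sided confidence level

    # Example: 12,000 visitors per version, 420 vs. 480 conversions
    print(f"{ab_confidence(420, 12_000, 480, 12_000):.1%}")  # about 96% in this example

If the number that comes out is below roughly 90 to 95%, keep the test running.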
Mistake #3: Only optimising one page
Conversion is usually the result of a number of interactions a user has with your site. In other words, a user rarely visits a single product page and decides to make a purchase without looking any further. They usually navigate through other pages, such as your start page, category pages, product pages, upselling and cross-selling offers, the shopping cart and the checkout. All these steps affect your potential customer’s overall experience. That’s why you should not limit A/B testing to a single page, but instead present the user with a consistent experience throughout their entire visit to your site.
Mistake #4: Confusing Google with duplicate content
There are two things you need to do during your A/B testing:
Set the alternative version of your landing page to ‘noindex’
Add a canonical tag pointing to the original page’s URL in the variant’s HTML code
Otherwise, Google will view versions A and B as two pages with identical content (duplicate content). At least one of the pages may then be removed from the index, which harms your SEO ranking. The ‘noindex’ directive and the canonical link element tell Google which version is the original and prevent this.
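For reference, both of these hints sit in the head section of the test variant’s HTML. A minimal example, with a placeholder shop URL:

    <!-- In the <head> of the B version only; the href below is a placeholder URL -->
    <meta name="robots" content="noindex">
    <link rel="canonical" href="https://www.example-shop.com/landing-page/">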
Mistake #5: Viewing A/B test results as permanent
If one version outperforms another during A/B testing, that doesn’t necessarily mean it will still perform better six months from now. Remember that other factors, like seasonality, also influence user behaviour. You might find that the conversion funnel changes again down the road, and then it will be time to re-evaluate past findings.