Why Do A/B Tests (Sometimes Called Split Tests)?
A/B or split testing enables you to conduct controlled, randomized experiments to determine which variation of a website feature, process or element yields superior results. To get a clean read on exactly what is driving changes in metrics, vary only one element per test. For instance, if an online retailer wants to assess the impact of offering free shipping, the ONLY difference in the checkout process between the control and test groups should be the shipping fees.
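Under the hood, tools like Optimizely handle random assignment for you, but the core idea is simple and can be sketched in a few lines. The following is a minimal illustration (the experiment name, visitor IDs and variant labels are made up for the example): hashing a stable visitor ID into a bucket gives each visitor an effectively random, but repeatable, assignment, so returning visitors always see the same variation.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str,
                   variants=("control", "free_shipping")) -> str:
    """Deterministically bucket a visitor into one test variant.

    Hashing the visitor ID together with the experiment name spreads
    visitors evenly across variants while keeping each visitor's
    assignment stable across visits.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# A given visitor always lands in the same bucket for a given experiment:
print(assign_variant("visitor-42", "shipping-test"))
```

Because the assignment is deterministic, you can log it server-side or client-side and always reconstruct which experience a visitor saw.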
Running a Test
Make sure you know exactly what you are trying to learn before you set up your test, then execute using one of the excellent tools available online. Optimizely and Visual Website Optimizer are the most frequently recommended. Neither requires you to change the code on your site, and both are easy to use and offer 30-day free trials. Check out their blogs, tutorials and case studies to learn more about A/B and multivariate testing and best practices.
The optimization test tool will notify you when statistically significant results are achieved, but you may want to let it run a bit longer to ensure that results stay consistent.
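"Statistically significant" here usually means something like a two-proportion z-test on the conversion rates of the two groups. The sketch below shows the idea with standard-library Python only and invented numbers; real testing tools add refinements (sequential testing, corrections for peeking at results early) that a one-off calculation like this does not.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing the conversion rates of two groups.

    conv_a / n_a: conversions and visitors in the control group.
    conv_b / n_b: conversions and visitors in the test group.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical results: 4.0% vs 5.2% conversion on 5,000 visitors each.
z, p = two_proportion_z_test(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # conventionally significant if p < 0.05
```

Letting the test run past the first significant reading, as suggested above, guards against declaring a winner on an early fluctuation.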
Gather Feedback During Your Test for Valuable Insight
While the results of your A/B test are important, feedback that you receive during the test can shed important light on your hypothesis. Immediate and overwhelmingly negative feedback can tell you right away that something has gone wrong. Abort the test to save time and, if your testing budget is limited, resources.
Feedback gathered throughout the test can give you insight into how customers experience the two test options differently. Review feedback carefully to see if it bears out your hypothesis, or whether customers interpreted the change in a way you didn’t anticipate.
When your test is complete, carefully assess results. But don’t stop there! Dig deep into analytics and feedback again. Did different segments respond differently to changes? Consider testing alternate options or distinct funnels for each of the segments.
Review feedback from further along the conversion funnel to pick up on unintended consequences of the change. Did it impact visitor behavior at other points on the site? Do you understand why the change yielded the results that it did?
Look for feedback that starts, “Could you…” Or “I like what you did with… but it would be great if…” to get great ideas and learn about unknown unknowns.
Document results of every test, whether positive or not. Periodically review testing history to see if patterns or insights emerge.
Translate results into projected bottom line impact. Be sure to account for direct costs of rolling out the change on a wider scale, as well as for any secondary costs — or benefits — that might result.
Quantifying return on CRO investment is important for convincing stakeholders that a change is worth making. It is also valuable for documenting the contribution of conversion optimization efforts and for demonstrating the value of investing in additional human, technical and financial resources for your CRO program.
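A back-of-the-envelope projection like the one described above can be written down explicitly. All the figures below (conversion rates, traffic, order value, costs) are hypothetical placeholders; substitute your own numbers and cost estimates.

```python
def projected_annual_impact(baseline_rate: float, test_rate: float,
                            monthly_visitors: int, avg_order_value: float,
                            rollout_cost: float, monthly_run_cost: float = 0.0) -> float:
    """Rough first-year net impact of rolling out a winning variation.

    Multiplies the conversion-rate lift by traffic and order value,
    then subtracts one-time rollout cost and recurring costs.
    """
    extra_orders = (test_rate - baseline_rate) * monthly_visitors * 12
    extra_revenue = extra_orders * avg_order_value
    return extra_revenue - rollout_cost - monthly_run_cost * 12

# Hypothetical example: lift from 4.0% to 5.2% conversion.
net = projected_annual_impact(
    baseline_rate=0.040, test_rate=0.052, monthly_visitors=50_000,
    avg_order_value=65.0, rollout_cost=10_000, monthly_run_cost=500,
)
print(f"Projected first-year net impact: ${net:,.0f}")
```

A projection this simple ignores secondary effects (support load, returns, cannibalization of other offers), which is exactly why the surrounding paragraph urges you to account for secondary costs and benefits explicitly.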
Implement Changes, and Play it Again
Once you get the OK to implement changes that tested successfully and are likely to result in high returns on investment, stay involved! Make sure that the changes made to the site are limited to those tested. One false move can quickly undermine your efforts and the credibility of your CRO program.
Conversion optimization is a never-ending process. Even if you could fully optimize every one of the myriad customer interactions and touchpoints on your sites and in your stores, the external elements that impact your business — the economy, competition, customer needs and so on — are highly dynamic and impact the performance of your digital platforms. As such, conversion optimization must be an ongoing, cyclical process that comes to an end only when the doors close on a business for the final time.
So gather up your learnings and take it, once more, from the top.