A/B/n Testing

What is A/B/n testing?

A/B/n testing is best described as an extension of typical A/B testing. 

In regular A/B testing (or split testing), you compare the performance of two different variants of a webpage by showing one version to one group of customers and the other to a different group. The goal is to see which version has higher conversion rates.

In A/B/n testing, you compare the performance of more than two versions of a webpage. The n refers to the number of variants you're testing: you could test three variants (an A/B/C test), four variants (an A/B/C/D test), and so on.
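To make the mechanics concrete, here's a minimal Python sketch of how visitors might be split evenly across n variants. The variant names, the hashing approach, and the assign_variant helper are illustrative assumptions, not the inner workings of any particular testing tool.

```python
# A minimal sketch of splitting traffic across n variants.
# Variant names and the hashing scheme are illustrative assumptions.
import hashlib

VARIANTS = ["A", "B", "C", "D"]  # n = 4 in this hypothetical test

def assign_variant(visitor_id: str, variants=VARIANTS) -> str:
    """Deterministically map a visitor to one of n variants.

    Hashing the visitor ID keeps the split roughly even and ensures the
    same visitor sees the same version on repeat visits.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

print(assign_variant("visitor-12345"))  # e.g. "C"
```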

Why would you want to test more than two versions?

A/B/n testing helps marketers gain a much better understanding of what works and what doesn't on a landing page or website. You don't have to limit yourself to testing just two versions of a page at once; you can test several versions simultaneously.

For example, if you’re creating a landing page for an upcoming event at your company, you can test multiple value propositions to see which one converts best.

With the A/B/n test results, you can make significant changes to your landing page based on data rather than gut feelings.
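If you pull the raw numbers out of your testing tool, a quick way to sanity-check whether the variants really differ is a chi-squared test on the conversion counts. The sketch below uses made-up figures and assumes scipy is available; it isn't tied to any specific testing platform.

```python
# A rough sketch of analysing A/B/n results with a chi-squared test.
# The visitor and conversion counts are made-up illustration data.
from scipy.stats import chi2_contingency

visitors    = [5000, 5000, 5000]   # variants A, B, C
conversions = [250, 310, 275]

# Contingency table: converted vs. not-converted, per variant.
table = [conversions, [v - c for v, c in zip(visitors, conversions)]]

chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) suggests the variants don't all
# convert at the same rate; pairwise follow-ups identify the winner.
```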

A/B/n testing example

Check out how Obvi A/B tested different unique selling propositions (losing weight vs. resting better).

A/B/n testing downsides

There are a couple of things to watch out for when running an A/B/n test. First, keep the number of variants reasonable. If you test too many, each variant gets only a small share of your traffic and you end up doing a lot of work for little insight.

Another potential problem is that the more variants you’re testing, the more traffic you’ll need to reach statistical significance. This increases the amount of time you’ll need to wait to see which version actually has the best conversion rate.
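For a rough sense of how traffic requirements grow, here's a back-of-the-envelope Python sketch using the standard two-proportion sample-size formula with a Bonferroni adjustment for the extra comparisons. The baseline conversion rate, minimum detectable lift, and power are assumptions chosen purely for illustration.

```python
# Back-of-the-envelope estimate of visitors needed per variant.
# Baseline rate, lift, alpha, and power are assumed example values.
from math import ceil
from statistics import NormalDist

def visitors_per_variant(baseline=0.05, lift=0.20, alpha=0.05,
                         power=0.80, comparisons=1):
    """Approximate sample size per variant for a two-proportion test.

    Splitting alpha across the variant-vs-control comparisons
    (a Bonferroni correction) shows why extra variants demand more traffic.
    """
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - (alpha / comparisons) / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2) / (p1 - p2) ** 2
    return ceil(n)

for n_variants in (2, 3, 4):
    n = visitors_per_variant(comparisons=n_variants - 1)
    print(f"{n_variants} variants: ~{n:,} visitors per variant")
```

With these assumed inputs, each extra variant both tightens the significance threshold (raising the per-variant sample size) and adds another bucket that needs that traffic, which is why test duration grows quickly as n goes up.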