
Six methods to calculate the ROI of optimization

Online Dialogue

28-09-2015

Tom van den Berg wrote for Emerce on September 25 about six methods to calculate the ROI of optimization.

Every day, many companies are optimizing their online activities with the goal of improving returns. For four years in a row, A/B testing has been the most important method within online optimization to increase conversion (source: eConsultancy). But how do you determine the ROI of conversion optimization? I offer you six methods.


A question I hear more and more often is: what is the Return On Investment (ROI) of our conversion optimization program and A/B testing in particular? An interesting and legitimate question that you can't simply answer.

After a year of optimizing, you have run several A/B tests with some winners. If all goes well, you have implemented these winners and a positive effect is visible in your analytics program.

An important condition here is that you have analyzed the A/B tests correctly, but I won't go into that in this blog post.

The effect is often not as easily visible as you might expect. How do you determine if the effect achieved in the A/B test is also visible after implementation?

In this article, I describe six methods you can use to calculate the ROI of conversion optimization. I discuss the first three methods in detail and focus mainly on the effect after implementation. These are relatively easy for companies to implement as opposed to the last three methods, which I discuss more briefly.

1. Pre / post analysis

The easiest way to demonstrate conversion improvements from A/B testing is to take the total conversion rate in your analytics program before you started optimizing and compare it with the rate 3, 6 or 12 months later. Do you see an increase? Then the optimizations have had an effect. Do you see a decrease or no effect? Then they have not.
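The comparison in this method is a simple division. A minimal sketch in Python, with hypothetical conversion rates:

```python
def pre_post_change(rate_before, rate_after):
    """Relative change in total conversion rate between the period
    before optimizing and a period 3, 6 or 12 months later."""
    return rate_after / rate_before - 1

# Hypothetical site-wide conversion rates: 3.0% before, 3.3% after
print(f"{pre_post_change(0.030, 0.033):+.1%}")  # +10.0%
```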

At the same time, this is also the method I object to most. After all, countless other factors could have influenced the result. Is your conversion perhaps already lower by default in December? And did it drop less now than during the same period last year?

What is often forgotten is that a test took place on a certain part of your site, and that an uplift found in an A/B test only means an uplift for part of your total visits. The effect on total visits is therefore much lower and possibly barely visible at all.
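This dilution can be made concrete with a back-of-the-envelope calculation; the numbers below are hypothetical:

```python
def sitewide_uplift(test_uplift, traffic_share):
    """Approximate site-wide effect when only a share of all
    converting traffic passes through the tested page."""
    return test_uplift * traffic_share

# Hypothetical: a 10% test uplift on a page seeing 20% of converting traffic
print(f"{sitewide_uplift(0.10, 0.20):.1%}")  # 2.0% site-wide effect
```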

In addition, it is often assumed that if an A/B test realized an uplift of ten percent, the uplift in reality should also be exactly ten percent. However, the uplift will lie within a certain range: with a measured conversion increase of ten percent, the expected uplift may be anywhere between +2 and +18 percent.
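One common way to compute such a range (the article does not specify which method was used) is a normal approximation on the conversion counts. A sketch with hypothetical counts, where the delta method gives the standard error of the uplift ratio:

```python
import math

def uplift_range(conv_a, visits_a, conv_b, visits_b, z=1.645):
    """90% confidence range for the relative uplift of variant B over
    control A, using a normal (delta-method) approximation."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    ratio = p_b / p_a
    # variance of the ratio of two proportions via the delta method
    var = ratio ** 2 * (p_a * (1 - p_a) / (visits_a * p_a ** 2)
                        + p_b * (1 - p_b) / (visits_b * p_b ** 2))
    se = math.sqrt(var)
    return ratio - 1 - z * se, ratio - 1 + z * se

# Hypothetical counts: 5.0% vs 5.5% conversion, a measured +10% uplift
low, high = uplift_range(1000, 20000, 1100, 20000)
print(f"uplift between {low:+.1%} and {high:+.1%}")  # roughly +2% to +18%
```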

2. Pre / post analysis of one A/B test

Another method is a before and after analysis of a single A/B test. The winning variant goes live on a certain date; you compare the conversion rate of the period before going live with that of the period after.

An A/B test is usually performed on one particular page. A winner there does not mean that overall conversion increases by the percentage found in the A/B test; this only applies to the conversion of visitors who convert through that page. So in this case you are looking at that particular segment.

The advantage of this method is that you only look at a segment of visitors that matches the A/B test. This method is less valid if a campaign or action is not evenly distributed over the period used for comparison.

Also, this method does not take seasonality into account; conversion often fluctuates from week to week anyway. A campaign can possibly be excluded by looking at a stable traffic source, for example SEO or SEA.

The diagram below shows on a timeline which periods (before & after) are compared in this method:

  • Test uplift range: with 90% confidence, the uplift falls between X% and Y% (range).
  • Implementation uplift: we see an impact that falls within the expected range.

[Diagram: timeline of the before & after periods compared in this method]
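The check described above can be sketched in a few lines. All segment numbers below are hypothetical:

```python
def implementation_check(before_conv, before_visits, after_conv, after_visits,
                         range_low, range_high):
    """Compare the conversion of the page segment before and after
    go-live, and check whether the realized uplift falls inside the
    range found in the A/B test."""
    rate_before = before_conv / before_visits
    rate_after = after_conv / after_visits
    realized = rate_after / rate_before - 1
    return realized, range_low <= realized <= range_high

# Hypothetical segment numbers for equal periods before and after go-live,
# with a test range of +2% to +18%
realized, within = implementation_check(500, 10000, 560, 10000, 0.02, 0.18)
print(f"realized uplift {realized:+.1%}, within test range: {within}")
```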

3. Pre / post analysis year on year

To exclude seasonal influences, you can compare the situation before you started testing with the period after the winners have been implemented. To make this clear, I will explain this using an example.

As a company, you started A/B testing in February 2015, and three months later (end of May 2015) you put all the winning variants to date live. To see whether this has an impact on conversion, you compare the conversion in January with that of June. You do this for both 2015 and 2014 (when you were not yet testing) and compare these two numbers with each other.

This way you exclude seasonal influences. If campaigns or actions are not seasonally related, this method is less valid. With travel sites, for example, the dip or increase in conversion can fall in a different week due to a shift in vacation periods. This should be taken into account when making the calculation.

The diagram below shows on a timeline which periods (before & after) are compared in this method:

  • Test uplift range: with 90% confidence, the uplift falls between X% and Y% (range).
  • Real impact: the conversion before & after in 2014 (in which no testing was done) is compared to the conversion before & after in 2015.

[Diagram: timeline of the year-on-year before & after periods compared in this method]
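The year-on-year correction amounts to dividing the 2015 change by the 2014 change over the same months. A minimal sketch with hypothetical conversion rates:

```python
def yoy_adjusted_uplift(pre_2015, post_2015, pre_2014, post_2014):
    """Change in conversion rate in 2015 (with testing) relative to the
    same-period change in 2014 (without testing), to strip out
    seasonal effects."""
    change_2015 = post_2015 / pre_2015
    change_2014 = post_2014 / pre_2014
    return change_2015 / change_2014 - 1

# Hypothetical January vs June conversion rates for both years
print(f"{yoy_adjusted_uplift(0.040, 0.046, 0.040, 0.042):+.1%}")  # +9.5%
```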

The first three methods apply after implementing A/B test winners and are relatively easy for most companies to use. Below I describe three other methods, which are often less easy to implement.

4. Retest old against new website

Another option is to test the old website (from before all A/B tests) against the new website (with all A/B test winners implemented). This is a simple way to show the increase in conversion: you combine all the winners in one variant and test it against the old website.

There are some drawbacks to this test:

  • During this A/B test, you cannot run any other A/B tests and therefore all other tests are at a standstill.
  • Chances are the old website converts worse, which means you lose money by sending 50 percent of your traffic to it.

5. A small portion of your traffic to a consistent group of visitors

By always sending five or ten percent of your traffic to a consistent control version of your website, you can see exactly how much better the optimized website converts. This group of visitors never enters an A/B test. You can only do this when visitors are logged in: because visitors today often use several devices, they cannot otherwise be recognized as one unique visitor. For this you need a unique ID.
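One common way to keep such a control group consistent (the article does not prescribe a mechanism) is to bucket visitors deterministically by hashing their unique ID. A sketch, with a hypothetical ID format:

```python
import hashlib

def in_holdout(user_id: str, holdout_pct: float = 0.05) -> bool:
    """Deterministically assign a logged-in user to the consistent
    control (holdout) group based on a hash of their unique ID, so the
    assignment is stable across devices and visits."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < holdout_pct

# The same ID always yields the same answer, on any device or visit
print(in_holdout("user-12345"))
```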

There are two more major drawbacks:

  1. Some websites have only just enough traffic to A/B test; setting aside five to ten percent of that traffic for a control group is then not feasible.
  2. You lose a lot of sales by showing this group of visitors a non-optimized website. Then you have to ask yourself what is more important: proving that A/B testing really works or making more money by showing 100 percent of your visitors the optimized website.

6. Implement all winners at once

The last method is to save up all your A/B test winners and then implement them all at once. At the moment of going live, you should then see an uplift almost immediately.

A disadvantage of this is that you lose revenue until you implement all the winning variants. And you don't know if certain winners have an effect on each other. In the ideal situation, you want to continue testing on an earlier winner as soon as possible.

Conclusion

These are six possible methods for calculating ROI. Each method has advantages and disadvantages. One option is to apply several methods and so get an overall picture of the effect.

Aside from the fact that A/B testing should generate additional revenue directly, it often has great indirect value. The mere fact that an A/B test always gives you new insights and helps you better understand the visitor on your website makes A/B testing useful. So even non-winning A/B tests do have value. In addition, I have found that A/B testing often creates positive energy within teams. Everyone wants to think about the variants and is curious about the results.

I would love to hear other or new insights.
