POD | prioritization model

Our model: https://www.ondi.me/odpod 

Experimentation is a common way to run your optimization program. A/B testing lets you easily measure the effect of the changes you are making and learn about your customers’ behaviour. But what if you have so many ideas that you just don’t know where to start? Or what do you do when you notice that your A/B tests do not meet expectations? There is a way to tackle these problems: prioritizing your test ideas.

Prioritization models existed before ours, and we tried many of them. We ran into problems like “I don’t know what the potential of my test idea is” and “Of course my test idea is very important”. This led to awkward prioritization ranks, no solution to the problems mentioned above and, maybe even more importantly, no intention to use the model at all.

This made us give up on prioritization. 

But then we came across the PXL Framework by Conversion XL. They managed to come up with a model that is very intuitive to use. Instead of asking you to score the “importance” of an idea, it simply asks whether the change you are making in your experiment is noticeable within 5 seconds, or whether you used qualitative or quantitative data to support your idea.

The PXL Framework showed us that a prioritization model can be simple, and it inspired our own. Our main goal was to motivate everyone involved in optimization to start prioritizing and reach a higher level of experimentation.

So, we are introducing the POD model. POD ranks all test ideas on five subjects, covered by twelve input fields.

First, you need some documentation: what is your test idea, and on which page and device are you testing? Note that your ranking does not depend on the information in these fields. If your experimentation program is more mature, it is possible (and we recommend it!) to also record the value of these pages and devices in these fields.

The ranking starts with the question: how important is your idea? Is it urgent because you are testing something that will influence a redesign, or an implementation that can’t wait? And, of course, does it support the goal of your experimentation program?

Next, we want to determine whether we are statistically able to detect an effect of the experiment (power). This is answered by two questions: is there enough traffic on this page, and does everyone see the change? With enough traffic on the page we are more likely to find a significant effect, and if we are sure that everyone sees the change we can be more confident that our test results are actually caused by our experiment.
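This is not part of the POD scoring itself, but as a quick illustration of why page traffic matters for power: the sketch below uses the standard two-proportion sample-size formula to estimate how many visitors each variant needs. The baseline conversion rate, minimum detectable lift, alpha and power values are example assumptions.

```python
from statistics import NormalDist

def required_sample_size(baseline_rate: float,
                         relative_lift: float,
                         alpha: float = 0.05,
                         power: float = 0.80) -> int:
    """Rough per-variant sample size for detecting a lift in conversion rate.

    baseline_rate: current conversion rate, e.g. 0.05 for 5%
    relative_lift: smallest relative lift worth detecting, e.g. 0.10 for +10%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# Example: 5% baseline conversion, detect a 10% relative lift
print(required_sample_size(0.05, 0.10))  # roughly 31,000 visitors per variant
```

If the page cannot deliver that many visitors within a reasonable test duration, the experiment is underpowered, which is exactly what the traffic question is meant to flag.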

We also need to know what the investment of the test is. Therefore, we ask you to estimate how much time the experiment will take to build. Estimating time for work is common practice in many agile companies and therefore (hopefully) easy to do.

The last subject is very important: why are you doing this experiment, and are you able to validate your idea upfront? We use the hierarchy of evidence to rank the strength of your supporting findings. The first level (the lowest score in your ranking) is “I just think this is an amazing idea”, which is basically the same as “I read it somewhere” or “others are telling us we should do this”; in this case you have no validation in place at all. The second level, which gives you a slightly higher ranking, is when you have some validation from user tests, usability research, surveys, screen recordings, and so on. We support this kind of research, but as the hierarchy of evidence shows, users cannot always explain their own behaviour, so it is less reliable than your analytics data, the third level. Analytics data is the actual behaviour your customers show on your website. The final level in our ranking is previous experiments: these combine all your knowledge and have already been validated in an actual experiment, so we count them as the most reliable kind of research to base your test idea on.
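As an illustration of that ordering only (the scores below are hypothetical, not the values the POD model itself uses), the four evidence levels could be captured as a simple ranked mapping:

```python
# Hypothetical scores for the hierarchy-of-evidence field. The real POD model
# defines its own values; this only illustrates the ordering described above.
EVIDENCE_LEVELS = {
    "opinion": 1,              # "I think it's a great idea" / read it somewhere
    "user_research": 2,        # user tests, usability research, surveys, screen recordings
    "analytics": 3,            # actual behaviour measured on your website
    "previous_experiment": 4,  # already validated in an earlier experiment
}
```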

And those are the twelve input fields (questions) you need to answer to calculate a ranking, which we call your prioritization. We would love for you to use this model and let us know your findings; in the meantime, we’ll keep optimizing!
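To make the mechanics concrete, here is a heavily simplified, hypothetical sketch of how answers to POD-style questions could be combined into a ranking. The field names, point values and weighting are our own illustrative assumptions, not the actual POD scoring; the real model is available at the link at the top of this page.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    # Documentation (not scored)
    name: str
    page: str
    device: str
    # Importance
    supports_program_goal: bool
    urgent: bool
    # Power
    enough_traffic: bool
    everyone_sees_change: bool
    # Investment: estimated build time in days (lower is better)
    build_days: int
    # Evidence level: 1 = opinion, 2 = user research, 3 = analytics, 4 = previous experiment
    evidence_level: int

def score(idea: TestIdea) -> int:
    """Toy score: points for importance, power and evidence, minus build effort."""
    points = 0
    points += 2 if idea.supports_program_goal else 0
    points += 1 if idea.urgent else 0
    points += 1 if idea.enough_traffic else 0
    points += 1 if idea.everyone_sees_change else 0
    points += idea.evidence_level
    points -= min(idea.build_days, 5)  # cap the penalty for very large builds
    return points

ideas = [
    TestIdea("New CTA copy", "/product", "mobile", True, False, True, True, 1, 3),
    TestIdea("Checkout redesign", "/checkout", "all", True, True, False, True, 5, 2),
]
for idea in sorted(ideas, key=score, reverse=True):
    print(f"{score(idea):>3}  {idea.name}")
```

In this toy version, the sorted scores play the role of the prioritization: your list of test ideas, ranked.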
