Five steps to an evidence-based prioritization model

Conversion optimization has been around for a while now. Over the years, we have delved into topics such as statistics, psychology, server-side testing and automation. One topic that has received less attention is prioritization. High time to change that. 

Of course, many companies have moved from a basic framework, such as PIE or ICE, to a more complete framework, such as PXL. Most companies have been using such models for years. High time to optimize and validate our model to achieve greater impact on business goals.

In this article, I propose a partially automated and evidence-based prioritization framework with a double feedback loop to run better A/B tests and find more winners.

PIE, ICE & PXL frameworks

We have been using PIE, ICE and PXL-related frameworks for a long time, and not only within conversion optimization; growth hackers use such models too.

The advantage of these models is their simplicity. Models like PIE and ICE need only three numbers to arrive at a priority score. PXL-related frameworks need about ten numbers, but because they are much more fact-based, PXL was my favorite framework for many years. However, all of these models also have some major drawbacks.

First of all, PIE and ICE are completely subjective. We give subjective scores to each attribute. For example, take ‘potential’ within PIE or ‘confidence’ within ICE. With an A/B test win rate of, let's say, 25%, how confident can you be? And how sure are you about the potential?

Second, there is too much focus on ease. Within PIE and ICE, ease contributes 33.3% to the overall score! This comes at the expense of innovative experimentation: if something is difficult to build, it ends up at the bottom of the backlog. In PXL, ease has less impact on the overall score, but it is still the attribute that can receive the highest score of all ten attributes. Ease is obviously important to achieve high testing velocity, but complex experiments, such as new features, can have a greater impact. A combination of both is essential.

Third, there is little to no alignment with corporate goals. I assume that when you run experiments, the main goal will be the same as the company's goal. Still, it helps to align with current company OKRs (Objectives and Key Results) or OGSMs (Objective, Goals, Strategies and Measures) to conduct relevant experiments. This helps in the acceptance of experimentation throughout the organization.

And fourth, and perhaps most important for PXL-related frameworks: there is a huge lack of evidence and feedback. For example, in the PXL model, ideas related to problems found in qualitative feedback receive a higher score. However, this may not necessarily lead to better experiments. Perhaps ideas involving qualitative feedback in your situation have a low win rate. Yet you consistently give these ideas a higher score, significantly lowering your experiment win rate! Another example is ideas related to motivation. In the PXL model, you give these ideas a higher score, but perhaps experiments involving ability lead to many more winners.

5 steps to create the foundation of your new prioritization model

We need a prioritization model that helps us make better decisions so we can run better A/B tests and get better insights. At the same time, we want to maintain the simplicity of current models. The model should also be evidence-based, automated to some extent, and with a (double) feedback loop based on the success of completed experiments.

Step 1. Document the psychological direction for each experiment

Data at Online Dialogue show that when you use psychology appropriately in your experimentation program, your win rate will increase. The first step, therefore, is to document the psychological direction for each experiment. To do this, you can use your preferred psychological model. Here are two:

The simplest model is the Fogg behavior model. For each experiment, document whether you are trying to increase motivation, increase ability, or apply a prompt.

You can also use the Behavioural Online Optimization Method (BOOM) from Online Dialogue.
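
However you document this, the result is one extra field per experiment. Below is a minimal sketch in Python of what such a record could look like, using Fogg-style directions; the schema and field names are illustrative, not a prescribed format.

```python
# Minimal sketch: tag each experiment with a psychological direction,
# here the Fogg behavior model (motivation / ability / prompt).
# The record fields are illustrative, not a prescribed schema.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Direction(Enum):
    MOTIVATION = "motivation"
    ABILITY = "ability"
    PROMPT = "prompt"


@dataclass
class Experiment:
    name: str
    page: str                       # e.g. "home", "product", "checkout"
    direction: Direction            # the psychological direction being tested
    winner: Optional[bool] = None   # None until the experiment has finished
    uplift: Optional[float] = None  # conversion increase for winners, e.g. 0.051 = 5.1%


# Example: an experiment that adds a reminder banner (a prompt) to the home page.
exp = Experiment(name="Home page reminder banner", page="home", direction=Direction.PROMPT)
```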

Step 2. Calculate the win rate and impact of each direction for each page

Having documented the psychological direction, you can now calculate the win rate and impact (average conversion increase per winner for your key KPI) for each psychological direction on each page.

At Online Dialogue, we use Airtable as a documentation tool. In this tool, it is easy to make these calculations. And since we document everything in Airtable, including experiment results, automating prioritization scores is effortless (see next step). Of course, you can also use another tool.

Example from Airtable
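
If you prefer scripting over a tool, the same calculation only takes a few lines of Python. The experiment log and field names below are purely illustrative data, not real results.

```python
# Sketch: win rate and impact (average uplift per winner) for each
# page x psychological direction combination.
from collections import defaultdict

finished_experiments = [
    # (page, direction, winner, uplift for winners) -- illustrative data
    ("home", "prompt", True, 0.062),
    ("home", "prompt", False, None),
    ("home", "motivation", True, 0.031),
    ("product", "ability", True, 0.044),
    ("product", "ability", False, None),
]

stats = defaultdict(lambda: {"total": 0, "wins": 0, "uplift_sum": 0.0})
for page, direction, winner, uplift in finished_experiments:
    cell = stats[(page, direction)]
    cell["total"] += 1
    if winner:
        cell["wins"] += 1
        cell["uplift_sum"] += uplift

for (page, direction), cell in stats.items():
    win_rate = cell["wins"] / cell["total"]                            # e.g. 0.50 = 50%
    impact = cell["uplift_sum"] / cell["wins"] if cell["wins"] else 0  # avg uplift per winner
    print(f"{page} / {direction}: win rate {win_rate:.0%}, impact {impact:.1%}")
```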

Step 3. Use the scores as the start of your prioritization model and automate (first feedback loop)

The next step is to set up your prioritization model. The starting point of your model is the scores from the previous step.

For the win rate, you can multiply it by 10, so a win rate of 41.5% (0.415) becomes 4.15 points. For impact, you can multiply it by 100, so an average increase per winner of 5.1% (0.051) becomes a score of 5.1.

Based on the screenshot above, any experiment idea on your backlog that applies a prompt to the home page gets a score of 4.15 + 5.1 = 9.25.

Of course, these scores must be updated automatically. After each experiment, the win rate changes, and after each winning experiment, the impact may change. Your documentation tool should make these calculations automatically, so the prioritization scores of the ideas on your backlog are also updated automatically.

Again, with Airtable, this is relatively easy.
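
Outside Airtable, this first feedback loop could be sketched as follows: after every completed experiment, the win rate and impact per page and direction are recalculated, and every backlog idea picks up its new base score. The function and field names below are illustrative.

```python
# Sketch of the first feedback loop: recompute base priority scores for
# backlog ideas from the latest win rate and impact per (page, direction).
# 'stats_for' is assumed to return the win rate as a fraction (e.g. 0.415)
# and the impact as a fraction (e.g. 0.051), as in the previous sketch.

def base_priority(win_rate: float, impact: float) -> float:
    """Win rate x 10 plus impact x 100, e.g. 0.415 and 0.051 -> 4.15 + 5.1 = 9.25."""
    return win_rate * 10 + impact * 100


def refresh_backlog(backlog: list, stats_for) -> None:
    """Update the priority score of every backlog idea in place."""
    for idea in backlog:
        win_rate, impact = stats_for(idea["page"], idea["direction"])
        idea["priority"] = base_priority(win_rate, impact)


# Illustrative usage: a prompt idea on the home page with a 41.5% win rate and 5.1% impact.
backlog = [{"name": "Reminder banner", "page": "home", "direction": "prompt"}]
refresh_backlog(backlog, stats_for=lambda page, direction: (0.415, 0.051))
print(backlog[0]["priority"])  # 9.25
```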

Step 4. Add other features that apply to your organization

Next, you may want to add additional attributes that apply to your business.

Examples:

  • Alignment with business goals and OKRs (important test goals get higher scores)
  • Percentage of traffic that will see the change (above the fold gets a higher score)
  • Minimum detectable effect (a lower MDE gets a higher score)
  • Percentage of sales going through the page (a higher percentage gets a higher score)
  • Urgency (more urgent means a higher score)
  • Ease (make sure to balance simple and complex tests for speed and impact)

Three things to keep in mind:

  1. Make sure the win rate and impact have the highest weight in the overall priority score. These are based on previous experiments and should be the best predictor for your next experiment (see the sketch after this list).
  2. Do not add too many attributes; this slows down the prioritization process.
  3. Score these additional attributes as you see fit for your experimentation program, and optimize them in step 5.
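
As a rough illustration of such a weighted score, here is a sketch with made-up attributes and weights; the only fixed rule, per point 1 above, is that the win rate and impact component keeps the largest weight.

```python
# Sketch of an overall priority score with additional attributes.
# Attribute names and weights are illustrative; the evidence-based base score
# (win rate x 10 + impact x 100, from step 3) dominates the total.

WEIGHTS = {
    "base": 1.0,      # win rate x 10 + impact x 100, from step 3
    "okr": 0.5,       # alignment with business goals / OKRs, scored 0-3
    "traffic": 0.25,  # share of traffic that will see the change, scored 0-3
    "ease": 0.25,     # ease of implementation, scored 0-3
}


def priority_score(base: float, okr: int, traffic: int, ease: int) -> float:
    return (WEIGHTS["base"] * base
            + WEIGHTS["okr"] * okr
            + WEIGHTS["traffic"] * traffic
            + WEIGHTS["ease"] * ease)


# Example: base score 9.25 (from step 3), strong OKR fit, high traffic, fairly easy build.
print(priority_score(base=9.25, okr=3, traffic=3, ease=2))  # 12.0
```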

Step 5. Validate and optimize the model (second feedback loop)

We are optimizers! We analyze data and optimize. Why don't we do this for our prioritization model?

With the appropriate documentation tool, or with an export function, you can create a pivot table. On the vertical axis, show the priority scores (or a range of scores) of all completed experiments. On the horizontal axis, display the win rate and average impact of these experiments.

The experiments with the highest priority score should have the highest win rate and impact. If they do not, adjust your model: for example, change the scoring of the additional attributes or put more weight on the win rate and impact scores.

Example of a pivot table
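
If your documentation tool cannot build this pivot table directly, an export plus a few lines of pandas gives the same view. The data and column names below are illustrative.

```python
# Sketch of the second feedback loop: bucket completed experiments by priority
# score and check whether higher buckets really show a higher win rate and impact.
# The DataFrame stands in for an export from your documentation tool.
import pandas as pd

completed = pd.DataFrame({
    "priority": [11.2, 9.8, 8.1, 6.5, 5.9, 4.2],
    "winner":   [True, True, False, True, False, False],
    "uplift":   [0.07, 0.05, 0.0, 0.03, 0.0, 0.0],
})

# Group priority scores into ranges, then compare win rate and average uplift per range.
completed["priority_bucket"] = pd.cut(completed["priority"], bins=[0, 5, 8, 12])
pivot = completed.groupby("priority_bucket", observed=True).agg(
    experiments=("winner", "size"),
    win_rate=("winner", "mean"),
    avg_uplift=("uplift", "mean"),
)
print(pivot)  # higher buckets should show a higher win rate and average uplift
```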

Keep experimenting and keep optimizing your model.

Better prioritization for better decisions

A successful experimentation program creates enthusiasm within your organization for experimentation and validation.

The success of your program is often determined by the number of A/B test winners and valuable insights from your experiments. To run the best experiments, a good prioritization framework is essential.

Our prioritization models should be simple, evidence-based, automated to some degree, and have a (double) feedback loop based on the success of previous experiments.

As I mentioned at the beginning, I believe there is too little attention paid to the importance of prioritization. With this blog, however, I hope that more organizations will start using an evidence-based model, aligned with business goals, to become even more successful with experimentation. 

Should you need help in setting up a prioritization model for your organization, please feel free to contact us.