Many marketers and managers now know that experimentation is both instructive and financially rewarding, but how do you make sure you use the data and insights at your disposal wisely? How do you take a structured approach and get the most out of your team(s)? Here is how to implement a data- and evidence-based way of working in your organization in 5 steps.
Before the team starts running experiments, run a few checks to make sure the web data is correct and usable. Which metrics and events are you using? What is being measured and what is not? Which measurements do you need in order to experiment (think: event tracking, analytics settings and conversion metrics)? Also check that all tooling is in order. What is the quality of your data? Is all data coming in? Does the A/B testing tool measure the entire population, and is traffic randomly distributed between variants?
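One concrete check on random distribution is a sample ratio mismatch (SRM) test: if the A/B tool is configured for a 50/50 split but the observed traffic deviates significantly, the experiment data cannot be trusted. A minimal sketch, assuming a 50/50 allocation and using a chi-square goodness-of-fit test (the function name and thresholds are illustrative, not from the article):

```python
import math

def srm_check(visitors_a: int, visitors_b: int, alpha: float = 0.001) -> tuple[float, bool]:
    """Chi-square goodness-of-fit test against an expected 50/50 split.

    Returns (p_value, srm_detected). A very small p-value means the
    observed split deviates from the configured allocation and the
    experiment data should not be used.
    """
    total = visitors_a + visitors_b
    expected = total / 2
    chi2 = ((visitors_a - expected) ** 2 + (visitors_b - expected) ** 2) / expected
    # For 1 degree of freedom: p = P(chi2_1 > chi2) = erfc(sqrt(chi2 / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return p_value, p_value < alpha

# A 50.1/49.9 split over 100k visitors is within normal variation ...
p, srm = srm_check(50_100, 49_900)
# ... but a 52/48 split over the same volume signals a tracking problem.
p_bad, srm_bad = srm_check(52_000, 48_000)
```

The strict alpha (0.001) is deliberate: with this check you want very few false alarms, but a genuine mismatch should stop the analysis immediately.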
Once the data is correct and suitable for testing, the team looks at the bandwidth of the online channel or website. Bandwidth is the test potential calculated over the entire year. The calculation tells you a lot about where you can optimize and how much impact would then need to be made on which template. Based on this, you can determine your total experiment potential (and how many resources that would require). Read more in the article “Why you always start with a bandwidth calculation.”
It all starts and ends with your customer. What motivates a visitor to become a customer? Why do visitors specifically choose your brand, product or service? What keeps them from buying something?
Data can go a long way toward answering these questions. Using web data, the team learns what steps visitors take, and where they hesitate, drop out or continue. Based on these key insights, the team then works out why visitors take these steps. For this, it is important to bring in a behavioral expert or researcher. Why do visitors do what they do, how does the brain make decisions, and how can we make things clearer or easier for them?
In a customer journey analysis, you investigate the motivations and abilities of your visitors. From sources such as web data, surveys, scientific literature, market research and conversations with colleagues, the team extracts valuable information to understand what is going on with customers and visitors at which moment. At Online Dialogue we use the 5V model for this research. Based on the findings, optimization hypotheses are formulated and prioritized by chance of success, using the data from the preliminary research.
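Prioritizing hypotheses by chance of success can be made explicit with a simple scoring model. A minimal sketch, assuming three 1–5 criteria and a weighting that favors hypotheses backed by the customer journey research (the criteria, weights and example hypotheses are hypothetical, not the article's actual scoring method):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    description: str
    evidence: int   # 1-5: how strongly the research data supports it
    impact: int     # 1-5: expected effect on the template's potential
    effort: int     # 1-5: implementation cost (higher = more work)

def priority(h: Hypothesis) -> int:
    # Weight evidence highest: hypotheses grounded in the customer
    # journey research should reach the test backlog first.
    return 2 * h.evidence + h.impact - h.effort

backlog = [
    Hypothesis("Redesign homepage hero", evidence=2, impact=3, effort=5),
    Hypothesis("Clarify shipping costs on product page", evidence=5, impact=4, effort=2),
]
backlog.sort(key=priority, reverse=True)
```

Whatever the exact weights, writing the scoring down forces the team to justify why a hypothesis sits at the top of the backlog instead of relying on gut feeling.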
From the customer journey analysis, the team extracts many insights and issues that you can start exploring, tweaking or testing. Making quick improvements and growing is often the goal of testing, but experiments provide many more insights and opportunities than that. At the test level, the results can provide a lot of insight, depending on the type of test.
Depending on your approach to testing, when you analyze the results you decide whether to implement the variation or to run the test again to confirm that the result holds up. Or perhaps the results inspire an entirely new test hypothesis. In this way, insights from testing help the team determine its strategy.
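The implement-or-retest decision usually starts from a significance test on the conversion rates of the two variants. A minimal sketch using a standard two-sided two-proportion z-test (the function and the example numbers are illustrative, not from the article):

```python
import math

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided two-proportion z-test for A/B conversion rates.

    Returns the p-value for the null hypothesis that both variants
    convert at the same rate.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# 5.0% vs 5.85% conversion over 10,000 visitors per variant:
p = ab_test_p_value(500, 10_000, 585, 10_000)
significant = p < 0.05
```

A significant result supports implementing the variation; a non-significant one is a reason to rerun, extend, or reframe the test rather than a proof that nothing changed.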
Jeff Bezos has previously said “If you double the number of experiments you do per year you're going to double your inventiveness.” He is referring to the amount of testing and insights your experiments yield. Experimentation is a way of working that can yield a lot, especially in the long run, both in conversions and in insights into visitor and customer behavior. It is important to build up the testing program structurally and adjust it (where necessary). The time and attention you devote as an organization to data and evidence-based work determines your focus.
In the short term, the focus is on transactions. Your goal is to increase the number of conversions quickly and exploit your bandwidth as much as possible. So “do, do, do” and do a lot of testing.
In the medium term, you move to an informational program. Here the focus is on gathering insights. You look more at the quality of your tests and add a psychological layer to your analysis. How are people motivated, should we facilitate or respond to the online experience, what different groups are there and how can we tailor our communications and activities for them?
Once the team is experimenting at volume, you can start to steer and innovate based on the insights you have gained about your customers. You will find that this knowledge is extremely valuable for developing new products, campaigns and features, and for delivering as much customer value as possible.
By structuring in this way, you greatly accelerate the way your team(s) work. Quick wins for now, learnings for standardization and innovation in the long run.
Be aware that the experiment itself is only a small part of a data- and evidence-based practice. There are plenty of steps before and after running an experiment that make the practice successful. In addition, your approach and focus depend on the stage your organization is in. A data- and evidence-driven way of working only accelerates and increases the growth of your organization if you have a clear working structure.
This article was published on Dec. 4 at Emerce.co.uk