24 November 2025
A conversation with Valentin Radu, founder of Omniconvert, on experimentation as an operating model, AI and sustainable digital growth.
Digital growth is shifting rapidly. AI is accelerating workflows, competition is intensifying and uncertainty has increased. We spoke with Valentin Radu about what drives him, where growth gets stuck and how experimentation is evolving as an operating model.
“We’re facing the most dramatic shift in how we generate more value in our work.”
For Valentin, the speed of change is both exciting and slightly frightening. He wants to be part of that shift and help customers and colleagues make progress in ways that were unthinkable a few years ago.
“The capacity to innovate and validate, to run experiments, is what motivates me the most. I sometimes feel like a kid in a toy store.”
Data integrity and data collection are crucial. Technology is evolving rapidly.
However, Valentin sees a different primary constraint.
“Internal politics and decision-making processes are still the main brakes when it comes to companies making progress.”
For him, organizational capability is the main bottleneck. Without alignment and the ability to act on insights, progress stalls, regardless of the tools in place.
“You’re making big bets with limited information.”
Launching the wrong product wastes millions. Missing a trend means competitors take customers. Relying on gut instinct is gambling.
Experimentation changes that dynamic.
“Don’t fund a $1 million hunch. Test the core assumption for $20,000 to find flaws before the budget is sunk.” Instead of betting everything on one idea, companies can test smaller versions first and learn what actually works before scaling.
“The winners in your industry won’t be the ones with the best first idea. They’ll be the ones who learn fastest and make the fewest expensive mistakes.”
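To make the arithmetic behind that advice concrete, here is a toy expected-cost comparison in Python. The $1 million and $20,000 figures come from the quote; the 40% chance that the core assumption is flawed is an invented illustration, not a real estimate.

```python
# Toy expected-cost comparison: fund the hunch outright vs. test first.
# The $1M bet and $20K test cost come from the quote above; p_flawed is
# an illustrative assumption, not a real estimate.
def expected_cost(bet, test_cost, p_flawed):
    """Expected spend with and without a cheap test that catches a flawed assumption."""
    blind = bet                                 # fund the hunch: full bet, flawed or not
    tested = test_cost + (1 - p_flawed) * bet   # test first: bet only if the test passes
    return blind, tested

blind, tested = expected_cost(bet=1_000_000, test_cost=20_000, p_flawed=0.4)
# With a 40% chance the assumption is flawed, testing first lowers the
# expected spend from $1,000,000 to $620,000.
```

The point is not the exact numbers: as long as the test is cheap relative to the bet and there is a real chance the assumption is wrong, testing first wins in expectation.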
“I am a huge fan of continuous discovery.”
In today’s environment, research cannot stop. CRO, in Valentin’s view, is an excellent methodology, but it is a mistake to limit its insights to the website alone.
The entire customer journey should be continuously optimised and aligned with business goals. Research should extend beyond web analytics and include market insights, customer support data, product intelligence, demand assessment, pricing elasticity and user sentiment.
“Are you after market share, keeping margin at the bare minimum? Or are you after profit at all cost, sacrificing growth? You can optimize for either. But that requires internal alignment and departments that work together.”
1. Build a culture of adaptability
It’s tempting to focus on AI first. There is a lot happening: agentic commerce, automation, AI-driven workflows.
But in uncertain times, the real advantage is adaptability.
Agility depends on people who feel safe to explore, test and fail fast. Tools help, but without the right culture and management mindset, organisations cannot adjust their business model to where the market is going.
2. Invest in clear, reliable and relevant data
You cannot adapt without strong hypotheses. And you cannot build strong hypotheses without reliable data.
Data needs to be clear, accessible and relevant. Moving faster without clarity increases risk.
Key questions such as what will not change in your customer base, why customers migrate, and where to invest cannot be answered without solid qualitative and quantitative insights.
3. Balance efficiency with undeniable customer value
Efficiency matters. But cost-cutting should not undermine long-term value.
If you save $10K through automation but lose $100K in margin due to weaker customer experience or retention, that is not a win.
The challenge is balancing operational efficiency with delivering undeniable value in the market.
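As a quick sanity check, the arithmetic from that example in a few lines of Python:

```python
# Net impact of a cost cut that degrades customer experience.
# Figures are taken from the example in the text: $10K saved, $100K margin lost.
automation_savings = 10_000
margin_lost = 100_000

net = automation_savings - margin_lost
# net is -90,000: the "efficiency win" destroys value overall.
```

Any efficiency initiative should be judged on this net figure, not on the savings line alone.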
Where AI speeds things up
AI accelerates hypothesis generation and test prioritisation. It can analyse large volumes of customer data, detect patterns and suggest what to test based on commercial impact and likelihood of success. What used to take weeks can now take hours.
We have already seen this with our AI CRO audit and benchmark. In around 10 minutes, it can assess CRO fundamentals, accessibility, data hygiene and user sentiment from hundreds of reviews.
Agentic tools will go further. They will be able to run tests consecutively, learn from outcomes and continue optimising with minimal human intervention, provided clear guardrails are in place.
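As an illustration of the prioritisation step described above, here is a generic expected-value scoring sketch in Python. This is not Omniconvert's actual model; the hypothesis names, impact estimates and success probabilities are all invented.

```python
# Generic expected-value scoring for test ideas: rank hypotheses by
# estimated commercial impact weighted by likelihood of success.
# All names and numbers are invented for illustration.
hypotheses = [
    {"name": "simplify checkout",  "impact": 120_000, "p_success": 0.30},
    {"name": "rewrite PDP copy",   "impact": 40_000,  "p_success": 0.55},
    {"name": "new loyalty banner", "impact": 15_000,  "p_success": 0.70},
]

for h in hypotheses:
    h["score"] = h["impact"] * h["p_success"]  # expected value of running the test

ranked = sorted(hypotheses, key=lambda h: h["score"], reverse=True)
for h in ranked:
    print(f'{h["name"]}: {h["score"]:,.0f}')
```

An AI assistant doing this at scale would estimate `impact` and `p_success` from customer data and past test outcomes rather than from gut feel; the ranking logic itself stays this simple.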
Where it goes wrong
Moving faster in the wrong direction is still moving in the wrong direction.
AI optimizes for the metric it is given. If that metric is clicks or short-term conversions, it will maximize those, even if retention, customer quality or long-term value decline. The numbers may look strong while the business weakens underneath.
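A toy illustration of that failure mode: two variants where the metric the optimizer is given (clicks) points to a different winner than long-term value. All numbers are invented.

```python
# Toy illustration: the variant that "wins" on the given metric (clicks)
# is not the one that wins on long-term customer value. Numbers are invented.
variants = {
    "A": {"clicks": 900,   "ltv": 50_000},
    "B": {"clicks": 1_200, "ltv": 35_000},  # clickbait-style variant
}

by_clicks = max(variants, key=lambda v: variants[v]["clicks"])  # "B"
by_ltv    = max(variants, key=lambda v: variants[v]["ltv"])     # "A"
# An optimizer given only clicks ships B, even though A is worth more.
```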
There is another risk. If every brand uses the same AI tools trained on similar datasets, experimentation starts converging toward the same answers. Differentiation decreases.
The real issue is not that AI makes mistakes. It is that AI executes very effectively on a poorly defined strategy.
You still need to know where you are going. AI simply helps you get there faster.
The Model Context Protocol (MCP) is a new open standard by Anthropic that securely connects large language models (LLMs) to external tools and data.
Data + insights access: this is the biggest win. MCP lets AI connect directly to your analytics stack, CRM, or RFM segments without someone manually exporting CSVs. The insight-to-hypothesis gap shrinks dramatically.
Experiment setup and code generation: once the insight exists, MCP-connected agents can actually write the test: variation copy, targeting rules, even A/B test configuration pushed directly into your CRO tool. No ticket, no dev queue.
Results interpretation: instead of someone manually reading a dashboard after a test closes, an agent pulls the data, flags significance, and summarizes what it means for the next step. The learning loop can close faster.
Cross-tool coordination: most experimentation pain isn’t inside one tool; it’s the handoffs between them. MCP will make those handoffs programmable instead of manual.
The bottleneck stops being “can we get the data” and starts being “did we ask the right question.”
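For the technically curious: an MCP tool invocation travels as a JSON-RPC 2.0 message. Below is a minimal sketch of a `tools/call` request per the MCP specification; the tool name `query_analytics` and its arguments are hypothetical stand-ins for an analytics integration.

```python
import json

# Minimal sketch of an MCP "tools/call" request (JSON-RPC 2.0), following
# the Model Context Protocol specification. The tool name and arguments
# are hypothetical stand-ins for an analytics integration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_analytics",  # hypothetical tool exposed by an MCP server
        "arguments": {"segment": "rfm_champions", "metric": "conversion_rate"},
    },
}

print(json.dumps(request, indent=2))
```

The value is that every tool in the stack speaks this one request shape, which is what makes the cross-tool handoffs above programmable.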
By the way: this is what we are working on right now at Omniconvert, with a beta version planned for release this April.
Valentin: “The real shift: less time herding data, more time making calls that matter. The plumbing is getting handled. AI and connected systems compress the “who needs what, when” work. Insights pop up faster, status writes itself, routine choices get automated.
That’s great… until it isn’t. When execution speeds up, bad strategy compounds faster too. So the human edge moves upstream: define intent with precision. What does ‘good enough’ look like, which metrics matter, what trade-offs we accept. Vague direction used to slow things down. Now it scales confusion.
The best leaders won’t be the most technical. They’ll be the clearest on outcomes, crisp in communication, and willing to override the machine when context demands it.
Judgment is the scarce asset. Protect it, train it, and instrument it. If you want a feedback loop that sharpens judgment, plug your customer’s voice into the system continuously.”
Curious how experimentation can become a strategic capability in your organization? Explore how we approach experimentation at Online Dialogue.