From data to insight with the FACT & ACT model

Suppose you are an online marketer for a coffee roaster. You decide to run an A/B test to boost coffee maker sales. A nice design is created: some pretty pictures, a big green call-to-action, 10 percent off, a bit of social proof, and you're done. After four weeks of testing, you see that more coffee makers are indeed being sold. But unfortunately, coffee bean sales and newsletter subscriptions have declined. Now what? What did you actually learn from your test? That the combination of all those elements drives more coffee maker sales? Will that also help increase sales of your other products? From data to insights: easily said, not so easily done.

The fact that your design creates more sales is nice. But what does that mean? Why does this particular design sell better? What behavior have you changed? These questions may seem unimportant in the short term, but in the long term they determine whether and how successful your optimization program is. What if the design trick stops working after a few months? What should you do then?

Translating data into insights is not easy, because data is an abstract concept. Data is absolutely necessary to arrive at an insight, but you need to know how to interpret it. For this reason, the interaction between web analyst and psychologist is crucial, with a third person, the UX'er, as an additional link.

At Online Dialogue, we work according to the FACT & ACT model. This model structures daily conversion optimization practices and focuses on working with a multidisciplinary team. A mutual dependency between web analyst and psychologist ensures that data is critically examined, allowing you to learn more step by step.

1. Find

When you visit a client, the first thing you want to know is what behavior you're dealing with. The web analyst dives into the data to do a preliminary analysis, and the psychologist asks additional questions.

The coffee roaster's web analyst notices that many people spend a long time on the coffee maker overview page and then leave the website. The psychologist asks further: “What is the scrolling behavior?” “Are there other tabs open?” “Are filters being used?” “What is the clicking behavior?” By examining these questions as well, it becomes clear that many visitors scroll a long time, click back and forth between products a lot, and use a lot of filters. Filters like price and type of coffee turn out to be very important.

In addition to the pre-analysis, the web analyst does a technical check and a bandwidth calculation. The technical check determines if the data is correct and if everything is set up correctly in the analytics program. It also identifies possible additional metrics that can properly map behavior.

Through the bandwidth calculation, the web analyst discovers which pages you can test on and with which KPIs. The calculation also shows how long a test can run and which uplift in conversion is needed to measure an effect. This is important information for the psychologist, because the minimum detectable uplift determines how bold an experiment should be: if a small uplift can already be measured, a subtle change may suffice; if a large uplift is needed, the change must be correspondingly drastic.
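The core of such a bandwidth calculation can be sketched in a few lines of code. The sketch below estimates, under a normal approximation and with made-up baseline and traffic numbers, how many visitors each variant needs to detect a given relative uplift:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_uplift, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a relative uplift in
    conversion rate with a two-sided two-proportion test."""
    p_var = p_base * (1 + rel_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p_var - p_base) ** 2)

# Illustrative numbers: 3% baseline conversion, aiming to detect a 20% uplift.
n = sample_size_per_variant(0.03, 0.20)

# With e.g. 1,000 eligible visitors per day split over two variants,
# the test duration in days is roughly:
days = math.ceil(2 * n / 1000)
```

With these illustrative numbers the calculation lands on roughly 14,000 visitors per variant, about four weeks of testing. The smaller the uplift you want to detect, the more visitors, and thus the longer a run, you need.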

The Find phase leads to an overall picture of website behavior. Such preliminary research by the web analyst shows conversion rates, where clicks are made, where visitors leave the website, how long they spend on a page, and so on.

Most behavioral insights are extracted from a flow analysis. Through this analysis, you can identify differences in behavior by segment. For example, you can see differences in behavior between visits from an ad and visits from a Google search. Once data is available, a question-and-answer game between web analyst and psychologist ensues. Ideas about behavior come to life, and assumptions about behavior can already be examined against the available data. In addition, the web analyst's bandwidth calculations should be included to determine where to test and how large the experiment can be.

First, we see in the data that most visits come from a Google search and some from social media. Few visitors land directly on the coffee roaster's website. Visitors from Google click around the website a lot and visitors from social media leave the page rather quickly.
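A segment comparison like this can be sketched with a handful of visit records; the fields and numbers below are made up for illustration, not real analytics output:

```python
from collections import defaultdict

# Hypothetical visit records; a real analysis would pull these
# from the analytics tool.
visits = [
    {"source": "google", "pages_viewed": 7, "bounced": False},
    {"source": "google", "pages_viewed": 5, "bounced": False},
    {"source": "google", "pages_viewed": 6, "bounced": True},
    {"source": "social", "pages_viewed": 1, "bounced": True},
    {"source": "social", "pages_viewed": 2, "bounced": True},
    {"source": "social", "pages_viewed": 1, "bounced": False},
]

# Aggregate per traffic source.
totals = defaultdict(lambda: {"visits": 0, "pages": 0, "bounces": 0})
for v in visits:
    t = totals[v["source"]]
    t["visits"] += 1
    t["pages"] += v["pages_viewed"]
    t["bounces"] += v["bounced"]

for source, t in sorted(totals.items()):
    print(f"{source}: {t['pages'] / t['visits']:.1f} pages/visit, "
          f"{t['bounces'] / t['visits']:.0%} bounce rate")
```

Even this toy aggregation surfaces the pattern described above: the Google segment clicks around a lot, while the social segment leaves quickly.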

The data research suggests that visitors are interested in coffee makers; after all, they spend a long time on the coffee maker overview page. The many clicks and heavy use of filters suggest that visitors are looking for information. Leaving the page indicates that visitors are not yet able to proceed to a purchase.

Do you have an insight now?

No, now you have a general picture of behavior on the website and thus input to form hypotheses.

All information, from the bandwidth calculation to the psychological behavioral study, is collected in a determinant study. This study is the basis of the test roadmap, indicating where we can test and where we can make an impact. An important question we continually ask ourselves is: “What is the next best test?” To determine this, we prioritize our test ideas, taking into account everything we have learned so far, the expected impact, and the value to the business.
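One way to make such a prioritization explicit is a simple scoring model; the sketch below loosely follows a PIE-style scheme (Potential, Importance, Ease), with made-up test ideas and scores:

```python
# Hypothetical test ideas scored 1-10 on potential impact, importance
# of the page, and ease of implementation; ideas and scores are illustrative.
ideas = [
    {"idea": "star ratings on overview page", "potential": 8, "importance": 9, "ease": 7},
    {"idea": "quality guarantee badge", "potential": 6, "importance": 9, "ease": 8},
    {"idea": "redesign checkout flow", "potential": 9, "importance": 7, "ease": 3},
]

def pie_score(idea):
    """Average of the three PIE criteria."""
    return (idea["potential"] + idea["importance"] + idea["ease"]) / 3

ranked = sorted(ideas, key=pie_score, reverse=True)
for idea in ranked:
    print(f"{pie_score(idea):.1f}  {idea['idea']}")
```

The exact weights matter less than the discipline: every idea is scored on the same criteria, so the "next best test" follows from the roadmap rather than from gut feeling.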

2. Analyze

A hypothesis is the most important part of an A/B test; without a hypothesis, it is impossible to get from data to insight. The reason you set up a hypothesis when doing an A/B test is to give the research a framework. If the psychologist determines the hypothesis in advance, you know what you can attribute a measured effect to. If you don't determine that in advance, you're in the dark.

When writing hypotheses, we distinguish between main, sub- and test hypotheses. A main hypothesis makes an assumption about behavior on the website, a sub-hypothesis makes an assumption about the behavioral change that a technique will produce, and a test hypothesis makes that change specific and measurable. By testing different test hypotheses, you can confirm a sub-hypothesis and ultimately a main hypothesis.

Because there are many visits from Google and many visitors clicking through on the site, the psychologist believes that most of the visitors are looking for a coffee maker, but do not yet know what is important when buying one.

The assumption from the data research may be that visitors are not yet ready to make a purchase because they are not sure about the product. This assumption comes from the fact that visitors click a lot on informative elements of the page and view many products.

The corresponding main hypothesis is: “Visitor buying behavior depends on certainty about the coffee maker.” The corresponding sub-hypothesis reads: “By increasing certainty through positive feelings around the coffee maker, more coffee makers will be sold.” A test hypothesis could be: “By showing positive reviews on the overview page, feelings of certainty are increased and more coffee makers will be sold,” or: “By providing a quality guarantee on the page, feelings of certainty are increased and more coffee makers will be sold.”

So a hypothesis, and then?

3. Create

A hypothesis is the result of the question-and-answer game between web analyst and psychologist. Once the hypothesis is formed, the UX'er comes into the picture and a design is created. This happens in consultation with the psychologist, because it is imperative that the design reflects the hypothesis. If it does not, you cannot draw any conclusions from the measured effect.

By examining both hypotheses, you already get a better picture of website behavior. Are visitors sensitive to more emotional information, such as what other people think, or are visitors sensitive to more practical information, such as quality and warranty when something stops working?

For each test, you test one hypothesis and one design. The UX'er creates a design for the first test hypothesis: each coffee maker gets a rating in the form of a number of stars, visible on the overview page. The psychologist briefly considers expressing the rating as a numeric grade instead, but after consulting with the UX'er and the scientific literature, the stars are retained.

The data you need to measure the effect of the test is determined in advance, because your hypothesis determines what you are investigating. It is not sound to search your data for an effect afterwards and come up with an explanation to match. You won't learn anything from that.

Together, the psychologist, the UX'er and the web analyst determine which metrics are needed to test the hypothesis. The psychologist knows what data is needed to determine behavioral change, the UX'er knows better than anyone else what changes in design might affect it, and the web analyst figures out if all metrics are possible.

To determine what effect the design change will have, a number of metrics are important. For example, the psychologist and the UX'er expect conversion to increase, time on page to shorten, back clicks to decrease, exits to drop, filter usage to decline, and more visitors to enter the checkout.

All measurement points are explicitly mentioned in the test hypotheses. In addition, it is interesting to measure whether visitors pay attention to the stars, by looking at heatmaps, for example, but that is not the most important measurement.

4. Test

Just breathe, first data then insights.

5. Analyze

Once the test is over, the web analyst goes to work. This produces a nice report with answers to all the questions. But in the end, it's all about the significance level of the new variation. Ideally, the variation increases conversion significantly. Higher conversion means more sales and a satisfied customer.
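That significance check boils down to comparing two conversion rates. A minimal sketch with made-up visitor and order counts, using a one-sided two-proportion z-test:

```python
import math
from statistics import NormalDist

def two_proportion_ztest(conversions_a, visitors_a, conversions_b, visitors_b):
    """One-sided z-test: does variant B convert better than control A?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 1 - NormalDist().cdf(z)
    return z, p_value

# Illustrative counts: 420 orders out of 14,000 visitors for the control,
# 510 out of 14,000 for the variant with star ratings.
z, p = two_proportion_ztest(420, 14000, 510, 14000)
significant = p < 0.05
```

With these numbers the uplift is significant at the 5 percent level. The point of the sketch is only that the verdict follows mechanically from counts; the interpretation of that verdict still requires the hypothesis.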

Nevertheless, we don't need a conversion rate increase to get insights from the data. We learn something from every test we do, particularly because test hypotheses include multiple metrics. These metrics tell the psychologist more about behavior. We learn whether we are on the right track, or not. We notice whether certain changes within certain groups produce behavioral change, or not. Each outcome of each test is a puzzle piece to learn more about visitor behavior.

We find a significant result; we are very happy with that, and so is the coffee roaster. But we are not satisfied yet: we would like to find out how we arrived at this positive result.

Back to the test hypothesis for a moment: “By showing positive reviews on the overview page, feelings of certainty are increased and more coffee makers will be sold.” By increasing certainty through star ratings, we indeed see more coffee makers being sold. In addition to the increased conversion, we can also see fewer back clicks, less filter usage, less time on the page, and a lower exit rate. Furthermore, the web analyst notes that fewer other tabs were open.

Is this then an insight?

6. Combine

The trick is in the combination. By testing a lot, we collect a lot of data and can therefore potentially learn a lot. Data alone is not enough, which is why we write hypotheses. By writing hypotheses, we give meaning to the data. A significant result is no longer an abstract fact, but the confirmation of an assumption.

We test the same sub- and main hypotheses with multiple test hypotheses to find as much evidence as possible for our data-derived behavioral assumption. And then we finally have an insight: you understand why behavior on the website is the way it is.

The test in which we increased certainty on the page had a positive effect. This suggests that we are testing in the right direction. If we now continue with different test hypotheses and get repeated confirmation that increasing certainty drives sales, we have an insight. Namely: visitors who want to buy a coffee maker need certainty about the product.

We can use this insight throughout the website. Maybe it works even better on the product detail page, or in the checkout. One notable finding from the web analyst was that visitors had fewer tabs open during their visit. For the psychologist, this is interesting: it could indicate that fewer visitors were comparing competitors. Thus, each test and each finding provides input for the next.

7. Transform

The moment you have many insights about your visitors' behavior, you are also able to notice changes in behavior at an early stage. A change in behavior means the market is ready for change. Time to innovate!

From data to insights: easily said, not so easily done. The key to getting from data to insights is the interaction between web analyst and psychologist. So for each product, some pretty pictures, a big green call-to-action, 10 percent off and some social proof won't help you in the long run. Knowing that visitors need certainty when buying a coffee maker certainly will!

This article was published on Oct. 10 at Marketingfacts.