When To Iterate On A Website Experiment (And When To Move On)


I absolutely love this quote, and feel it's especially relevant to marketers:

"I have not failed. I've just found 10,000 ways that won't work."

– Thomas Edison

A big part of marketing is quickly getting something live, seeing how the market reacts, and then iterating based on what worked well. That's why launching new experiments on a regular cadence is a critical step in improving everything about your marketing, from messaging to conversion rates.

Every test, even one that results in a negative lift, helps inform your next move, build compounding value, and refine your growth engine.

But how do you know when an experiment should be iterated on, or when it should be abandoned completely?

In this post I'll share the three primary signs to look out for when deciding whether to iterate on a website experiment or bail completely and try again.

When To Iterate On A Website Experiment, And When To Move On

Here's a handy dandy diagram to get us started. As we go through the different cases, I'll highlight the part of the diagram worth paying attention to in purple.

[Diagram: when to iterate on an experiment vs. when to move on]

Case 1: Negative lift + medium or high traffic volume

[Diagram: Case 1 highlighted (negative lift, medium or high traffic)]

Webpages with a lot of traffic are the best testing grounds because you get directional results very quickly.

That means you do not need to wait for your experiment to reach a statistically significant result if you're seeing a negative trend. If the trend stays consistently negative after a few days with enough visitor traffic, you should iterate on your experiment, for example by changing the copy to be more direct.

If one day the results are positive and the next day they're negative, there's no need to iterate just yet. Give your experience some time to normalize before drawing conclusions.
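If you're logging daily results, a quick check like the one below can tell you whether the trend is consistently negative or still flip-flopping day to day. This is a minimal sketch in plain Python; the daily visitor and conversion counts are hypothetical placeholders, not real experiment data.

```python
# Minimal sketch: is the variant consistently underperforming the control?
# The daily (visitors, conversions) pairs below are hypothetical placeholders.
control_days = [(420, 21), (390, 19), (450, 23), (410, 20)]
variant_days = [(430, 15), (400, 14), (440, 16), (420, 15)]

def daily_rates(days):
    return [conversions / visitors for visitors, conversions in days]

control_rates = daily_rates(control_days)
variant_rates = daily_rates(variant_days)

# Count how many days the variant lost to the control.
losing_days = sum(v < c for v, c in zip(variant_rates, control_rates))

if losing_days == len(variant_days):
    print("Consistent negative trend: consider iterating now.")
elif losing_days == 0:
    print("Consistent positive trend: let the experiment keep running.")
else:
    print("Mixed day-to-day results: give the experience more time to normalize.")
```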

Case 2: Flat result + high traffic volume

[Diagram: Case 2 highlighted (flat result, high traffic)]

Unlike a typical A/B test, personalization should generate a large lift (normally at least 20%, compared to around 4% for A/B testing).

If your lift is smaller (between -20% and 20%) and the experiment has not reached statistical significance, iterate on the experience to create a larger impact. Take what you've learned and run another experiment that's a bit bolder.

Make sure your experiment has seen enough visitors to determine that the result is truly flat. Look for at least 300 visitors in each variation before drawing this conclusion.
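Here's a minimal sketch of that flat-result check in Python. The thresholds come straight from this case (a lift between -20% and 20%, and at least 300 visitors on each side); the visitor and conversion counts in the example calls are hypothetical.

```python
# Minimal sketch of the "flat result" check described above.
def lift(control_visitors, control_conversions, variant_visitors, variant_conversions):
    control_rate = control_conversions / control_visitors
    variant_rate = variant_conversions / variant_visitors
    return (variant_rate - control_rate) / control_rate

def is_flat(control_visitors, control_conversions, variant_visitors, variant_conversions,
            min_visitors=300, flat_band=0.20):
    # Not enough traffic yet to call the result flat either way.
    if control_visitors < min_visitors or variant_visitors < min_visitors:
        return False
    observed_lift = lift(control_visitors, control_conversions,
                         variant_visitors, variant_conversions)
    return -flat_band < observed_lift < flat_band

print(is_flat(350, 28, 360, 31))  # True: lift is within +/-20%, so iterate with something bolder
print(is_flat(350, 28, 360, 42))  # False: lift is well above 20%, so let it run
```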


Case 3: Low visitor traffic

[Diagram: Case 3 highlighted (low visitor traffic)]

If your experiments are being seen by fewer than 100 visitors, it will be difficult for you to measure performance differences. In these cases you should try to expand your segment size to reach a larger audience. The bigger the audience, the faster you'll see results.
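One way to sanity-check whether a segment is too small is to estimate how long it would take to reach a readable sample at the current traffic rate. This is a rough sketch; the weekly traffic numbers are hypothetical, and the 300-visitor floor per variant echoes the threshold from Case 2.

```python
# Minimal sketch: how many weeks until each variant has a readable sample?
def weeks_to_readable(weekly_segment_visitors, variants=2, min_visitors_per_variant=300):
    visitors_per_variant_per_week = weekly_segment_visitors / variants
    if visitors_per_variant_per_week == 0:
        return float("inf")
    return min_visitors_per_variant / visitors_per_variant_per_week

print(weeks_to_readable(80))    # 7.5 weeks: too slow, widen the segment
print(weeks_to_readable(1200))  # 0.5 weeks: plenty of traffic
```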

It's only in the cases where you're getting good results and the experiment is getting a lot of traffic that you want to hold out until it reaches statistical significance. This ensures you're getting the most conversions from your test and are on the right path to growing revenue through your website.
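If you want a concrete way to check whether a winning experiment has reached significance, a two-proportion z-test is one common approach. The sketch below uses only Python's standard library, and the visitor and conversion counts are hypothetical placeholders.

```python
# Minimal sketch: two-sided two-proportion z-test for conversion rates.
from math import sqrt, erf

def two_proportion_p_value(control_visitors, control_conversions,
                           variant_visitors, variant_conversions):
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical counts: 8.0% vs 10.5% conversion on 2,000 visitors each.
p = two_proportion_p_value(2000, 160, 2000, 210)
print(f"p-value: {p:.4f}")
print("Statistically significant at 95%." if p < 0.05 else "Keep the experiment running.")
```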

We've put together examples of how you can iterate on your tests in each of these cases so you can get a new experiment launched and collect data ASAP.
