A key concept in split-testing for digital advertising is what’s called a “confidence level”.
It’s just another fancy stats term for how much you can trust your testing conclusions.
Typically, you aim for a 95% confidence level.
Roughly speaking, that means there’s only about a 5% chance that the difference you’re seeing is just random noise rather than a real effect.
So, you can be about 95% sure that the winning ad really is the winner, and should keep performing better than the other ad variations in the test in future campaigns. You can set this number higher or lower, depending on your particular circumstances and tolerance for risk.
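If you want to see what that check actually looks like, here’s a minimal sketch in Python of the kind of calculation an A/B testing tool runs under the hood: a two-proportion z-test that asks whether the gap between two ads is bigger than random noise would explain. The conversion numbers are made up purely for illustration.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Is ad B's conversion rate significantly different from ad A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the assumption that both ads really perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: ad A converted 40 of 1,000 visitors, ad B 55 of 1,000.
z, p = two_proportion_z_test(40, 1000, 55, 1000)
print(f"z = {z:.2f}, p-value = {p:.3f}")
print("Winner at 95% confidence" if p < 0.05 else "Not significant yet -- keep testing")
```

In this made-up example the p-value lands well above the 0.05 cutoff, so even though ad B looks better, you couldn’t call it a winner at 95% confidence yet.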
Just keep in mind that the higher the confidence you want, the more data you will need, and that extra data takes more time and ad spend to accumulate.
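To put some rough numbers on that trade-off, here’s a quick back-of-the-envelope sample-size sketch (a standard two-sided approximation, with made-up assumptions: a 4% baseline conversion rate, a 1-point lift you’d like to detect, and 80% statistical power):

```python
import math
from statistics import NormalDist

def visitors_per_variant(base_rate, lift, confidence, power=0.80):
    """Rough number of visitors needed per ad variant to detect an
    absolute `lift` over `base_rate` at the given confidence level."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    p_avg = base_rate + lift / 2
    n = 2 * (z_alpha + z_beta) ** 2 * p_avg * (1 - p_avg) / lift ** 2
    return math.ceil(n)

# Made-up scenario: 4% baseline conversion rate, hoping to detect a 1-point lift.
for conf in (0.90, 0.95, 0.99):
    print(f"{conf:.0%} confidence -> ~{visitors_per_variant(0.04, 0.01, conf):,} visitors per variant")
```

Under these assumptions, going from 90% to 99% confidence roughly doubles the traffic you need per variant, and that is exactly the time-and-ad-spend cost mentioned above.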
When time and budget are tight, it can make sense to go with a lower confidence level: you want to make quick iterations, as long as you’re reasonably sure of the results. Otherwise, your business may not even be around by the time your test is completed!
What do you think?