When you run an experiment or analyze data, you want to know if your findings are “significant.”
“Statistical significance helps quantify whether a result is likely due to chance or some factor of interest,” says Thomas C. Redman.
For example, if you have a couple hundred visitors (or impressions) and one variant is “leading” with a slightly higher conversion rate, that doesn’t necessarily mean it’s performing better.
It may simply have been “luckier” than the other variant.
In many cases, even if you test the exact same messages against each other, you will get different results because each visitor is inherently different.
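This is easy to see with a quick simulation: give two identical variants the same underlying conversion rate and show each to the same number of visitors, and the observed counts will usually still differ. (A minimal sketch; the 5% rate and 200 visitors are illustrative assumptions, not numbers from the calculator.)

```python
import random

random.seed(42)
TRUE_RATE = 0.05   # assumed: both variants share the same true conversion rate
VISITORS = 200     # assumed: visitors shown to each variant

# Simulate each visitor converting with probability TRUE_RATE.
conv_a = sum(random.random() < TRUE_RATE for _ in range(VISITORS))
conv_b = sum(random.random() < TRUE_RATE for _ in range(VISITORS))

# The two counts typically differ even though the variants are identical.
print(f"Variant A: {conv_a} conversions, Variant B: {conv_b} conversions")
```

Run it with different seeds and the “winner” flips back and forth, which is exactly why a small lead on its own proves nothing.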
It’s important to run your tests long enough to gather sufficient data. You want at least 90% confidence, but ideally 95% or more, before selecting a winner.
Here you can read a detailed guide on statistical significance. But if you’re not interested in reading through the details, you can just get right into the action by using our significance calculator. Here’s how it works:
1. Campaign-level A/B test: Put your variants’ impression and conversion numbers into the yellow cells. The calculator will then show you whether the difference is statistically significant at the 90%, 95%, and 99% confidence levels.
2. Store-level A/B test: Put your segments’ session and order numbers into the yellow cells. The calculator will then show you whether the difference is statistically significant at the 90%, 95%, and 99% confidence levels.
If the 90% confidence cell is green (YES), you can be reasonably confident that one of your variants is better than the other; for a safer call, wait until the 95% (or 99%) cell also turns green before declaring a winner.
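The calculator’s internal math isn’t shown, but significance for a comparison of two conversion rates like this is typically computed with a two-proportion z-test. Here is a minimal sketch of that test in Python (the function name and the example numbers are illustrative, not taken from the calculator):

```python
from statistics import NormalDist

def significant(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Two-sided two-proportion z-test: is the difference between
    variant A (conv_a conversions out of n_a impressions) and
    variant B significant at the given confidence level?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = (p * (1 - p) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < 1 - confidence

# Hypothetical example: A converts 120/2000 (6%), B converts 90/2000 (4.5%).
for level in (0.90, 0.95, 0.99):
    verdict = "YES" if significant(120, 2000, 90, 2000, level) else "NO"
    print(f"{level:.0%} confidence: {verdict}")
```

For these sample numbers the difference clears the 90% and 95% bars but not 99%, mirroring how the calculator can show green at one confidence level and not at a stricter one.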