
What if we told you there was a way to improve your marketing campaign, optimize user experiences, and learn more about your customers?
Well, there is.
All you have to do is run A/B tests, and it’s easier than you think.
We’re going to talk about the importance of A/B testing and how you can use this method to improve conversion rates, measure change, and boost your bottom line.
But first, let’s cover the basics.
Intro to A/B testing
A/B testing is when you analyze two different versions of the same thing to see which one performs better. It’s also known as split testing.

Source: HubSpot
Suppose you created a new landing page for your website. You like the way it looks, but you’re not sure if it’ll outperform your existing landing page.
You could just change it to the new one, but it might lower your conversion rate.
Ideally, you'd know which version your visitors prefer. To find out, you can run an A/B test: half of the visitors who land on your site will see Landing Page A, while the other half will see Landing Page B.
When the test is finished, you can see which version of the landing page had the higher conversion rate. That's the version you should add to your website.
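In practice, an A/B testing tool handles this split for you, but the underlying idea is simple. Here's a minimal Python sketch (the visitor ID and the hashing approach are just assumptions for illustration) of how visitors could be deterministically bucketed into the two versions so that a returning visitor always sees the same page:

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into version A or B (a 50/50 split)."""
    # Hash the ID so the same visitor always lands in the same bucket.
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: this hypothetical visitor sees the same landing page on every visit.
print(assign_variant("visitor-12345"))
```

Hashing the ID instead of flipping a coin on every page load keeps each visitor's experience consistent, which keeps the measurement clean.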
You can run A/B tests for virtually anything––not just landing pages. Brands commonly test:
- Newsletter designs, subject lines, and other email copy
- Call-to-action buttons
- Product descriptions
- Customer reviews
- Popup advertisements and more
Running a split test can help you understand user engagement and make better marketing decisions.
The versatility of split testing makes it one of the most valuable assets in your digital marketing toolbox. It lets you see how people respond when you change your digital content. This gives you more insight into visitor behavior and buying preferences.
A/B testing in 5 easy steps
There’s a right way and a wrong way to conduct A/B tests.
The right way will help you reach your conversion goals by finely tuning various elements of your website to align with your visitors’ preferences. The wrong way will cause you to waste a lot of time, and nobody wants that.
Below are some best practices to help you get the most out of your A/B tests.
1. Start with a strategy
Conducting A/B tests is a lot like following the scientific method. You don’t just randomly change things. There’s a process you should follow.
You have to:
- Start with a Question: “How can I increase newsletter subscriptions?”
- Make a Hypothesis: “Adding an exit-intent popup to my landing page will increase subscribers.”
- Conduct the Experiment: Run an A/B test on your landing page, with and without the popup.
- Draw Your Conclusion: “I gained 8.75% more subscribers by using the popup.”
If you don’t have a goal or desired outcome in mind, you’re aimlessly testing things in hopes that something might work out. That’s why it’s important to develop a strategy before you start your A/B testing.
Know what you want to achieve, then you can start planning your tests.
Here’s an example.
Pretend your goal is to increase newsletter subscriptions. The first thing you should do is build your strategy around that goal by thinking about different ways you can increase subscriptions. You could:
- Add an opt-in form popup to target visitors when they leave your landing page.
- Create a new landing page.
- Change the button color of your existing call to action.
These are all excellent things to test. You can run as many A/B tests as you want, but you should only test one element at a time.
Don't test your new popup and your new landing page at the same time, because you won't know which element visitors are responding to.
However, you can run multiple tests of the same element. This is called Multivariate Testing.
An example of Multivariate Testing is split testing three spin-the-wheel popup versions, where you use a different call-to-action button for each variant. You’re running multiple tests, but you’re still analyzing the same element––the spin-the-wheel popup.
2. Lay the groundwork for your A/B test
Every A/B test should have a:
- Control: Your current digital asset. The control could be a blog post title, an email subject line, a call-to-action form, or anything else that can be tested.
- Challenger: The modified version of the control that you want to test.
Let’s say you have a welcome popup on your blog post, and you want to test the effectiveness of an exit-intent popup. The welcome popup is your control, and the exit-intent popup is your challenger.
Now, it’s time to set your sample size. Who will you send the test to?
If you’re split testing an email campaign, you can simply send the control to half of your subscribers and the challenger to the other half.
But what about when you want to test pages and elements on your website? How many visitors do you need to include?
You want to make sure you have a large enough sample size. Otherwise, your test won't give you accurate results.
Optimizely has a calculator that helps you set your sample size.
All you have to do is:
1. Enter the conversion rate of your control––that's the conversion rate of your existing version.
2. Decide how much of an improvement you want the test to detect. In the picture below, the Minimum Detectable Effect is 20%, meaning the test is designed to pick up a lift in conversions of 20% or more.
3. Choose the statistical significance of your test. Think of this as accuracy: if your test has a statistical significance of 95%, you can be 95% confident the difference you measure is real and not just random chance.
And tada! The calculator will tell you how many visitors you need to show each version to.

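If you're curious where a number like that comes from, the sketch below applies a textbook two-proportion approximation in Python, assuming a two-sided 95% significance level and 80% statistical power (common defaults). Optimizely's calculator may make different assumptions, so treat this as a rough estimate rather than a definitive figure:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float,
                            min_detectable_effect: float,
                            significance: float = 0.95,
                            power: float = 0.80) -> int:
    """Rough visitors-per-variant estimate for a two-proportion A/B test.

    baseline_rate: conversion rate of the control, e.g. 0.03 for 3%
    min_detectable_effect: relative lift you want to detect, e.g. 0.20 for 20%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)  # expected challenger rate
    z_alpha = NormalDist().inv_cdf(1 - (1 - significance) / 2)  # two-sided
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return ceil(n)

# Example: a 3% baseline conversion rate with a 20% Minimum Detectable Effect
# works out to roughly 13,900 visitors per variant under these assumptions.
print(sample_size_per_variant(0.03, 0.20))
```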
3. Launch your test
You’ll need an A/B testing tool to launch the test and collect the data for you. You can learn more about how to launch tests and measure results by reading our guide on split testing statistics.
Check out OptiMonk if you're looking for a solution that lets you create personalized popups and run A/B and multivariate tests on them.

While it’s not specifically an A/B testing tool, you can use OptiMonk to create and test various website elements like:
- Popups
- Side messages
- Nanobars
- Banners, and more
When you launch your test, you also need to decide how long it will run.
Generally, companies run tests for a few months or a business cycle––but every case is different. Ideally, you want to test until you reach 95% statistical significance.
You should never run your test for just a few days. It won't give you accurate results because a short-term surge or dip in conversions could skew your numbers.
4. Measure your results
Measuring the results of your test is a quick and easy process.

The picture above shows the results of a multivariate popup test run through OptiMonk. You can see:
- How many views each variant has (Impressions)
- The number of conversions
- The conversion rates
- The confidence level of each variant
You can see from the picture that Variant 1 has the best conversion rate, but the test is far from over. Variants 1 and 2 haven’t reached a confidence level of 95% or greater, so we can’t confidently declare Variant 1 the winner yet.
When should you stop the test? Once your tests reach a 95% confidence level, you have enough information to conclude the test. Then you can select the variant with the highest conversion rate as your champion.
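That confidence number is essentially a statistical test on the gap between two conversion rates. As an illustration only, here's a rough Python sketch of how it could be approximated with a two-proportion z-test; the numbers are made up, and your testing tool may calculate confidence differently:

```python
from statistics import NormalDist

def confidence_of_lift(control_conversions: int, control_visitors: int,
                       variant_conversions: int, variant_visitors: int) -> float:
    """Approximate (one-sided) confidence that the variant beats the control."""
    p_control = control_conversions / control_visitors
    p_variant = variant_conversions / variant_visitors
    # Pooled rate under the assumption that there is no real difference.
    pooled = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    standard_error = (pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors)) ** 0.5
    z_score = (p_variant - p_control) / standard_error
    return NormalDist().cdf(z_score)  # e.g. 0.97 means roughly 97% confident

# Hypothetical numbers: keep the test running until this passes 0.95.
print(round(confidence_of_lift(120, 4000, 160, 4000), 3))
```

With the hypothetical numbers above, the variant's lift clears the 95% threshold, so you'd be safe to call it the champion.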
5. Plan your next test
Don’t think of A/B testing as a one-time thing. Look at it as an ongoing process because customer preferences change over time.
Plus, there’s always room for improvement––especially when you’re creating customer-focused content. Every time you want to redesign your web pages or change the button size on your opt-in form, run an A/B test to see which version visitors prefer.
Think of ways you can improve various elements of your marketing campaign. Evaluate your landing pages in Google Analytics. Look at metrics like bounce rate, conversion rate, and time spent on the page.
If you think there’s room for improvement, you can think of ways to change and test your pages.
A/B testing examples using OptiMonk
Now that you know how to successfully run an A/B test, one question remains: does it really work?
Absolutely!
A/B testing is one of the most effective conversion optimization solutions you have. But don’t take our word for it. See for yourself by reading two case studies below.
Boot Cuffs & Socks
Women’s footwear company Boot Cuffs & Socks turned to OptiMonk when they needed help driving sales. They created a popup targeting shoppers who were about to leave the site without finalizing their purchase.
They created two versions to encourage visitors to complete their purchases. One offered $4.25 in store credit, and the other gave a 10% discount. Boot Cuffs & Socks wanted to see how the two versions performed against each other, so they ran an A/B test.

The test ran for 40 days, and they found that people liked Version B more: its conversion rate was 15% higher than Version A’s.
In the end, Boot Cuffs & Socks’ marketing efforts worked. The campaign reduced cart abandonments by 17% and achieved a 280% monthly return on investment.
SwissWatchExpo
SwissWatchExpo also used OptiMonk to test their popups.

They conducted a multivariate test on three popups to see which version website visitors preferred. Here are the differences:
- Version A used the Dynamic Text Replacement (DTR) feature to display one of their watches in the popup.
- Version B had a message prompting the visitor to finish their purchase before the item sold out.
- Version C offered a $100 discount and free shipping.
Version C was the champion. It won the test with an impressive conversion rate of 28%.
After finishing the test, SwissWatchExpo launched its campaign. Online transactions went up by 27%, and revenue increased by 25% in three short months.
Why do you need A/B testing?
Albert Einstein is often credited with saying, “Failure is success in progress.”
You won’t create the perfect marketing campaign by guessing. You create it through trial and error—learning what doesn’t work and adapting your strategy.
That’s where A/B testing can help. After running a few tests, you can improve your understanding of customers’ preferences and shopping habits.
Learn which marketing strategies are likely to boost sales and which strategies to avoid, as well as how to create a delightful customer experience that influences user behavior. Not only will this enhance the performance of your ongoing campaigns, but it’ll also help you make better-informed decisions going forward.