A/B Testing Email Campaigns: A Practical Guide
Most email marketers are running A/B tests wrong. They test one subject line against another, see a 3% difference after 150 opens, pick the winner, and call it optimisation. That is not testing — it is guessing with a nicer dashboard. Real A/B testing requires a clear hypothesis, a large enough sample, and the patience to wait for results that are statistically meaningful. This guide will show you exactly how to do it right, and what to do with the results once you have them.
Before You Run a Single Test: The Hypothesis
Every test needs a hypothesis before you run it — not after. A hypothesis has two parts: a prediction and a reason. “I think changing the subject line will improve open rates” is not a hypothesis. “Adding the subscriber’s first name to the subject line will increase open rates because it makes the email feel more personally relevant” is a hypothesis.
This matters because the reason is what you learn from, not just the result. If you test personalised versus non-personalised subject lines and personalisation wins, your hypothesis tells you why — and what to test next. Without a reason, a winning variant is just a lucky guess you cannot build on.
Write your hypothesis down before setting up the test. If you cannot write it, you are not ready to test.
What Is Worth Testing (And What Is Not)
Not everything deserves a test. Start with elements that move open rates or click rates meaningfully, and ignore minor copy tweaks that rarely produce consistent signal.
High-impact elements to test
Subject lines are where most testing effort belongs, because they determine whether anyone reads the rest. The variables that consistently produce signal:
- Personalisation vs. generic (adding a first name or company name)
- Length: short (under 40 characters) vs. longer (60+ characters)
- Question vs. statement format
- Urgency or scarcity vs. neutral framing
- Emoji vs. no emoji
Send time is underrated. The same email sent at 10 AM Tuesday can outperform the 3 PM Thursday version by 20-30% in open rates. Most email platforms now offer send-time optimisation based on individual subscriber behaviour — but it is worth verifying with a manual split test on your specific list before trusting the algorithm.
Call-to-action copy has a major effect on click-through rates. “Download the guide” versus “Get your free copy” can produce meaningfully different results. So can button colour, button size, and whether the CTA is a button or a hyperlink.
Email length matters more than most guides admit. A 200-word email and an 800-word email on the same topic can produce very different click rates, depending on your audience and the complexity of what you are asking them to do.
From name is one of the least-tested variables but can significantly affect open rates — particularly the difference between a person’s name (“Sarah from Acme”) versus a brand name (“Acme Co.”).
What is not worth testing
Minor copy tweaks in the body (“Get started today” vs. “Start today”) rarely produce enough signal to act on. The effect size is too small and too noisy. Single-image versus no-image tests frequently produce contradictory results across different audience segments, making the signal hard to interpret. Save these for later, once you have run higher-impact tests and have a strong testing infrastructure.
Sample Size: The Part Everyone Gets Wrong
The most common A/B testing mistake is declaring a winner before the results mean anything. If you have a list of 2,000 subscribers and split them 50/50, you have 1,000 per variant. At a 25% open rate, you will get roughly 250 opens per variant. That is not enough to detect a 3-percentage-point difference in open rates with any confidence.
As a rule of thumb:
- For subject line tests (open rate): aim for at least 1,000 opens per variant. At a 25% open rate, that means roughly 4,000 sends per variant, so a list of about 8,000 split 50/50. Smaller lists fall short even when you send to everyone.
- For click-through tests: you need even more, because click rates are lower. At a 3% click rate, 1,000 sends per variant give you only about 30 clicks — far too few to draw conclusions.
- For conversion tests (purchases, sign-ups): you typically need your entire list plus weeks of data.
If your list is under 3,000 subscribers, rigorous A/B testing for anything other than subject lines is genuinely difficult. Focus on one test per send, test subject lines exclusively until your list grows, and treat results as directional rather than conclusive.
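If you want to go beyond the rule of thumb, the standard two-proportion sample-size formula tells you how many sends each variant needs before you even schedule the test. Here is a minimal sketch in Python, assuming a two-sided test at 95% confidence and 80% power (conventional statistical defaults, not figures from this guide):

```python
import math
from statistics import NormalDist  # Python 3.8+

def sends_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Sends needed per variant to detect a rate of p1 vs p2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_b = z.inv_cdf(power)          # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_a * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Detecting a 25% vs 28% open rate needs about 3,400 sends per variant:
print(sends_per_variant(0.25, 0.28))
```

At a 25% open rate, 3,400 sends per variant is roughly 850 expected opens each, which is in the same ballpark as the 1,000-opens rule of thumb above.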
How Long to Run a Test
Subject line tests should be concluded within 24-48 hours of sending. Opens cluster heavily in the first few hours after send — after 48 hours, you are collecting noise from edge cases.
Click-through and conversion tests need longer, because engagement after the open happens over a wider window. Run these for at least 5-7 days, especially if your emails link to pages with checkout flows or form completions that people sometimes return to.
The Testing Sequence: Where to Start
If you are new to A/B testing, run these tests in this order. Each one builds on the last.
Test 1: Subject line personalisation. Split your list 50/50. Variant A gets “Your free guide to email marketing.” Variant B gets “[First name], your free guide to email marketing.” This is the most reliable first test because the effect is usually large enough to detect even on a medium-sized list. Industry benchmarks suggest personalised subject lines increase open rates by an average of 20-26%, though results vary heavily by industry and list quality.
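How you do the 50/50 split matters too. A deterministic split keyed on a stable subscriber ID keeps each person in the same variant even if you re-run the assignment. Most platforms handle this for you; the sketch below, which assumes subscribers are identified by email address, is only needed for hand-rolled sends:

```python
import hashlib

def assign_variant(subscriber_id: str, test_name: str = "subject-personalisation") -> str:
    """Deterministic 50/50 split: the same subscriber always gets the same variant."""
    digest = hashlib.sha256(f"{test_name}:{subscriber_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("jane@example.com"))  # stable across runs
```

Salting the hash with the test name means a given subscriber is not stuck in variant A for every test you ever run.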
Test 2: Send time. Once you have baseline subject line data, test send times. Run the same campaign to different segments at 9 AM, 12 PM, and 3 PM on the same day if your list is large enough to split three ways; otherwise test Tuesday vs. Thursday with a simple 50/50 split.
Test 3: CTA copy. With a handle on your open rates, move to click optimisation. Test button text, not button colour — copy has a larger effect. Run this for 7 days and measure click-to-open rate, not just click rate, so you account for open rate differences between sends.
Test 4: Email length. Pit a shorter version of your newsletter against the full version. Measure click-to-open rate and downstream conversions.
Test 5: From name. This is often the last thing people test, but for newsletters and regular campaigns, the from name shapes trust. Test a personal name versus your brand name, especially if you have a recognisable individual behind the brand.
Reading the Results: What Actually Matters
Open rate is the right metric for subject line tests. Click-to-open rate (clicks divided by opens) is the right metric for body content tests — it removes open rate variance and isolates how well the email content drove action. Conversion rate (purchases, sign-ups, downloads) is the right metric for anything further down the funnel.
Do not optimise for open rate when you care about conversions. A clickbait subject line might win the open rate test and lose the conversion test. Always track downstream metrics even when you are testing upstream variables.
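Whichever metric you choose, check that the difference is real before acting on it. Here is a minimal sketch of a two-sided two-proportion z-test, assuming you have exported raw send and open counts from your platform (the counts below are placeholders):

```python
import math
from statistics import NormalDist

def two_proportion_p_value(success_a, n_a, success_b, n_b):
    """Two-sided z-test: p-value for the difference between two rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Placeholder counts: 1,050 opens of 4,000 sends vs 1,150 of 4,000.
p = two_proportion_p_value(1050, 4000, 1150, 4000)
print(f"p = {p:.3f}")  # below 0.05 suggests the gap is unlikely to be chance
```

The same function works for click-to-open rates: pass clicks as the successes and opens as the denominators.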
Common Mistakes That Invalidate Tests
Testing multiple variables at once. If you change both the subject line and the from name in Variant B, you cannot know which change drove the result. One variable per test.
Stopping early because one variant is “ahead.” Early results in A/B tests are noisy. A variant that is leading at 4 hours may be losing at 48 hours. Commit to your sample size and timeline before you start.
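The noise is easy to demonstrate in simulation: give two variants the identical true open rate and the early "leader" still swings from run to run. A minimal sketch, using pure chance rather than real data:

```python
import random

TRUE_RATE = 0.25  # both variants share the same true open rate by construction
opens = {"A": 0, "B": 0}

for sends in range(1, 4001):
    for variant in ("A", "B"):
        opens[variant] += random.random() < TRUE_RATE
    if sends in (50, 200, 1000, 4000):
        print(f"after {sends:>4} sends/variant: A={opens['A']:>4} "
              f"B={opens['B']:>4} leader={max(opens, key=opens.get)}")
```

Run it a few times: at 50 sends per variant one side is often ahead by several percentage points purely by chance, and by 4,000 the gap has usually shrunk to a point or two.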
Running tests during unusual periods. A test during a sale, a holiday, or a major news event will produce results that do not generalise to normal sending conditions. Flag these periods and either wait or treat results as suspect.
Not testing consistently. Running one test per quarter produces almost nothing useful. The email teams with the best-performing campaigns run at least one test per send. Build testing into your standard operating procedure, not as a special project.
Applying results to a different audience. A winning subject line from your US subscriber list may not win on your UK list. A format that works for your e-commerce buyers may not work for your SaaS trial users. Segment your tests when you have enough volume, and verify results hold before applying them universally.
Which Tools Support A/B Testing Well
The quality of A/B testing features varies considerably between platforms.
ActiveCampaign
ActiveCampaign offers split testing across campaigns and automation sequences, including full automation A/B testing where you can test entire email workflows against each other. This is a level above what most platforms offer. You can test subject lines, from names, email content, and send times, with automatic winner selection at a threshold you define.
The trade-off: ActiveCampaign is more expensive than most alternatives. Automation split testing is available from the Plus plan at $49/month for 1,000 contacts.
Klaviyo
Klaviyo has some of the most powerful A/B testing in the market, including multivariate testing for e-commerce flows like abandoned cart and post-purchase sequences. Their predictive analytics layer can identify which subscriber segments are most likely to respond to each variant. For e-commerce brands on Shopify or WooCommerce, Klaviyo’s testing depth is hard to match.
The trade-off: pricing scales steeply with list size. At 10,000 contacts, Klaviyo costs $150/month — significantly more than MailerLite or Brevo at comparable list sizes.
MailerLite
MailerLite includes A/B testing on paid plans from $9/month. You can test subject lines, sender names, and email content across up to 3 variants. The winner can be selected automatically based on open rate, click rate, or click-to-open rate after a time period you specify. For small and mid-size senders who want solid A/B testing without paying enterprise prices, MailerLite is the best value option.
The trade-off: A/B testing is not available on the free plan, and you cannot test full automation sequences.
GetResponse
GetResponse supports A/B testing across campaigns and includes a time-based winner selection feature. It allows testing up to 5 variants simultaneously, which is useful once you have a large enough list to split five ways and still get meaningful results. The trade-off: GetResponse’s reporting interface is more cluttered than MailerLite’s, and filtering test results by segment requires more manual work than on ActiveCampaign or Klaviyo.
Kit (formerly ConvertKit)
Kit keeps A/B testing deliberately simple: subject line testing on all paid plans, limited to two variants. If subject line optimisation is your main goal and you prefer a creator-focused platform, this is sufficient. But if you want to test content, send times, or automation flows, you will hit the ceiling quickly.
For a detailed side-by-side of two of the strongest testing platforms, see the comparison below:
| Feature | ActiveCampaign | Klaviyo |
|---|---|---|
| Rating | 4.5/5 | 4.6/5 |
| Starting Price | $29/mo | $20/mo |
| Free Plan | No free plan | 250 subscribers |
| Founded | 2003 | 2012 |
| Email Templates | 250 | 100 |
| Integrations | 900 | 350 |
| Deliverability Rate | 97.5% | 99% |
| Marketing Automation | ✓ | ✓ |
| A/B Testing | ✓ | ✓ |
| Landing Pages | ✓ | ✓ |
| Segmentation | ✓ | ✓ |
| Drag & Drop Editor | ✓ | ✓ |
| SMS Marketing | ✓ | ✓ |
| Ecommerce Features | ✓ | ✓ |
| API Access | ✓ | ✓ |
| Multi-Language | ✓ | ✓ |
| Web Push Notifications | ✕ | ✕ |
| Live Chat | ✓ | ✕ |
| Advanced Analytics | ✓ | ✓ |
What to Do After a Test Concludes
Document the result, the sample size, and the date. Apply the winning variant going forward — but do not assume it will hold indefinitely. Audiences change, inboxes change, and what worked in Q1 may not work in Q3. Revisit key tests every 6-12 months.
Build a testing log, even if it is just a simple spreadsheet. Over time, your tests will reveal patterns specific to your list: the send times your subscribers respond to, the subject line formats that consistently outperform, the CTA framing your audience prefers. That accumulated knowledge is more valuable than any individual test result.
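If you want that log to be queryable later, a flat CSV with one row per test is enough. A minimal sketch; the column names are only a suggested starting point:

```python
import csv
from pathlib import Path

LOG = Path("ab_test_log.csv")
FIELDS = ["date", "test_name", "hypothesis", "metric",
          "n_per_variant", "result_a", "result_b", "winner", "notes"]

def log_test(row: dict) -> None:
    """Append one test result; write the header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_test({
    "date": "2025-04-01", "test_name": "subject-personalisation",
    "hypothesis": "First name in subject raises opens (feels personally relevant)",
    "metric": "open_rate", "n_per_variant": 4000,
    "result_a": 0.2625, "result_b": 0.2875, "winner": "B",
    "notes": "re-test in Q4",
})
```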
For further reading on testing frameworks and email performance benchmarks, the Litmus State of Email report and Klaviyo’s A/B testing best practices guide are worth reading alongside this article. For sender requirement changes that affect how you should frame urgency and personalisation, review Google’s bulk sender guidelines.
The mechanics of A/B testing are simple. The discipline — forming hypotheses, waiting for significance, applying results consistently — is what separates the email teams whose performance compounds year over year from those running on gut feel. Start with one test on your next send.
For more on improving the effectiveness of your emails beyond testing, see our guides on email personalisation, email segmentation, and choosing the right platform. If you are evaluating tools specifically for their automation depth, our best email marketing platforms and best tools for ecommerce pages compare the top options.