While guessing the number of jellybeans in a jar might win you a prize, that kind of guesswork is far less successful when it comes to email marketing.
Thankfully, marketing automation gives marketers a variety of ways to measure what is and isn't working in their emails, along with methods for systematically testing content when they're unsure. This information allows you to make data-driven decisions about how to execute your email strategy and achieve maximum results.
What is A/B testing?
Have you ever had a debate about which version of an email would perform best?
“Keep it short and sweet! Use lots of pictures – even emojis in the subject line!”
“We need to talk about the new features. Plus, my subject line is way more compelling, and it doesn’t need emojis!”
What if I told you that you can have your cake and eat it, too?
Welcome to the wonderful world of A/B testing. An A/B test sends different versions of an email to a portion of your list and tracks how effective each version is. You then designate the criteria for the “winning” email, and the remainder of the list receives the winning version.
Why should I A/B test an email?
Why would you want to send an email that doesn’t really capture the interest of your audience? The simple answer is: you don’t. However, we don’t always know what will perform best. In that case, we A/B test the email.
A/B testing an email gives you an opportunity to use data to automatically choose which version of an email the majority of your list should receive. It also provides you with a chance to test variables within emails to see if there are changes that you should be making in order to maximize engagement and provide the most value to your prospects.
Your A/B test could be as simple as seeing which subject line performs best, or as complex as testing a longer, text-heavy email against a shorter, image-heavy email.
A/B Testing Basics
How do you run an A/B test, you ask? No matter which marketing automation platform you’re using, the steps are essentially the same. It’s as easy as counting to four.
- Think back to your high school science class experiments. Remember the phrase “isolate the variable”? The same rule applies to A/B testing. The first step is to determine which specific element you want to test in an email, and keep it simple. We want to measure the differences between the two emails, and that’s hard to do with too many variables.
- Next, determine the length of time you want your A/B test to run, and the criteria for choosing a winner. Will it be based on email opens, or clicks?
- How much of your email audience do you want to use for the test? You can choose up to 50% of the original list, but 25% is typical. If you choose 25%, then 12.5% of your original list will receive version A and 12.5% will receive version B. Once the test has concluded, the remainder of your list will receive the winning version.
- Now for the fun part — check your analytics! It’s time to see what worked.
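The audience math from the steps above can be sketched in Python. This is a minimal illustration, not how Pardot implements the split; the function name and the example list are hypothetical.

```python
import random

def split_for_ab_test(recipients, test_fraction=0.25):
    """Split a mailing list for an A/B test.

    With test_fraction=0.25, half of the test group (12.5% of the
    full list) gets version A, half gets version B, and the
    remaining 75% is held back for the winning version.
    """
    shuffled = recipients[:]
    random.shuffle(shuffled)  # randomize so each group is representative
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    group_a = shuffled[:half]
    group_b = shuffled[half:test_size]
    remainder = shuffled[test_size:]  # receives the winner later
    return group_a, group_b, remainder

# Hypothetical 1,000-person list: 125 get A, 125 get B, 750 wait.
recipients = [f"prospect{i}@example.com" for i in range(1000)]
a, b, rest = split_for_ab_test(recipients)
print(len(a), len(b), len(rest))  # 125 125 750
```

Shuffling before splitting matters: if you took the first 25% of an alphabetized or chronologically sorted list, the test groups might not represent the audience as a whole.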
How do I know what’s working?
If you’re using Salesforce Pardot as your marketing automation platform, you can see what happened with your A/B test by visiting Marketing > Emails > A/B Tests. Select the email you wish to see, and you’ll be taken to the initial report, where you can view a timeline of how each email performed during the test period.
In this case, we were testing whether an image or a button was more effective for click-through rates. We used the same subject line and body copy in both emails, but in version A the CTA was an image, while in version B it was a button.
As the results of the A/B test show, the image CTA clearly outperformed the button CTA, so once the test concluded, the remainder of our list received version A of the email. To see how each individual email performed, we could click the “View Report” button under each version, which takes us to a map of that email’s performance.
What did we learn from this test? For our audience, an image CTA was more effective at getting prospects to engage with our email, so we should consider replacing CTA buttons with image CTAs in future emails.
Does that mean we’re done A/B testing? Absolutely not. Now that we’ve determined the type of CTA to use, we can start A/B testing subject lines to find the most effective one. Then we can take a look at the email copy to see if there are improvements to be made. By continually testing different elements of our emails, we learn valuable lessons about our audience and improve overall engagement.
Email Metrics are the Key to Data-Driven Decisions
What is the key to successfully making a data-driven decision about email performance? The metrics. After each email send, whether it’s a single list email or an A/B test, check the email performance report to see what you can learn. Where opens or clicks are underperforming, create a plan to improve or test those elements in your next send. Where they are performing well, take note, and try to incorporate those elements into future emails.
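As a sketch, the basic engagement rates discussed above can be computed from raw send counts like this. The counts are illustrative, not results from the test described in this article.

```python
def email_metrics(delivered, opens, clicks):
    """Compute common email engagement rates from raw counts."""
    open_rate = opens / delivered            # opens per delivered email
    click_through_rate = clicks / delivered  # clicks per delivered email
    click_to_open_rate = clicks / opens      # engagement among those who opened
    return {
        "open_rate": round(open_rate, 3),
        "click_through_rate": round(click_through_rate, 3),
        "click_to_open_rate": round(click_to_open_rate, 3),
    }

# Hypothetical results for two test versions of 125 recipients each:
version_a = email_metrics(delivered=125, opens=40, clicks=12)
version_b = email_metrics(delivered=125, opens=38, clicks=7)
print(version_a)  # {'open_rate': 0.32, 'click_through_rate': 0.096, 'click_to_open_rate': 0.3}
print(version_b)  # {'open_rate': 0.304, 'click_through_rate': 0.056, 'click_to_open_rate': 0.184}
```

Comparing rates rather than raw counts keeps the comparison fair even when the two test groups aren’t exactly the same size.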
Understanding email metrics will enable you to make data-driven decisions about how to improve email engagement for your organization. Utilizing Pardot tools like A/B email testing and email metrics reporting will give you visibility into what is and isn’t working, and allow you to take action based on quantitative, rather than purely qualitative, data. Use this knowledge to increase your email engagement rates and maximize results.
To learn more about Engagement Studio in Pardot, visit Trailhead and check out the Pardot Email Marketing for Pardot Lightning App module.