Turning Cold Email Campaigns Into a Science

February 5, 2015 Sam Laber

Jeremy Boudinet is the director of marketing at Ambition. He is an expert on sales, gamification, millennials, professional development, SaaS, and leadership, and his work has been featured in Time, Information Age, The Daily Muse, and Social Media Today.


Sales is often described as half science, half art, and the very first cold email campaign we ran here at Ambition is living proof. It’s easy to home in on the ‘art’ aspect of a cold email campaign and overlook the ‘science’ portion, but both aspects are equally important and both need to be fully optimized.

Our next email campaign will be smarter and more sophisticated, and the one after that will follow suit. Continuous improvement in the sales process absolutely extends to the cold email campaign, and as automation tools become more prevalent, it’s critical that sales teams squeeze as much scientific value out of them as possible. Here are some insights we gleaned on best (and not-so-best) practices for executing a scientific cold email campaign.

The cold email campaign

From November to December 2014, our sales team undertook a first-ever initiative: a cold email campaign.

We compiled a list of 572 of our most qualified prospects, got set up on PersistIQ, and called in the value-messaging big guns of Heather Morgan and SalesFolk to help us execute our plan. Roughly six weeks after the campaign’s launch, we had 73 legitimately hot new leads. Here’s what the campaign looked like in its entirety.

Not bad for a first campaign. But as we alluded to in previous blog posts covering this campaign, and as Heather Morgan explicitly discussed in last week’s SalesHacker post, we gained as much from the failures of this initial campaign as we did from its successes.

Gaining actionable intelligence from a cold email campaign

The focus of this post is the timing and spacing of the individual emails we sent throughout the campaign. In its entirety, the campaign contained 8 touchpoints, meaning each of our 572 recipients received up to 8 emails, depending on whether and how they responded to each one. Let’s take a look at the spaced intervals and delivery times of our campaign emails:

The schedule above denotes the timing of each email. We’ll note up front that we experienced some drop-off in open rate with each subsequent email in the campaign. You can view the open rates of each touchpoint below.

Response rate, on the other hand, trended much more erratically. The graph of the campaign’s response rate is as follows:

The disparity in open and response rates of each touchpoint is worth noting. For example, our initial email, which was delivered on a Saturday, had far and away the highest open rate. Variables, other than timing, that may have impacted open rate include:

  1. These were our introductory emails, and thus less likely to be recognized by their recipients as solicitation.

  2. The subject lines of these particular emails may have resonated more with their recipients and proven more effective at getting the emails opened.

Those variables notwithstanding, 308 of our prospects opened the email at touchpoint 1, which is 100 more opens than our second-highest touchpoint, touchpoint 4. That being said, the main objective of each email is to generate a positive response, and in that respect, our first touchpoint was a disaster. It generated the lowest adjusted response rate (replies/opens) of any touchpoint!
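To make that metric concrete, here’s a minimal sketch of computing adjusted response rate per touchpoint. Only the 308 opens at touchpoint 1 (and the 208 implied at touchpoint 4) come from the numbers above; every other figure is a hypothetical placeholder.

```python
# Minimal sketch: adjusted response rate = replies / opens, per touchpoint.
# Only the 308 opens at touchpoint 1 (and 208 at touchpoint 4) come from
# the numbers above; all other figures are hypothetical placeholders.
touchpoints = {
    1: {"opens": 308, "replies": 3},   # highest opens, worst reply rate
    2: {"opens": 160, "replies": 8},
    3: {"opens": 150, "replies": 7},
    4: {"opens": 208, "replies": 14},  # second-highest opens
}

for tp, stats in sorted(touchpoints.items()):
    rate = stats["replies"] / stats["opens"]
    print(f"Touchpoint {tp}: {rate:.1%} adjusted response rate")
```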

Was sending a Saturday email the most effective strategy? It appears not, though we’ll have to validate whether delivery day was actually a contributing factor in future campaigns. We’re on the right track in terms of bringing an element of science into our email campaigns, but we failed to fully optimize our multivariate testing with respect to delivery date and time.

Key mistake: Failure to fully A/B test

Here’s what we know: our first touchpoint may have had the highest open rate by far, but it also had by far the worst response rate. The numbers are abysmal compared to later emails.

Perhaps our biggest mistake was not varying the delivery times of our emails as much as we could have. Had we A/B tested the open and adjusted response rates of our weekend emails against weekday sends, we would have actionable insight into the best delivery time for our next campaign.

To that end, I’ll reiterate the established best practice: make sure your emails are fully set up for A/B testing. We did a great job of that with regard to value messaging. Delivery time, not so much.
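As a hedged sketch of what that could look like (these counts are hypothetical, and we didn’t actually run this test), a two-proportion z-test from the statsmodels library can tell you whether a weekend send really performed differently from a weekday send:

```python
# Sketch of an A/B test on delivery time: Saturday vs. Tuesday sends.
# All counts are hypothetical placeholders, not our campaign data.
from statsmodels.stats.proportion import proportions_ztest

saturday_replies, saturday_sent = 4, 286   # variant A: weekend delivery
tuesday_replies, tuesday_sent = 12, 286    # variant B: weekday delivery

# Two-sided test of whether the reply rates differ.
stat, p_value = proportions_ztest(
    count=[saturday_replies, tuesday_replies],
    nobs=[saturday_sent, tuesday_sent],
)
print(f"z = {stat:.2f}, p = {p_value:.3f}")
# p < 0.05 would suggest delivery time genuinely moved the needle;
# anything higher means keep testing before rewriting your send schedule.
```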

Key insight: The value of Monday emails

Both sets of Monday emails that we sent generated solid response rates and open rates. While some experts have cautioned against sending Monday emails as part of a drip campaign, the results for us proved relatively fruitful — open rates remained consistent with the rest of the campaign and our response rates were above average.

Again, we’ll have to validate the significance of this insight in further campaigns, and it will be interesting to see whether this trend was just a fluke or a consistent, repeatable outcome. Either way, we now have a starting point for future campaigns.
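One way to validate it next time (a sketch assuming a hypothetical CSV export with sent_at, opened, and replied columns; your automation tool’s actual export format will differ) is to group results by send weekday:

```python
# Sketch: checking the "Monday effect" by grouping results by send weekday.
# Assumes a hypothetical campaign_log.csv with sent_at (timestamp) plus
# opened and replied (0/1) columns; real export formats will differ.
import pandas as pd

df = pd.read_csv("campaign_log.csv", parse_dates=["sent_at"])
df["weekday"] = df["sent_at"].dt.day_name()

by_day = df.groupby("weekday").agg(
    sent=("opened", "size"),
    open_rate=("opened", "mean"),
    reply_rate=("replied", "mean"),
)
print(by_day.sort_values("reply_rate", ascending=False))
```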

Overarching takeaway: The art of sales needs science

Aside from the importance of persistence, which we addressed in a recent post on the Ambition blog, the major takeaway from our initial campaign was that we ultimately did not treat it enough like a lab experiment.

When you’re embarking on a series of cold email campaigns, a best practice is to always treat each campaign as a data source that informs the next. This is where the science of sales comes in: you must optimize your A/B testing and control the variables in your campaign so that your results are as scientific as possible.
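In practice, that can be as simple as logging every send as a structured record you can query later. Here’s a minimal sketch; the field names are our own illustration, not a standard schema or a PersistIQ feature:

```python
# Minimal sketch of a per-send campaign log for future analysis.
# Field names are illustrative, not a standard schema.
import csv
import os
from datetime import datetime

FIELDS = ["prospect_id", "touchpoint", "subject_variant",
          "sent_at", "opened", "replied"]

def log_send(path, prospect_id, touchpoint, subject_variant,
             opened=0, replied=0):
    """Append one send event to the campaign log, writing a header if new."""
    is_new = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "prospect_id": prospect_id,
            "touchpoint": touchpoint,
            "subject_variant": subject_variant,
            "sent_at": datetime.now().isoformat(),
            "opened": opened,
            "replied": replied,
        })

log_send("campaign_log.csv", prospect_id=42, touchpoint=1, subject_variant="A")
```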

If not, you run the risk of repeating the same mistakes and making uneducated, shot-in-the-dark decisions about how you construct your emails and when you send them. Don’t fall victim to that trap, as we did to a degree. For sales teams new to cold email campaigns, this is all the more important. Don’t repeat our mistakes: put on your goggles, break out the Bunsen burners, and treat your cold email campaigns as scientific research that deserves equal footing with lead generation.

About the Author

Sam Laber

Sam is the director of marketing at Datanyze. He's a big John Hughes fan who occasionally fills the DZ office with the sweet sweet sounds of 90s rock giant, Creed.
