The importance of looking beyond open rates to measure deliverability

23 Nov 2020 3:01 PM | Hanna Fray

There are an overwhelming number of email service providers (ESPs) out there to serve email marketers’ needs. Depending on the sophistication of your email program, list size, industry, budget, support needs and the size of your team, you may find more success with certain ESPs than with others. While several third-party organizations, such as G2 or Email Tool Tester, provide objective comparisons between ESPs, there are few resources that provide up-to-date deliverability performance comparisons.

As such, many email marketers take on the task of determining deliverability success themselves, running trials or segmenting their mail streams to test other services before committing to switching ESPs.

My goal is to help you on your comparison journey by sharing some of the biggest mistakes I see when email marketers compare deliverability performance between ESPs, and also share the key metrics that you should be looking for.

Different factors influence open rates. When comparing deliverability performance between ESPs, a popular trend is comparing open rates exclusively to gauge overall deliverability. Despite the appearance of an apples-to-apples comparison, open rates can be deceptive for many reasons, including:

Contact lists or segments. If open rates are being compared between two different lists or segments, there is no guarantee that the lists or segments are weighted with subscribers who are equally likely to open. If the emails are being sent to the same list or segment, you've eliminated that variable, but introduced others.

Take subscriber fatigue, for example: many recipients simply don't open every single email they receive. If a contact opened your email yesterday, and today's subject line is similar or identical, they may see no reason to open today's email. If the subject lines are different, then you've introduced the subject line and content as new variables that may also have influenced the test results.
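One way to take list composition out of the equation is to randomly split a single list into two halves and send one half through each ESP, rather than reusing pre-existing segments. Here is a minimal sketch in Python; the file name and the "email" column are hypothetical placeholders for your own export:

```python
import csv
import random

# Load subscribers from a hypothetical export; "email" is an assumed column name.
with open("subscribers.csv", newline="") as f:
    subscribers = [row["email"] for row in csv.DictReader(f)]

# Shuffle with a fixed seed so the split is reproducible, then cut the list in half.
random.seed(42)
random.shuffle(subscribers)
midpoint = len(subscribers) // 2
group_a = subscribers[:midpoint]   # send via ESP A
group_b = subscribers[midpoint:]   # send via ESP B

print(f"ESP A group: {len(group_a)} recipients, ESP B group: {len(group_b)} recipients")
```

A random split only controls for who is on each list; the other variables discussed below (send times, reputation, authentication) still apply.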

Difference in send times of campaigns. The day of the week — and time of day — of the send can sometimes make a serious difference. There are countless articles out there touting which day of the week or time of day is best to maximize open rates, but realistically, the best time to send an email depends on the dynamics of your list.

If you are using the same recipients to test metrics at both ESPs, you can't exactly send them two identical emails back-to-back from different ESPs. You've once again introduced new variables by staggering the sends and sending on different days and times.

Variances with mailbox providers/spam filters. If you've been sending for a long time on ESP A, your emails create a recognizable pattern for providers like Gmail, Microsoft and Verizon, which decide whether to bounce or filter the message. Your emails from ESP B are unknown to the inbox provider — is this really you, or a spammer impersonating you? Emails from ESP B can be expected to be treated differently by mailbox providers and/or B2B spam filters.

End-user preferences such as personal safelist/blocklist. Recipient-level filters will also influence your open rates. A recipient can set up filters that send your emails from email address A/ESP A to a spot where they're sure to open and read them, but this filter might not apply to emails from email address B/ESP B. So of course ESP A will appear to perform better. Your recipients don't yet know you're switching ESPs, so they don't know to look for your mail in an unexpected folder.

IP and domain reputation. There may be differences in the IP reputation from each ESP that are completely temporary. For example, if you're on a brand new account with your ESP, it's a very common practice to be placed in an IP pool with other new senders. Simply put, ESPs have to protect IP reputation in the interest of all existing customers. Until you've sent a few emails and established a cadence for your sending behavior and volume, the ESP doesn't know the quality of your emails, so you're typically not sending from your ESP's best IP addresses until you have a few sends under your belt and they know they can “trust you”. Differences in IP reputation between ESP A and ESP B might not reflect the actual IP reputation you'll experience once there's enough data about your sending practices to assign you to your final IP pool.
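For context on how that trust gets built, warm-up plans generally ramp volume gradually so mailbox providers can observe your behavior. The sketch below is purely illustrative; the starting volume and growth rate are assumptions, not a recommendation for any particular ESP:

```python
# Purely illustrative warm-up ramp: start small and roughly double each day
# until the full list size is reached. Real schedules vary by ESP and by
# how mailbox providers respond along the way.
def warmup_schedule(list_size: int, start: int = 500, growth: float = 2.0):
    day, volume = 1, start
    while volume < list_size:
        yield day, volume
        day, volume = day + 1, int(volume * growth)
    yield day, list_size

for day, volume in warmup_schedule(100_000):
    print(f"Day {day}: send to {volume} recipients")
```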

Authentication differences. Authentication is also a factor if you've authenticated email at ESP A and not at ESP B. This once again introduces ‘suspicion’ on the mailbox provider’s end that could lead to filtering or spam placement. If you have a DMARC policy that requires domain alignment for messages to pass and be delivered, and you begin sending from a source that has neither SPF nor DKIM aligned, you will find performance is very poor.
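To illustrate what alignment means in practice, here is a minimal sketch of a relaxed DMARC alignment check, comparing the organizational domain of the visible From address with the domain that passed SPF or DKIM. The domain names are hypothetical, and the two-label heuristic is a simplification; real receivers consult the Public Suffix List:

```python
def org_domain(domain: str) -> str:
    # Simplified organizational-domain heuristic: keep the last two labels.
    # Real DMARC implementations consult the Public Suffix List instead.
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def dmarc_relaxed_alignment(from_domain: str, authenticated_domain: str) -> bool:
    # Relaxed alignment passes when both domains share an organizational domain.
    return org_domain(from_domain) == org_domain(authenticated_domain)

# Hypothetical example: mail authenticated as esp-b.example does not align with
# a From address at yourbrand.com, so DMARC would fail for that message.
print(dmarc_relaxed_alignment("yourbrand.com", "mail.yourbrand.com"))  # True
print(dmarc_relaxed_alignment("yourbrand.com", "esp-b.example"))       # False
```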

Metric calculation differences. Not all ESPs measure open rates the same way. For example, if ESP A measures open rates as Opened / (Sent - Bounced), and ESP B measures open rates as Opened / Sent, this will further confuse the issue. Some ESPs show you a default open rate of "total opens" whereas other ESPs show you a default open rate of "unique opens". Make sure the number you're comparing between ESPs is actually being calculated the same way.
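To see how much the formula alone can move the number, here is a small sketch using hypothetical campaign figures:

```python
# Hypothetical campaign figures.
sent = 10_000
bounced = 600
unique_opens = 2_300
total_opens = 3_100  # includes repeat opens by the same recipients

# ESP A style: unique opens over delivered (sent minus bounced).
open_rate_a = unique_opens / (sent - bounced)

# ESP B style: total opens over everything sent, bounces included.
open_rate_b = total_opens / sent

print(f"ESP A-style open rate: {open_rate_a:.1%}")  # ~24.5%
print(f"ESP B-style open rate: {open_rate_b:.1%}")  # 31.0%
```

Same campaign, two defensible formulas, and a gap of several percentage points before deliverability even enters the picture.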

Varying levels of support. Many ESPs don’t have a centralized deliverability team to handle deliverability concerns, and even when they do, those specialists may not be available to you as a resource without an additional fee. Consider an ESP or plan tier that offers all the types of support you need, including handling your deliverability or compliance concerns. You want to be sure you know the best way to get started with a new ESP in terms of warming up properly, testing, etc.

Overall platform performance/experience. The best test is what you can actually achieve with a particular vendor. If ESP A has higher performance today according to your testing, but ESP B has better tooling and reporting features, it's much easier to improve your deliverability on ESP B until it's on par with or better than ESP A than it is to get ESP A's development teams to build and ship the features that match ESP B's offering.

You might take a temporary hit on open rates, but having more meaningful ways to engage with your customers and understand them better will pay a return that far exceeds that initial bumpy investment. Al Iverson (Director of Deliverability at Salesforce) has an awesome outlook on this that you can find here: https://www.spamresource.com/2020/06/new-email-vendor-expect-your.html

Hopefully, this will serve as a helpful resource for properly comparing deliverability between platforms. Overall, open rates are clearly not an ideal way to measure performance between ESPs, as it's almost impossible to control all of the variables to a sufficient degree to get meaningful results. Instead, I would recommend seed testing: vendors such as GlockApps, Kickbox, and others offer seed tests that provide an unbiased (as much as possible with email deliverability) snapshot of your performance. Even then, don't take seed test results as universal truth; use them as part of a holistic approach to improving your deliverability, since you may simply not have built up enough reputation on a particular ESP yet to achieve better results.
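If you do run seed tests on both ESPs, one simple way to compare them is to tally placement per mailbox provider from the exported results. The CSV layout and column names below are hypothetical; adapt them to whatever your seed-testing vendor actually exports:

```python
import csv
from collections import Counter, defaultdict

# Hypothetical export: one row per seed address, with "provider" (e.g. Gmail)
# and "placement" (e.g. inbox, spam, missing) columns.
def placement_summary(path: str) -> dict:
    by_provider = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_provider[row["provider"]][row["placement"]] += 1
    return by_provider

for provider, counts in placement_summary("seed_results_esp_a.csv").items():
    total = sum(counts.values())
    print(f"{provider}: {counts['inbox'] / total:.0%} inbox, {counts['spam'] / total:.0%} spam")
```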

The moral of the story? Consider the full picture when evaluating email service providers.

