Email engagement! But is it? Really?
The core metrics for engagement used by marketers are open rates and click rates, but can you really trust them? Welcome to the growing world of non-human interactions with email. The current metrics are becoming less and less accurate, and we need to take a step back and take a good look at what engagement really is.
The Keynote as Monologue
Hi everyone, Jakub Olexa, founder and CEO of Mailkit, an ESP out of the Czech Republic. We are a very small ESP, but we serve large customers, mostly B2C, though we also have quite a few B2B. Our customers are mostly large enterprises with global coverage, like Volkswagen and similar, that need deliverability all over the world. Today I'm going to be speaking about email engagement, or rather about something that most email marketers consider engagement: clicks. But are the clicks real? I think most of you consider clicks the most important engagement signal. But the data you're looking at is skewed by non-human interactions, and that's what I'm going to talk about. So what are non-human interactions? These are not abusive bots; there's nothing abusive about these interactions. Quite the opposite. They come from email security systems: anti-spam and antivirus gateways, and sometimes even antivirus software installed on desktop PCs, mobile phones, and so on. Their goal is to scan the contents of incoming messages and identify abusive behaviour, phishing, and other attack vectors. Obviously, this directly affects the open and click rate reports, skewing all the numbers. The commercial scanners scan mostly in bulk. Those are the Barracudas, the Ciscos, and so on. They usually scan every incoming email and often follow every single link in it, whether it's a preview or a call to action; they don't discriminate. Luckily, they are pretty good at avoiding unsubscribe links, and they try their best not to click opt-in confirmation links, though they're not as good at that as they are at identifying unsubscribe links. On the other hand, the mailbox providers have a completely different approach: they only check samples of messages, because they receive such large numbers of emails.
They can efficiently cluster similar messages and only scan a subsample of them. As a result, the impact differs completely depending on your recipient base, whether it's B2B or B2C. Both are affected, but there's a major difference in the impact on click rates. B2C, since most of those mailboxes reside with the large mailbox providers, is less affected by these non-human interactions, and the effect is fairly limited: the number is usually below 10% of clicks, for good senders easily under 5%. But there are exceptions; if you deviate from your usual sending patterns, that number will grow. B2B, on the other hand, is hit in a massive way by the bulk scanning done by the commercial systems. It ranges from 20% to 80% of clicks, but we have seen campaigns with a 100% click rate: every single message, every single link, clicked. What we've observed is that sender reputation plays a role in the rate of scans.
The better the reputation, the lower the number of links scanned by these systems. So if you have a better reputation, you have more accurate click rates. This applies to both the commercial security systems and the big mailbox providers; both look at the reputation of the sender, along with many other things like email authentication, DMARC deployment, sending patterns, and so on. So what if we tried to identify these non-human interactions and mitigate the impact? There are a few indicators that can help. The first one we looked at was the user agent string, but that only helps with a very small subset of non-human interactions. By nature, these scans are done by the mailbox providers and security systems to identify phishing, so they try to stay hidden as much as possible, and very few of the systems are transparent about this activity. One of the exceptions is Yahoo, which identifies itself with a very nice user agent string. But this covers less than 1% of non-human interactions, so it really doesn't solve the problem. Then there is the source IP. That's another way to look at these interactions. We looked at one year's worth of data, and 19% of all clicks originated in data centres, with Amazon data centres accounting for 80% of those clicks. In my experience, there are very few people sitting around in data centres reading their emails, so that is a very strong signal. Click counts are another signal. Multiple clicks from one recipient within a short timeframe are odd, and when there are multiple links within an email, we are seeing all of them being clicked. So that's a good indicator. And another indicator is the time of click.
Obviously, you may think that if a click occurs before delivery, it's unlikely that a human did it. Quite a few of these clicks actually occur during the SMTP session: the systems do the scan, and only once they finish do they return the 250 OK. Sometimes these clicks show up minutes before the email gets delivered, and that timeframe usually matches the MTA retry interval or the greylisting period. But the time of click, whether you're looking at pre-delivery clicks or clicks shortly after delivery, say two seconds after, still includes a lot of false positives. People are addicted to mobile phones, and you see a lot of genuine clicks happening right after the email hits the inbox. What you're seeing here is a chart of clicks coming from Amazon data centres, and you can see clicks occurring before the delivery time. The response time is measured in relation to the actual delivery time, the 250 OK response.
Most of the clicks occur around the time of delivery, within the first two or three seconds, and the interactions fall off around 75 seconds after delivery. So is there a way to mitigate this without affecting the real numbers and real recipients? Well, we looked at all the signals and found that while many can help us identify non-human interactions, there's no definitive way to negate the impact without false positives. There's a white paper being worked on within M3AAWG that will sum up all of this information and the research that we and other ESPs are doing on the topic; it should come out this year. But so far, mitigation is a problem. One of the reasons is that among the non-human interactions we have found interactions done by email analytics platforms like eDataSource, 250ok, and Return Path, and by email search engines we had never heard of, which provide additional visibility into email campaigns. Ignoring this information and removing these clicks from the stats could lead to unwanted side effects like loss of insight into your campaigns, loss of brand visibility, and so on. If you stop sending to these addresses because you identified them as non-engaging, you will miss that information. You should also keep in mind that we were only able to identify interactions that matched our specific patterns. Scans that behave like regular recipients are extremely difficult to identify, and Google is best at this; they're way ahead of everyone else. Microsoft is catching up. I spoke with a few security vendors, and pretty much everyone is trying to use some sort of behavioural simulation to avoid being detected. Yahoo, to our knowledge, is very transparent and easily identifiable, but we have to keep in mind that those are just the scans they want us to see. So when it comes to mitigation, there are a few recommendations. The basic one is: follow best practices.
We have observed that well-formatted, quality content sent at a regular, or I would say appropriate, frequency reduces the amount of non-human interactions. The Christmas period especially, when senders start sending considerably larger amounts of email, leads to major spikes in click-through rates, but a big chunk of those spikes is actually caused by a bigger amount of scanning being done. Senders that would regularly be somewhere in the 5% range of non-human interactions suddenly have 15 to 20%. So best practices are the first step. Second, we've identified that using secure links has a minor effect as well. In the end, it's all about the sender's reputation: the better the reputation, the fewer non-human interactions. But all of these non-human interactions come with a cost as well. The obvious cost is that your stats are skewed, so you can easily lose trust in the data you're seeing and question whether your marketing programme actually works or not. But it's not just the data you see; it's also the data you're using. All your marketing automation, all the funnels you have set up, suddenly start firing off for non-human interactions. That means sending a considerable amount of unnecessary and even unwanted emails, or triggering other channels connected to those interactions. In addition, as the scanners follow the links they visit, they also land on the destination pages, which affects website analytics. This is particularly bad for media and content marketers, for whom clicks and visits are often the only conversion metric. This inaccurate data can reduce the ROI on ad spend and directly hit their revenues. The whole concept of engagement, as it is currently used by marketers, is in pieces because of these interactions.
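One practical guard against the automation problem just described is to gate funnel triggers on basic non-human checks before they fire. Here is a minimal sketch of that idea; the data-centre ranges (RFC 5737 documentation blocks) and the checks themselves are purely illustrative assumptions, not a definitive implementation:

```python
import ipaddress

# Placeholder data-centre ranges (RFC 5737 documentation blocks).
# A real list would come from cloud providers' published IP ranges.
DATACENTER_NETS = [ipaddress.ip_network(n)
                   for n in ("203.0.113.0/24", "198.51.100.0/24")]

def should_fire_automation(click_ip: str, seconds_after_delivery: float) -> bool:
    """Gate a marketing-automation trigger on basic non-human checks."""
    ip = ipaddress.ip_address(click_ip)
    if any(ip in net for net in DATACENTER_NETS):
        return False  # data-centre source IP: likely a security scanner
    if seconds_after_delivery < 0:
        return False  # clicked before delivery: a scan, not a human
    return True
```

As the talk warns, a filter like this still has false positives (and false negatives), so it is safer as a trigger gate than as a reason to drop recipients from a list.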
Marketers are used to basing engagement on the last open and the last click, and they use this information for list hygiene and segmentation. But it doesn't work anymore, because many spam traps, and especially typo traps and recycled traps, interact with emails on a regular basis. They follow the links, creating click signals, and suddenly what seems like an engaged recipient is actually a spam trap. So removing inactive users from lists will not remove spam traps anymore, and it will not remove inactive users whose mail is being claimed by their security gateway or mailbox provider. Keeping a list up to date with only active recipients is becoming a challenging task. We as email professionals need to rethink the term engagement, because it's no longer as simple as "there was a click" or "it was read". Thank you very much. And since we have time, I prepared a bonus: let me show you what these engagements look like. This is a two-month period of clicks, from November 1 to December 31.
You can see right away that we have assigned the IP addresses to data centres, and whatever you see here without a data centre assigned is most likely a regular click that we can't attribute. When we get to Amazon Web Services, that's definitely not regular behaviour. But again, hidden in between, there are a lot of false positives as well. This will take a minute to load... and here we are. Most of the interactions you can see come from the MX domain of eDataSource. This is a single recipient and a single email, and it got 99 clicks, and it goes on like that. We can also see the user agents are the same, while the domains change a lot. Pretty much every email gets scanned, every email gets clicked, and clicked multiple times. And that's all non-human interaction. The problem is how to filter these out so it doesn't affect the actual humans that might be involved,
or the systems that proxy the information, the systems I mentioned that give you insight into your analytics, like eDataSource or 250ok or others. So it is really not a simple problem. But since we started early and have plenty of time, I want to start again from the beginning, if you're okay with that. So the topic of this session is email engagement, which for most marketers means clicks and opens. It's time to rethink those, because the data on clicks and opens is heavily skewed by non-human interactions. Non-human interactions are not abusive bots; they're the opposite. They're friendly systems: security gateways, anti-spam and antivirus, even software installed on desktop computers, that scan the content of incoming messages to identify phishing, and to do that they follow the links in emails. This directly affects the open and click rate reports, skewing the numbers. The commercial systems like Barracuda, Cisco, and others usually scan every incoming message and follow every link in an email. They do a pretty good job of avoiding the unsubscribe links and the opt-in confirmation links, though they're not as good with the opt-in links as with the unsubscribe links. The mailbox providers, Google, Microsoft, Yahoo, are more advanced: they check only samples of messages. Since they get a huge amount of bulk email, they can efficiently cluster similar messages and scan just a subsample of them. This translates to a completely different impact depending on the recipient base. Both B2B and B2C are affected, but given that most B2C mailboxes are with the big mailbox providers, there's less impact on B2C: the number is usually below 10%, for good senders easily under 5% of clicks. B2B, on the other hand, is hit hard.
The commercial systems, which scan pretty much every message, are where B2B mailboxes usually reside, and we've seen everything from 20% to 80% of clicks coming from non-human interactions. We've even seen 100% click rates, with every single email delivered being clicked and every single link within each email being clicked. What we've observed is that reputation plays a role in the rate of scans: senders with a high reputation get a lower number of links scanned, which means more accurate click rates. Both the mailbox providers and the security vendors consider reputation, along with many other things: email authentication, DMARC policy, sending patterns, and so on. If you deviate from your usual sending frequency and sending patterns, it will
lead to more non-human interactions, especially during the Christmas season, Black Friday, and such. We've seen that senders who normally get around 5% non-human interactions, because they have a very good reputation and highly engaged recipients, suddenly get higher click rates because they start sending considerably higher volumes. But those higher click rates also mean way more non-human interactions, so they jump from 5% to 15%. So is there a way to identify these non-human interactions and mitigate the impact?
There are a few factors. We've looked at user agent strings; they can be used to identify a very small subset of non-human interactions. By nature, the security vendors and mailbox providers are doing this to prevent abuse, so they try to hide their actions so that the spammers and phishers cannot bypass their security. The only one of the big mailbox providers that we found can be identified by its user agent string is Yahoo,
and that's the case for less than 1% of all non-human interactions, so it doesn't really solve the problem. The source IP, on the other hand, is quite helpful, because if you attribute the IPs to data centres, you suddenly see a big chunk of clicks coming from data centres. We looked at one year's worth of data, and 19% of clicks came from data centres, with Amazon data centres accounting for 80% of those non-human interactions. It's safe to say there are not that many people sitting around in data centres reading emails, so that's a strong signal. Another signal is the click count. If you have an email with 20 links, and all 20 are clicked, some of them multiple times, and all of this happens within two seconds, that's a strong signal. Another indicator is the time of click. If the click occurs the exact second the email is delivered, it's suspicious. But there are clicks, as you can see on the chart, that occur even before the email was delivered. This is because many of these scans occur during the SMTP session, and as they scan and click, many of these SMTP sessions get deferred: at the end of the session, the SMTP server sends a transient error, usually because of greylisting or a similar reason, and then the time between the click and the 250 OK, the actual delivery, closely matches the retry interval or the greylisting period of the receiving server. The problem is that a huge amount of valid interactions occur right after delivery, one, two, three, five seconds after, because people are addicted to their smartphones and keep clicking whenever it beeps. That means the time of click alone is not reliable enough and could bring a ton of false positives.
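The timing signal described here can be sketched as a small classifier: pre-delivery clicks are scans, and a pre-delivery gap matching a common retry window hints at greylisting. The retry intervals and tolerance below are illustrative assumptions, not values from the talk:

```python
from datetime import datetime, timedelta

# Common MTA retry / greylisting windows (illustrative assumptions,
# not measured values).
TYPICAL_RETRY_INTERVALS = [timedelta(minutes=m) for m in (1, 5, 10, 15, 30)]
RETRY_TOLERANCE = timedelta(seconds=30)

def click_timing_signal(click_at: datetime, delivered_at: datetime) -> str:
    """Classify a click by its time relative to the 250 OK (delivery)."""
    gap = delivered_at - click_at
    if gap > timedelta(0):
        # Click happened BEFORE delivery: a scan during the SMTP session.
        # A gap matching a typical retry interval suggests the session was
        # greylisted or deferred while the scanner followed the links.
        for interval in TYPICAL_RETRY_INTERVALS:
            if abs(gap - interval) <= RETRY_TOLERANCE:
                return "pre-delivery (matches retry/greylisting interval)"
        return "pre-delivery (scan during SMTP session)"
    # Post-delivery clicks are ambiguous: smartphone users also click
    # within seconds of the inbox beep, so this alone is unreliable.
    return "post-delivery (inconclusive on its own)"
```

Note that only the pre-delivery branch is a strong signal; the post-delivery case is exactly where the false positives the speaker describes would come from.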
On this chart, which shows just the Amazon data centres, you can see there is a spike before zero seconds; zero seconds is the time of delivery. And it all falls off at around 75 seconds, when the interactions from Amazon data centres stop. So is there a way to mitigate this? Well, we have a lot of data, and there are quite a few signals that can help us identify non-human interactions, but there's currently no definitive way to negate the impact without false positives. We are currently working on a white paper within M3AAWG with other ESPs; it's something to look out for. There's a link to the resources page that will hold that white paper once it's published, and you can find a lot of other interesting documents there as well.
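Taken together, the individual signals (transparent user agents, data-centre source IPs, click bursts, pre-delivery timing) lend themselves to a score rather than a hard rule, which matches the point that no single signal avoids false positives. A hypothetical sketch, with every weight and threshold invented for illustration:

```python
def non_human_score(click: dict) -> float:
    """Accumulate weak signals into a rough 0..1 'likely non-human' score.

    `click` is a hypothetical record; all weights and thresholds below
    are invented for illustration, not published research values:
      scanner_useragent - bool, user agent openly identifies a scanner
      pre_delivery      - bool, click arrived before the 250 OK
      from_datacenter   - bool, source IP attributed to a data centre
      links_clicked     - int, distinct links clicked by this recipient
      burst_seconds     - float, window covering all those clicks
    """
    if click.get("scanner_useragent"):
        return 1.0  # e.g. a transparent scanner like Yahoo's: certain
    score = 0.0
    if click.get("pre_delivery"):
        score += 0.6  # humans cannot click before delivery
    if click.get("from_datacenter"):
        score += 0.4  # few people read mail inside a data centre
    if click.get("links_clicked", 0) >= 5 and click.get("burst_seconds", float("inf")) <= 2.0:
        score += 0.3  # every link clicked within a couple of seconds
    return min(score, 1.0)
```

A score like this is a triage tool, not a verdict: scans that mimic human behaviour would score near zero, which is exactly the detection gap described next.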
But the problem with non-human interactions is that by negating them, you may inadvertently harm your data.
A lot of these non-human interactions come from various email analytics platforms: eDataSource, Return Path, 250ok, and others. Removing or ignoring these can lead to unwanted side effects like loss of insight into your campaigns and loss of brand visibility, because we have found email newsletter search engines indexing these emails and clicking the links.
So by just throwing this data out of the window, you could easily lose it, and by not sending to these recipients, by removing them from your automation funnels, you could actually harm your brand visibility. The other thing is that during our research we were only able to detect non-human interactions that matched the patterns we were looking for. Scans that behave like regular recipients are extremely difficult to identify. Google is definitely ahead of the pack; they're employing AI, and their scans are virtually invisible. Microsoft is catching up. And Yahoo, to our knowledge, is extremely transparent, so they're easy to identify, at least as far as they let us identify them. So when it comes to mitigation, there are some simple recommendations. Follow best practices: we've observed that well-formatted, quality content sent at the right frequency reduces the amount of non-human interactions, so any deviation from your standard sending patterns will affect this. Using secure links seems to have a minor effect as well; we see more non-human interactions on HTTP compared to HTTPS. But eventually it all comes down to sender reputation: the better the reputation, the fewer non-human interactions. Still, the non-human interactions come with a cost. The obvious cost is that your stats are off, which means you can easily lose trust in your data, and if you can't trust your data, you might as well go back to sending from Outlook and hoping it gets delivered. But it's not just the data you're looking at in the reports; it's also all the scenarios tied to that data. Keep in mind that all your marketing automation scenarios will play out for those non-human interactions as if they were human, especially if your scenarios depend on engagement.
And if you use multiple channels, that can spiral out into sending text messages, remarketing through Facebook, Google AdWords, et cetera, and suddenly you're spending money on additional digital channels for no good reason, just because of non-human interactions. The other thing is that, since they're scanning the links, they're also following them and hitting the web pages, which skews the website analytics. That's particularly bad for media and content marketers, for whom clicks and visits are often the only conversion metric, and inaccurate data can reduce the ROI on ad spend and directly hit their revenues. So these interactions are a big issue. Most importantly, though, the whole concept of engagement, as it's being used by marketers today, is broken. Marketers, all of us, are used to using read and click information for list hygiene and segmentation. But that doesn't work anymore. Many spam traps, and especially typo and recycled traps, interact with emails on a regular basis and follow the links, which creates a click signal and also a read signal. So removing inactive users from the list will not remove the traps anymore, and it will not remove all the actually inactive users either; the result will be somewhere in between, depending on your target group and your line of business. Keeping your list up to date with active recipients is becoming a very challenging task. We need to rethink the term engagement as a whole. Thank you very much. That's it for me. Any questions? Okay, thank you, everyone. Yes, I did cover it twice, because we started early and people came in late, so I did my best to cover it twice. If anyone is interested in the data, I'm happy to share and discuss it. Please look out for the M3AAWG white paper that will be coming out, and if you're interested in digging into the data, please reach out to me. And most importantly, enjoy Inbox Expo. Thank you very much.
About: Jakub Olexa
Involved in the computer business since the early 90s, with a background in networking and systems management, I have been able to use my past experience to move current projects forward and add unique value to each of them.