I see claims relating to deliverability all the time: 98% this, 99% that. Little irks me more than pushy salespeople (we could end the sentence with a full stop right there) with little knowledge of delivery, deliverability or reputation quoting SenderScore and (sometimes) other metrics in a meaningless fashion, because the figures are taken out of context, or quoted with no context whatsoever.
I will not be naming names; however, it largely falls into two camps: ‘the scaremongers’ and ‘the braggarts’.
There are those who proclaim via blog post, white paper or online discussion how great their deliverability is, and quote their amazing SenderScore averages as proof of this feat of deliverability and inbox sorcery.
A holistic view should be taken of an ESP’s deliverability, and the only experience that counts is your experience with your ESP. Comparing ESPs on SenderScores when you have no inkling how they manage their outbound mail streams is not going to produce results that are beneficial to anyone; mail flow to specific ISPs may be routed through different IP addresses, thereby negating the value of the SenderScore.
There are many other reasons why the methodology is flawed, as others have discovered when they tried to use publicly available SenderScores as a metric for gauging an ESP’s deliverability. Taken out of context, the methodology simply does not hold up.
Worse still are others who call clients of competing ESPs with the shocking news of ‘issues’ with their SenderScore. As often as not, these IP addresses are dedicated to the client in question. If these are clients of a reputable ESP, the issues are unlikely to be infrastructure related; more often than not, they are the result of the client’s own program.
Encouraging the client to jump ship to another platform will not miraculously save the client from whatever their SenderScore happens to be; only changes to their program will do that.
SenderScore is worthless …
… if taken out of context
It matters as much as a number of other metrics: far less than some, and more than others. If, however, this metric is used in isolation, without context, it is useless.
You have a choice of two IP addresses:
– one has a SenderScore of 56 and an Acceptance Rate of 86%
– the other has a SenderScore of 92 and an Acceptance Rate of 54%
Which IP address would you prefer to use? Seeing the score as a balanced and fair representation of inbox placement for all senders is a fallacy. An ESP that pins all its deliverability KPIs on this metric alone (or uses it as its primary KPI) is setting itself up for failure.
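The arithmetic behind that choice is worth spelling out. A minimal sketch, using the two hypothetical IPs above (the send volume is an assumption for illustration):

```python
# Sketch: acceptance rate, not the reputation score, determines
# how much of your mail actually reaches the receiving MTAs.
sent = 100_000  # hypothetical campaign volume

ip_a = {"sender_score": 56, "acceptance_rate": 0.86}
ip_b = {"sender_score": 92, "acceptance_rate": 0.54}

accepted_a = int(sent * ip_a["acceptance_rate"])
accepted_b = int(sent * ip_b["acceptance_rate"])

# The lower-scoring IP gets far more mail through the door.
print(accepted_a)               # 86000
print(accepted_b)               # 54000
print(accepted_a - accepted_b)  # 32000
```

On these numbers, the IP with the ‘worse’ SenderScore delivers 32,000 more messages to the receiving servers, which is exactly why the score cannot be read in isolation.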
Seeing SenderScore as a ‘truly independent scoring system’ is also flawed. The SenderScore itself is independent, and all IP addresses are scored using the same methodology; this much is true. That said, the data is not a crystal ball into the complete email ecosystem: the score can only reflect the data provided by the ISPs that choose to work with Return Path.
Take Away: SenderScore is only one indication of how your email might perform at those ISPs partnering on some level with Return Path.
Sure, an IP that sees its SenderScore go from 94 to 70 has obviously had issues, and you will want to identify what those were. However, if a SenderScore in excess of 90 were the only thing that concerned you, you might well not notice that half your mail is being rejected outright.
Let us take just one example of how this might happen: an online retailer sending email in Australia. A great SenderScore is a fair indication that the sender is probably not having problems with Outlook and Yahoo, but what about the major Australian ISPs? A SenderScore of 100 will not help you there if you are listed on the CASA CBL in China, or if Cloudmark have your IP listed.
I have seen IP addresses blacklisted at Cloudmark, and witnessed the havoc that wreaks on North American campaigns, B2C and B2B alike; those same IP addresses had SenderScores of 95 or above. Taken out of context, that would suggest 95% of email would be accepted and over 80% would reach the inbox. That is not the case, as many have found out.
Recently a major ESP experienced significant delivery issues into Australia as a result of a listing on the Chinese CBL.
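Listings of this kind can be checked directly. DNS-based blocklists follow a well-known convention: reverse the IPv4 octets, append the list’s zone, and look up an A record; an answer (conventionally in 127.0.0.0/8) means listed, NXDOMAIN means not listed. A minimal sketch of the query-name construction, using a placeholder zone rather than any real blocklist:

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL lookup hostname for an IPv4 address.

    DNSBL convention: reverse the IPv4 octets and append the
    blocklist's zone name.
    """
    octets = str(ipaddress.ip_address(ip)).split(".")
    return ".".join(reversed(octets)) + "." + zone

# "cbl.example.org" is a hypothetical zone for illustration.
print(dnsbl_query_name("192.0.2.1", "cbl.example.org"))
# -> 1.2.0.192.cbl.example.org
```

A subsequent A-record lookup of that name against the blocklist’s nameservers tells you whether the IP is listed, regardless of what its SenderScore says.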
I use SenderScore
I love Return Path; their tools are an essential part of my work, and I consider many of the people who work there good friends.
I do not wish anyone to misinterpret what I have said here: SenderScore is not #BS; selling your platform on SenderScore is #BS.
I use their tools, recommend their tools, utilise the data they provide, and consider those data points extremely useful metrics. I have to admit the “Acceptance Rate” that Return Path also publishes to its partners is more useful to me personally.
As a deliverability professional, my life is often made infinitely easier by their existence; I do not believe a working week has gone by in the past three or four years in which I have not spent some time accessing their tools directly via their web interface or catching up on their site.
Return Path data is something I use hundreds of times throughout my working day, with the information pulled into bespoke tools via API endpoints.
I just want to call #BS on the dozens (possibly hundreds) of sales types out there who are trying to sell on SenderScore. If you do not truly understand it, its benefits and limitations, its insight and flaws, then please do not use it to sell your platform or your superior deliverability skills.