Video

Speaker/s name

Kath Pay

Description

Most email marketers are aware that A/B testing is worthwhile, yet don’t always know how to perform a scientific, statistically significant A/B test.

Speaker
Kath Pay

Founder, Holistic Email

Video URL

https://vimeo.com/531524263

Transcript

Kath Pay 0:29
Okay, we're on. Awesome. So excited to be here. Today I'm talking about one of my favourite topics, A/B split testing, with a little bit of a twist, because I've incorporated respect into it. Although at the end of the day it's not like I added respect, because respect is inherent to an A/B split test, and I'll explain how once I get into the session.

So, a little bit about me. Let me just get my slides happening. There we go. That's me. I'm CEO and founder of Holistic Email Marketing, and also author of the book of the same name. The book was actually named after the holistic email marketing philosophy, not after my consultancy. There's a little bit of contact detail there if you want to get in touch later on. And a little bit about why you can probably hear roosters, cows, goats and dogs: apparently I love islands, I never really realised that, but I started off in Australia, moved for 16 years to England in the United Kingdom, and now I'm temporarily here, at least for a year. So that's the sound on the roof; get used to it.

As I mentioned, I've just authored this book. I'm really, incredibly amazed that it became a bestseller, and I'm now also a finalist in the Book Awards. So if you haven't grabbed a copy, now's a good time to do that. It's got a chapter on testing as well, so if you want to delve in a little deeper than we will today, you'll see there's a chapter on it. And here is a sprinkling of some of the wonderful clients we've had the pleasure of working with, currently and in the past. We love helping them with the strategic side of things, because we are a consultancy as opposed to an agency.

I'm going to get straight into it. So why do we want to be testing? This is a really crucial question to ask, because to my mind, if we keep on doing the same things, we're going to keep on getting the same results. It's testing that allows you to break out of the same old thinking and actually achieve new results. And that's what we all want: we want to be improving our email programmes. This is the kind of thing that testing can very simply help you with.

I've got here the definition of respect that Google came up with: due regard for the feelings, wishes or rights of others. It fits in so perfectly with testing, it really does. What we really want to do is start to incorporate what it is that our customers, our subscribers, our database are actually wanting from us. Because we're sending them tests, we're in fact asking them, "What do you like?" We are showing them respect. So keep that at the back of your mind throughout this whole presentation, because it's really crucial.

Sorry, I've got screens all over the place. Okay. So if we think about this: by testing, you're asking them what they like. Like I said, you're showing them respect. You're saying to them, "Okay, do you like this? Or maybe you like this?" And of course, the added benefit for us is that we're finding out more about them, so that we can then make decisions, whether it's about content, the actual logistics of the email, the frequency, or whether it's questioning them on their motivations and their emotions. There's a whole heap of things you can be doing.

I did this quote one time, and it's just gone all over the place; it's always being quoted. Basically, if you think about it, sending a test is the equivalent of sending a survey. And what we really want to do is make sure that we're creating the right survey to ask. The more powerful thing with testing over a survey, though, and surveys are great, they're valid and all the rest of it, is that a survey asks people to mindfully give you an answer, and that mindfully given answer isn't always true to who they are, because it's often true to who they want to be. Whereas clicks, and even purchases and conversions, are true to who they are at that given time. So that's really crucial.

Along with the holistic philosophy, part of that is holistic testing. It calls upon the scientific A/B approach to testing, but then takes it one step further to make it holistic. The benefits of doing holistic tests: first, you get to discover which version delivers the best result. That's what we all understand a normal A/B split test does. And by the way, when I say A/B split test, I don't necessarily mean strictly A versus B; it could be A, B, C, D, E, F, G. We're just calling it A/B because it's nice and simple. So the first thing we want to do is discover the version that works, so we get an uplift on that immediate campaign. Then we also want to discover insights about the audience that we can put into our future email campaigns, to ensure they are being iteratively improved. So we've got two there.

The third one, though, is a really good one. You can't do this with all results; as I said before, frequency is not a finding you can necessarily roll out to other channels, and there are a few other ones you can't, most of them more logistical than anything else. But if you were doing a test whose hypothesis is all about the subscribers' motivations and emotions, why they're doing this, what's going to intrigue them and get their engagement happening, then those results actually are true

throughout all of your other channels, because they are true of that subscriber, true of that customer, regardless of what platform they're using. It's a finding about your audience, so keep that in mind as well.

So, the difference in outcomes, just quickly. Your regular type of test I call ad hoc. It sounds a bit personal, but after working with hundreds of clients and students I realised that's the best name for it, because more often than not they are ad hoc. They're not working with a hypothesis; they're literally just trying one subject line against another subject line, really focusing on the words. They're erratic, so there isn't necessarily a plan in place, and they don't necessarily record the results either. That is the state of email testing as it is now. I'm not saying everyone is doing that, of course not, but the majority definitely, if they test at all. In my opinion this really needs to change, because often, and you'll see in the deck, we're actually optimising for the wrong result, which is pretty scary. Holistic tests, as I said, gain an immediate uplift, gain longitudinal insights into customers, and then roll those insights out. So there's a huge benefit for you.

I mentioned before that you have to be recording results. This is absolutely crucial. It's only when you're recording the results and analysing them that you can go back, review, and say, "I wonder if that's changed. We've noticed a bit of a difference in the circumstances, i.e. COVID, so let's go and test all of that again and see if the hypothesis still proves true, or whether it's changed." You can only do that if you're actually recording the results, so I encourage you to record them. And of course, we like to use the same document as a plan as well as the record. The whole idea is that we do iterative learning: we're working continually to improve. Every time we send a campaign email, we're doing a test, we're identifying, we're recording the results, and then we're going, "Okay, this has given us food for thought; we might try this next time," so that we're continually improving.

This is also called the aggregate of marginal gains. What that simply means is that it's like a building block: you learn something, you get an improvement; you learn something else and implement it, you get an improvement; and so on, so that over time you get lovely, compounding results.
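Because the compounding is easy to underestimate, here is a minimal sketch of the arithmetic behind the aggregate of marginal gains; the 2% per-campaign uplift, 3% starting rate and campaign count are hypothetical numbers, not figures from the talk:

    # Compounding small, test-driven uplifts over a series of campaigns
    # illustrates the "aggregate of marginal gains".
    baseline_conversion_rate = 0.03  # hypothetical starting point (3%)
    uplift_per_campaign = 0.02       # hypothetical 2% relative gain per test
    campaigns = 20

    rate = baseline_conversion_rate
    for _ in range(campaigns):
        rate *= 1 + uplift_per_campaign

    print(f"Conversion rate after {campaigns} campaigns: {rate:.4f}")
    # ~0.0446, i.e. roughly a 49% cumulative uplift from repeated 2% gains.

The point of the sketch is simply that small wins multiply rather than add, which is why recording and iterating on every test matters.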
One of the key reasons why email is fantastic for A/B split testing, particularly for that third point about rolling out results to other channels, is that your database is your target market, and email is a push channel. It's also incredibly cost effective: it's a lot cheaper to run an A/B split test using email than it is to drive traffic to a page using PPC. Now, I'm not saying you don't do that, but once you've identified the basics, where you think, "Okay, this page is going to improve conversions over this other page," then you use the one that improves conversions and iteratively improve from there on your website.

So that's kind of where we're at. As I say, I think email is the best channel to start your testing plan with. And the wonderful thing is, if you use email right, you can start to share your findings, share your insights and say to your PPC team, "Hey, have you tried these ads? This is a finding we've found works really well with acquisition, so why don't you test it out and see if it works for you guys?" Then you can do the same with your social, with your landing pages, with everything. The beauty of it is that email then becomes a permanent fixture within your A/B split testing routine. And you can often then say, "Well, actually, we're bringing in an incremental increase"

in revenue not just for your email campaigns, but for other channels as well, because you're helping those channels. So you could probably justify additional budget or resources, and it can work out really well for you.

I've talked about hypotheses a couple of times. Like I said, most of the students and clients I work with don't necessarily use a hypothesis, but it's really crucial that you do. How it works with email is that you want to be testing a hypothesis multiple times. You can't ever base the results on one test: it could be an anomaly, it could be something with the economy that day, who knows. So you need to do multiple tests. And the best way to do multiple tests without boring or turning off your customers by using the same copy every time is to use one hypothesis. Your copy, your offer, everything supports the hypothesis, but can of course be different each time. That means you could do, say, six campaigns in a week, all supporting the one hypothesis, but completely different as far as the customers can tell; they don't even know they're being tested.

A hypothesis pretty much says: I think that by making this change, it will cause this effect. This is the scary thing about hypotheses, and why most marketers shy away from them: it means you have to put your foot down and say, "Okay, I think this is going to be the winner," as opposed to "we'll see which one wins," which is taking the easy route out. Based on your results, you should then be able to say whether the hypothesis is true or false, because you have put it on record: "I believe that variant B is going to win over the control." Now, the really crucial part you have to include is the "because", since the "because" holds the learning. So you say: I think that by making this change, it will cause this effect, because... That "because" is the insight you really want to understand, analyse, and carry over, not only to your email programme, but to all of your other channels as well.
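To make the "change, effect, because" structure concrete, here is a minimal sketch of how such a hypothesis and its outcomes could be recorded; the field names and example values are hypothetical, not from the talk:

    from dataclasses import dataclass, field

    @dataclass
    class HypothesisRecord:
        """One hypothesis in the 'change -> effect, because' format."""
        change: str           # what you change (variant B vs the control)
        expected_effect: str  # the effect you predict the change will cause
        because: str          # the insight behind the prediction -- the learning
        results: list = field(default_factory=list)  # one entry per campaign

    # Hypothetical example entry in a test log:
    record = HypothesisRecord(
        change="Emotional question subject line instead of a directive statement",
        expected_effect="Higher conversion rate",
        because="A question invites readers to picture themselves in the offer",
    )
    record.results.append(
        {"campaign": "week-14-newsletter", "winner": "variant_b", "confidence": 0.99}
    )

Keeping every campaign's result against the same record is what lets you rerun the hypothesis later and see whether the learning still holds.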
Now, we've talked mainly about testing your campaigns, but today I'm talking about testing everything, including your marketing automation lifecycle programmes and even your transactional emails. They can all be tested, and it's really crucial that you do. Because if you just say, "Hey, we've done well, we've got our automated programmes in place, we've got a welcome programme, we've got our abandoned-cart programme, and they're working well," awesome, fantastic, kudos to you, but they could be working better. That's the question: unless you test them, you're not going to know. So I always recommend you put testing in place. One option, and I don't think it's the best option, but it can be used, is to test each email within the programme, within the stream, on an individual basis. You test two variants of each email, and you don't have to test all the emails, maybe just the important ones.

So you could be doing that. Or else, and this is my preferred method, when you're setting up your marketing automation programme, that automated, triggered programme, however you want to refer to it, you build two permanent streams into it. You have your hypothesis, and you have the control, which is probably your preference, what you think is going to be the winner, and then you put in variant B. They're permanent: you keep running them until you get a statistically significant result. Once you've done that and done your analysis, you come up with a new hypothesis and replace the losing stream with a new variant, so that you're now testing the winner of the old hypothesis against the new one. What you want to be doing, like I said, is continually refining and improving, and making sure that you're not leaving money on the table.
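A minimal sketch of how those two permanent streams might be populated: each subscriber entering the programme is deterministically assigned to the control or variant stream, so they always see the same stream for the life of the test. The 50/50 split, the hashing approach and the names are assumptions for illustration, not a prescription from the talk:

    import hashlib

    def assign_stream(subscriber_id: str, test_name: str) -> str:
        """Deterministically split subscribers between two permanent streams.

        Hashing id + test name gives a stable, roughly 50/50 assignment:
        the same subscriber always lands in the same stream for this test,
        and a new test name (new hypothesis) reshuffles everyone.
        """
        digest = hashlib.sha256(f"{test_name}:{subscriber_id}".encode()).hexdigest()
        return "control" if int(digest, 16) % 2 == 0 else "variant_b"

    # Hypothetical usage inside a welcome-programme trigger:
    print(assign_stream("subscriber-12345", "welcome-hypothesis-03"))

Deterministic assignment matters here because the streams run for weeks or months; a subscriber who hopped between streams mid-programme would contaminate the result.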

Again, record it. This is really simple; it's slightly different to how you'd record the campaign emails, but it works the same way. Make sure you're recording what you're currently testing and recording the results, and then you can also start putting down ideas for future tests. I personally like having a plan in place, and it gives you ideas: "Oh, what about this? What about that?" But analyse first, and then go, "Okay, let's prioritise this; there's actually an interesting artefact we found in this last one, I think we need to prioritise it." I'll show you a couple of examples down the track.

Here, really basically, are some example hypotheses. These aren't full hypotheses, because I haven't put the "because" in, just for the sake of simplicity and time: an emotional question will generate more sales than a directive statement; "double loyalty points" will generate more sales than "two times loyalty points"; and an emotive image of a person smiling, wearing an outfit, will generate more sales than displaying the outfit laid out.

Factors tested, for the first two: subject line, call to action, title, copy. "What?!" I hear you say. Everyone always says you can only test one factor at a time. That's true if you're testing specific copy and you're not using a hypothesis. If you're using a hypothesis, you can be testing the subject line, the call to action, the title and the copy. You can even be testing an image, if it supports the hypothesis. And you can be testing the landing page, because it's all part of the customer journey, isn't it? It all comes down to the end result, which is the conversion, whatever your definition of that is. So what I really want you to do here is break away from the directive that you can only ever test one factor, because that's only true when what you're testing is specific. If you're testing subject line A, which is just specific copy, versus subject line B, which is also just copy, you can't then throw in a different call to action and everything else, because that makes it messy. But here, because we're basing the test on a hypothesis, testing multiple factors actually makes the results more robust, because every factor is supporting that hypothesis. So, as I said, test multiple factors, but they all need to support the hypothesis. You can't just throw in a random call to action that doesn't support it; everything needs to support the hypothesis, and that makes the results more robust.
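To make the multiple-factors point concrete, here is a minimal sketch of how a control and a variant could be specified so that every factor that differs supports the same single hypothesis; the structure and the example copy are hypothetical, not taken from the talk:

    # Hypothetical spec for one test. Every factor that differs between the
    # variants supports the one hypothesis ("an emotional question will
    # generate more sales than a directive statement").
    test = {
        "hypothesis": "Emotional question beats directive statement",
        "success_metric": "conversion_rate",  # maps back to the objective
        "control": {
            "subject_line": "Shop the new autumn range today",
            "call_to_action": "Buy now",
            "title": "Our autumn range has landed",
        },
        "variant_b": {
            "subject_line": "Ready to fall in love with autumn?",
            "call_to_action": "Find your favourite",
            "title": "Which autumn look is you?",
        },
    }

Note that no field in variant_b changes at random; each one restates the same emotional-question idea, which is what keeps a multi-factor test clean.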
That brings us to your success metric. Ideally, it should map back to the objective of the email or the programme; that's the key factor. Too often when we're testing, we call upon the easy-to-access metrics, opens and clicks, because they're much easier to access than conversions. But in my experience and opinion, the metric really needs to map back to your objective, because if it doesn't, you could end up optimising for the wrong result.

So here is a real-life example. The hypothesis, for a first-purchase programme: a subject line that presents all the savings to be had with Brand X will deliver more conversions than one stating the broad benefits. Okay, so savings versus benefits. And what they found was that the benefits one won, with a 98.36% uplift in conversions and a 99% statistical confidence level, even though the open rates were incredibly similar and not statistically significant (there's a sketch of this kind of significance check below). A lot of people would be tempted to use open rate, because it seems the obvious metric, but in reality, logic and the obvious don't always work with email. I think that's true of most digital channels: we love logic, but it doesn't always work. So here we went with conversion as our success metric, and the conversion rate was statistically significant even though the open rate wasn't, and the click rate had only a weak 80% level of statistical significance. This is an example of why you need to make sure you're using the correct metric when you're testing. I've seen too many examples where people have been optimising for the wrong metric, and therefore they've actually been optimising to lose money.

Now, the actual hypothesis, if I just go back, was that the savings were going to win over the broad benefits, but in fact benefits won. So you put your foot down and said, "I believe that savings is going to win over benefits," but the results proved you wrong. That doesn't mean you have failed: it's still a valid test, you have lots of valid learnings, and your hypothesis was educated, based on everything you'd read and your insights into your audience. But as we said, what's logical doesn't always end up being the case, and that's why testing is so important.

Now, if we add average order value to that same test, because you want to be measuring all of the metrics, this is where it will sometimes give you food for thought. Here we can see that the winning variant was the benefits one and the losing one, the top one, was the savings, but the average order value for the savings variant was higher than for the benefits variant. In my mind, that's food for thought: "Hmm, okay, I wonder why this one still won but has the lower average order value. That's really intriguing. Let's go and create a new hypothesis to test, and find out how we can achieve better results with a higher average order value as well."
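For the statistical confidence levels mentioned above, here is a minimal sketch of the standard two-proportion z-test that this kind of figure is typically derived from; the counts are hypothetical (chosen to mirror a roughly 98% uplift), not the data behind the slide:

    import math

    def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
        """Two-sided z-test for the difference between two rates,
        e.g. conversion rates of a control and a variant."""
        p_a, p_b = successes_a / n_a, successes_b / n_b
        pooled = (successes_a + successes_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        # Two-sided p-value from the standard normal distribution.
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return z, p_value

    # Hypothetical counts: 120 conversions from 10,000 sends (control)
    # versus 238 conversions from 10,000 sends (variant).
    z, p = two_proportion_z_test(120, 10_000, 238, 10_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.01 -> significant at 99%

"Keep the test running until it's statistically significant" then simply means: keep accumulating sends until the p-value for your chosen success metric drops below your threshold.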
Also, be careful with your conversion rate calculations. Often we call upon what Google Analytics does, what the website conversion rate is based on, but understand that websites are pull channels and email is a push channel. The standard website calculation is conversion rate = number of products sold divided by number of sessions, but we don't want to limit the denominator to sessions, because a whole email programme, an automated programme, or an email campaign or a series of them has gone out, and a lot has happened before the session that we need to take into consideration and fold into the results. This is a little bit controversial, and I think it needs to be reviewed on an individual basis. Generally speaking, I would say calculate both, see if they tie up, and if they don't, then go and look at your opens, your clicks, the whole process, and decide which one you think is actually the better measure, because again, you could be optimising for the wrong result.
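A minimal sketch of the two calculations side by side; the figures are hypothetical and just show how much the choice of denominator changes the number:

    # Hypothetical figures for one email campaign.
    orders = 300
    sessions = 2_500           # site sessions attributed to the campaign
    emails_delivered = 50_000  # the push-channel denominator

    per_session = orders / sessions        # pull-channel view: 12.00%
    per_email = orders / emails_delivered  # push-channel view:  0.60%

    print(f"Per session:         {per_session:.2%}")
    print(f"Per delivered email: {per_email:.2%}")
    # Two variants can even rank differently on the two rates, e.g. a variant
    # that drives fewer but better-qualified sessions can lose per email yet
    # win per session -- hence the advice to run both and see if they tie up.

Running both calculations is cheap, and when they disagree, that disagreement is itself a finding worth investigating.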

Here's an example of that: a client's old conversion rate calculation. They were testing different segments, old segments versus new segments, where how they broke up the segments was completely different. So it was a very exciting, very big, very influential test; it was going to influence everything they did moving forward. We tested for a long time and made sure it was right. Under their original conversion calculation, the old segments looked like the winner; under the new conversion calculation, the new segments actually won. So if they'd kept using the old calculation, they'd have stuck with the old segments, and that was going to lose them money.

I'm running out of time, but I really quickly want to give you this. Here are some channels you can share your insights with: those same three hypotheses, the same factors tested, and now we can share them with all of these different channels, plus more. So start thinking about what you can be doing.

Here's a bunch of takeaways: set up a permanent testing stream for your automated programmes; make sure you use a hypothesis; test multiple factors, providing you have a hypothesis; choose the correct metric to measure your success; keep the test running until it's statistically significant; record the results; gain the learnings; and apply them to other channels.

So here's another little preview of the book. Again, I encourage you to pick it up if you haven't. Like I said, there's a whole chapter in there which goes into a lot more detail than I have today. But I hope I've given you enough information to pique your interest and to start questioning what you're currently doing, or maybe what you're currently not doing. Hopefully I've inspired you to start testing, or to rethink how you're testing if you already are. So let's just

Okay, there we go. I don't know that I've got time for questions, I'm really sorry. Yes, there are lots of comments about the book and about templates and slides; I think Andrew is going to be giving those to you. Thank you, thanks for listening. I hope it was helpful.
