Speaker

Tim Watson

Description

This session is about customer-driven testing and split testing, with a focus on how to increase the chances of winning big with split tests. We'll discuss the concept of having an unfair advantage in split testing and how to identify and capitalize on that advantage. We'll also discuss the importance of forming a strong hypothesis before running a split test, and how this can lead to better, more accurate results.

Video URL

https://vimeo.com/771738355

Transcript

Tim Watson 0:42
All right, I think we're going to be good to start now. So let me start by saying thank you very much for joining the session; I hope you are all safe. I'm going to be talking about customer-driven testing, and I'm going to share with you the story of a split test that resulted in a revenue win of $288,000. But just before we get to that, let me say a few words about myself. My name is Tim Watson. I'm an email marketing consultant; I've been 100% focused on email marketing for over 15 years, and I help brands increase the value email delivers for them, typically working with companies sending anywhere from 100,000 to 90 million emails a month. I'm pretty lucky to work with email in lots of different situations, and that gives me some advantages and a deeper understanding of what goes on in email and split testing, which is today's topic.

All right. Because this session is all about creating wins, let me ask: who likes gambling? If you like gambling, by all means say so in the chat window now, and say so too if you don't like gambling. I'm sure some of you do like gambling, and certainly there's a great thrill when you win and get lucky with it. Personally, I mostly don't gamble. Let me explain what I mean by "mostly don't gamble". One time when I was in Vegas, I went into a casino, and in the foyer there was a nice big shiny slot machine. That slot machine had a sign on it which said "90% payout". This was quite funny to me: the key selling feature of that slot machine was, seemingly, that you get back 90% of the money you put in, which didn't really seem a great offer to me. But my point is not that you shouldn't play slot machines; by all means go ahead and play them if you enjoy that. My point is that casinos have an unfair advantage. You just can't win in the long term, because the odds are stacked in the casino's favour. So when I said I mostly don't gamble, what I meant is that I only gamble when I have the unfair advantage. I won't bet in a casino, but I'll certainly take other bets in life when I have the unfair advantage.

So why am I talking about gambling? Well, I think split testing is a lot like gambling. You're placing your A/B test bet and trying to work out what's going to win. And it is a bet, because we don't know the outcome: there's certainly a chance we can win, but there's no certainty. That's why it's a bet, a split-test bet. The critical part of split testing is deciding where to place your bet, and this is often not really thought about substantially. What are you going to do that's going to result in a win? Where are you going to place your bet? Placing many bets is certainly one good way to increase your chances of winning big: if you're typical and you do ten tests, then on three you'll get a reasonable win, on three the same result, on three the treatment may be worse, and you'll get one big win in ten. But real-world limitations mean you can't test everything. There are many possible bets you can place in your tests, so which one should you pick? Can we be smarter with our split tests? Can we increase our odds of winning and get the unfair advantage? I believe we can. I believe we can get an unfair advantage when we run our split tests and improve our results. Of course, one way is to start with a good hypothesis. You need clarity on why you're placing this bet.

A good hypothesis helps you decide what to test. It helps the learning, because you've got context: when you do get your test results, you've got the context of what happened and why you did it, so you can better learn from the results. A good hypothesis is made up of three parts: the issue, the perceived problem that needs to be resolved; the solution, the method of resolving it; and the expected outcome. Typically something's going to go up or something's going to go down: you're going to get more clicks, more revenue, more enquiries, or fewer cart abandons. So we have to work out what our issue is, what the solution is, and what our expected outcome is. As an example, the perceived problem could be that people don't trust product claims, in which case our solution could be to add third-party endorsements and ratings. That would be a sensible kind of issue-solution hypothesis.
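As a minimal sketch, not from the talk itself, the three-part hypothesis could be written down as a structured record before a test runs; the class and field names here are illustrative (Python):

    from dataclasses import dataclass

    @dataclass
    class SplitTestHypothesis:
        """The three parts of a split-test hypothesis described above."""
        issue: str             # the perceived problem that needs to be resolved
        solution: str          # the method of resolving it
        expected_outcome: str  # what should move, and in which direction

    # The trust example from the talk, expressed in this structure:
    trust_test = SplitTestHypothesis(
        issue="People don't trust our product claims",
        solution="Add third-party endorsements and ratings",
        expected_outcome="Conversion rate on the product page goes up",
    )

Writing the hypothesis down like this keeps the context attached to the result, so the learning survives after the test finishes.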
But it's not enough just to have a good hypothesis. What we're trying to do, to get an unfair advantage, is get better at guessing the right issue to solve. If we can get better at understanding perceived problems, we get an unfair advantage in our split tests and we're more likely to have winning outcomes. So how can we get better at understanding perceived problems? Could we use previous winners? I've very often seen people base a test on what worked for other people. Here are some tests that worked for other people, taken from published test results: emphasise third-party product accreditations; don't have multiple calls to action; include customer testimonials; use trust indicators; use a product wizard to help with selection; show low-stock warnings and prices ending in seven; use comparison grids for products with a recommended choice; use social proof; have a stronger benefit-based subject line; increase product image sizes; show products in context; reduce page length; increase page length. These are all things that have worked for other people, so is copying them going to make us smarter? Difficult. And what about those last two? Reduce page length is a winning result for some people, and increase page length is a winning result as well. Totally opposite, yet both are winners. How is that happening?

Well, copying a test from someone else is a bit like watching the roulette table next to yours, seeing someone win on 14 red, then betting on that. If you've ever tried running a test that someone else ran, one that gave them a great uplift, you'll probably have been disappointed. I've certainly tried it, and I've been disappointed. None of the ideas on the previous slide were stupid tests; they were very sensible things to do, and very good for inspiration. But the problem is that their roulette table is not yours: your brand, your audience, your challenge are very different. Simply repeating someone else's test is not going to give you an unfair advantage.

So how do we get the unfair advantage? That's easy. We get the unfair advantage by asking the croupier to tilt the wheel. If we get the croupier to play for us and tilt the wheel, we get the unfair help that we're looking for. So what does that mean in marketing terms? In marketing, it means we should ask our customers and our prospects what would tilt the wheel for them. What should we do to remove the blockages they experience to buying?

I'm talking about doing research, research into your issues before deciding what to test. We're pretty lucky in digital marketing: we have lots of different ways we can engage in conversation with our customers and our prospects. We can do page polls, email surveys, and lots of other methods. What we need to do is research, and ask customers what the issues are that stopped them buying.

We need to ask the right sort of questions. If we're going to talk to customers and find out what the issues are that we need to solve and split test, then we need to ask the right questions of our customers and our prospects. The first really important thing is asking open questions, not closed questions. Closed questions have a yes/no answer or a limited set of answers; open questions have no predefined answer. As an example, a closed question, if you're selling fruit, is "Do you like apples or oranges?" You totally miss the possible answer of "I like cherries." Because it's a closed question, you never uncover that the reason people are not buying is that you don't offer cherries of the right type. A much better question to ask is an open question: "What fruits do you like?" That allows customers to tell us the things we hadn't even thought of, and helps us reveal the issues at the heart of what's stopping people going further in their journey with us. The reality is, the only people who really know why they didn't buy, or didn't take some other sort of action, are our customers and our prospects. We need to ask open questions to get them to share that with us.

The sorts of questions I'm talking about are things like these; they really drive to the root issue of why people didn't proceed. "What frustrated you the most about trying to do business with us?" That could result in a thousand different answers, but if you ask that question, you'll actually find there's a small subset of answers, and suddenly you start to understand where your problems lie. Or: "What did you do instead of buying from us? What was your complete decision-making process?" That reveals the path of thought they went through, so we can work out what we should test to make that path more likely to end with us. "What information did you want that we can provide?" Maybe people didn't go further because we didn't tell them what they needed to know; let's ask them what it was they wanted to know that stopped them buying. These would be great questions for prospects. "How can we make this page better?" "What are the top three reasons you didn't proceed?" "If you went to a competitor, what did they offer that was better?" Great question: if people bought from a competitor, why not just ask them why they went there? Maybe there's something we need to test on our site that's going to get them back to us. I'm not saying these are the very questions you should ask, but hopefully this sets the scene for the types of open questions that lead you to understanding the issues and give you the unfair advantage when you run your split tests.

If you run any sort of page polls and email surveys, you should typically get somewhere between a 2% to 5% response rate, if you do those surveys right. So you only need to send out a few thousand emails per survey, say 1,000 to 3,000, and you're probably going to get enough responses to start to understand where your issues lie.
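A rough sketch of that send-size arithmetic (the 2%–5% response rates are from the talk; the target of 60 responses is an assumption chosen for illustration):

    # Send-size arithmetic for a research survey, using the 2%-5% response
    # rates mentioned above. The target of 60 responses is an assumption.
    target_responses = 60

    for response_rate in (0.02, 0.05):
        sends_needed = target_responses / response_rate
        print(f"At {response_rate:.0%} response, send about {sends_needed:,.0f} emails")

    # At 2% response, send about 3,000 emails
    # At 5% response, send about 1,200 emails

Which lines up with the "few thousand emails per survey" figure above.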
All right, so let me go back to the $288,000. This all started with a company I work with that frames news articles. This is for people who have been featured in the news and want to display their press, and this is an example of one of their landing pages. The image in the top left is a personalised image, specific to a particular person; it shows their news framed and presented nicely, so they can decide if they want to go ahead and buy it. We ran a research phase, a phase to find out what was stopping people converting, and we asked the question: "If you decided not to order, can you tell us what stopped you?" Obviously, with email addresses and page polls we can reach out to people who we know started to engage and then stopped engaging; they were prospects, and they didn't actually proceed. So we reached out to those people and asked that question, and the answers came back. One of the fairly common answers was that people found the preview image of what their framed news would look like was just too small.

They were kind of not sure if it was really what they wanted, because the preview was too small. In fact, here is one very specific, verbatim answer to that question: "We'd like to see a preview; can't decide if it will look like too much information. Price is pretty high, so I want to make sure I'm happy before ordering." The person didn't buy because they weren't entirely happy with their understanding of what the product was. With that insight into what the issue was, the split test became quite easy: let's split test adding a zoom feature to the landing page, so people can zoom in on the product. We increased the resolution of the images used on the landing page for the email campaign, and we added some zoom buttons so people could zoom in and pan around, really feel and understand the product in the digital world, and make sure they were comfortable before they decided to buy.

The result, tracked using Google Optimize: conversion rate went from 1.82% to 2.02%, with a probability to be best of over 99%. A really solid result, and we ran it over quite a few sessions to prove it. That conversion rate increase equated to an extra $288,000 of revenue per year. Maybe you're thinking that after all this research, we just got lucky with the test anyway. I can certainly tell you that not only did we do that test based on the research, but we made some other changes to pages and email campaigns based on the research, and we got some more wins on top of that. The research really gave us the unfair advantage when deciding what to test.

It's pretty much the Abraham Lincoln version of split testing, the classic quote: "Give me six hours to chop down a tree and I'll spend the first four hours sharpening the axe." Most people spend 20% of their time considering the issue to test and 80% of their time doing the actual testing. It should be totally the other way round: 80% research into what the issues are, reaching out and asking prospects and customers what's stopping them converting, then understanding what to test. That gives you the unfair advantage, and it means you're going to get better wins and bigger wins when you do the testing. 80% research and 20% testing.

All right, just a last quick note: remember to check it wasn't luck. If you're doing testing, pay attention to your statistics. I know it's a horrible subject, but use a sample size calculator and a statistical significance calculator, and make sure that your sample sizes are big enough and that you've got solid results. When you're playing dice, if you roll a four you don't immediately conclude the die is biased to throw fours; use the same principle with testing. You need enough people, enough samples, to get solid results.
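As a minimal sketch of that kind of significance check: the talk's test was tracked in Google Optimize, so the two-proportion z-test below is a standard frequentist stand-in rather than the exact method used, and the visitor counts per arm are assumptions chosen to reproduce the 1.82% and 2.02% rates.

    # Two-proportion z-test: did the variant really beat control, or was it luck?
    # Visitor counts below are illustrative assumptions, not figures from the talk.
    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(conversions_a, n_a, conversions_b, n_b):
        """Return the z statistic and two-sided p-value for two conversion rates."""
        p_a, p_b = conversions_a / n_a, conversions_b / n_b
        p_pool = (conversions_a + conversions_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Assumed 100,000 visitors per arm, reproducing 1.82% vs 2.02%:
    z, p = two_proportion_z(1_820, 100_000, 2_020, 100_000)
    print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 3.26, p ≈ 0.0011

A p-value that small is consistent with the "over 99% probability to be best" reported above; with much smaller samples, the same observed rates would not be significant, which is exactly why the sample size check matters.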
So thank you so much for listening. I hope you found it a useful 20 minutes, and I hope you get the unfair advantage. If you've got any questions, by all means drop them into the chat window, or reach out to me, Tim Watson at Zettasphere. I wish you well with the rest of the conference. Thank you so much.