Lesson 6: Test and track

Test and track your campaign performance

Overview

How do you prove that personalization actually impacts your goals? This lesson shows you how to test your campaign for quality assurance to make sure nothing is broken, and then how to run an A/B test natively in Experiences to measure the performance of your personalization campaign.

Transcript

Hey, I'm Austin Distel, and in this lesson, I'm going to share how to test your campaign and track for performance. Let's do it.

So before we get this campaign live, we want to do some quality assurance to make sure everything is working properly.

So what I'll do is go through each one of these and preview it, and then I'll click on the links that I changed such as this one here, and ooh, it looks like something is broken. So I need to go to the editor and see why that is.

I'll make sure I'm on the right audience, and then I'll click on my experience. It was opening a broken tab because I didn't type in the full URL, so I needed to enter "https://useproof.com", and I'll do that for this one too. Save and close. Now I'm going to preview it again, and now it's working.

So this is the kind of stuff that you want to make sure you check for before launching your campaign. You can also use the event debugger here in your settings to make sure that all of your events and audiences are matching.

So let's go to one of our pages like useproof.com/pulse. Because I'm loading a page, now I'm able to put myself into an audience. I'm going to refresh this and see if it came through.

All right, so the "just visited Pulse" event came through. Now I can look at the identity and see who this person was. It was me at Proof. I can see some of the traits attached to my identity, and I can even see which experiments I'm a part of and whether I'm in the test or the control.

So that's how we know everything is working. The identities are matching, the events are firing. Now, let's go back to our campaign, and what we would do from here is we would split our traffic 50/50, and press "Save," and publish. This would make the campaign live.

Now, in conversion rate optimization, there are two kinds of tests that are often run.

You have the baseline test, which looks at metrics before and after something was implemented. So I could say, "Well, last month we didn't have Proof Experiences and our conversion rate was 10%. Now, with Proof Experiences, our conversion rate is 20%."

You can probably see some flaws there, right? A lot of things can happen in the meantime, so we can't directly tie the full result, the impact, to just one thing.

So that's why an A/B test, or split test, is better. This is where you run both the control and the variant. For example, the page without Proof would be the control and the page with Proof would be the variant. You run them simultaneously, splitting the audience in half, or you could change the split to 80/20 or something like that.

So there are two reasons why you want an A/B test. First off, you want to make sure that, in case things don't go according to plan, you have limited your impact radius on that metric.

You don't want to put all of your eggs in this basket in case something breaks, or maybe your hypothesis was incorrect, and you need to go back to the drawing board and try a new campaign or a new experiment.

The second reason you want an A/B test is that you want to see clearly and confidently that it was only these variables that impacted the performance of the campaign.
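To make the idea of a traffic split concrete, here's a minimal sketch (in Python, not Proof's actual implementation) of how a deterministic 50/50 or 80/20 assignment is commonly done: hash the visitor ID so the same visitor always lands in the same group.

```python
import hashlib

def assign_group(visitor_id: str, variant_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'control' or 'variant'.

    Hashing the visitor ID means the same visitor always sees the same
    version, and the split converges to variant_share across traffic.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map the hash to [0, 1)
    return "variant" if bucket < variant_share else "control"

print(assign_group("visitor-123"))       # 50/50 split
print(assign_group("visitor-123", 0.8))  # 80/20 split: 80% see the variant
```

Because the assignment is a pure function of the visitor ID, the same visitor can be recomputed into the same group anywhere, without storing which group they were put in.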

So obviously, this campaign is not yet live. Let's go ahead and look at another campaign that has been published, and we can start to see the performance metrics come in.

This is our Pulse Clearbit Experiment. Basically, the idea of the campaign is that we have multiple industries that could use our other product, Pulse, and so we wanted to see how personalizing for industry using Clearbit Reveal would be able to impact the performance of the campaign.

So on this page, I can actually look at the goals that we have and see the percentage lift per goal, and also by page and by audience. So I can see how the SaaS audience is doing, how the e-commerce audience is doing.

Overall, this campaign has increased new trials by 12.31% at 70% stat sig. Stat sig, or statistical significance, is basically your confidence that this number is real.

So we're 70% confident that personalization has increased our number of trials by 12.31%. The gold standard is 95% stat sig, but this all depends on your company's risk tolerance.

Some companies need to be 99% sure every time that a campaign is working properly. Others are fine calling the A/B test done at 70%. So it's all up to you. At Proof, we use 95% stat sig.
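To show roughly what's behind that confidence number, here's a minimal sketch of a one-sided two-proportion z-test. The visitor and conversion counts are made up for illustration; this is not Proof's actual calculation.

```python
from math import erf, sqrt

def lift_and_confidence(control_visitors, control_conversions,
                        variant_visitors, variant_conversions):
    """Relative lift and one-sided confidence that the variant converts better."""
    p_c = control_conversions / control_visitors
    p_v = variant_conversions / variant_visitors
    lift = (p_v - p_c) / p_c
    # Pooled standard error under the null hypothesis of "no difference".
    p = (control_conversions + variant_conversions) / (control_visitors + variant_visitors)
    se = sqrt(p * (1 - p) * (1 / control_visitors + 1 / variant_visitors))
    z = (p_v - p_c) / se
    confidence = 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF at z
    return lift, confidence

# Hypothetical counts: 1,000 visitors per group, 65 vs. 73 trial starts
lift, conf = lift_and_confidence(1_000, 65, 1_000, 73)
print(f"lift = {lift:.2%}, confidence = {conf:.0%}")  # lift = 12.31%, confidence = 76%
```

In this made-up example, a 12.31% lift measured on only a thousand visitors per group reaches about 76% confidence, which is why larger samples and longer runs are needed to get to 95%.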

We also run our campaigns for at least 14 days, two weeks, because that gives any unevenness between days of the week, and any other externalities that happen during the campaign, time to level out.

Now we can be sure that over the course of two weeks, we had enough data points to feel confident in that 12.31% lift.

So that's how you test your campaigns and track performance. My name is Austin Distel, and I'm very grateful that you watched all the way to the very end and put up with my awful jokes.

I encourage you to chat with us in the bottom right-hand corner. We can answer any questions, schedule a demo, and I'd love to see how personalization can help you hit your company's goals.

You're also welcome to reach out to me on social media or even email me directly at austin@useproof.com. I would love to vet your personalization ideas, and if you have a campaign that does hit 95% stat sig, I'd love to feature you as one of our successful case studies.

Thank you for watching. Stay safe, and together, let's make the internet delightfully human. Peace.

Resources

Ready to give personalization a try?

Request a free trial