
How To Use Testing To Create a Competitive Advantage


Are you motivating your audience to take action? How do you know which creative assets compel your audience to act?

Understanding how your audience interacts with your creative is crucial to generating leads, making sales, and keeping your readers coming back. But knowing which creative performs better is not all about numbers. It’s an art and a science. It requires trial and error and lots of creative juice.

One of the best ways to approach these motivations is through A/B testing — comparing the performance of one creative against another.

This can be anything from calls to action, button color, images, tone … you name it, you can test it. Today, I’m going to focus on the power of CTAs and matching content to intent. Then, I’ll dig into the overall testing process — while testing can be intimidating, I have some tips for success.

Is one CTA better than another?

Can one CTA rule them all? Probably not, because motivation is not the same for everybody.

What motivates you to listen to a podcast? Is the host or guest someone you’ve heard of? Did someone recommend it to you?

Why did you provide your email to download that e-book? Did you want to get a statistic for your proposal? Does it tackle a topic that you know little about?

Motivations are many, but here’s the universal secret: The cost must be less than the benefit.

In content and marketing, forms and other conversion points (where people act) are the best places to test the audience’s motivations. Some in the industry call this conversion rate optimization (CRO) — a way to understand and improve the conversion rate for subscriptions, downloads, follows, or anything else you can measure as a rate: the number of people who took an action divided by the number who could have taken it (page views, clicks, emails sent).
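To make that math concrete, here’s a minimal sketch of the calculation in Python. The download and page-view figures are hypothetical, not from any real campaign:

```python
# Minimal sketch: a conversion rate is the number of people who acted
# divided by the number who could have acted. Example numbers are hypothetical.

def conversion_rate(actions: int, opportunities: int) -> float:
    """Conversions as a share of everyone who could have converted."""
    return actions / opportunities if opportunities else 0.0

# e.g., 180 e-book downloads out of 6,000 landing-page views
print(f"{conversion_rate(180, 6_000):.2%}")  # 3.00%
```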

Finance can be scary – but Chime taps into user motivation

The financial services industry constantly tests all its content, landing pages, and ads because it knows motivation can change dramatically depending on a person’s financial situation. I have a few examples of how Chime approached testing for its debit card and savings account features.

They know that some people are looking for a free product that doesn’t have monthly fees. That’s who this ad targets, using the copy “no monthly fees” above the simple form requesting an email address.

However, they have also built a brand as a “popular” choice for banking — and here they use the power of peer pressure with the copy, “Join the millions using Chime,” above the email entry box.


You can start to see how there can be 100 variants of copy and imagery, each appealing to a different segment of the audience.

Chime leans into each of these concepts, with each ad matching a corresponding themed landing page. Here’s the landing page for the “Join millions using Chime” ad:


The ad’s landing page uses the same phrasing and image followed by more explanation: “Your account shouldn’t cost you money — no overdraft fees, no minimum balance, no monthly service fees, no foreign transaction fees, no transfer fees.” Below that, they show features of the Chime checking account and Visa debit card.

Chime has tested dozens of these variants — a recent AI-driven test even found different versions worked for different people at different times of day in different geographies. Isn’t that wild? Let’s take a look:

The first three versions include the same image — a picture of a person’s hand holding a Chime debit card on top of a restaurant bill.

In this home page version, the headline says, “Banking made awesome,” with the subtitle “Make the switch to the bank account that helps you save money automatically.” This version won in a straight A/B test.


But in this version, the headline reads: “Meet the bank account that has your back,” followed by the same subtitle as the first version. It earned a 30% performance lift over the best overall performer among people viewing it between 5 and 7 p.m.


The third version uses a different headline, “Save effortlessly and automatically,” and the subtitle, “Saving money is hard. Chime makes it easy.” This one worked best for people on mobile devices (a 23% lift in account openings compared to the overall best performer).


The final version includes a visual of the Chime app on two mobile phones with the headline to the left, “Banking that has your back,” and the subhead, “Signing up takes less than five minutes.” This one worked best for people in Illinois.


These examples illustrate that your audience doesn’t live in a bubble and is not all motivated by the same things. But you need good ideas before you can test.

How do you get ideas? The first step is to go out into the world and look at the work other people are doing! Jot down what you find or save screenshots in a folder. Bring those examples up in a brainstorming session.

What should you not do? Don’t assume what worked for someone else will work for you. It might, but you won’t know until you test it. If it doesn’t, move on. Remember, there are no bad ideas in brainstorming. The only way you can be creative and come up with great ideas to test is to stop saying, “That won’t work.” Instead, give the team a chance to build on fun ideas and then figure out how it could work with certain changes.

All testing starts with data

The best testing programs start with data. What data do you see every day? How many subscribers did you get? How many clicked? How many people came to the website?

In Google’s Universal Analytics days, the device category report was a favorite of mine because it showed observable differences between the performance of desktop and mobile users.


As this report illustrates, mobile users converted at about half the rate of desktop users. Why? Is something broken, or is there something about how we talk to mobile users that doesn’t resonate? Should we just give up on trying to convert mobile users and stop paying for mobile ad traffic?
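Before brainstorming explanations, it’s worth confirming that a gap like this isn’t just noise. Here’s a rough sketch of a two-proportion z-test in Python; the desktop and mobile traffic numbers are invented for illustration, not pulled from the report above:

```python
# Minimal sketch: is the mobile vs. desktop conversion gap statistically real,
# or could it be random variation? Traffic and conversion counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical month of traffic: desktop converts at ~4%, mobile at ~2%
p_value = two_proportion_p_value(conv_a=400, n_a=10_000, conv_b=200, n_b=10_000)
print(f"p-value: {p_value:.4f}")  # far below 0.05, so the gap is worth acting on
```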

HOT TAKE: If you don’t look at the difference between desktop and mobile users, you are failing as a marketer. People on their phones are usually multitasking and see a much smaller screen. They are in a different headspace researching a product, going through a sales funnel, or dealing with a shopping cart than a desktop visitor. The latter has a bigger screen (and maybe more than one) and is less likely to be distracted.

Ask why, then test your hypotheses

Your team should brainstorm why there’s such a difference between mobile and desktop users. Next, take your observation and add some test ideas. For example:

  • Could we nudge mobile users toward a higher purchase?
  • Could we suggest things for them to add to their cart?
  • Could we reframe our benefits?

Remember those Chime examples? The ads that performed well for a mobile audience focused on easy and fast banking. The results make sense because when someone’s on their phone, they’re distracted, they’re on a train, etc.

Again, it’s all about what benefit(s) motivate your audience to act.

Once you have some ideas based on your observations and research, formulate your testing hypothesis around why something would work, such as:

This [change in approach] will result in [change in key metric(s)] because [theory as to why the change will have the specific effect].

A well-crafted hypothesis can teach you something even if it doesn’t prove true. You’ve still learned something about your audience’s motivations. You can cross that idea off the list and move on to another idea about what might motivate them more.

A hypothesis also helps you stay focused. So many times, bosses say things like, “I want to change these 18 things. Let’s run it as a test, and let me know how it goes by the end of the week.”

Tell them, “No. If you change 18 things and the performance stays the same, we’ve learned nothing. Did one thing have a positive impact and another a negative impact, so they canceled each other out? That’s a lot of wasted time and energy.”

Consider this sample hypothesis:

Including customer reviews on our landing page [change] will increase conversions [metric] because potential customers will feel that their decision to purchase has been validated by their peers [motivation-based theory].

Simple, right?

How to run organized tests

To test that hypothesis, you would add customer reviews to the page and test their impact.
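In practice, that means splitting traffic so some visitors see the reviews and some don’t. Here’s a bare-bones sketch of one common way to do the split; the test name and the 50/50 allocation are illustrative assumptions, not a specific tool’s API:

```python
# Minimal sketch: assign each visitor to "control" (no reviews) or "variant"
# (reviews shown) deterministically, so the same person always sees the same version.
import hashlib

def assign_variant(visitor_id: str, test_name: str = "reviews-on-landing-page") -> str:
    """Hash the visitor and test name into a stable 50/50 bucket."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # 0-99
    return "variant" if bucket < 50 else "control"

print(assign_variant("visitor-12345"))  # same answer on every visit
```

Hashing on the visitor ID keeps the comparison clean: each person sees the same version every time, so differences in the metric come from the change, not from people bouncing between versions.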

Before you start the test, set yourself up for success. Every test should have:

  • A unique name that makes clear what you tested
  • A judgment on the difficulty of implementing the changes (to help you prioritize)
  • Primary metric(s) for success, such as click-through rate, conversion rate, or another performance measure

Spreadsheets work well for organizing your test. This example shows how I lay out a testing spreadsheet. (You can download the template from my website). 


The columns, along with some sample entries, include:

  • Test name — competitive rate comparison, headline order, interactive elements, etc.
  • Barrier to overcome — aggressive competition, make high interest rate more attractive, etc.
  • Test description — emphasize a competitive comparison, reorder the headline to show rate first, etc.
  • Hypothesis — by emphasizing how competitive our rate is in the market, more users will view us as the most attractive option, improving applications and deposits
  • Audience/channel — all landing page visitors, display ads, search, etc.
  • Technical effort — low, medium, high
  • Expected impact — low, medium, high
  • Learning priority — low, medium, high
  • Total score — expected impact plus learning priority minus technical effort (the effort-benefit score)

The three columns before the total score are used to calculate an effort-benefit score. Here’s how to calculate it:

Expected impact + Learning priority – Technical effort = Effort-benefit score

I give a high-impact test a rating of three. A low-impact test gets one (or a zero when I don’t think it will have any impact, but my boss wants to run it anyway). A medium impact gets a two. Here’s what it looks like with numbers:

High Impact (3) + Medium Priority (2) – Medium Effort (2) = Effort-benefit score (3)

Sort the spreadsheet by the effort-benefit score to help with testing prioritization.
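If you would rather script it than spreadsheet it, the same prioritization logic takes only a few lines. In this sketch, the test names and ratings are placeholders; the scoring mirrors the formula above:

```python
# Minimal sketch: score each test idea (expected impact + learning priority
# - technical effort), then sort the backlog by that effort-benefit score.
# Test names and ratings below are placeholders, not a real backlog.

SCORE = {"low": 1, "medium": 2, "high": 3}  # 0 is reserved for "no impact expected"

ideas = [
    {"name": "competitive rate comparison", "impact": "high",   "priority": "medium", "effort": "medium"},
    {"name": "headline order",              "impact": "medium", "priority": "high",   "effort": "low"},
    {"name": "interactive elements",        "impact": "medium", "priority": "low",    "effort": "high"},
]

for idea in ideas:
    idea["score"] = SCORE[idea["impact"]] + SCORE[idea["priority"]] - SCORE[idea["effort"]]

for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(f'{idea["score"]:>2}  {idea["name"]}')
```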

Testing tips

Consider these three suggestions to help your testing run more smoothly:

  • Run only one test at a time. If you run multiple tests at the same time, you won’t have any idea which change created the impact.
  • Sample size calculators, such as this one from Optimizely, can help you figure out how much traffic you need (and at what conversion rate) before you can call a test’s results significant; a rough version of that math is sketched after this list.
  • Don’t frequently check the numbers. I ran a test that had a clear winner early on. But two weeks later, the results were the opposite. So, let your test run for at least two weeks.
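If you’re curious what a sample size calculator is doing behind the scenes, here’s a rough sketch using the standard two-proportion formula at 95% confidence and 80% power. The baseline conversion rate and the lift you hope to detect are assumptions you supply:

```python
# Minimal sketch: visitors needed per variant to detect a given relative lift
# over a baseline conversion rate (95% confidence, 80% power by default).
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline: float, lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in each variant to detect a relative lift over the baseline."""
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g., a 3% baseline conversion rate and a hoped-for 20% relative lift
print(sample_size_per_variant(0.03, 0.20))  # ~13,911 visitors per variant
```

Small lifts on low baseline rates demand a lot of traffic, which is exactly why letting the test run its full course matters.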

Bringing it together

Remember, you need ideas to test. Look out in the world for inspiration, gather examples, and take your screenshots. Next, do some research to help you understand what motivates your audience. Prioritize your ideas. Write a hypothesis about what would make a difference. Test that hypothesis. Evaluate it based on your success metric. Which variant did better? Validate your hypothesis or toss it out and try another one.

And then do it all again.

Register to attend Content Marketing World in San Diego. Use the code BLOG100 to save $100. Can’t attend in person this year? Check out the Digital Pass for access to on-demand session recordings from the live event through the end of the year.

Cover image by Joseph Kalinowski/Content Marketing Institute
