Artificial intelligence (AI) ad testing solutions are attracting more attention, and budget, than ever before. Our research tells us that 49% of global marketers plan to spend more on AI creative testing next year. And with good reason: promising quick answers and lower costs than survey-based approaches, the ability to test more content more quickly has clear appeal. But where should AI solutions sit alongside survey-based testing? What are the strengths of each? And when should you use one or the other? These are questions marketers need to grapple with to ensure the right product is used in each situation and, ultimately, to get the best return on ad spend.
Creative quality is vital to the success of ad campaigns – probably more so than you think. Kantar data shows that creative quality is the second most important factor in profitable advertising, whereas marketers rank it only fourth. We also know that great creative drives both brand equity and sales. So the right creative approach, coupled with getting the executional details right, adds up to memorable elements that form the basis of winning campaigns.
AI tools can augment the creative testing fundamentals
From establishing your strategy and kicking off ideation, through to executing your campaign and optimising in-flight, getting creative right is a journey, with testing and learning at every stage. The process can take months or days, depending on whether you are devising a landmark new campaign or swapping assets in a digital one, but the fundamental process remains the same.
Add to the mix that things are constantly changing, which means standing still isn't an option. New platforms are emerging all the time, along with new digital formats to explore. We have heard a great deal this year about the potential of the metaverse, with some brands blazing a trail in finding a role they can play in these virtual worlds. This is something we expect more brands to explore in 2022 and beyond.
Additionally, even traditionally offline channels are moving online with the unstoppable rise of VOD viewing, programmatic audio, and innovation within digital out of home (DOOH). In DOOH, we are seeing brands get genuinely creative, using dynamic creative that changes according to weather conditions, time of day, traffic levels or location, alongside advanced screen technologies, to deliver engaging experiences. All of this requires insight at each stage to understand where the creative is delivering and where it could be improved.
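To make dynamic creative more concrete, here is a minimal, purely illustrative sketch of how a DOOH player might choose which asset to serve based on context signals. The rules, asset names and context fields are hypothetical assumptions for illustration, not a description of any specific platform or campaign.

```python
from dataclasses import dataclass

@dataclass
class ScreenContext:
    """Illustrative context signals a DOOH screen might receive (hypothetical)."""
    weather: str        # e.g. "rain", "sun"
    hour: int           # local hour, 0-23
    traffic_level: str  # e.g. "heavy", "light"

def pick_creative(ctx: ScreenContext) -> str:
    """Choose a (hypothetical) creative asset from simple context rules."""
    if ctx.weather == "rain":
        return "umbrella_promo_v2"           # weather-triggered variant
    if 7 <= ctx.hour < 10 and ctx.traffic_level == "heavy":
        return "commuter_coffee_offer"       # morning rush-hour variant
    if ctx.hour >= 18:
        return "evening_brand_film_cutdown"  # early-evening brand spot
    return "default_brand_creative"          # fallback asset

# Example: a rainy morning with heavy traffic serves the umbrella variant.
print(pick_creative(ScreenContext(weather="rain", hour=8, traffic_level="heavy")))
```

Even a simple rule set like this multiplies the number of creative variants in market, which is exactly why insight is needed at each stage to know which variants are working.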
AI allows for more iterative testing throughout the campaign lifecycle
With all of this change, marketers and agencies need to test and learn constantly, and each business and campaign requires a framework to support it. Part of that framework is understanding the right testing approach for each stage of a campaign.
At Kantar, we see the role of AI-powered creative testing as a predictive tool. It can give you a go/no-go result on a creative approach, and its lower costs open up the possibility of testing competitors' advertising – something that has largely been cost-prohibitive until now. We see this as a primary use case for AI solutions. But the quality of any AI tool's output depends on the data that feeds it and the assumptions that go into generating those speedy, indicative results.
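As a simple illustration of what a predictive go/no-go decision could look like, the sketch below compares a predicted effectiveness score against a category benchmark with a small margin. The score, benchmark, margin and function name are assumptions for illustration only and do not represent Kantar's actual models or metrics.

```python
def go_no_go(predicted_score: float, category_benchmark: float,
             margin: float = 0.05) -> str:
    """Return a go/no-go call by comparing a predicted effectiveness
    score with a category benchmark (all values hypothetical)."""
    if predicted_score >= category_benchmark * (1 + margin):
        return "go"       # comfortably above benchmark
    if predicted_score < category_benchmark * (1 - margin):
        return "no-go"    # clearly below benchmark
    return "borderline: consider survey-based diagnostics"

# Example: a predicted score of 0.62 against a benchmark of 0.55 -> "go".
print(go_no_go(0.62, 0.55))
```

Notably, the borderline case is exactly where richer, survey-based diagnostics earn their keep – which is the hybrid approach discussed below.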
AI and survey-based ad testing work hand in hand
But it's not all about AI. Advances in AI simply reinforce why and when marketers should use survey-based testing. Survey-based research will always be needed to provide the foundational data an AI tool draws on for insights relevant to the task at hand, whether that's exploring new creative routes or evaluating an entire launch campaign. This foundational data is crucial, because an AI tool is only as good as the data sitting behind it. Here, both scale and quality matter: the data feeding Kantar's AI tools comes from a database of 230,000 real-world ad tests.
When launching a new campaign, whether on TV or in digital, granular insights are important because the details matter. At Kantar we recommend testing a TV execution three times, whether it's for a new product, a new campaign, a new creative theme, or even a cut-down: first at an early stage, again after key edits and tweaks based on the findings, and finally at near-final stage to refine. This maximises your chances of success. Only survey-based pre-testing gives you this granular, second-by-second understanding and optimisation, along with insight into key themes and deep dives into specific requirements such as celebrity usage, music, I&D, and so on.
To facilitate – and accelerate – more iterative testing, we recently launched Link AI for Digital on Kantar Marketplace, which offers creative effectiveness predictions for digital video ads in as few as 15 minutes, assessing them against the behavioural and creative metrics that drive ad performance. It gives marketers the ability to predict the performance of digital advertising before it goes into market, evaluate different versions of an ad, test competitors' creative, and test high volumes of ads to identify trends and build creative benchmarks. Link AI for Digital is part of a suite of AI-based capabilities on Kantar Marketplace, which also includes Link AI for TV. Clients like Google and Unilever are already using these tools to predict how audiences will respond to their ads.
The future that we see is not one where AI supersedes survey-based testing. We see them both working together, in packages, for specific use cases. This hybrid approach will enable suitable pre-testing frameworks for any client who wants to maximise the return on their creative.