Optimisation and experimentation are key to marketing success, and AI is here to make them smarter. In this video, Claire Elsworth and Liam Wade discuss how AI is transforming the way we test and refine our strategies, giving marketers more power and control over their campaigns.
What you’ll learn:
- AI’s Role in Optimisation: Understand how AI takes the pressure off repetitive tasks, freeing up human specialists to focus on strategic thinking and creative problem-solving.
- Beyond the “Walled Garden”: Discover how AI helps marketers deploy media budgets more effectively and conduct experiments across multiple channels, not just within a single platform.
- The New Era of Planning: Find out why a reactive approach to media planning is becoming a thing of the past. Learn how AI agents can create proactive plans by analysing diverse data sources, from keyword search volumes to weather patterns.
- Avoiding Conflicting Experiments: See how AI can help you manage complex experimentation plans, phasing tests correctly so they don’t conflict with one another and deliver the most robust and credible results.
Transcript
Liam Wade: It would be good to talk a bit about how AI helps us speed up decision-making and optimisation, particularly when we’re looking across multiple channels. I know we’ve been doing that a little bit already, haven’t we?
Claire Elsworth: Yes, I think the thing to remember with AI-driven experimentation or optimisation is that it’s not a new thing at all. These kinds of techniques have been used by our specialists, and by specialists across the industry, for years now. There are distinct products that we’re all really familiar with, like Performance Max, for example, across the Google stack, that have helped us take the pressure off some of that manual, repetitive work. It helps us focus on some of the more strategic thinking that guides how we define experimentation and optimisation, really letting the technology do the hard work. So as we’re increasingly talking about AI-driven optimisation, it’s really an evolution of what we’re already doing.
Liam Wade: I really feel like in the past we’ve relied on Google’s and Meta’s AI capabilities. Whereas now, brands are increasingly using the capabilities that agencies are developing, or even the large language models that exist elsewhere, and kind of using some of that capability across multiple channels. So rather than using Google’s AI on Google platforms, you’re using multiple AIs across the entire spectrum.
Claire Elsworth: Yes, and that’s really exciting, to be able to bring that optimisation outside of those really distinct ‘walled gardens’. It gives some of that control over optimisation decisions back to the marketers. Even though we’re letting the technology do even more of the lifting than we were before, that control over things like budget optimisation comes back to the marketers or the agency specialists, which I think makes our job much more powerful. When we think about how AI is powering in-channel and even cross-channel optimisation, being able to deploy media budgets more effectively across multiple channels is starting to get really, really powerful.
Liam Wade: Yeah, for sure. I think in the past, media planning could be considered quite a reactive art, because you have to pull data from multiple different sources. You have to interpret it in slightly different ways. That takes time, and then analysing and putting together a plan off the back of it is a whole new process. But with AI agents, the idea would be that they would take all the data that’s in your warehousing software. They would also take data from outside of that, such as keyword search volume, the weather, and what your competitors are doing, and create a plan in response to it. When brands come to me looking for practical advice on how to prepare for a world where AI agents will make those sorts of decisions, I say: start building that data infrastructure now. Secondly, start combining that with third-party data sources, so it’s all there, readily available. That will save you a ton of time on the actual engineering that will lead to using those AI-powered solutions. You spoke earlier about experimentation and how AI can actually improve a performance marketer’s approach to it.
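To make that advice a little more concrete, here is a minimal, hypothetical sketch of what “combining warehouse data with third-party sources” can look like in practice. The table names, columns and values (daily spend by channel, keyword search volume, weather) are illustrative assumptions, not a reference to any specific platform or tool.

```python
# Hypothetical sketch: joining first-party performance data with external
# signals so an AI planning agent (or a human planner) has one tidy dataset.
import pandas as pd

# First-party data, as it might be exported from a warehouse (illustrative values).
warehouse = pd.DataFrame({
    "date": pd.to_datetime(["2024-06-01", "2024-06-01", "2024-06-02", "2024-06-02"]),
    "channel": ["paid_search", "paid_social", "paid_search", "paid_social"],
    "spend": [1200.0, 800.0, 1150.0, 950.0],
    "conversions": [34, 21, 31, 26],
})

# Third-party signals, pulled separately (again, illustrative values).
search_volume = pd.DataFrame({
    "date": pd.to_datetime(["2024-06-01", "2024-06-02"]),
    "keyword_search_volume": [18000, 22500],
})
weather = pd.DataFrame({
    "date": pd.to_datetime(["2024-06-01", "2024-06-02"]),
    "max_temp_c": [19.5, 24.0],
})

# One joined table per channel per day: the "readily available" input described above.
planning_input = (
    warehouse
    .merge(search_volume, on="date", how="left")
    .merge(weather, on="date", how="left")
)
planning_input["cost_per_conversion"] = planning_input["spend"] / planning_input["conversions"]

print(planning_input)
```

The point of the sketch is simply that the joining and tidying is ordinary data engineering that can be done now, before any AI agent is layered on top of it.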
Claire Elsworth: I think the important thing to remember when we’re building an experimentation plan and a roadmap is that we’ve got to be really clear on how we’re guiding the technology, where we’re asking it to optimise, and what experiments we’re asking it to run—ultimately, what we’re trying to understand off the back of this experimentation roadmap. So that’s where the specialists, the human specialists, become even more valuable, because if that sort of day-to-day, repeatable, technology-driven work is being done automatically, as marketing practitioners, we need to be creatively problem-solving even more so that we’re really pushing and guiding that technology in the right way to get the most value out of our experimentation plan.
Liam Wade: For sure, and you can talk about automating the day-to-day as best practice. The trouble is, if everyone’s just doing that, just following best practice, they’re kind of stagnant, and that’s how you end up in a performance plateau. So this idea of experimentation beyond best practice, of understanding what you can do within the lines of what the machine is letting you do, can actually have a massive impact on performance.
Claire Elsworth: I think one of the really interesting use cases of AI in experimentation, particularly, is that as specialists, especially when we’re working on quite complex multi-channel marketing plans for our clients, there’s a risk that we’re so full of ideas, we’ve got loads of things we want to test that might end up conflicting and corrupting each other’s results. For example, we might want to run a PPC test over here, but also a landing page test over there, in a different channel. What if those experiments end up clashing? There’s a really great use case for AI in this process to help us phase accordingly and make sure that when we are running complex experimentation plans, we’re actually delivering the most robust and credible results we possibly can. It’s all about adding that value and that credibility to our experiments.
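Phasing is essentially a scheduling check: two experiments that overlap in time and touch the same channel, audience or page risk contaminating each other’s results. As a hypothetical illustration of the kind of check an AI assistant (or a spreadsheet) could run over an experimentation roadmap, here is a small sketch; the experiment names, dates and fields are invented for the example.

```python
# Hypothetical sketch: flag experiments in a roadmap that overlap in time
# and share a surface (channel, audience or landing page), so they can be re-phased.
from dataclasses import dataclass
from datetime import date
from itertools import combinations

@dataclass
class Experiment:
    name: str
    start: date
    end: date
    surfaces: set[str]  # channels, audiences or pages the test touches

def conflicts(a: Experiment, b: Experiment) -> bool:
    overlap_in_time = a.start <= b.end and b.start <= a.end
    shared_surface = bool(a.surfaces & b.surfaces)
    return overlap_in_time and shared_surface

roadmap = [
    Experiment("PPC bid strategy test", date(2024, 7, 1), date(2024, 7, 21), {"paid_search", "landing_page_A"}),
    Experiment("Landing page redesign", date(2024, 7, 15), date(2024, 8, 5), {"landing_page_A"}),
    Experiment("Paid social creative test", date(2024, 7, 1), date(2024, 7, 28), {"paid_social"}),
]

for a, b in combinations(roadmap, 2):
    if conflicts(a, b):
        print(f"Re-phase needed: '{a.name}' clashes with '{b.name}'")
```

In this illustrative roadmap, the PPC test and the landing page redesign share a surface and overlap in time, so one of them would be moved to a later phase; the paid social test can run in parallel.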
