LLMs (Large Language Models) have quickly become a significant influence on a customer’s journey, and all signs point to this influence only increasing.
Even before we consider how to optimise for AI search, the first question we have to ask ourselves is: How do we measure the influence of LLMs on top-line results?
The challenge lies in how LLMs function for users, and in the mental model marketers must update from traditional search. Beyond being productivity tools, AI assistants aid the discovery of information, typically within their own UIs. This differs from traditional search engines: although they also aid discovery, their primary mechanism for doing so is referring users across the web, i.e. via clicks.
This key difference highlights that referring users are a secondary consideration for LLMs, which explains why only a small proportion of prompts actually result in a click.
Enter: zero-click search, where, at its most extreme, Semrush estimates that ~93% of Google AI Mode searches do not end in a click.
Even if we remain focused on prompts that result in a click, this still inevitably underestimates their true contribution. Since LLMs are often used at the start of a conversion journey, last-click attribution models will fail to pick up their commercial value. Additionally, cookie-based tracking is never perfect, meaning Analytics platforms will miss some referring users due to cookie consent settings.
Despite the above, these considerations shouldn’t call into question the value LLMs bring to your brand visibility. Rather, they force us to change the KPI and how we measure their impact. Even without looking at industry reports, we know there’s more to LLMs than just clicks.
So, how can we measure the true impact of LLMs, beyond the click?
How can MMM help find the answer?
Over time, Media Mix Modelling (MMM) could be an ideal tool for understanding the impact LLMs have on your marketing investment.
Through MMM, you can identify long-term historical patterns and use them to estimate total contribution, even without a direct click path. By modelling the LLM activity as a key marketing input alongside traditional channels, MMM can detect correlations with overall business outcomes that direct click-based tracking misses.
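To make this concrete, here is a minimal, illustrative sketch of the idea in Python. It is not a production MMM: the channel names, the LLM visibility proxy, and the synthetic data are all assumptions, and a real model would use Bayesian tooling with priors and proper saturation curves rather than a plain least-squares fit.

```python
# Illustrative only: treat a weekly LLM-visibility proxy as one marketing input
# alongside traditional channel spend, apply geometric adstock for carryover,
# and fit a simple linear model. All inputs here are synthetic assumptions.
import numpy as np

def geometric_adstock(x: np.ndarray, decay: float = 0.6) -> np.ndarray:
    """Carry a decaying share of each week's effect into later weeks."""
    out = np.zeros_like(x, dtype=float)
    carry = 0.0
    for t, value in enumerate(x):
        carry = value + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(42)
weeks = 104  # two years of weekly observations

# Hypothetical inputs: paid search spend, TV spend, and an LLM visibility proxy
# (e.g. weekly LLM bot hits from log files, discussed later in this article).
paid_search = rng.gamma(5.0, 200.0, weeks)
tv = rng.gamma(3.0, 400.0, weeks)
llm_proxy = rng.gamma(2.0, 50.0, weeks) * np.linspace(0.2, 1.0, weeks)  # rising adoption

# Adstock for carryover, log1p as a crude stand-in for a saturation curve.
X = np.column_stack([
    np.log1p(geometric_adstock(paid_search)),
    np.log1p(geometric_adstock(tv)),
    np.log1p(geometric_adstock(llm_proxy)),
    np.ones(weeks),  # intercept, i.e. baseline demand
])

# Synthetic revenue so the sketch runs end to end; in practice this is observed.
revenue = X @ np.array([3000.0, 1800.0, 900.0, 20000.0]) + rng.normal(0, 2000, weeks)

coef, *_ = np.linalg.lstsq(X, revenue, rcond=None)
for name, beta in zip(["paid_search", "tv", "llm_proxy", "baseline"], coef):
    print(f"{name:12s} coefficient: {beta:,.0f}")
```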
Learn more about Media Mix Modelling
MMM is becoming an increasingly important tool for measuring marketing activity, not just for LLMs as discussed here. If you haven’t come across it before, take a look at our beginner’s guide to Media Mix Modelling.
On the face of it, then, an MMM could be exactly what’s required to solve the problem outlined earlier. However, there are some considerations to bear in mind when using an MMM to understand LLM performance. These factors relate to user adoption, where the industry is today, and the new technologies that still need to emerge to increase our data maturity.
Limited & unpredictable historical data
MMMs typically require at least three years of data to analyse correlations effectively. That’s a problem for LLMs: the medium is still relatively new, and the way users employ it to discover brands is, at the time of writing, younger than three years.
Furthermore, AI adoption has been growing exponentially, whereas MMMs perform best with consistent, long-term data that irons out anomalies. This rapid rate of change and uptake will make modelling more challenging.
Lack of first-party data sources
Bing Webmaster Tools’ launch of its AI Performance report was an industry-first and a welcome addition that helps marketers better understand their LLM performance. Though undoubtedly useful, we need to consider:
- There are more AI assistants within the LLM ecosystem than just Copilot.
- Copilot has a lower market share than other LLMs, so this data alone under-represents total LLM visibility.
- Even with Bing Webmaster Tools launching a report of this nature, marketers require deeper insights beyond its citation data to understand audience behaviour, demand, and corresponding brand visibility (see Missing first-party impression data below).
Though this is an encouraging step in the right direction, what’s required are similar (and more developed) reports from other AI assistants. In particular, Google is under scrutiny due to its market dominance: though these reports are “available” within Google Search Console, data from AI Overviews and AI Mode is currently blended in with other Google Search result types rather than broken out.
Missing first-party impression data
Closely linked to our lack of first-party data sources, missing impression data further complicates the use of MMM to demonstrate the commercial value of LLMs.
In place of low referral sessions, it’s this type of demand information, and how it relates to prompts, topics and entities, that potentially holds the key. Impression data is far larger in scale and would let us derive more meaningful brand visibility information to feed into an MMM, in turn representing LLMs’ share of your media mix more proportionally.
Given these limitations, we must look to alternative metrics and data.
What alternative metrics are available to represent LLM influence in an MMM?
Let us be clear: despite the limitations discussed above, you can use referral sessions from an LLM recorded by your Analytics platform within an MMM. Given the metric’s first-party availability, this is likely the easiest solution. However, that doesn’t mean it’s the best or most representative option.
Instead, we can combine the following metrics with careful modelling techniques, leveraging our understanding of the LLM landscape to turn the data we have into a suitable proxy metric for an MMM (a sketch of one such blend appears at the end of this section).
Log files
Where referral sessions may under-represent LLMs’ influence on your media mix, a suitable alternative that still leverages first-party data is log files. Though historically used to understand how search engines crawl your site, we can also use log files as a proxy for AI website visibility.
This is achieved by filtering down to specific LLM user-agents, through which we can even distinguish how content is used: for model training, retrieval-augmented generation (RAG), and real-time user responses.

The frequency of bot hits over time then highlights how often your site is served across LLMs, providing a larger, more representative picture of visibility beyond clicks.
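As a sketch of how that filtering might work, the snippet below counts hits from well-known LLM crawlers in a standard access log. The user-agent substrings and the purpose notes reflect publicly documented bot names at the time of writing, but both should be verified against each vendor’s current documentation before relying on them.

```python
# A minimal sketch: count LLM bot hits in a combined-format access log.
# Verify bot names and their stated purposes against vendor documentation.
from collections import Counter

LLM_BOTS = {
    "GPTBot": "model training (OpenAI)",
    "OAI-SearchBot": "search indexing (OpenAI)",
    "ChatGPT-User": "live user request (ChatGPT browsing/RAG)",
    "ClaudeBot": "model training (Anthropic)",
    "PerplexityBot": "search indexing (Perplexity)",
}

def count_llm_hits(log_path: str) -> Counter:
    """Tally hits per LLM bot by matching user-agent substrings per log line."""
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            for bot in LLM_BOTS:
                if bot in line:
                    hits[bot] += 1
                    break
    return hits

# Example usage with a hypothetical log path:
# for bot, n in count_llm_hits("/var/log/nginx/access.log").most_common():
#     print(f"{bot:16s} {n:6d} hits  ({LLM_BOTS[bot]})")
```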
Prompt tracking
At Impression, we use Otterly.ai to track prompts, enabling us to monitor brand mentions and citations over time.
This solution allows us to create a prompt portfolio that represents a brand’s visibility across the entire LLM ecosystem. Their algorithm for estimating the volume behind a prompt’s intent also provides an indication of impression share, which is critical for an MMM.

There are some caveats with the data tracking we need to be aware of when using an AI analytics solution like Otterly:
- Tracking only starts when you onboard onto the platform. For the data to be useful within MMM, we need time to record high-quality historical data.
- It’s contingent on synthetic prompts that you need to govern and update. Research is therefore required to ensure these resemble real-world prompts as closely as possible, and additional budget may be needed to make them exhaustive in capturing everything your brand is relevant for across your owned media.
However, if historical data is collected and prompts are representative of your brand and how it can be discovered, this solution avoids many of the limitations discussed earlier in the article when creating your model.
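For illustration, here is one way such tracking data could be aggregated into a weekly visibility rate suitable as an MMM input. The CSV schema (date, brand_mentioned, brand_cited) is a hypothetical export format, not Otterly.ai’s actual one.

```python
# Hypothetical post-processing of a prompt-tracking export. The column names
# below are assumptions for illustration, not Otterly.ai's real schema.
import csv
from collections import defaultdict
from datetime import date

def weekly_visibility(path: str) -> dict[str, float]:
    """Share of tracked prompts per ISO week where the brand was mentioned or cited."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # week -> [visible, total]
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            year, week, _ = date.fromisoformat(row["date"]).isocalendar()
            key = f"{year}-W{week:02d}"
            totals[key][1] += 1
            if row["brand_mentioned"] == "true" or row["brand_cited"] == "true":
                totals[key][0] += 1
    return {wk: visible / total for wk, (visible, total) in sorted(totals.items())}
```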
Database tracking
To supplement our prompt tracking, we also use Ahrefs’ Brand Radar, which provides access to a broader database of prompts. This reduces reliance on synthetic prompts by leveraging Ahrefs’ AI visibility database of 353m+ search-backed prompts across all AI assistants. Opting for this database approach is arguably a more efficient way to capture an accurate picture of your true visibility, as prompt tracking alone is prone to missing topical gaps.

A similar shortfall regarding historical data applies here, too: Brand Radar only began tracking in mid-2025. As we know, more data is preferable, but this will become less of an issue over time.
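Pulling the three sources above together, the sketch below (referenced earlier) shows one hypothetical way to blend referral sessions, log-file bot hits and prompt-tracking visibility into a single weekly proxy series. The z-score normalisation and equal weights are illustrative choices to be tuned against your own data, not a recommendation.

```python
# Hypothetical blend of three weekly series into one LLM proxy for an MMM.
# Weights and normalisation are illustrative; tune them to your own data.
import numpy as np

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardise a series so differently scaled sources are comparable."""
    return (x - x.mean()) / (x.std() or 1.0)

def llm_proxy(referrals: np.ndarray, bot_hits: np.ndarray,
              visibility: np.ndarray, weights=(1/3, 1/3, 1/3)) -> np.ndarray:
    """Weighted blend of standardised weekly series sharing one date index."""
    parts = [zscore(referrals), zscore(bot_hits), zscore(visibility)]
    return sum(w * p for w, p in zip(weights, parts))
```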
Final thoughts
To wrap up, here are some things to remember when interpreting the results your MMM gives you.
MMMs look backward, not forward: MMMs aren’t sentient; they have no foresight regarding the predicted growth of LLMs. Just because an MMM can’t pick up a massive contribution yet doesn’t mean the channel isn’t worth pursuing. This applies to all channels, but it’s heightened for LLMs: a channel that isn’t showing a strong return now may still deliver one in the future.
Be mindful of your confidence intervals: given that LLMs are a relatively new marketing channel, the limited historical data means an MMM will likely be less “confident” in its results, especially when comparing LLM influence with more established channels. Don’t let this scare you: keep this margin of error in mind when planning strategies and rerun your MMM at regular intervals. As multi-year data accumulates, this will stop being an issue.
Despite these words of caution, it’s clear that LLMs aren’t going away soon. Putting in place measurement frameworks and experimenting with approaches now will mean your GEO and SEO activity is ready for the future.
Critically, it also means this activity is prepared to secure continued buy-in from your stakeholders.
