OK don’t be mad. I know everyone is really excited about GEO. I get that for an industry long criticized for being immeasurable, dashboards and metrics feel like validation. I was also one of those people who was super excited about this light at the end of the measurement tunnel. So, I did some digging… and that light? Turns out, huge fire.

The reality is that AI is changing how people find information about companies. But what’s not real is most of the advice we’re getting about how we can influence the way we’re found through AI.

There are a lot of reasons for this: sales opportunities for fancy dashboards, an industry scared of what the future holds. But most of the GEO insights coming from our industry really just signal that we don't understand how LLMs work, which puts us back in the seat of the ‘we don’t understand measurement’ persona and in dangerous ‘over-promise, under-deliver’ territory with how we’re pitching new playbooks and strategies.

Quick sidebar: The one thing to know before we move into this big discussion on GEO is that LLMs (like ChatGPT and Claude) do not retrieve facts; they generate the most probable continuation of text, given context. That means LLMs don’t know things, they model how things are usually talked about. New models don’t “know more things” in the human sense; they just become better calibrated, less wrong in systematic ways, and more aligned with how the world is now described. I dive into this a bit in this article if you’re curious to know more. Deeper dive on this topic coming!
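If you want to see "most probable continuation" in miniature, here's a toy sketch. The corpus, company name, and bigram model are all made up for illustration; real LLMs do something conceptually similar with transformers at vastly larger scale.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only "knows" how these sentences talk about things.
# "Acme" is a hypothetical company, not a real one.
corpus = (
    "acme is a marketing agency . "
    "acme is a healthcare agency . "
    "acme is a healthcare company ."
).split()

# Count which word tends to follow each word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_probable_next(word):
    # Return the statistically most common continuation, not a looked-up fact.
    return follows[word].most_common(1)[0][0]

# "healthcare" follows "a" twice in the corpus, "marketing" only once,
# so the model continues with "healthcare" -- frequency, not knowledge.
print(most_probable_next("a"))  # -> healthcare
```

Notice there's no database of facts anywhere in there, just counts of how words co-occur. That's the core intuition behind why consistent, repeated narratives matter later in this piece.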

So with that, let’s get to the facts and what all this can mean for comms.

What GEO actually is

Generative Engine Optimization (GEO) is the practice of influencing how AI systems (ChatGPT, Perplexity, Google's AI Overviews, Claude) represent your company when users ask questions.

Unlike traditional search, where you compete for a spot in a list of links, AI synthesizes information into direct answers. When someone asks "What's the best agency for healthcare?" the AI doesn't give ten blue links. It gives an answer. Your goal is to be part of it.

The term GEO comes from a 2023 Princeton study that found certain tactics could improve "visibility" by up to 40%. That study gets cited constantly, but what doesn't get mentioned is that it was conducted on a simulated system modeled on Microsoft Copilot and validated only on Perplexity—not ChatGPT, Claude, or the systems we actually use (sorry MSFT).

A lot has changed with AI since that study was released in 2023. But you know what hasn’t changed? The way LLMs work. There is no unified "AI algorithm" to optimize for. Different systems work differently, so you can’t optimize for AI as if it were one singular model.

How these systems actually work

Most GEO advice conflates two completely different mechanisms: training and retrieval.

Training happens before deployment. Models like GPT-4 or Claude learned from massive datasets (made up of web pages, books, articles) scraped months or years ago. And that knowledge is frozen (aka not continuously updated), meaning your press release from last week isn't in the training data, and neither is your website update from yesterday. Content only enters training data during periodic retraining cycles, which happen on the model provider's schedule, not yours (and could be months apart).

Retrieval happens at query time. Some AI systems (Perplexity, Copilot, Google's AI Overviews, ChatGPT with web search enabled) actively search the web when you ask them something, pulling in current information and synthesizing answers.

The distinction matters because tactics for retrieval-based systems don't work on base models, and vice versa. The next time someone tells you to "optimize for ChatGPT," your first questions should be: which version? With or without web search?
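The two paths can be sketched like this. Everything here is illustrative, not a real API: the cutoff date, the function names, and the fake search are all assumptions standing in for how these systems are generally described as working.

```python
from datetime import date

# Hypothetical training cutoff -- the date is made up for illustration.
TRAINING_CUTOFF = date(2023, 4, 1)

def base_model_answer(question):
    # Path 1: a base model can only draw on patterns learned
    # from data scraped before its cutoff. Nothing newer exists for it.
    return f"answer generated from data up to {TRAINING_CUTOFF}"

def retrieval_answer(question, search):
    # Path 2: a retrieval system searches the live web at query time,
    # then has the model synthesize whatever it just fetched.
    docs = search(question)  # fresh documents, fetched right now
    return f"answer synthesized from {len(docs)} current sources"

# Your press release from last week can only influence the second path.
fake_search = lambda q: ["press_release.html", "wiki_entry.html"]
print(base_model_answer("Who is Acme?"))
print(retrieval_answer("Who is Acme?", fake_search))
```

The practical takeaway: tactics aimed at "being in the training data" and tactics aimed at "being retrievable and citable today" are different bets with different timelines.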

(Image: a “measurement trap” illustration, generated via Gemini, lol.)

What this means for comms teams

The honest answer is that GEO is mostly uncertainty dressed up as strategy. I know that's frustrating. BUT it's also clarifying.

Strip away the hype and you're left with fundamentals that would serve you well regardless of AI:

  • Narrative consistency. AI systems synthesize information from multiple sources. If your website says one thing, your Wikipedia entry says another, and your press coverage contradicts both, the AI has to make a choice. It might make the wrong one. Audit your presence across sources and eliminate contradictions.

  • Entity clarity. AI systems need to understand who you are, what you do, and how you relate to other entities (people, products, categories). Ambiguity creates errors. If your CEO shares a name with a professional athlete, or your company name is a common word, you have extra work to do.

  • Canonical sources. Wikipedia, major publications, industry databases carry weight in retrieval systems. Your owned content matters less than what trusted third parties say about you. This isn't new, it's just more visible now.

  • Earned credibility that compounds over time. Creating more content won't help if it's not worth citing. This is where earned coverage and podcast appearances come in.

If we shift the conversation from AI tactics to storytelling fundamentals, we have the opportunity to see that investment compound over years as AI systems become more sophisticated and more ubiquitous.

OK Katie wrap it up

GEO is real in the sense that AI systems are becoming a primary discovery channel, and what they say about you matters. But most GEO advice treats AI like it's 2008-era Google (aka a single system with a single algorithm you can game if you know the tricks).

The companies that will do well in AI visibility are the ones doing the fundamentals well: clear positioning, a consistent narrative, and earned credibility from sources that matter (imagine that!).

I’m sorry I can’t come to you with a new playbook. Our current playbook is just the old playbook, but with higher stakes and a deeper need for focus—and for not being raccoons drawn to shiny distractions!

In short, the real opportunity isn't proving PR's value through AI metrics. It's being the team that understands AI well enough to set realistic expectations while everyone else chases certainty that doesn't exist. It’s being the team that maintains credibility through authentic voices and storytelling, because those are what will rise above the beige AI content we’re seeing.

That's how you build credibility. Not by claiming control over probabilistic systems, but by being honest about what's controllable and what isn't.
