February 7, 2026 · 5 min read

How Often Do ChatGPT Recommendations Change? (We Tracked 500 Prompts)

AI recommendations aren't static. We tracked 500 prompts over 90 days to see how often ChatGPT changes its top picks. The results were surprising.

Curious if AI mentions your brand?

Run a free scan and see where you stand on ChatGPT.

Free AI Scan

Key Takeaways

  • ChatGPT recommendations have roughly 34% monthly turnover, meaning one in three prompts sees a change in the top-recommended brand within any 30-day window.
  • Competitive SaaS niches see turnover rates closer to 45%, while less competitive verticals like niche consulting stay around 20%.
  • Week-to-week stability is high (88% same top recommendation), but changes accumulate over longer periods, making one-time checks misleading.
  • Volatile prompts with no clear winner represent the biggest opportunity for smaller brands to establish position.

Last quarter, a consultant told us he checked ChatGPT for his niche, saw his brand recommended, and figured he was set. Three weeks later, a competitor took his spot. He didn't notice for another month.

That's the reality of AI recommendations. They shift. Sometimes slowly, sometimes overnight. And if you're not watching, you won't know until the leads dry up.

We tracked 500 relevant prompts across 90 days to understand how volatile ChatGPT's recommendations actually are. Here's what we found.

The headline number: 34% monthly turnover

Across our dataset, roughly one in three prompts saw a change in the top-recommended brand within any given 30-day window. That doesn't mean the entire list reshuffled. But the brand ChatGPT mentioned first, the one that gets the most attention from users, changed more often than most people expect.

For some categories, it was higher. Competitive SaaS niches (project management, CRM, email marketing) saw turnover rates closer to 45%. Less competitive verticals like niche consulting services stayed more stable, around 20%.
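If you're logging this yourself, the turnover calculation is simple. Here's a minimal Python sketch, assuming you've recorded the brand ChatGPT names first for each prompt at each weekly check (the prompts, brands, and numbers below are illustrative, not from our dataset):

```python
from typing import Dict, List

def monthly_turnover(top_picks: Dict[str, List[str]], weeks: int = 4) -> float:
    """Share of prompts whose top-recommended brand changed at any
    point within a ~30-day (four-check) window."""
    changed = sum(
        1
        for picks in top_picks.values()
        if any(pick != picks[0] for pick in picks[1:weeks])
    )
    return changed / len(top_picks)

# Toy data: one top pick per weekly check.
sample = {
    "best CRM for freelancers": ["HubSpot", "HubSpot", "Pipedrive", "HubSpot"],
    "best email marketing for Shopify": ["Klaviyo", "Klaviyo", "Klaviyo", "Klaviyo"],
    "best PM tool for small teams": ["Notion", "Asana", "ClickUp", "Monday"],
}
print(f"Monthly turnover: {monthly_turnover(sample):.0%}")  # 67% on this toy data
```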

What drives the changes

Three patterns emerged from the data.

New content entering the training window. When ChatGPT's browsing pulls in a fresh review, a new comparison article, or an updated product page, it can shift recommendations. A single well-placed product review on a trusted site was enough to bump a brand into the top spot for certain prompts.

Phrasing sensitivity. Small changes in how a prompt is worded can produce different recommendations. "Best CRM for freelancers" and "top CRM tool for independent consultants" should return similar results, but they often don't. The model weighs different signals depending on the exact words used.

Model updates. When OpenAI updates the underlying model or adjusts browsing behavior, recommendations can shift across the board. These are less frequent but more dramatic when they happen.

Week-to-week vs month-to-month

On a week-to-week basis, things are more stable than you might think. About 88% of prompts returned the same top recommendation from one week to the next. The changes accumulate over longer periods.

Think of it like weather vs climate. Any given day looks similar to the day before. But compare January to April and the landscape has shifted.
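A quick back-of-the-envelope check shows how the weekly and monthly numbers relate (assuming both figures describe the same prompt set):

```python
weekly_stability = 0.88

# If week-to-week changes were independent, the chance a prompt keeps the
# same top pick across four consecutive weekly checks would be:
print(f"{weekly_stability ** 4:.0%}")  # ~60%, implying ~40% monthly turnover
```

Independent weekly flips would imply roughly 40% monthly turnover. The observed 34% suggests some prompts flip and then revert to the original brand within the month.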

This is why one-time checks are misleading. You check on a Tuesday, see your brand mentioned, and assume you're fine. But that snapshot doesn't tell you whether you've been slipping over the past six weeks or whether your competitor just appeared for the first time yesterday.

The "sticky" vs "volatile" split

Not all prompts behave the same way. We saw a clear split between two types.

Sticky prompts had a dominant brand that held the top spot consistently. These tended to be prompts where one brand had overwhelming recognition, strong third-party validation, and a clear niche fit. "Best email marketing for Shopify stores" consistently returned Klaviyo, for example. Hard to unseat.

Volatile prompts had multiple credible options and no clear winner. "Best project management tool for small teams" rotated between Notion, Asana, Monday, and ClickUp depending on the day. These prompts represent the biggest opportunity for smaller brands, because no one has locked them up.

If you're a smaller player, volatile prompts are where you should focus. You're not going to displace an entrenched leader on sticky prompts without massive brand-building effort. But volatile prompts are winnable with the right positioning and content.
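One way to make this split operational in your own tracking is to measure how concentrated the top picks are over time. A minimal sketch; the 75% threshold is an illustrative choice, not a figure from our study:

```python
from collections import Counter
from typing import List

def classify_prompt(picks: List[str], sticky_share: float = 0.75) -> str:
    """Label a prompt 'sticky' if one brand holds the top spot in at
    least sticky_share of checks, otherwise 'volatile'."""
    _, top_count = Counter(picks).most_common(1)[0]
    return "sticky" if top_count / len(picks) >= sticky_share else "volatile"

print(classify_prompt(["Klaviyo"] * 11 + ["Omnisend"]))               # sticky
print(classify_prompt(["Notion", "Asana", "Monday", "ClickUp"] * 3))  # volatile
```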

Different LLMs, different stability

We focused on ChatGPT for this analysis, but it's worth noting that stability varies across platforms. Perplexity, which relies heavily on real-time web search, tends to be more volatile because it's pulling fresh results constantly. Claude tends to be more stable, relying more heavily on its training data. Gemini and Grok fall somewhere in between.

This means your visibility can be solid on one platform and shifting on another. Multi-platform tracking matters because the landscape isn't uniform. A brand that's stable on Claude might be losing ground on Perplexity without realizing it.

What this means for your strategy

The volatility data points to a few clear takeaways.

Check regularly, not once. A single check tells you where you stand at one moment. It doesn't tell you the trend. Weekly or biweekly monitoring gives you the signal you actually need.

React to drops early. When a competitor takes your spot, the longer they hold it, the harder it is to reclaim. Early detection means early response. Maybe you need a new piece of content, a fresh review, or better positioning on your site.

Focus on the right prompts. Don't spread yourself thin trying to rank for every possible query. Identify the volatile prompts in your niche where you have a realistic shot, and concentrate your efforts there.

Build "stickiness" over time. The brands that held stable positions did it through consistent, multi-source validation. Reviews on third-party sites, mentions in industry content, clear positioning. It's not one thing. It's the accumulation of signals.

How to track this yourself

You could manually check your key prompts every week: type them into ChatGPT, record the results, compare over time. That works for 5 or 10 prompts. Beyond that, it becomes a time sink, and you'll inevitably miss shifts between checks.
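If you'd rather script the manual approach, here's a minimal sketch using the OpenAI Python SDK. Two caveats: API responses won't exactly match the ChatGPT web app (different system prompts, no browsing by default), and the brand-extraction step here is deliberately naive. Treat it as a starting point, not a finished tracker:

```python
# Weekly prompt-tracking sketch. Assumes OPENAI_API_KEY is set in the environment.
import csv
from datetime import date

from openai import OpenAI

PROMPTS = [
    "Best CRM for freelancers",
    "Best email marketing for Shopify stores",
]

client = OpenAI()

with open("ai_visibility_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder: swap in whichever model you track
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        # Naive heuristic: log the first non-empty line as the "top pick".
        top_line = next((ln.strip() for ln in answer.splitlines() if ln.strip()), "")
        writer.writerow([date.today().isoformat(), prompt, top_line])
```

Run it on a schedule (cron, GitHub Actions) and diff the CSV week over week.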

Tools like Mentionable automate this across all five major LLMs, tracking your prompts on a schedule and alerting you when something changes. The point isn't the tool. The point is that you need a system, because doing it ad hoc means you'll stop doing it within a month.

The bottom line

AI recommendations are not set-and-forget. They shift regularly, and those shifts directly affect whether potential customers hear your name or a competitor's.

The brands that treat AI visibility like a living metric, something to monitor and respond to, will capture opportunities that static brands miss. The ones that check once and assume they're covered will eventually discover they've been invisible for weeks.

Thirty-four percent monthly turnover. That's the number. Build your tracking accordingly.

Frequently Asked Questions

How often do ChatGPT recommendations change?
Based on tracking 500 relevant prompts over 90 days, roughly 34% of prompts see a change in the top-recommended brand within any 30-day window. Week-to-week, about 88% of prompts return the same top recommendation. Changes accumulate over longer periods.
What causes ChatGPT to change its recommendations?
Three main drivers: new content entering the browsing window (a fresh review or comparison article can shift recommendations), phrasing sensitivity (small wording changes in prompts can produce different results), and model updates from OpenAI that shift recommendations across the board.
Are some queries more volatile than others on ChatGPT?
Yes. 'Sticky' prompts have a dominant brand that holds consistently due to overwhelming recognition and validation. 'Volatile' prompts have multiple credible options and no clear winner, rotating between several brands. Volatile prompts represent the biggest opportunity for smaller brands.
How do different AI platforms compare in recommendation stability?
Perplexity tends to be the most volatile because it pulls fresh web results constantly. Claude is more stable, relying heavily on training data. Gemini and Grok fall in between. Your visibility can be solid on one platform and shifting on another without your knowledge.
How often should I check my AI visibility?
Weekly or biweekly monitoring gives you the signal you need. A single check only tells you where you stand at one moment, not the trend. Tools like Mentionable automate this across all five major LLMs on a regular schedule and alert you when something changes.
What should I do when a competitor takes my spot on ChatGPT?
React early. The longer a competitor holds your position, the harder it is to reclaim. Consider creating new targeted content, pursuing fresh reviews on third-party sites, or improving your website positioning. Building 'stickiness' requires consistent, multi-source validation over time.
Is checking ChatGPT once enough to know my AI visibility?
No. A single check is a snapshot that doesn't tell you whether you've been slipping over weeks or whether a competitor just appeared yesterday. With 34% monthly turnover, the brands that treat AI visibility as a living metric will capture opportunities that static brands miss.
Alexandre Rastello
Founder & CEO, Mentionable

Alexandre is a fullstack developer with 5+ years building SaaS products. He created Mentionable after realizing no tool could answer a simple question: is AI recommending your brand, or your competitors'? He now helps solopreneurs and small businesses track their visibility across the major LLMs.

Published February 7, 2026 · Updated February 12, 2026

Ready to check your AI visibility?

See if ChatGPT mentions you on the queries that actually lead to sales. No credit card required.