The surprising data I’m sharing this week was actually the result of a happy accident.
In the last issue, I shared research showing that ChatGPT’s brand recommendations are the result of a popularity contest:
Among the URLs that ChatGPT cites, the more URLs that mention a brand — and the closer to first that brand is mentioned — the more likely ChatGPT is to recommend it.
For the sake of this post, I’m calling these “the frequency effect” and “the position effect.”
A couple of folks asked a follow-up question: Does the domain authority of those cited URLs influence ChatGPT's recommendations too?
The short answer is “no.” ChatGPT is seemingly really bad at judging the quality and credibility of a source, so it’ll cite just about anything that answers its question in the structure it’s looking for.
But while digging into that question, I stumbled onto something way more interesting and actionable.
When ChatGPT cites a vendor’s URL, the chances of that vendor’s brand being recommended by ChatGPT jump dramatically. (And, no, it’s not just because vendors always list their products first in their listicles.)
A brand is 6x more likely to be recommended by ChatGPT when ChatGPT cites the brand’s website
To dig in, I used the same dataset as the last study:
Four months of ChatGPT responses to “What’s the best…”-type prompts covering 46 different B2B product categories.
By design, I chose these prompts because their responses across all four months included web searches and URL citations. This gave me 2,881 URLs to study.
I extracted every brand mentioned in the cited URLs and in ChatGPT’s corresponding recommendations so I could compare.
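For anyone who wants to replicate the matching step, it could be sketched roughly like this. Everything here — the data structures, the brand-to-domain map, the field names — is illustrative, not my actual pipeline:

```python
# Minimal sketch: for each brand that appears in a response's cited pages,
# record whether ChatGPT recommended it and whether its own site was cited.
# The brand-to-domain map is a hypothetical helper, hardcoded for the demo.

def build_observations(cited_pages, recommendations):
    """cited_pages: list of dicts like
         {"domain": "vendora.com", "brands_mentioned": ["Brand A", "Brand B"]}
       recommendations: set of brand names ChatGPT recommended."""
    # Hypothetical lookup of each brand's home domain.
    brand_domains = {"Brand A": "vendora.com", "Brand B": "vendorb.com"}

    cited_domains = {page["domain"].lower() for page in cited_pages}
    all_brands = {b for page in cited_pages for b in page["brands_mentioned"]}
    return [
        {
            "brand": brand,
            "own_site_cited": brand_domains.get(brand, "").lower() in cited_domains,
            "recommended": brand in recommendations,
        }
        for brand in sorted(all_brands)
    ]

rows = build_observations(
    [{"domain": "vendora.com", "brands_mentioned": ["Brand A", "Brand B"]}],
    recommendations={"Brand A"},
)
# Brand A: own site cited and recommended; Brand B: neither.
```

With one row per (response, brand) pair like this, the rest of the analysis is just comparing recommendation rates across the `own_site_cited` groups.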

The magnitude of the “vendor site effect” decreased noticeably over a three-month period (from 21.5x in October to 6.5x in December), but that’s largely because ChatGPT started citing more URLs per response, diluting the “power” of any one URL.
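To make those multipliers concrete: a figure like “6.5x” is a lift — the recommendation rate when the brand’s own site is cited, divided by the rate when it isn’t. A minimal sketch with made-up counts (not the study’s actual numbers):

```python
def recommendation_lift(rec_when_cited, n_cited, rec_when_not, n_not_cited):
    """Relative recommendation rate:
    P(recommended | own site cited) / P(recommended | own site not cited)."""
    rate_cited = rec_when_cited / n_cited
    rate_not_cited = rec_when_not / n_not_cited
    return rate_cited / rate_not_cited

# Illustrative counts only: 60/100 recommended when cited vs 10/100 when not.
lift = recommendation_lift(rec_when_cited=60, n_cited=100,
                           rec_when_not=10, n_not_cited=100)
# lift ≈ 6.0
```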

So the effect appears to be real. But how much of the effect is independent from the frequency and position effects?
(Spoiler: it’s mostly an independent effect.)

The vendor site effect holds, regardless of how many of the cited URLs mention the brand.
But isn’t the position effect the more suspicious culprit? Maybe the vendor site effect is simply the result of vendors always ranking their own products first in their cited listicles.
Turns out that’s part of the story, but it doesn’t explain the whole thing.
Brand position in cited content only explains ~29% of the vendor site effect.

Vendor sites do list their brand first, and that helps. But even after fully accounting for position, the vendor effect remains and is wildly significant (p = 10⁻⁶¹).
In statistics-speak: Both variables independently improve model fit — adding either one to a model that already includes the other still produces a massive likelihood ratio test (χ² > 190, p < 10⁻⁴³ in both directions).
By the way, if you're wondering why the brands with an average position of 1-3 don't have a higher recommendation rate than the brands in positions 4-7 and beyond, it’s the confounding influence of the frequency effect.
So let’s control for both the position and frequency effects at the same time.
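For the statistically curious, here’s a rough sketch of what that kind of joint test looks like. It fits a small logistic regression by hand on synthetic data (the coefficients and sample are invented, and real work would use statsmodels or R), then computes the likelihood-ratio χ² for dropping the own-site term — the same family of test behind the χ² and p-values above:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_lik(w, X, y):
    """Bernoulli log-likelihood for weights w = [intercept, coefs...]."""
    ll = 0.0
    for xi, yi in zip(X, y):
        p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def fit_logit(X, y, iters=3000, lr=0.1):
    """Logistic regression via plain gradient ascent -- good enough
    for an illustration, not for production analysis."""
    n, k = len(X), len(X[0])
    w = [0.0] * (k + 1)
    for _ in range(iters):
        grad = [0.0] * (k + 1)
        for xi, yi in zip(X, y):
            err = yi - sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic data: recommendation odds depend on BOTH own-site citation
# and list position (coefficients are made up, not the study's).
random.seed(42)
X, y = [], []
for _ in range(400):
    own_site = 1.0 if random.random() < 0.3 else 0.0
    position = float(random.randint(1, 10))
    p = sigmoid(-0.5 + 2.0 * own_site - 0.2 * position)
    X.append([own_site, position])
    y.append(1 if random.random() < p else 0)

X_reduced = [[xi[1]] for xi in X]          # position only
w_full = fit_logit(X, y)                   # own-site + position
w_reduced = fit_logit(X_reduced, y)
lr_stat = 2 * (log_lik(w_full, X, y) - log_lik(w_reduced, X_reduced, y))
# lr_stat ~ chi-square(1) under the null; above 3.84 means p < 0.05
```

If the own-site coefficient mattered only through position, dropping it would barely move the log-likelihood and `lr_stat` would be small; a large value means the vendor site effect carries its own weight, which is what the data above shows.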

There you have it. The vendor site effect appears to be real, at least as far as I’ve tested.
So why is this happening? My guess is that we’re seeing the result of ChatGPT favoring content that gives it more information to work with. Vendor listicles tend to be very detailed and informative when they’re talking about their own brands. Not so much with other brands.
But I’d love to hear thoughts from folks with bigger brains than mine on this stuff.
What you should do with this information
If you can get your own “Best [category] software” listicle cited by ChatGPT, your brand is ~6x more likely to be recommended.
For many companies, the main barrier to this happening is getting the content published at all. So if you can make that happen, you’re most of the way there in many B2B industries.
See you in a couple weeks.
Mike
