It Made Me Laugh. Then It Made Me Think.
- Carolina Milanesi

- Feb 6
- 5 min read
Anthropic’s Super Bowl Ad Is Funny — But the Question It Raises Is Anything But Simple
Anthropic’s Super Bowl LX commercial is, by any measure, a brilliant piece of advertising. A guy in a park asks an AI assistant how to get six-pack abs and instead gets a pitch for height-boosting insoles so “short kings can stand tall.” Another spot features an AI therapist who pivots mid-session into hawking a dating app. The tagline lands like a punchline: “Ads are coming to AI. But not to Claude.”
But behind the humor sits a genuinely thorny question that deserves more than a thirty-second treatment, and the answer is far less clear-cut than either company would like you to believe.
The Transparency Problem Is Real, and Worse Than You Think
Let’s start with what Anthropic gets right. In traditional search, we’ve spent two decades training ourselves to spot the small “Sponsored” label sitting above Google results. It’s not a perfect system, but at least there’s a visual grammar we understand. We know what’s paid and what’s organic. We’ve learned to scroll past the ads.
In AI, that literacy doesn’t yet exist. When you ask ChatGPT, or any AI agent for that matter, a question and it responds in fluent, authoritative prose, the line between genuine recommendation and paid placement becomes extraordinarily hard to detect. OpenAI has promised that ads will be “clearly labeled” and will appear “at the bottom of answers” without influencing the chatbot’s responses. But the company has also said ads will be conversation-specific, meaning the system will read what you’re talking about and serve a relevant sponsored product. That is, by definition, contextual influence, even if the model’s actual text remains untouched.
Now imagine this interaction happening through voice. As AI assistants become the default interface through smart speakers, earbuds, and in-car systems, there is no “bottom of the page.” There is no visual disclaimer. There is only a voice that sounds helpful, that you’ve come to trust, seamlessly weaving a product mention into what feels like personal advice. The Federal Trade Commission has spent years trying to regulate influencer disclosures on Instagram. This is that problem on steroids.
Two Companies, Two Business Models, Two Very Different Problems
To understand why OpenAI is introducing ads and Anthropic is making a Super Bowl spectacle of not doing so, you have to understand where these companies make their money.
ChatGPT launched as a consumer phenomenon. It reached 100 million users within two months and now has roughly 800 to 900 million weekly active users, processing over two billion queries per day. It became synonymous with AI in the same way Google became synonymous with search. OpenAI generated approximately $13 billion in annual recurring revenue by the end of 2025, with projections of $29 billion for 2026. But here’s the catch: in H1 2025, OpenAI had a multi-billion-dollar operating loss due to compute, R&D, marketing, and compensation. And only about five to six percent of those hundreds of millions of users pay for a subscription. That math leaves an enormous gulf between the people using the product and the people paying for it.
Anthropic took a fundamentally different path. Claude grew up in the enterprise. Around 80 percent of Anthropic’s revenue comes from business customers and API usage, with over 300,000 enterprise clients and large accounts growing nearly sevenfold in the past year. Anthropic’s annualized revenue surpassed $5 billion by August 2025, and the company projects $20 to $26 billion for 2026. Claude Code alone is generating over $500 million in run-rate revenue. With roughly 19 million consumer users, a fraction of ChatGPT’s base, Anthropic simply has a different shaped problem. Its users already pay. OpenAI’s mostly don’t.
So when Altman fires back that “Anthropic serves an expensive product to rich people,” he’s not entirely wrong; he’s just describing the other side of a real strategic tension.
The Access Question No One Wants to Confront
And this is where the conversation gets uncomfortable. Is it better to have AI with ads, or AI that millions of people simply cannot afford?
What we know about AI adoption is fairly consistent. Use skews toward people who are younger, better educated, more urban, and higher income: people already embedded in knowledge work and digital systems. Meanwhile, the same technologies are expected to reshape the labor market in ways that hit marginalized workers hardest, with disproportionate disruption in roles often held by Black workers and in sectors where women are overrepresented.
The cruel irony is that AI appears to deliver some of its biggest productivity gains to people with the least formal training. When access is broad, it can function as a real equalizer, helping people write better, learn faster, and navigate systems that weren’t designed for them. When access is locked behind $17 or $20 monthly subscriptions, the people who stand to gain the most are the least likely to show up.
An ad-supported free tier, for all its imperfections, is at least an attempt to resolve that tension.
But It’s the Wrong People Who Pay the Price
Except the people most exposed to advertising in an ad-supported model are, by definition, those who can’t afford to pay for the ad-free version. And research on the AI divide consistently finds that these same populations (older, less educated, lower-income users) are also the least equipped to recognize when AI is steering them toward a commercial outcome.
We have seen this movie before. Free social media platforms monetized through advertising created an economy where the poorest users’ attention was the product, where algorithmic engagement optimization led to documented harms in mental health, political polarization, and misinformation, and where the cost was always borne by those with the fewest alternatives.
The question isn’t whether ads in AI are good or bad. It’s whether we’re about to reproduce the exact same extractive model in a medium that’s far more intimate than a news feed.
So Yes, I Laughed
Anthropic’s ad is clever, funny, and effective marketing. It drew a clear line in the AI landscape and forced a genuine conversation about business models and user trust. It also, whether intentionally or not, exposed the uncomfortable reality that this isn’t a simple story of good guys and bad guys.
The real challenge is designing an AI economy where access is broad, influence is transparent, and the most vulnerable users aren’t the ones subsidizing everyone else’s experience. Neither “no ads ever” nor “ads for the free tier” fully solves that problem.
It’s a Super Bowl ad. It’s supposed to make you laugh. But if it also makes you think, about who gets to use AI, who pays for it, and who pays the price, then maybe it’s doing more than selling a product. Maybe it’s starting a conversation we desperately need to have.