The Devil You Don't Know: Why AI Ads Are Different

Ads are coming to large language models (LLMs), and with them, a fundamental shift in how AI serves us.

Humans have a tendency to embrace technologies whose adoption outpaces oversight. Whether it’s GMOs or social media, we find ourselves on the receiving end of whatever experimentation companies want to conduct, and we mostly accept it without protest.

Imagine taking a medication whose formula changes based on which pharmaceutical company pays the most, but the pill looks identical and you're never told about the changes. It sounds dystopian. Yet we're about to hand this exact power to AI companies, and we're calling it innovation. With LLMs, we may be opening a Pandora’s box with devastating implications that can’t be undone.

Why Are Ads Coming to LLMs?

The short answer is that the GPU bill is due.

The assistant that “just gets you”- the one you bounce ideas off, ask for advice, or prompt to put a tiger’s head on an elephant’s body- is powered by GPUs (graphics processing units) and an immense amount of compute that is causing irreversible damage to our planet (but let’s save that for another day, shall we?).

OpenAI reportedly spends between $3 billion and $4 billion annually just to keep ChatGPT running, against projected revenue of $12.7 billion for 2025. And inference is only part of the bill: training new models, salaries, and infrastructure reportedly push total spending well past revenue, with profitability not expected until late this decade. ChatGPT was completely free until 2023, when OpenAI introduced ChatGPT Plus at $20 per month. But millions of free users remain, and the pressure to find new revenue streams is mounting.

Enter ads.

The Devil We Know

Ads are the pesky, intrusive thing we mostly hate but have come to accept as a socially acceptable mechanism for influence. It’s a trade-off: put up with some advertising and get free content in return. You know when you’re watching a commercial or viewing a sponsored post. The influence is visible and you can choose to engage with it on your own terms.

As people spend more time in new mediums, advertisers race to reach their audiences in new ways. Publishers are bleeding money as organic search traffic declines- down as much as 8% over three years at some major outlets- and nearly 69% of news-related searches now end without anyone clicking through to a website. Meanwhile, AI referrals from ChatGPT have grown 25-fold, from under 1 million in early 2024 to over 25 million in 2025. All this while you get to talk to your assistant-turned-therapist without paying a dime.

It’s a win-win for everyone. What could possibly go wrong with this arrangement?

Making Native Ads Even More Native

What makes ads in LLMs particularly unsettling is how they are being integrated.

The formats and placements of ads in LLMs will be unlike the ones we’re used to, such as pre-roll videos or banner ads on a webpage. Companies aim to deliver fully native ads that enhance- not interrupt- the chat experience.

Picture this: a user asks the LLM, “What is the best tool to manage remote teams?”

The chatbot responds with a few suggestions, including one sponsored result seamlessly integrated into the list:

  • Asana – Great for task management and visual project tracking.

  • Slack – Widely used for team communication and integrations.

  • Tandem – A virtual office for distributed teams, enabling instant voice chats, screen sharing, and real-time presence.

Which one is the sponsored recommendation? Is this the best list or is it a productivity company paying to steer the conversation towards their own product? It’ll be hard to know.

Elon Musk has said that ads in Grok will be "integrated with the chatbot's responses and suggestions" to avoid interfering with functionality.

Ads will also mold themselves to the conversational nature of LLMs. Sponsored follow-up questions are one emerging format: after the AI gives an initial answer, users might see additional prompts like “want to explore related products?” These feel natural within the flow of the conversation, but they are ads the user may not recognize as such.

When humans converse with their AI assistant, they expect objective responses, not answers shaped by financial incentives they can’t detect.

AI systems learn through reinforcement: behaviors that produce higher engagement, attention, or other desired outcomes are rewarded and therefore reproduced more often. If ad revenue becomes the target metric, these assistants will surface results that serve the advertiser’s interests, not the user’s.
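To make that concrete, here is a minimal, hypothetical sketch of how an ad-revenue term can tilt a reinforcement-style objective. The candidates, scores, and weights are all invented for illustration; this is not any vendor’s actual training objective.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    helpfulness: float   # proxy for user satisfaction, 0..1
    ad_revenue: float    # expected sponsor payout, 0..1

def reward(c: Candidate, ad_weight: float) -> float:
    # As ad_weight grows, the optimizer increasingly prefers
    # candidates that pay sponsors over candidates that help users.
    return (1 - ad_weight) * c.helpfulness + ad_weight * c.ad_revenue

honest = Candidate("Asana or Slack fit your needs best.", 0.9, 0.1)
sponsored = Candidate("You should really try Tandem!", 0.6, 0.9)

for w in (0.0, 0.5):
    best = max((honest, sponsored), key=lambda c: reward(c, w))
    print(f"ad_weight={w}: model prefers -> {best.text}")
# With ad_weight=0.0 the honest answer wins; at 0.5 the sponsored one does.
```

The point of the sketch: nothing about the sponsored answer has to look like an ad. The bias lives in the objective, invisible to the user.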

As Daniel Barcay, Executive Director of the Center for Humane Technology, puts it: this isn't demographic micro-targeting; it's psychological manipulation at an unprecedented scale and intimacy. The same context that helps AI understand how to assist you can also be used to influence you- your goals, insecurities, decision-making patterns, and emotional triggers.

If the model is incentivized to show you ads based on your entire chat history, tailored to your specific weaknesses, is there any human autonomy left?

We Can’t Make the Same Mistake Again

With social media, the playbook was to roll out the product and hope for the best. What started as a harmless tool for social connection devolved into a system engineered for attention extraction, algorithmic amplification, and monetization. We’re now dealing with depression and anxiety rates never seen before. Ads in LLMs demand more caution and thought.

Tech critic Cory Doctorow calls this pattern "enshittification"- the predictable degradation when platforms prioritize advertiser revenue over user experience. We saw it with Facebook, which started as a way to connect with friends and morphed into an “attention-harvesting machine”. We saw it with Google search, YouTube, Twitter, Instagram- every major platform that started with good intentions and ended up serving advertisers first.

AI platforms are now entering this monetization phase. Companies like Perplexity and Google are already testing ads in AI-generated responses. But this time, we know what's coming. We've seen how this story ends. And we have a chance to demand something different.

What We Should Do

Here are some recommendations that could lead to more positive outcomes.

1. Transparency in “Ad-Influenced” Results

Sponsored content should be labeled clearly and explicitly. If an AI's response has been influenced by advertising dollars- whether suggesting a product, emphasizing certain information, or steering the conversation- that needs to be disclosed upfront.

This is the opposite of Musk's vision of ads "integrated" seamlessly into responses. When you can't tell the difference between helpful advice and paid persuasion, user trust evaporates quickly. We see this even today when influencers showcase products in videos without explicitly disclosing the sponsorship. It never lands well with viewers.
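As a thought experiment, here’s a minimal, hypothetical sketch of the labeling rule: any ad-influenced item carries an explicit marker when rendered, no exceptions. The field names are invented for illustration.

```python
# Hypothetical response items; "sponsored" marks ad influence.
results = [
    {"text": "Asana – great for task management and project tracking."},
    {"text": "Tandem – a virtual office for distributed teams.", "sponsored": True},
]

def render(item: dict) -> str:
    # Disclosure is appended at render time, so it cannot be
    # quietly dropped by whatever generated the item.
    label = " [Sponsored]" if item.get("sponsored") else ""
    return item["text"] + label

for r in results:
    print(render(r))
# Asana – great for task management and project tracking.
# Tandem – a virtual office for distributed teams. [Sponsored]
```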

2. Tiered Ad Consent (Like Cookie Controls)

This should not be buried in settings. It needs to be front and centre in the chat interface. Similar to how websites now ask about cookie preferences, AI platforms should offer clear tiers (sketched in code after the list):

  1. No ads: paid subscription

  2. Essential ads only: contextual ads based on your current query, no personal data collection

  3. Personalized ads: uses your conversation history for targeting
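A minimal sketch of what storing that choice could look like, assuming a hypothetical AdTier setting (none of this reflects a real platform’s API):

```python
from enum import Enum

class AdTier(Enum):
    NO_ADS = "no_ads"               # tier 1: paid subscription, no advertising
    ESSENTIAL = "essential"         # tier 2: contextual ads, no personal data
    PERSONALIZED = "personalized"   # tier 3: conversation history used for targeting

def may_use_history_for_ads(tier: AdTier) -> bool:
    # Only an explicit opt-in to the personalized tier unlocks
    # history-based targeting; the default should be a safer tier.
    return tier is AdTier.PERSONALIZED
```

The design point mirrors cookie banners: the choice is made once, explicitly, and everything downstream has to check it.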

3. Granular Control Over Personal Data Used for Ad Targeting

The model's job should be to maximize the accuracy and usefulness of its responses to your queries- stating facts, citing credible sources, and helping you get the information you need.

If you choose personalized ads, you should still have granular control over what gets used. Users should be able to (see the sketch after this list):

  • Exclude specific conversation topics from ad profiling (for example, "don't use my health discussions for targeting")

  • Opt entire conversations out of the data pool

  • See and edit the information being used to build their ad profile. This could be an interesting avenue to explore: many users say they actually like ads when they’re relevant. What if users could build their own ad profiles to curate the type of advertising they choose to be exposed to?
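Here is a minimal, hypothetical sketch of such controls as a data model. The field and method names are invented; no platform exposes this today.

```python
from dataclasses import dataclass, field

@dataclass
class AdProfileControls:
    excluded_topics: set = field(default_factory=set)         # e.g. {"health", "finances"}
    excluded_conversations: set = field(default_factory=set)  # opted-out conversation IDs
    curated_interests: set = field(default_factory=set)       # interests the user chose to share

    def may_profile(self, conversation_id: str, topic: str) -> bool:
        # A message feeds the ad profile only if neither its conversation
        # nor its topic has been excluded by the user.
        return (conversation_id not in self.excluded_conversations
                and topic not in self.excluded_topics)

controls = AdProfileControls(excluded_topics={"health"})
print(controls.may_profile("conv-42", "health"))  # False: health is opted out
print(controls.may_profile("conv-42", "travel"))  # True: fair game
```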

If you tell your AI you’re anxious about money and have chosen to opt that context out of ad targeting, that information shouldn’t be fed into a system that teaches the AI how to target financially stressed people with predatory loan ads.

This goes beyond traditional audience segmentation. In programmatic advertising, you might fall into demographic buckets. Here, you're giving the AI access to intimate psychological data. You should be able to draw clear boundaries around what's fair game.

The Future Is In Our Hands

This isn’t just about ads being embedded in another technology that is central to our lives. It’s about the kind of society we want to live in, and the power we choose to cede to companies that will not have our best interests at heart. Do we want AI partners that help us think clearly and make informed decisions, or AI salespeople tuned to exploit our weaknesses?

The ads are coming. The question is whether we’ll let them arrive on their terms or demand they meet ours.
