We are at a pivotal moment in the evolution of artificial intelligence. AI once promised a new path, separate from the surveillance-driven models of social media; now it is rapidly embracing that very playbook. To explore the profound implications of this shift, we’re speaking with Laurent Giraid, a technologist and expert in data science who has been closely tracking the intersection of AI, ethics, and consumer trust. Our conversation will touch on the unsettling parallels between the decline of web search and the potential future of AI, the uniquely persuasive power of conversational advertising, the tangible steps governments could take to protect the public, and whether a truly trustworthy, subscription-based AI model can win out against the lure of ad revenue.
Given the AI industry’s recent shift toward advertising models, mirroring the social media playbook, what specific risks does this introduce for consumer trust? Could you walk me through the potential consequences when a user can’t distinguish between an organic AI response and a paid placement?
It’s a deeply concerning trend, one that feels like a betrayal of AI’s early promise. Just eighteen months ago, it seemed plausible that AI would chart a different course. But now, with OpenAI, Perplexity, Microsoft, and Google all introducing ads into their AI tools, we’re seeing a deliberate monetization of consumer attention. The primary risk is the complete erosion of the user’s confidence in the tool. The very value of a large language model is its ability to provide objective, useful information. When a user has to constantly second-guess whether a recommendation for a hotel or a summary of a political issue is genuine or a paid placement, the tool ceases to be a reliable assistant and becomes a covert salesperson. We’re already seeing rampant speculation among ChatGPT users who believe they’re spotting paid placements. This skepticism is the first crack in the foundation of trust, and once trust is gone, it’s nearly impossible to win back. The consequence is a future where these incredibly powerful tools are seen not as partners in knowledge, but as agents of manipulation.
The search engine model pioneered by Google was immensely profitable but also led to a decline in search quality over time. How might this history inform the future of AI-powered search, and what concrete steps could prevent a similar degradation of the user experience?
Google’s history is a crucial and, frankly, frightening cautionary tale for AI. We’ve seen a company that earned over $1.6 trillion from advertising fundamentally reshape its product to serve that revenue stream, which consistently makes up 80% to 90% of its total revenue. What began as a revelatory tool for finding information on the web has devolved into a landscape cluttered with low-quality content, spam sites, and ads that are often indistinguishable from organic results. The product is now tuned to Google’s needs, not the user’s. The lesson is that when advertising is the only business model, the user experience will inevitably degrade, because the user is not the customer; the advertiser is. To prevent AI from following this same path, we must fundamentally change the incentive structure. This means fostering business models not reliant on advertising, such as the subscription services offered by companies like Anthropic and OpenAI for their premium tiers. It also requires a public conversation and government action to ensure that the core purpose of these tools, which is to serve the public, is not corrupted by the relentless need to sell our attention.
Research suggests conversational AI can be as persuasive as a human. How does an AI’s ability to engage in a personal dialogue fundamentally change the nature of advertising? Please elaborate on the subtle ways this could influence a user’s beliefs or purchasing decisions beyond simple recommendations.
This is what makes AI advertising so different, and so much more potent. Traditional web ads are static; they’re banners or sponsored links that you can often identify and ignore. But conversational AI advertising is a dialogue. It’s the difference between reading a textbook and having a one-on-one conversation with its author. The AI can address your specific concerns, counter your arguments, and build a rapport that feels personal. Imagine you’re asking an AI about a political candidate. It could subtly frame its response based on which party paid a fee, influencing your perception without you ever realizing it. The data backs this up. A recent meta-analysis of over 120 randomized trials found that AI models are just as good as humans at shifting people’s attitudes and behaviors. This goes beyond just buying a product; it could shape how we communicate online, as people begin writing and creating content specifically to win the attention of AIs and be featured in their responses. It’s a subtle, powerful, and continuous form of influence that we are not prepared for.
Some propose government action, such as creating a U.S. data protection agency or investing in public AI models. What are the practical challenges and benefits of these approaches, and how might they alter the competitive landscape for private AI companies currently reliant on user data?
These are exactly the kinds of bold steps we need to be demanding. The primary benefit is that it shifts the power dynamic back toward the consumer. Creating a U.S. data protection agency, which nearly every other developed nation already has, would provide real enforcement and oversight. Enshrining data rights in law, as the EU has done, would give individuals control over their own information. Investing in public AI—models built transparently, for public benefit—would provide a genuine alternative to corporate-owned systems and a benchmark for trustworthy behavior. Of course, the practical challenges are significant. It requires immense political will and public investment. For private companies, it would be a dramatic shift. They could no longer operate in a Wild West environment of data collection. It would force them to compete not on who has the most data to exploit, but on the quality, security, and trustworthiness of their service. It would create a market where privacy is a feature, not an afterthought, and would likely accelerate the viability of subscription-based models.
For AI companies pursuing subscription models instead of ads, what specific, verifiable commitments to transparency and privacy are essential for building consumer trust? Can you provide a step-by-step example of how a company could demonstrate this trustworthiness to its paying users?
For subscription models to succeed, trust must be the core feature. It’s the main differentiator when the underlying technology is becoming a commodity. First, a company must make a clear, public commitment to transparency. This isn’t a 50-page legal document; it’s a simple promise: “We will never use your private conversations to train our models without your explicit consent, and we will never sell your data to advertisers.” Second, they need to make this verifiable. They could subject their systems to regular, independent third-party audits and publish the results, showing exactly how user data is handled and protected. Third, they must build privacy into the product itself. This could look like offering an “incognito mode” where conversations aren’t stored at all, or using end-to-end encryption. Finally, they must be consistent. Trust is built over years and can be destroyed in an instant. By consistently following through on these promises, a company like Anthropic or OpenAI could build a loyal user base willing to pay for a service they know is working for them, and only them.
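To make the “incognito mode” step concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the `ChatService` class, the `ephemeral` flag, and the in-memory store are illustrative stand-ins, not any vendor’s actual API. The point is simply that the decision not to persist a conversation can be a single, auditable branch in the code path.

```python
from dataclasses import dataclass, field


@dataclass
class ChatService:
    """Hypothetical chat backend illustrating a storage opt-out ("incognito mode")."""

    store: list = field(default_factory=list)  # stand-in for a real database

    def handle_message(self, user_id: str, message: str, ephemeral: bool = False) -> str:
        reply = self._generate_reply(message)
        if not ephemeral:
            # Persist only when the user has not requested incognito mode.
            self.store.append({"user": user_id, "message": message, "reply": reply})
        # In ephemeral mode nothing is written: no training corpus, no ad profile.
        return reply

    def _generate_reply(self, message: str) -> str:
        # Placeholder for the actual model call.
        return f"(model response to: {message})"


service = ChatService()
service.handle_message("alice", "Plan a trip to Lisbon", ephemeral=True)
assert service.store == []  # an incognito conversation leaves no stored record
service.handle_message("alice", "Plan a trip to Lisbon")
assert len(service.store) == 1  # a normal conversation is persisted as usual
```

This is exactly the kind of property an independent auditor can check: if the only write path is guarded by that flag, an ephemeral conversation verifiably leaves nothing behind to train on or sell.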
What is your forecast for the AI advertising landscape over the next five years?
I believe we are headed for a schism in the AI world. On one side, we will see the rapid expansion of “free” ad-supported AI that will increasingly resemble the fate of Google Search—a tool clogged with sponsored content, subtle manipulation, and a user experience degraded in the service of advertisers. On the other side, a premium market for subscription AI will grow, where the key selling point is a verifiable commitment to privacy and an ad-free experience. The real question is whether a third option can emerge: a robust public AI, funded by governments and operated for public benefit. The direction we take depends heavily on consumer demand for privacy and on our collective will to implement regulations that steer AI development away from private exploitation. Time is quickly running out, but I remain hopeful that we can choose a path that prioritizes public benefit over corporate profit.
