Trustpilot Pivots to Provide Data for AI-Driven Shopping

Laurent Giraid stands at the intersection of machine learning and consumer behavior, bringing a wealth of knowledge on how large language models are rewriting the rules of digital commerce. As traditional search engines yield ground to conversational interfaces, Giraid examines the seismic shift in how data is consumed and monetized. This discussion explores the massive surge in AI-driven referral traffic, the emergence of seamless agentic transactions, and the importance of high-quality human datasets in training the next generation of shopping assistants. We also look at the strategic maneuvers of retail giants as they build “walled gardens” to protect advertising revenue while navigating a world where the browser is no longer the primary storefront.

Referral traffic from large language models has increased by over 1,400% recently as AI search becomes the default for many users. How are these shifting traffic patterns impacting your technical infrastructure, and what specific steps are you taking to ensure that AI agents interpret your datasets accurately?

Seeing a 1,490% explosion in click-throughs from AI search is a wake-up call for anyone relying on old-school SEO. This massive surge, fueled by the decision of search giants to make AI-first search the default, forces us to rethink how we structure our information to be machine-readable. For a platform like Trustpilot, which ranked as the fifth most cited domain in ChatGPT this January, the pressure is on to ensure these algorithms don’t just find our data, but interpret it with nuance. We are moving toward a model where our datasets are treated as high-octane fuel for large language models, ensuring that when an agent summarizes a brand’s reputation, it captures the authentic pulse of human experience. It feels like a high-stakes race to build the most reliable bridge between raw consumer sentiment and the sophisticated logic of a chatbot.
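The restructuring described here often comes down to publishing review data in schema.org markup, the vocabulary that search crawlers and many LLM agents already parse. Below is a minimal Python sketch that emits an AggregateRating block as JSON-LD; the business name and figures are hypothetical, and this is an illustration of the general technique rather than Trustpilot's actual pipeline.

```python
import json

def review_jsonld(business_name, rating_value, review_count, best=5):
    """Build a schema.org AggregateRating block so crawlers and
    AI agents can parse a brand's reputation unambiguously."""
    return {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": business_name,
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating_value,
            "bestRating": best,
            "reviewCount": review_count,
        },
    }

# Hypothetical brand and numbers, embedded in a page as <script type="application/ld+json">
markup = review_jsonld("Example Store", 4.3, 1287)
print(json.dumps(markup, indent=2))
```

Serving structured blocks like this, rather than hoping a model infers sentiment from free-form HTML, is what lets an agent summarize a brand's reputation with the nuance Giraid describes.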

Retailers are increasingly enabling “agentic storefronts” where transactions occur entirely within a chatbot interface rather than on a traditional website. What are the primary trade-offs regarding brand loyalty in these scenarios, and how can businesses mitigate the loss of direct consumer data when using third-party AI proxies?

The rise of “agentic storefronts” represents a fundamental shift in which the traditional website is bypassed entirely in favor of a conversational flow. When a user buys a product directly inside a chatbot through a Walmart or Shopify integration, the retailer loses the visual connection a custom-designed storefront provides. This creates a significant challenge for brand loyalty, as the experience feels more like a utility and less like a curated journey. To mitigate the loss of direct consumer data, businesses must make the trade-off worth it; for many, the sheer volume of transactions through these AI proxies outweighs the tactical data points lost at checkout. It is a bittersweet evolution in which we trade the deep analytics of a site visit for the immediate, friction-free conversion of a chat bubble.

While some experts predict a decline for traditional software platforms, others argue that proprietary datasets are becoming more valuable as training material for AI. How do you assess the long-term asset value of user-generated content, and what metrics are you using to track its influence on AI-driven shopping?

Despite recent tremors in the software market, proprietary datasets like user-generated reviews are actually entering a golden age of relevance. We view these reviews as a bedrock asset because, regardless of how a purchase is made, the underlying human experience with a business remains the ultimate truth that AI agents need to ingest. We are closely watching our operating margins, with a target of 30% by 2030, as a direct reflection of how successfully we can monetize this data through AI partnerships. The metric for success is no longer just page views, but the influence frequency—how often our specific data points are used by an AI to validate a purchase decision. It’s a shift from being a destination to being the authoritative source of truth that powers the entire ecosystem’s decision-making engine.

Large eCommerce platforms are beginning to block unauthorized AI agents while simultaneously developing internal assistants to retain advertising revenue. What are the practical implications of this “walled garden” approach for the broader market, and how can smaller companies ensure their information remains accessible to global AI models?

The decision by giants like Amazon to block unauthorized AI crawlers while building their own internal assistants is a classic defensive maneuver to protect their lucrative advertising ecosystems. By creating these walled gardens, they ensure that the “last mile” of consumer data and ad spend remains within their control, rather than being siphoned off by external models. For the broader market, this creates a fragmented landscape where a smaller company’s visibility depends on which “garden” they choose to partner with. To remain accessible, smaller players must embrace open protocols and strategic partnerships, similar to how Shopify is enabling merchants to sell across various chatbot interfaces. It’s a tense atmosphere of gatekeeping that requires businesses to be incredibly agile in how they syndicate their product information.
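The gatekeeping described above is typically enforced at the crawler level through robots.txt directives. A short sketch using Python's standard urllib.robotparser shows how a walled-garden policy can block a known AI crawler token while leaving ordinary traffic alone; the policy and URLs below are illustrative, not any retailer's actual file.

```python
from urllib import robotparser

# Illustrative robots.txt in the walled-garden style: block one AI crawler
# token outright, allow everyone else. GPTBot is a real crawler token, but
# this policy is a simplified example, not a real retailer's file.
rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

# The AI crawler is refused; a conventional search crawler is not.
print(rp.can_fetch("GPTBot", "https://example.com/product/123"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/product/123"))  # True
```

For smaller players, the flip side of this mechanism is just as important: making sure their own robots.txt and structured data do not accidentally shut out the models they want to be cited in.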

Strategic partnerships now allow users to purchase goods inside AI interfaces through protocols that bypass traditional checkouts. Could you provide a step-by-step breakdown of how these integrations handle financial security, and what anecdotes can you share about the challenges of maintaining a seamless user experience across different AI platforms?

Integrating financial transactions into a chatbot is a complex ballet of security and convenience, often handled through systems like Shopify’s Universal Commerce Protocol or the Microsoft-PayPal “Copilot Checkout.” First, the AI agent retrieves the product data; once the user confirms, the transaction is handed off to a secure tokenized payment gateway that keeps sensitive card data away from the model itself. One of the biggest hurdles we’ve seen is maintaining a sense of trust; users often feel a flicker of anxiety when they don’t see the familiar checkout page of a brand they know. I recall an instance where a minor lag in the handshake between a chatbot and a payment processor caused a “phantom” transaction, leaving the user in a state of confusion until a manual confirmation arrived. These growing pains are the price we pay for trying to condense a multi-step web journey into a single, effortless sentence.
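The handoff Giraid describes can be sketched as follows. This is an illustrative mock of the general tokenized-checkout pattern, not the actual Universal Commerce Protocol or Copilot Checkout API; the names (PaymentGateway, request_payment_token, agent_checkout) are invented for the example.

```python
import uuid

class PaymentGateway:
    """Stands in for a tokenized processor: payment details live only
    here, never in the conversation with the model."""

    def __init__(self):
        self._vault = {}  # token -> customer id, opaque to the agent

    def request_payment_token(self, customer_id):
        """Issue a single-use opaque token for one transaction."""
        token = f"tok_{uuid.uuid4().hex[:12]}"
        self._vault[token] = customer_id
        return token

    def charge(self, token, amount_cents):
        """Capture the payment; the token cannot be replayed."""
        if token not in self._vault:
            raise ValueError("unknown or expired token")
        self._vault.pop(token)  # single use
        return {"status": "captured", "amount": amount_cents}

def agent_checkout(gateway, customer_id, cart_total_cents, user_confirmed):
    """The chatbot side of the flow: it only ever handles the opaque token."""
    if not user_confirmed:
        return {"status": "cancelled"}
    token = gateway.request_payment_token(customer_id)
    return gateway.charge(token, cart_total_cents)
```

The key property is that the model never touches card data, only a single-use token; the “phantom transaction” failure mode in the anecdote corresponds to the handshake between request_payment_token and charge stalling mid-flow.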

Operating margins for data-heavy companies are projected to reach 30% by the end of the decade due to increased integration with large language models. How do you balance the costs of infrastructure with the revenue potential of AI partnerships, and what specific investment areas are currently your highest priority?

Reaching that 30% operating margin target by 2030 requires a disciplined balancing act between the heavy costs of AI-ready infrastructure and the massive revenue potential of data licensing. We are seeing a significant shift in investment toward custom model development and high-speed delivery systems that can serve model requests in real-time. Our highest priority right now is ensuring that our vast repository of reviews is clean, structured, and instantly accessible to partners. The cost of running these high-performance environments is substantial, but the payoff comes from becoming an indispensable “data layer” for the entire shopping industry. It feels like we are building the digital nervous system for commerce, where every byte of data has a clear path to generating profit.

What is your forecast for AI-driven shopping?

My forecast is that we are moving toward a prompt-first economy where the traditional search bar feels as prehistoric as a physical catalog. Consumers are already beginning their shopping journeys by refining iterative prompts on AI platforms rather than clicking through pages of search results, and this behavior will only accelerate. We will see a world where the AI agent doesn’t just suggest a product, but proactively manages the entire lifecycle of a purchase based on a deep understanding of our personal preferences and past reviews. The friction of the search-and-click era will vanish, replaced by a continuous conversation that feels less like shopping and more like an ongoing collaboration with a digital concierge. It’s a future where convenience is king and the most trusted data will be the ultimate currency.
