With a deep background in machine learning and natural language processing, Laurent Giraid is at the forefront of shaping how artificial intelligence integrates into our daily work. His focus lies in transforming AI from a passive tool into an active, collaborative teammate. Today, we’ll explore his insights on the critical role of shared memory in AI orchestration, the necessity of human oversight for building trust, and the complex security challenges enterprises face. We will also delve into his vision for a unified “multi-player” ecosystem where AI agents collaborate seamlessly, and the foundational standards needed to make it a reality.
When an AI agent joins a project, it can inherit the team’s entire history and context. What are the key steps to “onboard” an AI this way, and how does this shared memory approach compare to traditional, more siloed AI integrations?
It’s a fundamental shift in philosophy. Instead of treating AI as an external tool you feed information to, you treat it like a new team member. The onboarding process begins the moment you assign the agent to a project. It immediately appears as a teammate and inherits all the same sharing permissions and historical context as any human would. It can see the entire record of completed tasks, what’s still pending, and even access connected third-party resources like a Google Drive or Microsoft 365 folder. This is a world away from siloed integrations, where you have to re-explain the project’s background every time you assign a task. This shared memory model removes that repetitive friction, allowing the AI to contribute meaningfully from day one.
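To make that onboarding concrete, here is a minimal sketch of the kind of context an agent might inherit on assignment. The types and the `assignAgentToProject` call are hypothetical illustrations, not any particular platform’s API.

```typescript
// Hypothetical shape of the context an agent inherits when assigned.
interface ProjectContext {
  projectId: string;
  permissions: string[];                          // the same grants a human member gets
  completedTasks: { id: string; summary: string }[];
  pendingTasks: { id: string; summary: string }[];
  connectedResources: { kind: "google_drive" | "microsoft_365"; url: string }[];
}

// Stub for whatever platform call actually provisions the agent.
async function assignAgentToProject(agentId: string, ctx: ProjectContext): Promise<void> {
  console.log(
    `Agent ${agentId} onboarded to ${ctx.projectId} with ` +
    `${ctx.completedTasks.length} completed and ${ctx.pendingTasks.length} pending tasks.`
  );
}

// The agent is briefed once, from the shared record, not re-briefed per task.
assignAgentToProject("agent-042", {
  projectId: "website-relaunch",
  permissions: ["read:tasks", "write:comments"],
  completedTasks: [{ id: "T-1", summary: "Audit existing pages" }],
  pendingTasks: [{ id: "T-2", summary: "Draft new information architecture" }],
  connectedResources: [{ kind: "google_drive", url: "https://drive.google.com/example" }],
}).catch(console.error);
```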
Building trust in AI teammates is crucial. When an agent starts acting “in a weird way,” what does that human-in-the-loop oversight look like? Could you describe the specific tools and checkpoints admins use to redirect a model and ensure explainability in its actions?
Trust is everything, and it’s built on transparency and control. When an AI begins to behave unexpectedly, perhaps due to conflicting instructions, the system is designed for direct human intervention. The entire workflow incorporates built-in checkpoints where a person can review the AI’s work, provide feedback, and ask for adjustments. Everything the agent does is documented in a very clear, human-readable log, which creates an easy path to explainability. For more serious deviations, an approved administrator has direct access through the UI and API to pause or even edit the model’s behavior. If conflicting instructions are causing the issue, the admin with edit rights can simply delete the problematic directives and guide the agent back to its correct behavior. This isn’t some black box; it’s a transparent system designed for collaboration and course correction.
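As an illustration of what such a checkpoint could look like in code, the sketch below models a review gate over a human-readable log entry. The `AgentLogEntry` shape and the decision values are invented for the example.

```typescript
// Hypothetical audit-log entry: every agent action is recorded in
// human-readable form, which is what keeps explainability cheap.
interface AgentLogEntry {
  agentId: string;
  action: string;          // e.g. "drafted status update"
  rationale: string;       // plain-language explanation of why
  timestamp: Date;
}

type ReviewDecision = "approve" | "request_changes" | "pause_agent";

// A checkpoint blocks until a human reviews the logged work.
async function checkpoint(
  entry: AgentLogEntry,
  review: (e: AgentLogEntry) => Promise<ReviewDecision>
): Promise<ReviewDecision> {
  const decision = await review(entry);
  if (decision === "pause_agent") {
    console.log(`Admin paused ${entry.agentId}; conflicting directives can be edited or deleted.`);
  }
  return decision;
}

// Example: auto-approve a routine action; anything unusual escalates to a human.
checkpoint(
  {
    agentId: "agent-042",
    action: "drafted status update",
    rationale: "Weekly reporting cadence reached",
    timestamp: new Date(),
  },
  async () => "approve"
).then((d) => console.log(`Decision: ${d}`));
```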
Connecting different applications and agents currently involves complex authorization challenges, such as managing individual OAuth grants. What are the primary security risks this presents for an enterprise, and how could a centralized “directory of agents” help mitigate these issues for IT teams?
The current model creates significant security vulnerabilities. Expecting every single knowledge worker to be a security expert who can discern which OAuth grants are safe and which are risky is simply not scalable or realistic. An employee could inadvertently grant a powerful, unvetted agent access to sensitive company data, creating a massive compliance and security hole. A centralized directory of agents would function much like Active Directory does for employees. IT teams could create and manage an authoritative list of known, approved AI agents, clearly defining their skill sets and access levels. This would centralize control, remove the guesswork for employees, and allow IT to proactively manage app-to-app integrations without fearing that someone will configure a dangerous or harmful agent.
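A vetted entry in such a directory might look something like the sketch below. The fields are assumptions meant to mirror how an employee directory records roles and access, not a real schema.

```typescript
// Hypothetical record in a centrally managed agent directory.
interface AgentDirectoryEntry {
  agentId: string;
  publisher: string;               // who built and vetted the agent
  skills: string[];                // e.g. ["summarize_docs", "triage_tickets"]
  allowedScopes: string[];         // OAuth scopes IT has pre-approved
  status: "approved" | "suspended";
}

// IT answers the question employees shouldn't have to: is this grant safe?
function canGrant(entry: AgentDirectoryEntry, requestedScope: string): boolean {
  return entry.status === "approved" && entry.allowedScopes.includes(requestedScope);
}

const designBot: AgentDirectoryEntry = {
  agentId: "design-summarizer",
  publisher: "ExampleCo",
  skills: ["summarize_designs"],
  allowedScopes: ["files:read"],
  status: "approved",
};
console.log(canGrant(designBot, "files:write")); // false: IT never approved write access
```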
Today’s agent-to-agent interactions are often described as “single-player,” where each one connects to a tool like Asana or Slack independently. How can we move toward a “multi-player” outcome where agents collaborate, and what foundational standards or protocols are needed to make that a reality?
This is one of the most interesting challenges in AI orchestration right now. In the “single-player” model, you might have one agent connected to Figma and another to Slack, but they operate in their own lanes without awareness of each other’s work. To achieve a “multi-player” state, where agents can truly collaborate on a shared work graph, we need a common language—a standard protocol for shared knowledge and memory. Right now, any such collaboration requires a custom, bespoke integration, which is incredibly inefficient. We need a foundational layer that defines how agents discover each other, share context, and pass tasks between themselves in a secure and understandable way. Without that standard, we’ll remain stuck in a world of highly capable but ultimately isolated agents.
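To give a rough sense of what such a standard would have to pin down, here is a hypothetical handoff envelope covering identity, shared context, and authorization. None of these field names come from an existing protocol.

```typescript
// Hypothetical minimum an agent-to-agent standard must define:
// identity, shared context, and a task handoff that survives the hop.
interface AgentHandoff {
  from: string;                             // discoverable agent identity
  to: string;
  task: { id: string; description: string };
  sharedContext: Record<string, string>;    // the memory both sides can read
  authorization: string;                    // proof the handoff was permitted
}

function handOff(msg: AgentHandoff): void {
  // Without a standard, every pairing needs a bespoke version of this.
  console.log(`${msg.from} -> ${msg.to}: ${msg.task.description}`);
}

handOff({
  from: "figma-agent",
  to: "slack-agent",
  task: { id: "T-7", description: "Post the approved design brief to #launch" },
  sharedContext: { projectId: "website-relaunch" },
  authorization: "token-from-central-directory", // ties back to the agent directory idea
});
```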
The Model Context Protocol (MCP) has been suggested as a promising step toward standardizing how agents connect to systems. What specific new capabilities could widespread adoption of a protocol like this unlock, and what hurdles remain before we see a truly unified ecosystem?
Widespread adoption of a standard like the Model Context Protocol would be a game-changer. It promises to connect AI agents to external systems through a single, standardized interface, eliminating the need for a custom integration for every pairing. This could unlock incredibly exciting and complex use cases where agents from different developers collaborate on sophisticated workflows across various platforms. Imagine an agent in your project management tool seamlessly handing off a design brief to another agent inside a design application. However, significant hurdles remain. Getting universal buy-in from the entire industry is a major challenge, and while MCP is promising, there probably isn’t a single silver-bullet standard out there just yet. We’re in the early stages, and building a truly unified ecosystem will require immense collaboration and agreement across the tech landscape.
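For a feel of that single-connection model, here is a minimal server sketch using the official MCP TypeScript SDK (`@modelcontextprotocol/sdk`). The server and tool names and the handoff payload are invented for the example.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A tiny MCP server exposing one tool. Any MCP-capable client can
// discover and call it without a bespoke integration for this pairing.
const server = new McpServer({ name: "design-handoff", version: "0.1.0" });

server.tool(
  "hand_off_brief",                               // illustrative tool name
  { briefId: z.string(), targetAgent: z.string() },
  async ({ briefId, targetAgent }) => ({
    content: [{ type: "text", text: `Brief ${briefId} routed to ${targetAgent}.` }],
  })
);

// Connect over stdio; clients speak the same protocol regardless of vendor.
await server.connect(new StdioServerTransport());
```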
What is your forecast for AI orchestration?
I believe the future of AI orchestration lies in creating a seamless, collaborative fabric where AI agents and humans work together as a cohesive unit. We will move beyond the current “single-player” model and see the emergence of a standardized, “multi-player” environment where agents can communicate and collaborate across different applications and platforms. This will be built on a foundation of shared memory and context, governed by centralized security directories that give IT teams control and visibility. The ultimate goal is not just to automate tasks but to create a truly intelligent, trustworthy, and transparent system where AI teammates augment human potential in ways we are only just beginning to imagine.
