In today’s rapidly evolving tech landscape, securing agentic AI systems and non-human identities is a critical challenge for enterprises. I’m thrilled to sit down with Laurent Giraid, a renowned technologist with deep expertise in artificial intelligence, machine learning, natural language processing, and the ethical implications of AI. With a passion for ensuring that innovation doesn’t come at the cost of security, Laurent has been at the forefront of rethinking identity and access management for the AI era. In this interview, we explore how identity is becoming the cornerstone of AI operations, the risks of deploying agentic AI without proper safeguards, and practical strategies for building a secure framework for digital workforces.
Can you explain what you mean by ‘identity as the new control plane’ when it comes to AI and automation?
Absolutely. When I talk about identity as the new control plane, I’m referring to the idea that identity isn’t just about logging in anymore—it’s the central mechanism for managing and securing AI systems in an enterprise. Unlike traditional setups where identity was a gatekeeper for human users, AI and automation require a dynamic system that continuously evaluates who or what is accessing resources, why, and under what conditions. Agentic AI, which can plan and act across systems, needs this kind of robust framework to prevent misuse or breaches. It’s about making identity the core of decision-making for access and actions at scale.
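To make the "control plane" idea concrete, here is a minimal sketch in Python of per-request policy evaluation, where every call is checked against an agent's declared purpose and scoped resources. All identifiers and the registry shape are hypothetical, illustrative only rather than any particular IAM product's API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str   # verifiable identity of the requesting agent
    resource: str   # what it wants to touch
    action: str     # read / write / execute
    purpose: str    # declared reason for this request

def evaluate(request: AccessRequest, registry: dict) -> bool:
    """Evaluate every request as it arrives, not just at login time."""
    agent = registry.get(request.agent_id)
    if agent is None:
        return False                                     # unknown identity: deny
    if request.purpose not in agent["allowed_purposes"]:
        return False                                     # purpose-bound access
    if request.resource not in agent["scoped_resources"]:
        return False                                     # least privilege
    return True

# Hypothetical registry entry: one agent, one owner-approved purpose.
registry = {
    "agent-billing-01": {
        "allowed_purposes": {"invoice-reconciliation"},
        "scoped_resources": {"billing_db.readonly"},
    }
}

ok = evaluate(AccessRequest("agent-billing-01", "billing_db.readonly",
                            "read", "invoice-reconciliation"), registry)   # True
bad = evaluate(AccessRequest("agent-billing-01", "crm_db",
                             "read", "invoice-reconciliation"), registry)  # False
```

The point is that the decision runs on every request, so the system can say yes to one call and no to the next as context changes.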
How does this concept differ from the traditional identity and access management systems we’ve relied on for years?
Traditional IAM was built for humans—think static roles, passwords, and one-time approvals. These work fine when you’ve got a predictable number of employees with defined tasks. But with AI, especially agentic systems, you’re dealing with non-human identities that can outnumber humans ten to one. These systems don’t fit into fixed roles; their needs change constantly. Traditional IAM can’t keep up with that pace or scale, nor can it handle the continuous, real-time decisions needed to secure AI operations. It’s like using a paper map in the age of GPS—it just doesn’t match the terrain.
What are some of the biggest risks you’ve seen when companies deploy agentic AI without proper security measures in place?
The risks are enormous, primarily because AI agents operate at machine speed with access to sensitive systems. Without proper controls, you’re looking at data breaches, privilege creep, and even catastrophic business decisions that execute before anyone notices. A single over-permissioned agent could extract massive amounts of data or trigger a flawed process—like approving fraudulent transactions—faster than a human could intervene. The lack of visibility and traceability in many setups means you might not even know something’s wrong until the damage is done.
Could you share a hypothetical scenario of what might go wrong if an AI agent has too much access or isn’t properly monitored?
Sure, imagine a customer service AI agent that’s been given broad access to a company’s database to resolve issues quickly. If its permissions aren’t scoped tightly or monitored, a malicious actor could exploit a vulnerability—like a prompt injection—and instruct the agent to pull sensitive customer data, such as credit card details, and send it to an external server. Because the agent operates autonomously and at scale, thousands of records could be stolen in minutes. Without proper logging or alerts, the breach might go unnoticed for days or weeks, amplifying the fallout.
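As an illustration of the guardrails that would blunt this scenario, here is a toy deny-by-default egress check in Python; the allow-list and the per-task record cap are invented for the example, not a reference to any real product.

```python
# Hypothetical guardrails: deny-by-default egress plus a per-task volume cap.
ALLOWED_DESTINATIONS = {"internal-crm.example.com"}
MAX_RECORDS_PER_TASK = 50

def guarded_export(records: list, destination: str) -> list:
    """Refuse exports to unknown hosts and flag unusually large transfers."""
    if destination not in ALLOWED_DESTINATIONS:
        raise PermissionError(f"egress to {destination} is not allow-listed")
    if len(records) > MAX_RECORDS_PER_TASK:
        raise PermissionError("volume exceeds per-task cap; hold for human review")
    return records
```

Even if a prompt injection convinces the agent to attempt exfiltration, a policy layer like this, not the model itself, decides whether the bytes actually move.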
Why do you think traditional IAM, designed for human users, falls short when applied to AI agents?
Traditional IAM assumes a human user with predictable behavior—someone who logs in, does their job, and logs out. AI agents don’t work that way. They’re active 24/7, their tasks evolve, and they interact with systems in ways humans don’t. Static roles or long-lived credentials, which are common in human-centric IAM, become liabilities. An agent might need access to one dataset today and a completely different one tomorrow. Plus, the sheer volume of non-human identities means manual oversight or periodic reviews—standard in traditional IAM—just aren’t feasible. It’s a mismatch of design and reality.
How does the idea of starting with synthetic data before using real data help in securing AI deployments?
Starting with synthetic data—essentially fake or masked datasets—is a game-changer for responsible AI deployment. It lets companies test their agents in a safe environment, validating workflows, permissions, and security policies without risking exposure of real, sensitive information. This approach helps uncover flaws or over-permissions early on. Only once you’ve proven the agent operates as intended, with proper guardrails, do you move to real data. It’s like learning to drive in a simulator before hitting the highway—you minimize the chance of a crash.
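Here is a minimal sketch of what "synthetic first" can look like in practice: generating fake records with the same shape as production data, so an agent can be exercised end to end before it ever touches real customer information. The field names are hypothetical.

```python
import random
import string

def synthetic_customer(seed: int) -> dict:
    """Build a fake customer record with the same shape as production data."""
    rng = random.Random(seed)                 # deterministic, reproducible tests
    return {
        "customer_id": f"CUST-{rng.randint(100000, 999999)}",
        "name": "".join(rng.choices(string.ascii_uppercase, k=6)),
        "card_last4": f"{rng.randint(0, 9999):04d}",   # never a real card number
        "balance": round(rng.uniform(0, 5000.0), 2),
    }

# A staging dataset the agent can be tested against, end to end.
staging_dataset = [synthetic_customer(i) for i in range(1_000)]
```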
What does it mean to treat AI agents as ‘first-class citizens’ in an identity system, and how can companies make that happen?
Treating AI agents as first-class citizens means recognizing them as distinct entities in your identity ecosystem, just like human employees. In practice, this starts with giving each agent a unique, verifiable identity tied to a specific owner, purpose, and use case. No more shared accounts or generic credentials—they’re a security nightmare. Companies should use tools to assign and manage these identities, ensuring every action an agent takes is traceable. It’s about accountability and control, ensuring you know exactly who or what is doing what in your systems.
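To picture what a first-class agent identity record might contain, here is a small sketch: a unique ID, an accountable owner, and a single declared purpose. The structure is illustrative and not drawn from any specific identity platform.

```python
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str      # unique per agent, never shared between agents
    owner: str         # the accountable human or team
    purpose: str       # the single declared use case
    created_at: str    # when the identity was issued

def register_agent(owner: str, purpose: str) -> AgentIdentity:
    """Issue one identity per agent so every action traces back to an owner."""
    return AgentIdentity(
        agent_id=f"agent-{uuid.uuid4()}",
        owner=owner,
        purpose=purpose,
        created_at=datetime.now(timezone.utc).isoformat(),
    )

support_bot = register_agent(owner="cx-platform-team", purpose="ticket-triage")
```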
Can you break down the concept of just-in-time access for AI agents and why it’s a smarter choice than permanent permissions?
Just-in-time access is all about granting permissions only when they’re needed and only for as long as they’re needed. For AI agents, this means an agent gets access to a specific dataset or system for a single task, and once the task is done, that access is revoked automatically. Unlike permanent permissions, which can be exploited if credentials are compromised, just-in-time access minimizes the window of opportunity for misuse. It’s like lending someone your car key for a quick errand instead of giving them a spare to keep indefinitely—it reduces risk significantly.
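A toy version of just-in-time access: each grant carries its own expiry, and every use re-checks it, so there is no standing credential sitting around to steal. The TTL and resource names below are placeholders.

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    agent_id: str
    resource: str
    expires_at: float   # epoch seconds; the access dies with the task

def issue_grant(agent_id: str, resource: str, ttl_seconds: int = 300) -> Grant:
    """Mint a short-lived grant for one task instead of a standing permission."""
    return Grant(agent_id, resource, time.time() + ttl_seconds)

def is_valid(grant: Grant, resource: str) -> bool:
    """Every use re-checks expiry, so revocation needs no manual cleanup."""
    return grant.resource == resource and time.time() < grant.expires_at

grant = issue_grant("agent-billing-01", "billing_db.readonly", ttl_seconds=60)
assert is_valid(grant, "billing_db.readonly")   # inside the window: allowed
assert not is_valid(grant, "crm_db")            # different resource: denied
```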
How do you see the future of identity and access management evolving as agentic AI becomes more widespread in enterprises?
I believe IAM will undergo a profound transformation in the coming years as agentic AI scales. We’re moving toward fully dynamic, context-aware systems where access decisions happen in real time based on an agent’s behavior, purpose, and environment. Identity will become the backbone of AI operations, integrating continuous monitoring, purpose-bound data access, and tamper-proof logging. I expect we’ll see more automation in IAM itself—think AI managing AI identities—along with tighter integration of security at the data layer. The goal is to enable innovation at scale without scaling the risk, and I’m optimistic we’ll get there with the right focus and investment.
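One way to picture a context-aware access decision is to combine live signals (a behavioral score, a purpose match, environment trust) into a single verdict instead of consulting a static role. The signals and thresholds in this sketch are entirely hypothetical.

```python
def decide(behavior_score: float, purpose_match: bool, env_trusted: bool) -> str:
    """Combine live signals into a verdict instead of consulting a static role."""
    if not purpose_match:
        return "deny"                       # purpose-bound access is absolute
    risk = (1.0 - behavior_score) + (0.0 if env_trusted else 0.5)
    if risk < 0.3:
        return "allow"
    if risk < 0.7:
        return "allow_with_step_up"         # e.g., route to a human approver
    return "deny"

print(decide(behavior_score=0.95, purpose_match=True, env_trusted=True))   # allow
print(decide(behavior_score=0.60, purpose_match=True, env_trusted=False))  # deny
```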
