As a technologist deeply immersed in the nuances of machine learning and natural language processing, Laurent Giraid has become a leading voice in the urgent conversation surrounding AI ethics. With the implementation of the EU AI Act approaching in 2026, his insights into how organizations can navigate high-risk systems through rigorous governance are more vital than ever. He joins us today to discuss the technical and regulatory frameworks necessary to ensure that autonomous agents remain transparent, accountable, and under human control.
To manage high-risk systems, organizations must integrate agent identity, human oversight, and rapid revocation. What specific protocols ensure these measures function together seamlessly, and how can teams generate the necessary evidence for regulators while maintaining operational speed?
To keep high-risk systems under control, we must treat risk management as a continuous, evidence-based process that spans development, deployment, and production. The protocol starts by assigning a unique agent identity to every instance, which is then monitored through continuous policy checks and human oversight mechanisms. If a system deviates from its intended path, rapid revocation protocols let us pull the plug instantly to prevent cascading errors. We generate the necessary evidence for regulators by documenting these interventions in real time, so that compliance does not stall the pace of operational innovation. Under Article 9 of the EU AI Act, this process must remain under constant review to ensure that every action taken by an agent is both authorized and recorded.
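The three measures described here can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual protocol; the `Agent` and `GovernanceRegistry` names are hypothetical, but the pattern (identity, per-action policy check, one-flag revocation, an audit trail as the evidence record) is what the answer describes.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent record: a unique identity plus granted permissions."""
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    permissions: set = field(default_factory=set)
    revoked: bool = False

class GovernanceRegistry:
    """Tracks every agent instance and runs a policy check before each action."""
    def __init__(self):
        self._agents = {}
        self._audit_log = []  # evidence trail: every decision is recorded

    def register(self, agent: Agent) -> None:
        self._agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, action: str) -> bool:
        """Action proceeds only if the agent is known, not revoked,
        and actually holds the required permission."""
        agent = self._agents.get(agent_id)
        allowed = (agent is not None and not agent.revoked
                   and action in agent.permissions)
        self._audit_log.append((agent_id, action, allowed))  # logged in real time
        return allowed

    def revoke(self, agent_id: str) -> None:
        """Rapid revocation: flipping one flag halts all future actions."""
        self._agents[agent_id].revoked = True

# Usage: register an agent, authorize an action, then revoke the agent.
registry = GovernanceRegistry()
agent = Agent(permissions={"read_invoices"})
registry.register(agent)
assert registry.authorize(agent.agent_id, "read_invoices")
registry.revoke(agent.agent_id)
assert not registry.authorize(agent.agent_id, "read_invoices")
```

Because the audit log records denied attempts as well as approved ones, the same structure that enforces policy also produces the evidence a regulator would ask for.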
Implementing a Python SDK to cryptographically sign agent actions creates an immutable record similar to blockchain technology. Why is a centralized, encrypted system of record superior to standard text logs, and what metrics indicate that this verification process is actually improving security?
Standard text logs are often scattered across different software platforms and are easily manipulated, which is a major hurdle for any governance team trying to verify actions. By using a Python SDK like Asqav, we can cryptographically sign every single action and link it to an immutable hash chain, a technique most people associate with blockchain. This creates a centralized, encrypted system of record in which, if any record is changed or removed, verification of the entire chain fails immediately. We measure the success of this system by the integrity of the hash chain: if the verification holds, we have a tamper-evident audit trail that provides data far beyond the reach of simple text files. This level of technical detail allows IT leaders to see exactly where and how agentic instances are acting throughout the enterprise.
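The hash-chain mechanism can be demonstrated with nothing more than the Python standard library. This is a generic sketch of the technique, not the Asqav SDK's API: each record stores the previous record's hash and an HMAC signature, so editing any entry breaks verification of everything after it. The `SECRET_KEY` is a placeholder for illustration.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in; a real system would use a managed key

def append_record(chain: list, action: dict) -> None:
    """Link a new action record to the previous one via its hash, then sign it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    signature = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
    chain.append({"action": action, "prev": prev_hash,
                  "hash": digest, "sig": signature})

def verify_chain(chain: list) -> bool:
    """Recompute every hash and signature; any edit or deletion fails the chain."""
    prev_hash = "0" * 64
    for record in chain:
        payload = json.dumps({"action": record["action"], "prev": prev_hash},
                             sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        sig = hmac.new(SECRET_KEY, digest.encode(), hashlib.sha256).hexdigest()
        if (record["prev"] != prev_hash or record["hash"] != digest
                or record["sig"] != sig):
            return False
        prev_hash = digest
    return True

chain = []
append_record(chain, {"agent": "a-1", "op": "send_email"})
append_record(chain, {"agent": "a-1", "op": "update_crm"})
assert verify_chain(chain)
chain[0]["action"]["op"] = "delete_records"  # tamper with one record...
assert not verify_chain(chain)               # ...and the whole chain fails
```

This is exactly the metric described in the answer: the pass/fail result of `verify_chain` is the integrity check, and a passing chain is the audit trail a text log cannot provide.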
Many organizations struggle to maintain a registry of every active agent, including unique IDs and specific permissions. How does building this “agentic asset list” facilitate evidence-based risk management during the development and production stages, and what happens when these registries are neglected?
Building an agentic asset list is the foundational step that many organizations unfortunately skip, leaving them blind to the automated activities occurring within their network. This registry must include every active agent with a unique ID, a clear record of its capabilities, and the exact permissions it has been granted. When these registries are neglected, organizations lose the ability to track instances in real time, making it impossible to satisfy the “evidence-based” requirements of Article 9. This lack of oversight creates a vacuum where unauthorized actions can occur without detection until a major failure happens. By maintaining this list through every stage of deployment, we ensure that every agent is accounted for and operating within its legal and technical boundaries.
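In its simplest form, such an asset list is just structured data per agent plus a way to export a timestamped snapshot. The sketch below is illustrative, the agent IDs and fields are invented, and the `stage` values mirror the lifecycle stages the interview mentions; the point is that the export itself is the kind of artifact an evidence-based review would draw on.

```python
import json
from datetime import datetime, timezone

# Hypothetical agentic asset list: one entry per active agent instance.
asset_list = {
    "agent-7f3a": {
        "capabilities": ["summarize_documents", "draft_replies"],
        "permissions": ["read:inbox", "write:drafts"],
        "stage": "production",
    },
    "agent-91bc": {
        "capabilities": ["query_database"],
        "permissions": ["read:analytics_db"],
        "stage": "development",
    },
}

def evidence_report(assets: dict) -> str:
    """Export the registry as a timestamped JSON snapshot for auditors."""
    return json.dumps(
        {"generated_at": datetime.now(timezone.utc).isoformat(),
         "agents": assets},
        indent=2, sort_keys=True,
    )

report = evidence_report(asset_list)
assert "agent-7f3a" in report and "generated_at" in report
```

A neglected registry fails in exactly the way the answer warns: with no entry for an agent, there is nothing to snapshot, and its actions are invisible until something breaks.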
Regulatory standards require that high-risk AI systems remain interpretable rather than functioning as opaque code blobs. What specific documentation should vendors provide to ensure safe use, and how do these technical transparency requirements influence your choice of which AI models to deploy?
Regulatory standards, specifically Article 13 of the EU AI Act, demand that high-risk systems are designed so that those deploying them can fully understand the system’s output. Vendors are now required to provide enough documentation to ensure safe and lawful use, moving away from the “opaque code blob” model that characterized earlier AI developments. This requirement means the choice of which model to use is now as much a regulatory consideration as it is a technical one. We prioritize models from third parties that offer high levels of interpretability and clear documentation. If a model’s decision-making process is hidden or its methods of deployment are unclear, it simply cannot be used in a high-risk environment under the new legal framework.
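One practical way a deploying team can enforce this is a documentation gate before any model is approved. The required fields below are an illustrative checklist, not language quoted from the Act; the idea is simply that a model whose vendor documentation has gaps never reaches a high-risk deployment.

```python
# Illustrative checklist of what a deployer might require from vendor
# documentation before approving a model for high-risk use.
REQUIRED_DOCS = {
    "intended_purpose",
    "accuracy_metrics",
    "known_limitations",
    "human_oversight_measures",
    "interpretability_notes",
}

def missing_documentation(vendor_docs: dict) -> set:
    """Return the required fields a vendor's documentation fails to cover."""
    return REQUIRED_DOCS - set(vendor_docs)

candidate = {
    "intended_purpose": "invoice triage",
    "accuracy_metrics": {"f1": 0.94},
    "known_limitations": "degrades on handwritten scans",
}
gaps = missing_documentation(candidate)
assert gaps == {"human_oversight_measures", "interpretability_notes"}
```

Treating the gap set as a hard blocker turns the regulatory transparency requirement into a concrete procurement criterion, which is how documentation ends up shaping model choice.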
What is your forecast for agentic AI governance?
I forecast that agentic AI governance will shift from being a reactive compliance hurdle to a core competitive advantage for technology firms. By 2026, the organizations that have mastered immutable records and transparent asset registries will be the only ones trusted to deploy high-risk systems at scale. We will likely see a standardized global framework emerge, heavily influenced by the EU’s rigorous requirements, where “governance-by-design” becomes the only way to build. Ultimately, the future of AI belongs to those who can prove their agents are as accountable as the humans who created them. This evolution will force a level of transparency in software development that we have never seen before, making systems safer for everyone involved.
