Why Your AI Strategy Is Stuck and How to Fix It

Laurent Giraid is a renowned technologist whose expertise in artificial intelligence has shaped the way enterprises navigate the complex landscape of machine learning, natural language processing, and AI ethics. With a deep understanding of both cutting-edge innovation and the practical challenges of deployment, Laurent offers invaluable insights into bridging the gap between rapid AI advancements and the slower, more cautious world of enterprise adoption. In this interview, we explore the hurdles large organizations face in implementing AI strategies, the critical role of governance in managing risks, and the innovative approaches leading companies are taking to stay ahead. From the impact of new regulations to actionable frameworks for operational success, Laurent sheds light on how enterprises can turn AI potential into real-world results.

What do you see as the biggest obstacles preventing AI projects from moving forward in large organizations, even when the technical work is complete?

The biggest obstacles often aren’t technical but organizational and procedural. In many large companies, you’ve got a model that’s been trained, tested, and is ready to deliver value—like predicting customer churn with high accuracy—but it gets stuck in a bureaucratic limbo. Risk reviews, compliance checks, and sign-offs from committees unfamiliar with AI’s nuances can drag on for months. I’ve seen cases where a model sits idle because no one knows who’s ultimately accountable for approving it. This isn’t just a delay; it’s a missed opportunity for productivity and competitive edge. The disconnect between the speed of AI development and the slowness of enterprise decision-making creates a frustrating bottleneck.

Can you elaborate on the concept of a “velocity gap” in AI, and share an example of how it manifests in real-world enterprise settings?

The “velocity gap” refers to the stark contrast between the lightning-fast pace of AI research and innovation and the much slower speed at which enterprises can adopt these advancements. In the research community, new models, tools, and frameworks are released almost weekly—think of the constant evolution of open-source libraries or large language models. But in enterprises, deploying these innovations often requires navigating layers of risk assessments and approvals. I recall a financial services company I worked with that had developed a promising fraud detection model. By the time it cleared internal reviews, newer, more efficient models had already emerged, rendering their work outdated before it even launched. This gap costs companies not just time but also relevance in a fast-moving field.

Why do enterprises often find it so hard to keep pace with the rapid advancements in AI technology?

Enterprises are built for stability, not speed. Their structures prioritize risk mitigation over agility, which is understandable when dealing with regulated industries or sensitive data. But AI innovation doesn’t wait—it’s driven by a global community of researchers and startups iterating at breakneck speed. Most companies lack the internal processes to quickly evaluate and integrate these advancements. For instance, updating a production system with a new model might require retraining staff, revising policies, and ensuring compliance with laws that weren’t even written with AI in mind. This mismatch in tempo means many enterprises are playing catch-up, often missing out on the benefits of cutting-edge tools because their systems can’t adapt fast enough.

What challenges arise when companies try to establish AI governance roles after they’ve already started deploying AI solutions?

Retroactively formalizing governance is like building the foundation of a house after you’ve already moved in—it’s messy and risky. When AI is deployed without clear oversight, you often end up with inconsistent practices across teams, fragmented data usage, and no single point of accountability. I’ve seen organizations struggle to map out who owns responsibility for issues like model bias or data privacy because those roles weren’t defined upfront. This leads to duplicated efforts, wasted resources, and potential compliance violations. Establishing governance after the fact requires not just creating new policies but also untangling existing deployments, which can slow down operations even further and erode trust internally.

How do you think emerging regulations like the EU AI Act will affect companies that lack robust governance structures?

Regulations like the EU AI Act are a wake-up call, and companies without solid governance are in for a rough ride. The Act imposes strict timelines and requirements, such as transparency for general-purpose AI systems by mid-2025 and rigorous rules for high-risk applications shortly after. If a company hasn’t already mapped out its AI inventory or established risk-tiering processes, it risks hefty fines or operational shutdowns in Europe. Imagine a healthcare firm using AI for diagnostics without proper documentation or risk assessments—it could be forced to halt operations until it complies, losing revenue and patient trust. Without governance, these regulations become a hammer rather than a guide.

In your experience, what tends to be the most significant hurdle in getting AI models approved for production deployment?

The biggest hurdle is often the mismatch between traditional risk review processes and the unique nature of AI systems. Many enterprises use frameworks designed for static software, which don’t account for the dynamic, probabilistic behavior of models. For example, ensuring fairness or explainability in a model isn’t as straightforward as running a unit test on a microservice. I’ve seen delays pile up during risk reviews because reviewers demand documentation or guarantees that simply don’t exist in the same way for AI—like proving a model won’t drift over time without real-world data. This friction often stems from a lack of understanding or tools tailored to AI-specific risks, turning approval into a drawn-out battle.

Can you explain the idea of “audit debt” in AI and how it impacts the speed of deployment compared to traditional software?

Audit debt in AI refers to the backlog of unresolved compliance and validation tasks that accumulate when policies aren’t designed for machine learning systems. Unlike traditional software, where you can often ship with clear test results and version control, AI models involve ongoing uncertainties like data drift or bias that require continuous monitoring and documentation. This creates a heavier burden during audits because you’re not just proving what the model does now, but also anticipating how it might behave later. I’ve seen deployment timelines double or triple compared to software projects because teams have to retroactively gather evidence on data lineage or model fairness—work that should’ve been embedded from the start. It’s a hidden cost that grinds progress to a halt.
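To make the “continuous monitoring” piece concrete, here is a minimal, illustrative sketch of one such artifact: a population stability index (PSI) check that compares live feature values against the training-time distribution. The bin count, threshold rule of thumb, and data below are assumptions for illustration, not part of any standard.

```python
# A minimal drift-monitoring sketch: PSI between a reference (training) sample
# and a current (production) sample. Thresholds and data are illustrative only.
import numpy as np


def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current sample."""
    # Bin edges come from the reference distribution's quantiles.
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    # Clip both samples into the reference range so outliers land in the edge bins.
    ref_pct = np.histogram(np.clip(reference, edges[0], edges[-1]), edges)[0] / len(reference)
    cur_pct = np.histogram(np.clip(current, edges[0], edges[-1]), edges)[0] / len(current)
    # Floor the proportions to avoid log(0) and division by zero.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))


rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 10_000)  # feature distribution at training time
live_scores = rng.normal(0.3, 1.2, 10_000)      # shifted production distribution

print(f"PSI = {psi(training_scores, live_scores):.3f}")  # > 0.2 is often read as material drift
```

Recording a check like this on a schedule, and keeping its outputs alongside the model version, is exactly the kind of evidence that otherwise has to be reconstructed after the fact.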

What are your thoughts on “shadow AI sprawl,” and how does it pose risks to organizations?

Shadow AI sprawl happens when teams or departments adopt AI tools—often embedded in SaaS platforms—without central oversight. It seems efficient at first because it bypasses slow internal processes, but it’s a ticking time bomb. Without coordination, you end up with fragmented data practices, unclear ownership, and no way to track where sensitive information is being processed. I’ve seen a marketing team use an AI tool for content generation, only to later discover during an audit that customer data was being stored in unapproved third-party systems. This sprawl risks data breaches, regulatory violations, and massive cleanup costs. It’s a false shortcut that undermines long-term security and trust.

How familiar are you with the NIST AI Risk Management Framework, and do you believe it’s a practical tool for most companies?

I’m quite familiar with the NIST AI Risk Management Framework, and I think it’s an excellent starting point for guiding AI governance. It provides a structured approach—govern, map, measure, manage—that helps companies think systematically about risks. Its adaptability and alignment with international standards make it broadly applicable. However, it’s not a plug-and-play solution. Many companies, especially smaller ones or those new to AI, struggle with translating its high-level principles into day-to-day operations. It requires investment in tooling and expertise to turn the framework into actionable controls. So, while it’s practical in theory, its success depends on a company’s readiness to build the necessary infrastructure around it.

What are the critical steps a company should take to transform a framework like NIST into repeatable, operational processes for AI governance?

Turning a framework like NIST into something operational starts with breaking it down into tangible components. First, companies need to create an inventory of their AI assets—models, datasets, and use cases—and map them to risk categories outlined in the framework. Next, they should develop specific control catalogs, like automated checks for data lineage or bias detection, that align with NIST’s principles. Then, integrate these controls into existing development pipelines so governance isn’t an afterthought but part of the workflow. Finally, assign clear ownership for each step and invest in training so teams understand their roles. It’s about building a system where compliance is routine, not a hurdle—think of it as embedding guardrails directly into the road rather than adding them later.
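As a rough illustration of that first step, the sketch below models a minimal AI-asset inventory with risk tiers and a tier-specific control catalog, echoing the govern/map/measure/manage framing Laurent describes. All names, tiers, and controls here are hypothetical.

```python
# Illustrative sketch only: a minimal AI-asset inventory with risk tiers and
# control checks, loosely inspired by the NIST AI RMF functions
# (govern, map, measure, manage). Names and controls are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIAsset:
    name: str
    owner: str                     # accountable person or team ("govern")
    use_case: str                  # business context ("map")
    risk_tier: RiskTier            # outcome of risk categorization
    controls_passed: dict[str, bool] = field(default_factory=dict)  # evidence ("measure")


# Hypothetical control catalog: which checks each tier must pass ("manage").
CONTROL_CATALOG = {
    RiskTier.LOW: ["dataset_lineage_recorded"],
    RiskTier.MEDIUM: ["dataset_lineage_recorded", "bias_scan_completed"],
    RiskTier.HIGH: ["dataset_lineage_recorded", "bias_scan_completed",
                    "human_oversight_defined", "explainability_report"],
}


def missing_controls(asset: AIAsset) -> list[str]:
    """Return the controls an asset still needs before it can ship."""
    required = CONTROL_CATALOG[asset.risk_tier]
    return [c for c in required if not asset.controls_passed.get(c, False)]


churn_model = AIAsset(
    name="customer-churn-predictor",
    owner="analytics-team",
    use_case="retention campaigns",
    risk_tier=RiskTier.MEDIUM,
    controls_passed={"dataset_lineage_recorded": True},
)

print(missing_controls(churn_model))  # ['bias_scan_completed']
```

Once an inventory like this exists, the remaining steps—wiring the checks into pipelines and assigning owners—have something concrete to attach to.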

Given that the EU AI Act sets deadlines but lacks specific tools, what do you think will be the toughest area for companies to address in meeting these requirements?

I believe the toughest area will be ensuring transparency and accountability for high-risk AI systems. The Act demands detailed documentation on how models work, their potential risks, and mitigation measures, but many companies don’t have the infrastructure to track this at scale. For instance, explaining the decision-making process of a complex neural network in a way that satisfies regulators is no small feat, especially if data sources or training processes aren’t well-documented. Without pre-existing systems for model explainability or audit trails, companies will struggle to meet these mandates on time. It’s an area where technical capability and regulatory clarity haven’t fully aligned yet, creating a steep learning curve.

Looking at strategies from leading enterprises, which approach to closing the velocity gap do you find most effective, and why?

Among the strategies I’ve seen, codifying governance as code stands out as particularly effective. This approach involves creating a set of automated checks and balances—think mandatory dataset lineage or risk-tier selection—that are embedded directly into the deployment pipeline. It’s powerful because it shifts governance from a manual, ad-hoc process to a standardized, repeatable one. I’ve witnessed a tech firm reduce deployment times by nearly 40% after implementing such a system, simply because projects couldn’t move forward without meeting predefined criteria. It enforces discipline without relying on endless meetings or subjective approvals, making it a scalable way to balance speed and compliance.
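A hedged sketch of what “governance as code” can look like in a deployment pipeline follows: a gate script that refuses to proceed unless required governance metadata is present. The manifest format, field names, and tiers are illustrative assumptions, not any particular vendor’s schema.

```python
# A minimal "governance as code" sketch: a CI gate that blocks deployment
# unless required governance metadata is present. All field names, tiers,
# and the manifest format are hypothetical.
import sys

REQUIRED_FIELDS = ["model_name", "risk_tier", "dataset_lineage", "approver"]
ALLOWED_TIERS = {"low", "medium", "high"}


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of governance violations; an empty list means the gate passes."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if not manifest.get(f)]
    if manifest.get("risk_tier") not in ALLOWED_TIERS:
        errors.append(f"unknown risk tier: {manifest.get('risk_tier')!r}")
    if manifest.get("risk_tier") == "high" and not manifest.get("human_oversight"):
        errors.append("high-risk deployments must document human oversight")
    return errors


if __name__ == "__main__":
    # In a real pipeline this manifest would be loaded from the repo (e.g. a YAML file).
    manifest = {
        "model_name": "fraud-detector-v2",
        "risk_tier": "high",
        "dataset_lineage": "s3://example-bucket/lineage/fraud-v2.json",
        "approver": "model-risk-committee",
    }
    problems = validate_manifest(manifest)
    for p in problems:
        print(f"GOVERNANCE GATE: {p}")
    sys.exit(1 if problems else 0)  # a non-zero exit fails the CI job and blocks the release
```

The point is not the specific checks but the mechanism: the pipeline itself, rather than a meeting, decides whether a deployment has met the predefined criteria.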

How does pre-approving certain AI patterns or architectures help accelerate deployment, and can you provide a simple example?

Pre-approving AI patterns or architectures speeds up deployment by removing the need for bespoke reviews every time a new project comes up. Instead of debating the safety or compliance of each model from scratch, teams can align their work to a pre-vetted template that’s already been greenlit by risk and legal teams. A simple example might be a pre-approved pattern for a retrieval-augmented generation (RAG) setup using a specific vector store for data retrieval, with defined limits on data retention and built-in human oversight. If a team builds within this framework, they can skip months of back-and-forth and go straight to implementation. It’s like having a pre-built blueprint—you just fill in the details rather than designing the whole structure.
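To illustrate, here is a minimal sketch of how a pre-approved pattern might be encoded and checked. The pattern fields, the vector-store name, and the retention limit are placeholders, not a real policy.

```python
# A hedged sketch of a "pre-approved pattern": a template describing the
# guardrails a RAG deployment must stay within to skip bespoke review.
# Names, limits, and fields are illustrative assumptions.
PREAPPROVED_RAG_PATTERN = {
    "pattern_id": "rag-internal-docs-v1",
    "allowed_vector_stores": {"approved-vector-store"},   # placeholder vendor name
    "max_retention_days": 30,                             # data retention cap
    "requires_human_review": True,                        # human-in-the-loop required
    "allowed_data_classes": {"public", "internal"},       # no regulated data
}


def conforms_to_pattern(project: dict, pattern: dict) -> bool:
    """Check whether a project stays inside the pre-approved envelope."""
    oversight_ok = project["human_review"] or not pattern["requires_human_review"]
    return (
        project["vector_store"] in pattern["allowed_vector_stores"]
        and project["retention_days"] <= pattern["max_retention_days"]
        and oversight_ok
        and project["data_class"] in pattern["allowed_data_classes"]
    )


support_bot = {
    "vector_store": "approved-vector-store",
    "retention_days": 14,
    "human_review": True,
    "data_class": "internal",
}

print(conforms_to_pattern(support_bot, PREAPPROVED_RAG_PATTERN))  # True -> fast-track path
```

Anything that falls outside the envelope drops back into the standard review queue; anything inside it inherits the pattern’s prior approval.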

Why do you think staging governance by risk level is crucial for efficient AI deployment in enterprises?

Staging governance by risk level is crucial because not all AI use cases carry the same stakes. A marketing tool generating ad copy doesn’t need the same scrutiny as a system deciding loan approvals or medical diagnoses. Applying a one-size-fits-all review process wastes time and resources on low-risk applications while potentially under-resourcing high-risk ones. I’ve seen companies streamline operations by tailoring review depth to criticality—low-risk projects might just need a quick checklist, while high-risk ones undergo rigorous testing and documentation. This risk-proportionate approach ensures you’re protecting what matters most without bogging down every initiative in unnecessary red tape. It’s about focusing effort where the impact is greatest.
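A toy example of that risk-proportionate routing is sketched below: a few screening questions assign a tier, and each tier maps to a review path of matching depth. The questions, thresholds, and review paths are assumptions for illustration only.

```python
# Illustrative only: a toy risk-tiering heuristic that routes each use case to a
# proportionate review path. The questions and thresholds are assumptions,
# not a regulatory standard.
def assign_tier(affects_individuals: bool, regulated_domain: bool,
                fully_automated: bool) -> str:
    """Return a risk tier based on a few screening questions."""
    score = sum([affects_individuals, regulated_domain, fully_automated])
    if score >= 2:
        return "high"
    return "medium" if score == 1 else "low"


REVIEW_PATH = {
    "low": "self-service checklist",
    "medium": "standard review",
    "high": "full model risk review + documentation pack",
}

# An ad-copy generator and a loan-approval model land on very different paths.
print(REVIEW_PATH[assign_tier(False, False, False)])  # self-service checklist
print(REVIEW_PATH[assign_tier(True, True, True)])     # full model risk review + documentation pack
```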

What is your forecast for the future of AI governance in enterprises over the next few years?

I foresee AI governance becoming a core competitive differentiator for enterprises in the next few years. As regulations tighten globally and public scrutiny of AI’s societal impact grows, companies that treat governance as a strategic asset—rather than a checkbox—will pull ahead. I expect we’ll see more automation in governance processes, with tools for real-time monitoring of model performance and compliance becoming standard. At the same time, I anticipate a push toward industry-specific standards, as generic frameworks like NIST evolve to address unique sector challenges. The enterprises that invest now in building robust, scalable governance systems will not only avoid regulatory pitfalls but also gain the agility to innovate faster than their peers. It’s going to be the foundation for sustainable AI success.
