I’m thrilled to sit down with Laurent Giraid, a renowned technologist whose deep expertise in artificial intelligence has been instrumental in shaping innovative solutions for enterprises. With a focus on machine learning, natural language processing, and the ethical dimensions of AI, Laurent brings a wealth of insight into the rapidly evolving field of physical AI and robotics. Today, we’ll explore how these technologies are transforming industries, the cutting-edge tools driving this change, and the importance of responsible innovation in creating sustainable value for businesses and communities.
How does the new EY.ai Lab in Alpharetta, Georgia, stand out in advancing physical AI, and what kind of projects are being developed there?
Thanks for asking about the Lab—it’s really an exciting space. The EY.ai Lab in Alpharetta is unique because it’s entirely dedicated to physical AI, equipped with state-of-the-art robotics systems, sensors, and simulation tools that allow companies to test concepts in a virtual testbed before real-world deployment. What sets it apart is the ability to design and refine solutions for complex systems like humanoids or quadruped robots in a controlled, risk-free environment. For instance, we’re currently working on a project for a manufacturing client where we’re developing a robotic system to optimize assembly line logistics. Behind the scenes, we spend weeks simulating every possible scenario—think equipment malfunctions or unexpected human interactions—using digital twins to ensure the system adapts seamlessly. I remember walking through the Lab one evening, seeing the 3D models flickering on screens, and feeling this rush of anticipation knowing that what we’re building could cut downtime by double-digit percentages for our client. It’s that blend of cutting-edge tech and practical impact that makes the Lab a game-changer.
Can you explain how tools like NVIDIA Omniverse libraries and Isaac frameworks are helping companies bridge the gap between planning and real-world implementation of physical AI systems?
Absolutely, these tools are transformative in making physical AI practical for enterprises. NVIDIA Omniverse libraries allow us to create incredibly detailed digital twins—virtual replicas of physical environments or systems—where we can model and test everything before a single robot moves in the real world. The Isaac tools provide open models and simulation frameworks that let us design AI-driven robots in 3D settings, tweaking algorithms to handle specific tasks like navigating a warehouse or inspecting infrastructure. Take a logistics company we worked with: we started by building a digital twin of their distribution center, ran simulations to optimize drone delivery routes, and used Isaac to train the AI on handling dynamic obstacles. By the time we deployed, we’d already shaved off 20% of their operational delays in simulations, and the real-world rollout mirrored those results almost exactly. It’s like rehearsing a play until every line is perfect—except here, the stage is a factory floor, and the stakes are efficiency and safety. Seeing those drones buzz to life after months of virtual testing was honestly a proud moment for the team.
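The simulate-then-deploy idea Laurent describes can be illustrated with a toy version: a grid-world stand-in for a digital twin, where candidate drone routes are checked against many randomized obstacle layouts before anything flies. This is a minimal sketch of the concept only—the grid, obstacle counts, and scoring are illustrative assumptions, not EY’s platform or the actual Omniverse/Isaac APIs.

```python
import random
from collections import deque

def shortest_route(grid, start, goal):
    """BFS over a grid 'digital twin'; cells marked 1 are obstacles."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # no viable route in this scenario

def simulate(n_scenarios=100, size=10, n_obstacles=15, seed=42):
    """Score route feasibility across many randomized obstacle layouts,
    the way a simulation campaign stress-tests a plan before rollout."""
    rng = random.Random(seed)
    feasible = 0
    for _ in range(n_scenarios):
        grid = [[0] * size for _ in range(size)]
        for _ in range(n_obstacles):
            grid[rng.randrange(size)][rng.randrange(size)] = 1
        grid[0][0] = grid[size - 1][size - 1] = 0  # keep endpoints clear
        if shortest_route(grid, (0, 0), (size - 1, size - 1)) is not None:
            feasible += 1
    return feasible / n_scenarios

print(f"feasible routes: {simulate():.0%}")
```

In a real engagement the "grid" would be a physics-accurate scene and the planner a trained policy, but the loop is the same: vary the scenario, score the plan, and only deploy once the failure modes have been exercised virtually.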
How is physical AI reshaping day-to-day operations in industries like industrials or energy, and can you share a specific example of its impact?
Physical AI is revolutionizing how these industries function by driving automation to new levels, and the impact is visible in everyday workflows. In the energy sector, for example, we’re seeing AI-powered drones and robots take over routine inspections of infrastructure like wind turbines or power lines, tasks that used to require significant human effort and risk. I recall a project with an energy client where we deployed drones equipped with AI to monitor remote transmission lines. Previously, their teams would spend days trekking through rough terrain, battling weather, and facing safety hazards. With the AI system, inspections that took a week now wrap up in under 48 hours, with data accuracy that’s blown past their expectations—error rates dropped by nearly 30%. Walking through their control room and hearing the relief in the operators’ voices as they watched live feeds from the drones, it hit me how much this tech isn’t just about numbers; it’s about giving people safer, smarter ways to work.
With leadership like Dr. Youngjun Choi guiding EY’s physical AI initiatives, how does his background influence the direction of this work?
Dr. Choi’s background is a huge asset to our efforts, bringing a depth of experience that shapes our approach in meaningful ways. Having led the UPS Robotics AI Lab, he mastered the art of integrating digital twins and AI into massive operational networks, and his earlier work in aerospace engineering at Georgia Tech honed his expertise in autonomous systems like aerial robotics. That combination means he’s laser-focused on scalability and precision, pushing us to think about how physical AI can solve real logistical pain points. One story that stands out is his past work at UPS, where he spearheaded a robotics project to streamline package sorting with AI-driven systems—something that directly informs our current strategies for logistics clients at EY. I remember him sharing how those late-night debugging sessions taught him the value of resilience, a mindset he brings to our Lab every day. His vision ensures we’re not just chasing tech for tech’s sake, but building solutions that endure under real-world pressures.
How does EY ensure responsible physical AI, particularly with governance and safety controls in sensitive sectors like health?
Responsibility is at the core of everything we do with physical AI, especially in areas like health where the stakes are incredibly high. We’ve built a framework of governance and controls that prioritize safety, ethics, and compliance from the design phase through to deployment. For example, in a recent health sector project involving AI-driven robotic assistants for hospital logistics, we started by mapping out every interaction point—how the robot navigates crowded hallways, how it handles patient data, and even how it responds to emergencies. We embed strict protocols, like real-time monitoring and fail-safes, and work closely with regulatory experts to ensure every step aligns with industry standards. I’ll never forget the tension in the room during our first live test in a hospital setting—watching that robot glide past nurses with perfect precision, knowing we’d accounted for every ‘what if,’ was a relief beyond words. It’s a meticulous process, but it builds trust, and that’s something we can’t compromise on when lives are involved.
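The fail-safe protocols mentioned above can be sketched as a command gate: every motion command must pass a set of hard safety checks before it reaches the robot, and the default on any violation is a full stop. The thresholds and field names here are hypothetical illustrations, not an actual EY governance specification.

```python
from dataclasses import dataclass

@dataclass
class SafetyEnvelope:
    """Hard limits a command must satisfy before execution.
    Thresholds are illustrative assumptions only."""
    max_speed_mps: float = 0.8     # cap speed in shared corridors
    min_clearance_m: float = 0.5   # stop if anyone gets this close

def gate_command(speed_mps, nearest_obstacle_m, envelope=SafetyEnvelope()):
    """Return the command to execute: the requested speed if every check
    passes, otherwise 0.0 — a full stop is the fail-safe default."""
    if speed_mps > envelope.max_speed_mps:
        return 0.0
    if nearest_obstacle_m < envelope.min_clearance_m:
        return 0.0
    return speed_mps

print(gate_command(0.5, 2.0))   # clear hallway: command passes
print(gate_command(0.5, 0.3))   # someone steps close: fail-safe stop
```

The design choice worth noting is that the gate fails closed: an out-of-envelope command produces a stop rather than a clipped value, which keeps the safety logic auditable for regulators.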
Can you share more about EY and NVIDIA’s goals for physical AI in areas like smart cities or energy, and what challenges or early wins have you encountered?
We’re incredibly excited about expanding physical AI into smart cities and energy, with a big focus on sustainability and reducing waste. One specific goal in the energy sector is to optimize grid management through AI-driven systems that predict and address inefficiencies—think reducing power outages or minimizing energy loss. We’re in the early stages of a project where drones and edge devices monitor grid infrastructure in real time, and while the potential is massive, the challenge lies in integrating vast amounts of data across disparate systems. I remember a brainstorming session where we hit wall after wall trying to align sensor data formats—it was frustrating, but when we got the first predictive alert during a test run, flagging a potential fault before it happened, the room erupted in cheers. Early outcomes show we could cut maintenance costs by a notable margin, though we’re still refining the model. It’s a reminder of how this work could reshape how we power our world, making it more resilient, and I can’t wait to see where it leads.
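The predictive alert Laurent describes—flagging a potential fault before it happens—can be approximated in miniature with a rolling-baseline anomaly check on a sensor stream. Real grid monitoring fuses many heterogeneous feeds; this single-series z-score sketch, with made-up numbers, just shows the shape of the idea.

```python
from statistics import mean, stdev

def predictive_alerts(readings, window=20, threshold=3.0):
    """Flag indices where a reading deviates sharply from the rolling
    baseline of the previous `window` samples — a crude stand-in for
    the fault prediction described above."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# Steady load with a sudden excursion at index 50 — the kind of early
# warning that would be surfaced before a fault develops.
series = [100.0 + 0.1 * (i % 5) for i in range(60)]
series[50] = 115.0
print(predictive_alerts(series))  # prints [50]
```

The data-alignment pain mentioned above lives one layer below this: before any such check can run, every sensor format has to be normalized into one consistent series.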
What does a strong data foundation mean for companies using EY’s physical AI platform, especially with synthetic data, and can you illustrate its importance with a real-world example?
A strong data foundation is the bedrock of effective physical AI—it’s about having high-quality, representative data to train systems that perform reliably in the real world. With synthetic data, we can simulate countless physical scenarios that might be too rare or risky to replicate physically, ensuring our AI models are robust before deployment. For companies using our platform, this means their robots or drones aren’t just reacting to a narrow set of conditions but are prepared for the unexpected. I recall working with a manufacturing client where synthetic data made all the difference—we generated thousands of virtual equipment failure scenarios to train their AI maintenance bots. Without that, we’d have been flying blind, and during implementation, we hit a snag with data integration that delayed us by a couple of weeks. But once resolved, the bots caught 15% more potential issues in the first month alone compared to traditional methods. Watching the client’s team light up as they saw those results on the dashboard—it drove home how data isn’t just numbers; it’s the pulse of trust and performance in AI.
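The synthetic-data workflow described here—generating failure scenarios too rare or risky to capture physically, then training on them—can be sketched with a toy generator and a nearest-centroid classifier. The sensor features, distributions, and failure rates are all invented for illustration; a production pipeline would use physics-based simulation and a real model.

```python
import random

def synthetic_snapshot(failing, rng):
    """One synthetic (vibration, temperature) reading; simulated failures
    run hotter and shakier. Distribution parameters are assumptions."""
    if failing:
        return (rng.gauss(3.0, 0.6), rng.gauss(85.0, 6.0))
    return (rng.gauss(1.0, 0.3), rng.gauss(60.0, 4.0))

def train(n=2000, seed=7):
    """Fit a nearest-centroid 'model' on synthetic healthy/failing data."""
    rng = random.Random(seed)
    sums = {False: [0.0, 0.0, 0], True: [0.0, 0.0, 0]}
    for _ in range(n):
        label = rng.random() < 0.5
        v, t = synthetic_snapshot(label, rng)
        sums[label][0] += v
        sums[label][1] += t
        sums[label][2] += 1
    return {k: (s[0] / s[2], s[1] / s[2]) for k, s in sums.items()}

def predict(centroids, reading):
    """Classify a reading by nearest centroid (squared distance;
    features left unscaled for simplicity)."""
    return min(centroids, key=lambda k: (reading[0] - centroids[k][0]) ** 2
               + (reading[1] - centroids[k][1]) ** 2)

model = train()
print(predict(model, (2.8, 82.0)))  # a hot, vibrating machine
```

The point of the sketch is the workflow, not the model: because failures are generated rather than waited for, the training set can cover thousands of scenarios the plant has never actually experienced.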
What is your forecast for the future of physical AI in enterprise settings over the next decade?
I’m incredibly optimistic about where physical AI is headed in the enterprise space over the next ten years. I believe we’ll see a dramatic shift where robots and AI systems become as commonplace in industries like manufacturing, energy, and health as computers are today—seamlessly integrated into workflows, driving efficiency, and tackling challenges like labor shortages or sustainability. The technology will evolve to be more autonomous, with systems that don’t just follow instructions but anticipate needs, like robots in smart cities rerouting traffic in real time based on predictive data. But the real game-changer will be accessibility—tools like digital twins and simulation platforms will become more user-friendly, empowering even smaller enterprises to adopt these solutions. I’ve seen firsthand how a single drone deployment can transform a small energy firm’s operations, and I can’t help but feel a surge of excitement imagining that impact multiplied across thousands of businesses. What keeps me up at night, though, is ensuring we balance this growth with ethical guardrails—because the future isn’t just about what we can do, but how we do it responsibly.
