As we dive into the rapidly evolving world of artificial intelligence, I’m thrilled to sit down with Laurent Giraud, a renowned technologist whose deep expertise in machine learning, natural language processing, and AI ethics has positioned him as a thought leader in the field. Today, we’ll explore…
Deep image models have dazzled with accuracy, yet the most consequential story sat just out of view: not single neurons lighting up for neat human concepts, but webs of interconnected units assembling meaning layer by layer into circuits that actually drive what the model predicts and why.
When a single prompt can trigger chains of reasoning, tool calls, and multi-modal outputs that ripple through customer experiences and compliance obligations, the hard part of AI no longer lives in model training but in proving that the whole agent behaves correctly under pressure and at scale.
Dustin Trainor sits down with Laurent Giraud, a technologist steeped in AI systems, machine learning, and the ethics that keep them safe and useful at scale. With MCP crossing its first year and surging to nearly two thousand servers, the conversation spans the hard edges of taking agentic systems…
Why AI agents keep forgetting—and why it’s a business problem
Long-running agents still behave like short-term guests: they arrive with a clean slate, work within a finite context window, and forget the conversation as soon as the session ends unless someone leaves breadcrumbs long enough to find the way back.
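The forgetting described here comes down to nothing surviving the session boundary. As a rough, hypothetical sketch of the “breadcrumb” idea (not taken from the article; the file name, note format, and summarization step are all assumptions), an agent can persist a bounded set of notes between runs and reload them before the next session starts:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical persistence location


def load_memory() -> list[str]:
    """Reload breadcrumbs left by earlier sessions, if any."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []


def save_memory(notes: list[str], max_notes: int = 50) -> None:
    """Persist a bounded set of notes so they outlast the context window."""
    MEMORY_FILE.write_text(json.dumps(notes[-max_notes:], indent=2))


def run_session(user_turns: list[str]) -> None:
    notes = load_memory()
    # Prepend remembered notes to the prompt so the agent does not start
    # from a clean slate; the real summarization step is left abstract here.
    context = "\n".join(notes + user_turns)
    print(f"Prompt context ({len(context)} chars) includes {len(notes)} remembered notes")
    # After the session, distill what mattered into a short breadcrumb.
    notes.append(f"Session covered: {user_turns[-1][:80]}")
    save_memory(notes)


if __name__ == "__main__":
    run_session(["Summarize yesterday's incident report", "Draft the follow-up email"])
```

The essential design choice is the bound on stored notes: memory that grows without limit eventually hits the same context-window ceiling the breadcrumbs were meant to work around.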
Hospitals face a stark reality in medical imaging where labeled data are scarce and domains diverge wildly across centers. Across scanners, protocols, and patient cohorts, the visual look of the same anatomy can shift just enough to trip up segmentation systems trained under tidy lab assumptions. A new training…