As we dive into the rapidly evolving world of artificial intelligence, I’m thrilled to sit down with Laurent Giraud, a renowned technologist whose deep expertise in machine learning, natural language processing, and AI ethics has positioned him as a thought leader in the field. Today, we’ll explore
Central question and scope: genuine care, covert surveillance, or a contested middle ground? In a moment when employers promise compassion at scale, the rise of AI that listens, counsels, and infers feelings poses a stark question that refuses to go away: does this technology genuinely care for
Deep image models have dazzled with accuracy, yet the most consequential story sat just out of view: not single neurons lighting up for neat human concepts, but webs of interconnected units assembling meaning layer by layer into circuits that actually drive what the model predicts and why it
Dustin Trainor sits down with Laurent Giraud, a technologist steeped in AI systems, machine learning, and the ethics that keep them safe and useful at scale. With MCP crossing its first year and surging to nearly two thousand servers, the conversation spans the hard edges of taking agentic systems
Hospitals face a stark reality in medical imaging where labeled data are scarce and domains diverge wildly across centers. Across scanners, protocols, and patient cohorts, the visual look of the same anatomy can shift just enough to trip up segmentation systems trained under tidy lab assumptions. A new training
A sharper way to ask the hard question: What if the leap in robot reliability came not from ever-larger models but from a smarter split between thinking and doing, one that keeps language plans on a short leash and loops real-world feedback back into every choice the machine makes? The premise is blunt: