Today, we’re joined by Laurent Giraid, a leading technologist and thinker at the nexus of artificial intelligence, sport, and society. With a keen eye for the ethical and cultural dimensions of machine learning, he helps us navigate the complex terrain where algorithms meet athleticism. We’ll explore the International Olympic Committee’s ambitious AI agenda, delving into the delicate balance between technological precision and human artistry, the hidden biases in supposedly objective systems, and how this revolution is reshaping not just how sports are judged, but the very definition of excellence.
The International Olympic Committee’s AI Agenda aims to enhance judging, athlete preparation, and the fan experience. How will this technology be introduced in sports like figure skating at the 2026 Winter Olympics, and what are the primary challenges in ensuring its legitimacy among athletes and officials?
The rollout we’re anticipating for the 2026 Winter Olympics in Milano-Cortina is both ambitious and cautious. In figure skating, for example, the plan isn’t to have AI deliver a final score but to use it as a support tool, helping human judges precisely identify the number of rotations in a jump—a task that can be incredibly difficult for the human eye in real time. For events like big air and halfpipe, we’ll see automated systems measuring objective metrics like jump height and takeoff angles. The true challenge, however, isn’t just about the technology working; it’s about making people believe in it. Our research shows that trust and legitimacy are just as critical as technical accuracy. Athletes need to feel the system is fair, and officials must be confident it enhances, rather than undermines, their expertise. The core of the problem is cultural acceptance; we must prove that AI is a credible partner, not an infallible, unfeeling overlord.
Incidents like the 2024 gymnastics controversy highlight human judging fallibility. While AI promises greater precision, it can also penalize athletes for minor imperfections invisible to the naked eye. How can we balance the quest for accuracy with the reality of human physical limitations in performance?
That’s the central paradox we’re facing. The 2024 incident with Jordan Chiles was a perfect storm of human error and complex rules; her coach’s inquiry was just four seconds late, throwing the results into chaos and eroding public trust. AI can certainly prevent that kind of administrative fumble. However, it introduces a different kind of problem. In our studies on artistic gymnastics, we’ve seen how AI can be too exact. A human judge sees a beautifully held position, but an AI system can detect that a leg is off-angle by a few imperceptible degrees and apply a penalty. This pursuit of absolute mathematical perfection can feel punishing to athletes, whose bodies are not machines. The balance lies in using AI to support, not supplant, human judgment—to handle the objective data points while leaving room for the holistic assessment that only a human can provide. We must remember these are sports of human endeavor, not a quest for algorithmic flawlessness.
AI is often presented as a solution to human bias, but algorithms can introduce new prejudices if trained on limited data. What practical, step-by-step measures can federations take to ensure AI judging systems are developed to be fair across diverse body types and performance styles?
This is a critical area where proactive governance is essential. The first step for any federation is to demand transparency in the development process and insist on diverse training data. An algorithm trained predominantly on male gymnasts, for instance, might inadvertently penalize female athletes whose biomechanics are different. To counter this, federations must mandate that training datasets are globally representative, including a wide spectrum of body types, ethnicities, and performance styles—not just the ones that are currently winning. The second step is rigorous, independent auditing. Before any system is deployed, it needs to be tested against historical data and in shadow-judging scenarios to actively search for hidden biases. Finally, there must be a clear and continuous feedback loop involving athletes, coaches, and judges. These are the people who will feel the system’s flaws first. Creating a formal process for them to report anomalies and suggest refinements is the only way to ensure the AI evolves fairly alongside the sport itself.
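The shadow-judging audit described above can be sketched in code. The following is a minimal, hypothetical Python illustration, not any federation's actual tooling: it compares an AI system's scores against human panel scores collected in parallel, and flags athlete subgroups where the AI's average deviation skews away from the overall pattern. The record structure, group labels, and tolerance value are all assumptions for the sake of the example.

```python
from statistics import mean
from collections import defaultdict

def audit_shadow_scores(records, tolerance=0.1):
    """Flag subgroups where the AI's mean deviation from human panel
    scores differs from the overall mean deviation by more than
    `tolerance`. All field names here are illustrative.

    records: list of dicts with keys 'group', 'human', 'ai'.
    """
    deltas_by_group = defaultdict(list)
    for r in records:
        # Positive delta: AI scores higher than the human panel.
        deltas_by_group[r["group"]].append(r["ai"] - r["human"])

    overall = mean(d for ds in deltas_by_group.values() for d in ds)
    flagged = {}
    for group, deltas in deltas_by_group.items():
        relative_bias = mean(deltas) - overall
        if abs(relative_bias) > tolerance:
            flagged[group] = round(relative_bias, 3)
    return flagged

# Shadow-judging data: AI runs alongside, not instead of, the panel.
records = [
    {"group": "A", "human": 14.2, "ai": 14.1},
    {"group": "A", "human": 13.8, "ai": 13.9},
    {"group": "B", "human": 14.0, "ai": 13.5},
    {"group": "B", "human": 13.6, "ai": 13.0},
]
print(audit_shadow_scores(records))
```

In this toy dataset the AI tracks the panel closely for group A but systematically under-scores group B, so both groups are flagged relative to the overall trend, which is exactly the kind of anomaly the feedback loop should surface for investigation.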
Action sports like snowboarding value style and creativity, which are difficult to quantify. With AI judging being tested at events like the X Games, how can these systems evolve to reward an athlete for introducing a brand-new trick, and what is the risk of standardizing performances?
This is where the logic of AI runs into the very soul of action sports. The culture of sports like snowboarding was built on pushing boundaries and celebrating individual expression—think of Lindsey Jacobellis losing her 2006 Olympic gold after falling while attempting a stylish board grab. The risk is immense. An AI trained on past performances is, by its very nature, backward-looking. It excels at recognizing and scoring what it has already seen. When an athlete invents a completely new trick, as was a concern at the 2025 X Games trials, the AI simply wouldn’t have a framework to evaluate it. This could create a chilling effect, incentivizing athletes to perform well-established, high-scoring maneuvers rather than taking creative risks. For AI to evolve, it would need to be programmed not just to recognize patterns but to identify novelty and assign it a value, which is an incredibly complex task. Without that, we risk accelerating the standardization of these sports, hollowing out the very creativity that made them so appealing in the first place.
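One naive way to make a scoring pipeline novelty-aware can be sketched as follows. This is a hypothetical illustration, not a description of any system tested at the X Games: it measures how far a trick's feature vector sits from everything in the training set, and routes far-out performances to human judges instead of forcing an automatic score. The feature tuple (rotations, grab duration, amplitude) and the threshold are invented for the example.

```python
import math

def euclidean(a, b):
    """Straight-line distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assess_trick(features, known_tricks, novelty_threshold=2.0):
    """Return ('score', distance) for tricks near the training data,
    or ('refer_to_humans', distance) for tricks unlike anything seen.

    features: e.g. (rotations, grab_duration_s, amplitude_m);
    all values and names here are illustrative assumptions.
    """
    nearest = min(euclidean(features, k) for k in known_tricks)
    if nearest > novelty_threshold:
        # A backward-looking model has no frame of reference: defer.
        return ("refer_to_humans", round(nearest, 2))
    return ("score", round(nearest, 2))

known = [(3.0, 0.5, 4.0), (4.0, 0.8, 4.5), (2.0, 0.3, 3.5)]
print(assess_trick((4.0, 0.7, 4.6), known))  # close to a known trick
print(assess_trick((5.5, 1.5, 6.0), known))  # far from everything: defer
```

The design choice matters: rather than trying to assign a value to novelty (the genuinely hard problem), this sketch only detects that the model is out of its depth, which at least prevents a brand-new trick from being silently scored as a poorly executed old one.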
Beyond officiating, AI is influencing athlete training through analytics and shaping the fan experience with biomechanical overlays. What is the long-term risk that this focus on measurable data could redefine excellence and reshape how sports evolve, potentially sidelining artistry and human intuition?
The risk is a subtle but profound shift in what we value in sport. As motion tracking and performance analytics become central to training, athletes and coaches are naturally pushed to optimize what can be measured—angles, heights, speeds. Similarly, when the fan experience is dominated by biomechanical overlays and data-driven “storytelling,” the audience is also trained to see the sport through a quantitative lens. The danger is that artistry, flow, and raw intuition—qualities that are hard to put a number on—become devalued or even ignored. If AI-driven metrics become the primary definition of excellence, we could see sports evolve toward a kind of athletic Taylorism, where efficiency and measurable output trump creative expression. The effects could cascade down from the elite level, reshaping how young athletes are coached and how the next generation understands their sport. It’s a quiet revolution, but it could fundamentally change the spirit of the games.
What is your forecast for the future of AI in sports judging?
My forecast is that the integration of AI is inevitable, but its role will be far more collaborative and specialized than many people imagine. We won’t see human judges replaced wholesale in the near future, especially in sports with a strong artistic component. Instead, AI will become a powerful “third judge” in the booth, handling the objective, data-heavy tasks like counting rotations or measuring heights with superhuman accuracy, freeing up human officials to focus on the more nuanced aspects of a performance, such as style, execution, and overall impression. The greatest challenge ahead is not technological; it is institutional and cultural. The success of AI in sports will depend entirely on our ability to design systems that are transparent, fair, and, most importantly, aligned with the core values that give each sport its unique meaning and human appeal.
