AI’s Ethical Test Is How It Treats Its Workers
The prevailing narrative surrounding Artificial Intelligence often conjures images of sentient machines and existential threats, yet a more immediate ethical reckoning is unfolding far from the public eye, rooted not in future hypotheticals but in the present-day treatment of human beings. Discussions at the recent UN Forum on Business and Human Rights have decisively shifted the focus from the technology itself to the vast, often invisible, human infrastructure that supports it. A powerful consensus emerged that the dialogue must move beyond fanning fears and toward creating actionable solutions for safe and fair deployment. This marks a critical evolution in the AI ethics conversation, broadening the scope of accountability from the technology’s creators to every single business and organization that implements these powerful tools, forcing a new and urgent question: if AI is to serve humanity, how can we ignore the humanity required to build it?

A Democratization of Responsibility

The central theme resonating from the Forum was a profound redistribution of ethical oversight, moving beyond the tech giants of Silicon Valley to the boardrooms of every industry. As companies increasingly integrate AI systems, often unknowingly through standard IT software updates, they simultaneously adopt the human rights risks embedded within them. This passive adoption creates a landscape of unexamined liability, in which organizations may not even be aware of the biased or unethically sourced models operating within their systems. The UN Working Group on Business and Human Rights has underscored this danger, releasing guidance that warns of significant litigation risks for companies that fail to conduct proper due diligence. The report stresses that businesses must achieve far greater transparency into how their AI systems are developed and deployed, making a clear case that ignorance is no longer a viable defense when these complex technologies cause tangible harm.

Further complicating this landscape is the glaring inaction of governments and public institutions. As major purchasers and users of AI technology, they possess immense leverage to set and enforce high ethical standards through their procurement processes. However, research presented at the Forum revealed no evidence that public authorities are currently using this power to demand transparency from their technology partners. Instead, AI adoption often occurs in an ethical vacuum. This challenge is compounded by the “black box” nature of many AI models, whose decision-making processes remain opaque even to their developers. Experts argue that breaking into this black box requires sustained engagement between human rights specialists and technical teams. Without such collaboration, the risk of perpetuating and amplifying societal discrimination through models trained on biased data grows exponentially, threatening to widen social and economic divides, particularly among vulnerable groups with low levels of AI literacy.

The Invisible Workforce Behind the Code

Perhaps the most compelling and urgent issue brought to light was the “labor behind AI,” a concept that frames the hidden human element as a critical ethical battleground, much like the “labor behind the label” campaigns that exposed exploitation in the global apparel industry. This discussion brought the focus squarely onto the millions of data annotators, labelers, and content moderators who perform the essential, often psychologically grueling, tasks that make AI function. These individuals are the human engine of machine learning, meticulously cleaning datasets, identifying objects, and filtering harmful content to train algorithms. Yet, they typically work in precarious conditions, often as gig workers without the benefits, protections, or pay commensurate with the vital role they play in a multi-trillion-dollar industry. Their labor is the foundational input for AI, but their well-being has been treated as a negligible externality.

The profound human cost of this invisible work was powerfully illustrated by the firsthand testimony of a Portuguese content moderator. Identified only as Eliza, she described a “suffocating” work environment where her primary task was to watch and categorize thousands of deeply disturbing videos daily, including graphic depictions of self-harm and suicide attempts, to train AI content filters. She detailed being forced to label adult content, cataloging sexual positions and body parts, an assignment she found unacceptable but was compelled to continue despite her protests. This psychologically damaging work was paired with the immense pressure of unachievable performance targets, requiring her to process nearly a thousand videos per day while navigating complex moderation policies that could span fifty pages. Her story provided a harrowing glimpse into a world where human workers are systematically exposed to the worst of the internet to shield both the public and the AI systems themselves, often with little to no meaningful support.

Forging a Path Toward Fair and Dignified Labor

In direct response to these alarming working conditions, a clear and unified set of demands has emerged from labor unions and worker advocates. The central objective is a fundamental reclassification of this workforce, moving them from the precarious and unprotected status of casual gig work to formalized employment. This transition is seen as the essential first step toward securing basic labor rights, including fair wages, legally mandated benefits like health insurance and paid time off, and, crucially, the right to organize and engage in collective bargaining. By formalizing their employment, these workers would gain the legal standing and collective power necessary to negotiate for safer conditions and hold their employers accountable. This structural change aims to dismantle the exploitative model that has allowed the AI industry to build its technology on the back of a vulnerable and disposable workforce, insisting that innovation cannot come at the cost of human dignity.

Alongside the push for formal employment, consultations with workers have yielded a range of practical, immediately implementable solutions designed to mitigate the psychological harm inherent in their roles. These proposals focus on creating a more humane and sustainable work environment. Key recommendations include rotating staff between moderating psychologically harmful content and less damaging tasks to prevent prolonged exposure to trauma. Advocates are also calling for strictly enforced limits on working hours and the mandatory provision of adequate rest breaks to combat burnout and mental exhaustion. Perhaps most critically, there is a strong demand for universal access to independent, professional psychological support, ensuring that workers have a confidential and reliable resource to help them process the distressing material they encounter. These tangible measures represent a floor, not a ceiling, for what constitutes a safe and ethical workplace for the people performing AI’s most difficult jobs.

A New Understanding of Trust and Quality

The Forum’s intensive discussions ultimately culminated in a powerful synthesis that reframed the entire debate. It became clear that the ethical treatment of human workers in the AI supply chain was not merely a peripheral labor issue but was fundamentally and inextricably linked to the quality, safety, and trustworthiness of the AI systems themselves. One labor leader articulated this connection with stark clarity, stating, “If you treat people like garbage at the input, it’s bound to lead to garbage at the output.” This “garbage in, garbage out” principle crystallized the argument that a stressed, underpaid, and psychologically harmed workforce is incapable of providing the nuanced, high-quality data annotation and moderation required to build reliable and unbiased AI. The industry’s pursuit of “trust in AI” had been exposed as hollow so long as it ignored the human element responsible for its creation. This new perspective revealed a crucial alignment of interests: protecting the public from harmful content and ensuring businesses have accurate AI models are goals that can only be achieved by first protecting the workers who make it all possible. The path forward, it was suggested, required a basic multi-stakeholder initiative, bringing together companies, workers, and civil society to build a governance framework that finally brought the labor behind AI out of the shadows.
