Study Finds News Media Rarely Humanizes AI

A comprehensive new analysis of language used in professional news writing suggests that the common assumption that artificial intelligence is pervasively and thoughtlessly humanized may be a significant overstatement. A team of researchers from Iowa State University, Brigham Young University, and the University of Northern Colorado has conducted a deep dive into the specific use of anthropomorphic language—words that attribute human traits to non-human entities—in formal reporting about AI. Their study, which scrutinizes the application of “mental verbs” such as think, know, understand, and remember, suggests that professional writers are far more restrained and context-dependent in their descriptions of AI than is widely believed. This research challenges the narrative that we are carelessly blurring the lines between human cognition and machine processing, pointing instead to a more nuanced and deliberate approach in formal communication that has significant implications for public perception.

The Perils of Misleading Descriptions

The central issue the researchers explore is the potential for misunderstanding when mental verbs are applied to AI systems. This linguistic shortcut is a natural way for people to relate to new technologies, but it carries significant risks. A primary concern articulated by the study’s authors, Iowa State English professors Jo Mackiewicz and Jeanine Aune, is that such language can be deeply misleading. It can foster the false impression that machines possess human-like inner lives, complete with consciousness, beliefs, desires, or intentions. The researchers emphasize that AI systems do not “feel” or “want” in any human sense; they function by recognizing and replicating complex patterns within vast datasets to generate outputs. Attributing human cognition to them blurs the critical line between genuine, sentient thought and sophisticated algorithmic processing, which can lead to fundamental misinterpretations of the technology’s nature and function.

Furthermore, this anthropomorphic framing can inadvertently inflate public and professional perceptions of AI’s actual capabilities, creating a distorted reality of what these systems can achieve. Phrases like “AI decided” or “ChatGPT knows” can make a system appear more autonomous, intelligent, and reliable than it truly is, which could lead to its misuse or over-reliance in contexts where it cannot perform safely or dependably. Perhaps the most critical danger of this linguistic framing is its tendency to obscure human accountability. By assigning agency and decision-making power to the machine itself, it becomes easier to overlook the real human actors: the programmers who design the algorithms, the organizations that curate the training data, and the individuals who ultimately deploy and oversee these powerful systems. As the researchers note, certain anthropomorphic phrases can become ingrained in the public consciousness, shaping perceptions of AI in ways that are ultimately unhelpful and counterproductive to responsible development and deployment.

A Deep Dive into the Data

To investigate the actual prevalence of this linguistic phenomenon, the research team employed a rigorous, data-driven methodology that moved beyond anecdotal evidence. They utilized the News on the Web (NOW) corpus, an immense and continually expanding linguistic dataset that contains over 20 billion words drawn from a diverse collection of English-language news articles from 20 different countries. This powerful tool allowed the researchers to systematically analyze how frequently professional news writers paired specific mental verbs with the terms “AI” and “ChatGPT” on a massive scale. The sheer size and scope of the corpus provided a robust and reliable foundation for identifying broad, overarching trends in the formal, professional communication surrounding artificial intelligence. This empirical approach enabled the team to quantify the use of anthropomorphic language and compare their findings against prevailing assumptions about how technology is discussed in the media.
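
The NOW corpus is searched through its own query interface, but the general collocation-counting idea the study relies on can be illustrated with a short script. The sketch below is a simplified, hypothetical analogue rather than the researchers’ actual tooling: it scans a folder of plain-text articles (an assumed local directory named "articles") for a small, assumed set of mental verbs appearing immediately after “AI” or “ChatGPT.”

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical stand-in for a corpus query: tally mental verbs that
# directly follow "AI" or "ChatGPT" in a folder of plain-text articles.
MENTAL_VERBS = ["thinks", "knows", "understands", "remembers",
                "needs", "believes", "wants", "decides"]
pattern = re.compile(r"\b(AI|ChatGPT)\s+(" + "|".join(MENTAL_VERBS) + r")\b")

def count_collocations(folder: str) -> Counter:
    """Count subject-verb pairs such as ('AI', 'needs') across .txt files."""
    counts: Counter = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for subject, verb in pattern.findall(text):
            counts[(subject, verb)] += 1
    return counts

if __name__ == "__main__":
    for (subject, verb), n in count_collocations("articles").most_common(10):
        print(f"{subject} {verb}: {n}")
```

A simple adjacency count like this can only surface candidate pairings; as the findings below make clear, judging whether a given use is actually anthropomorphic still requires reading the surrounding context.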

The results of this extensive analysis surprised the research team, revealing a reality that diverges significantly from common assumptions and existing research focused on informal speech. The study yielded a key finding that paints a much more nuanced picture: the terms “AI” and “ChatGPT” are paired with mental verbs with remarkable infrequency in professional news articles. This stands in stark contrast to the idea that anthropomorphism is rampant. The analysis produced specific quantitative data to support this conclusion. For instance, the mental verb most frequently associated with the term “AI” was “needs,” which occurred a total of 661 times, while “knows” was the most common verb paired with “ChatGPT,” appearing just 32 times within the entire 20-billion-word corpus. Mackiewicz and Aune speculate that this professional restraint may be influenced by established industry standards, such as the Associated Press (AP) style guidelines, which explicitly advise journalists to avoid attaching human emotions or capabilities to AI models.
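
To put those counts in perspective, a back-of-the-envelope rate calculation (an illustrative computation using the figures reported above, with the full 20-billion-word corpus as the denominator) shows just how rare these pairings are:

```python
# Rough rates per million words, based on the counts reported in the study.
corpus_words = 20_000_000_000   # approximate size of the NOW corpus
pairings = {"AI needs": 661, "ChatGPT knows": 32}

for label, count in pairings.items():
    per_million = count / corpus_words * 1_000_000
    print(f"{label}: roughly {per_million:.4f} occurrences per million words")
```

Even the most common pairing works out to only about 0.03 occurrences per million words, consistent with the authors’ characterization of mental-verb use as remarkably infrequent.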

Beyond the Verbs: A Spectrum of Meaning

A second crucial finding from the study was that even when mental verbs were used in conjunction with “AI” or “ChatGPT,” the surrounding context often rendered the usage non-anthropomorphic, stripping it of any human-like meaning. The researchers’ close analysis of the most common verb, “needs,” revealed two predominant and non-humanizing applications. In many cases, “needs” was used simply to describe a functional requirement of the system, treating the AI no differently than any other inanimate object or process. A sentence like “AI needs large amounts of data to function” is functionally identical to saying “the car needs gas” or “the soup needs salt.” A second common usage involved suggesting an obligation or a necessary action that must be performed by humans. Phrases such as “AI needs to be trained on diverse datasets” or “AI needs to be implemented ethically” were frequently written in a passive voice. This grammatical structure subtly but effectively shifts the agency and responsibility back to the humans who must perform the action, rather than implying the AI possesses an internal sense of need or purpose.

Finally, the research team concluded that anthropomorphism is not a binary, all-or-nothing concept but rather exists on a spectrum of intensity and subtlety. While many documented uses of mental verbs were clearly non-humanizing, some instances edged into more ambiguous, human-like territory, demonstrating the complexity of the issue. The study cites the example, “AI needs to understand the real world,” as a case in point. This particular phrasing implies qualities and expectations that are typically associated with people, such as contextual awareness, ethical judgment, or a personal, nuanced grasp of reality. Such instances demonstrate that the degree of anthropomorphism can vary widely in strength, from weak, functional descriptions to strong, cognitive attributions. This finding underscores the critical importance of looking beyond simplistic verb counts to consider how the broader linguistic and situational context shapes the ultimate meaning and its potential impact on the reader’s understanding.

Implications for Future Communication

Ultimately, the study provides compelling evidence that the anthropomorphization of AI in professional news writing is less common and far more nuanced than widely assumed. The findings highlight a core principle of communication: meaning is derived not from individual words in isolation but from their careful and deliberate contextual application. For writers, technical communicators, and journalists, this nuance is critically important, because the language they choose directly influences how readers understand AI systems, their true capabilities, and the human roles and responsibilities that underpin them. As AI technology continues its rapid evolution, writers face an ongoing challenge to select their words carefully and frame these powerful tools accurately and responsibly. The research team suggests that future studies could explore the impact of different anthropomorphizing words and investigate whether the relatively infrequent but powerful instances of strong anthropomorphism have an outsized effect on how society thinks about and interacts with artificial intelligence.
