AI Tools Used by Journalists to Verify Images in Trump Assassination Attempt

July 15, 2024

The recent attempted assassination of former President Donald Trump in Pennsylvania has triggered intense scrutiny of images circulating on social media. Journalists relied heavily on artificial intelligence (AI) to determine the authenticity of these images, showcasing both the potential and the limitations of such technology in curbing misinformation online. AI tools from the Seattle-based nonprofit TrueMedia.org were instrumental for reporters trying to navigate a sea of possibly doctored photos depicting the event. One prominent photograph, taken by New York Times photographer Doug Mills and appearing to show a bullet streaking past Trump’s head, was verified by TrueMedia’s AI tools as authentic and unmanipulated. Another widely circulated image showing Trump raising his fist was likewise authenticated and identified as an Associated Press photo, further affirming the reliability of AI in such high-stakes scenarios.

The Role of TrueMedia.org in Image Verification

TrueMedia.org, spearheaded by AI expert Oren Etzioni, has been at the forefront of developing AI tools to detect manipulated images, videos, and audio files, commonly known as deepfakes. Despite the success of these tools in the immediate aftermath of the Trump incident, Etzioni was quick to emphasize that they are not a comprehensive solution to misinformation. He pointed to the persistent need for credible media organizations and diligent fact-checkers to scrutinize the authenticity of digital content. According to Etzioni, waiting for complete investigative findings before drawing conclusions is pivotal, a point echoed by numerous national leaders.

Adding to its accolades, NewsGuard, a media verification service, credited TrueMedia’s tools with identifying 41 TikTok accounts that were disseminating political disinformation through AI-generated narration. NewsGuard labeled the phenomenon a “TikTok AI content farm,” signaling the scale of misinformation that such detection tools can help combat. TrueMedia.org launched in January with funding from Uber co-founder Garrett Camp through his Camp.org foundation. In April, it made its political deepfake detector available to journalists and fact-checkers, and it has since attracted thousands of active users who rely on its tools to separate fact from fiction.

The Broader Implications for AI in Media Verification

Etzioni recounted how he first learned of the attempted assassination from a journalist using TrueMedia’s tools to verify the mid-air bullet photo. The episode encapsulates the urgency of AI-assisted efforts to uphold media integrity. TrueMedia’s work extends beyond this incident to broader applications, including AI tools aimed at detecting fake political content, which have proven to be a critical resource for media organizations battling the spread of digital falsehoods.

Balancing AI Advancements with Human Oversight

The organization’s initiatives have also highlighted related efforts, including a quiz designed to help users recognize deepfakes. TrueMedia actively promotes AI experimentation and education, with tech veterans backing well-informed public participation in the digital age. Notably, its collaboration with University of Washington researchers to study the rapid online reactions and rumors following the Trump incident underscores the collective duty to preserve informational integrity. These multifaceted efforts highlight AI’s vital role, not as a sole protector but as a crucial aid to human vigilance, in safeguarding democratic processes and ensuring the credibility of news.
