In a significant turn of events, Australia has decided to halt its enforcement actions against Clearview AI over the controversial use of Australians’ images within its facial recognition service. Clearview AI, which possesses a database of over 50 billion faces scraped from various platforms on the internet, including social media, has been a tool utilized by law enforcement globally, including in Australia. Back in 2021, the Australian Information Commissioner concluded that Clearview AI had violated privacy laws by collecting images without consent. Consequently, the company was ordered to cease collection activities and delete its existing database of Australian faces within 90 days. Clearview initially sought to appeal the decision but withdrew its appeal in August 2022, leaving the original ruling in place.
Despite the standing decision, Clearview AI has neither confirmed compliance with the order nor responded to requests for comment. Privacy Commissioner Carly Kind revealed that the Office of the Australian Information Commissioner (OAIC) would no longer dedicate resources to enforcing the ruling against Clearview AI. She cited the widespread regulatory scrutiny the company faces across the globe, along with a concurrent class action lawsuit in the U.S., as additional reasons. Kind also emphasized the growing prevalence of such privacy-invasive practices, particularly in an era marked by the rise of generative AI models, which amplifies the importance of regulatory oversight.
Regulatory and Public Scrutiny on Privacy Concerns
Greens Senator David Shoebridge has advocated for additional inquiries into Clearview AI’s continuing activities and their possible privacy implications. He highlighted the necessity for a more profound understanding of whether Clearview continues to scrape individuals’ photos without consent. Senator Shoebridge stressed that the potential for artificial intelligence to exacerbate privacy harm justifies heightened public awareness and thorough investigation into the company’s operations. This sentiment reflects a broader concern about safeguarding individual privacy in an age where AI technologies are advancing at breakneck speed.
Moreover, the OAIC, along with eleven other regulators, has called for public platforms to implement protective measures against the unlawful scraping of personal information. These recommendations seek to address prevalent concerns over the unauthorized collection and use of individuals’ images, reiterating the need for robust privacy protections. Clearview AI, for its part, has claimed to be beyond Australian jurisdiction and insists it has taken measures to block its web crawler from accessing Australian servers. However, when Clearview AI re-scraped the internet in early 2022, it did not ensure that Australian images hosted on foreign servers were excluded, raising questions about the company’s compliance.
Unresolved Compliance and Future Implications
With Clearview AI declining to confirm that it has deleted Australians’ images as ordered, and the OAIC unwilling to commit further resources to enforcement, the question of compliance remains open. The company maintains that it sits beyond Australian jurisdiction, yet its early-2022 re-scrape of the internet did not exclude Australian images hosted on foreign servers. Whether regulatory pressure abroad, the U.S. class action, or renewed public scrutiny ultimately forces greater transparency, the case underscores the difficulty of enforcing domestic privacy rulings against an offshore company at a time when facial recognition and generative AI are expanding rapidly.