Channel: Cyber agencies urge organizations to collaborate to stop fast flux DNS attacks | CSO Online

Australian data regulator backs off Clearview AI

The Office of the Australian Information Commissioner (OAIC) on Wednesday abandoned its multi-year effort against Clearview AI, which it had ordered to stop collecting images of people in Australia after accusing the company of improperly grabbing images of faces from “across the Internet.”

The OAIC's decision pointedly did not withdraw those accusations; it even suggested that Clearview never stopped the practice, nor did it apparently delete the images. But the OAIC said pursuing the matter further wasn't worth the resources, given the many other investigations Clearview is facing.

“We reiterate that the determination against Clearview AI still stands,” Privacy Commissioner Carly Kind said in a statement. “I have given extensive consideration to the question of whether the OAIC should invest further resources in scrutinizing the actions of Clearview AI, a company that has already been investigated by the OAIC and which has found itself the subject of regulatory investigations in at least three jurisdictions around the world as well as a class action in the United States,” Kind said. “Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.”

But Kind then stressed that Clearview is hardly alone and that many AI companies are capturing all manner of sensitive data found around the world. 

“The practices engaged in by Clearview AI at the time of the determination were troubling and are increasingly common due to the drive towards the development of generative artificial intelligence models. In August 2023, alongside 11 other data protection and privacy regulators, the OAIC issued a statement on the need to address data scraping, articulating in particular the obligations on social media platforms and publicly accessible sites to take reasonable steps to protect personal information that is on their sites from unlawful data scraping,” Kind said. “All regulated entities, including organizations that fall within the jurisdiction of the Privacy Act by way of carrying on business in Australia, which engage in the practice of collecting, using or disclosing personal information in the context of artificial intelligence are required to comply with the Privacy Act. The OAIC will soon be issuing guidance for entities seeking to develop and train generative AI models, including how the APPs apply to the collection and use of personal information. We will also issue guidance for entities using commercially available AI products, including chatbots.”

The original OAIC accusations, from October 2021, “found that Clearview AI, through its collection of facial images and biometric templates from individuals in Australia using a facial recognition technology, contravened the Privacy Act, and breached several Australian Privacy Principles (APPs) in Schedule 1 of the Act, including by collecting the sensitive information of individuals without consent in breach of APP 3.3 and failing to take reasonable steps to implement practices, procedures and systems to comply with the APPs,” the OAIC said.

The concerns are extensive. Back in 2021, the European Parliament explicitly called for bans on using facial recognition technology and specific bans on private facial recognition databases such as those created by Clearview AI. 

In 2022, the UK Information Commissioner’s Office fined Clearview £7.5 million for breaking data protection laws.

Martin Kuppinger, principal analyst for German consulting firm KuppingerCole Analysts, said he was taken aback by the decision.

“Although the decision is surprising, it illustrates the limitations data protection laws are facing in the internet and, specifically, in the age of AI. It also highlights the challenges of authorities in navigating through the dilemma of diverging targets of AI innovation, innovation based on AI, strengthening the cybersecurity posture and attack defense, and privacy and data protection. There is no simple answer here,” Kuppinger said. “Shall we hinder the defenders in using pictures of faces to train their models? Attackers don’t care, they just do and, for instance, use public pictures and videos from the Internet for training their deep fake models.”

