Complaints in EU challenge Meta’s plans to utilize personal data for AI

Meta is facing renewed scrutiny as the privacy advocacy group NOYB has lodged complaints in 11 countries against the company’s plans to use personal data to train its AI models.

NOYB has called on national regulators to take immediate action against Meta in 10 European Union member states and in Norway, arguing that changes to the company’s privacy policy due to take effect on June 26 would permit the use of extensive personal data, including posts, private images, and tracking information, for training its AI technology.

“Unlike the already problematic situation of companies using certain (public) data to train a specific AI system (eg a chatbot), Meta’s new privacy policy basically says that the company wants to take all public and non-public user data that it has collected since 2007 and use it for any undefined type of current and future ‘artificial intelligence technology,’” NOYB said in a statement.

“This includes the many ‘dormant’ Facebook accounts users hardly interact with anymore — but which still contain huge amounts of personal data,” the group added.

Meta announced the changes last month in an email inviting Facebook users to opt out. It read, in part, “AI at Meta is our collection of generative AI features and experiences, such as Meta AI and AI creative tools, along with the models that power them. … To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied from then on.”

Potential implications for other enterprises

The complaints once again highlight serious concerns about privacy and the use of consumer data in developing AI models. For enterprises, the case also raises the question of who is responsible for compliance.

“The responsibility likely shifts to the entity providing the model, while the user might be exempt from liability,” said Pareekh Jain, CEO of Pareekh Consulting. “Using another company’s model, such as Meta’s or any large enterprise’s, places the responsibility on the creator of the model to use data wisely. Users typically don’t face legal issues.”

If stricter privacy laws are implemented, only larger companies may be able to afford the resulting legal and privacy challenges, limiting who can produce large language models, Jain added. Smaller companies might find compliance too costly.

Enterprises would also need to secure legal indemnification in their contracts, much as OpenAI initially offered legal coverage to its users.

“As more AI models are developed and more organizations are involved, it’s crucial they include legal safeguards in their operations,” Jain said. “This shifts legal liability to the model provider. While this may slow down innovation, it ensures that companies are also responsible for legal compliance, potentially restricting smaller players from entering the market.”

Enterprises will be forced to conduct regular audits of AI models to ensure compliance with data protection laws, said Thomas George, president of CyberMedia Research.

“Financially, enterprises should consider setting aside reserves to cover potential compliance-related costs, mitigating the impact of any necessary sudden modifications to AI models,” George said. “Operationally, investing in ongoing training and development for technical teams to stay abreast of the latest compliance requirements will enable more agile adjustments to AI systems when needed.”

The user data conundrum for AI companies

The use of personal data in AI training is becoming a significant concern in the EU and beyond. Recently, Slack faced backlash over its privacy policies after a user exposed how customer data was being used in its AI models, putting the onus on customers to actively opt out.

OpenAI is facing a federal class-action lawsuit in California accusing it of improperly using personal information for training purposes. Italy’s data protection authority, Garante, has also said that ChatGPT violates EU data privacy standards.

