The European Union’s AI and Data Privacy Regulation, Explained

Regulators in Europe have become the most aggressive in the world.


Meta’s AI assistant will not be available in the European Union—at least for now. After clashing with Ireland’s data regulators over privacy concerns, Meta announced earlier this month it will delay releasing its AI assistant in the EU. 

Meta had been collecting publicly shared content from Facebook and Instagram users across the world to train its large language model (LLM), Meta Llama 3. Such LLMs are trained on large datasets to generate, summarize, translate, and predict digital content. Meta’s new AI assistant, powered by Llama 3, integrates these features into Meta’s social platforms.

Meta’s AI assistant first launched in the United States in September 2023 and has since expanded to Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe. Europe was on the horizon—until Ireland’s Data Protection Commission (DPC) requested (with the implicit threat of fines and further legal action) that Meta stop using social media posts from Europeans to train Llama 3, stymieing Meta’s plans to launch its AI assistant in Europe.

While AI regulation is still a largely theoretical concept in the United States, the European Union has taken a much more aggressive stance in recent years with two main schemes: the General Data Protection Regulation (GDPR), implemented in 2018, and the EU AI Act, approved in March. Both regulations apply to all EU member countries, and the GDPR also covers Iceland, Liechtenstein, and Norway, which are part of the European Economic Area but not the EU.

What does the GDPR do?

The GDPR, which the EU itself describes as “the toughest privacy and security law in the world,” sets standards for the protection of personal data and imposes strict financial penalties on violators—up to 20 million euros or 4 percent of a company’s global annual revenue, whichever is higher. The GDPR applies to any organization that processes the personal data of European citizens or residents, including organizations based outside the region, such as Meta.

Each of the 30 nations bound by the GDPR operates its own data protection authority responsible for monitoring compliance with the law. These bodies coordinate through the European Data Protection Board, which ensures uniform application of the GDPR across the region and reviews appeals from those subjected to penalties. The GDPR also establishes a host of individual rights related to accessing, restricting, and erasing personal data, and gives internet users a private right of action to sue for damages in civil court.

The GDPR limits data processing, defined broadly as “any action performed on data, whether automated or manual.” This encompasses the recording, organizing, storing, using, or erasing of personal information. According to the law, personal data can only be processed if:

  • an individual clearly consents to the processing of their data, for example, subscribing to an email list;
  • the data processing is necessary to fulfill a contract, comply with a legal obligation, or save someone’s life;
  • the data processing serves the public interest or some official function;
  • there is some other “legitimate interest,” unless it conflicts with the “fundamental rights and freedoms of the data subject,” especially in the case of a child.

The GDPR outlines seven principles for data protection and accountability:

  • data processing must be lawful, fair, and transparent to the individual;
  • any processing of personal data must be related to the specific, legitimate purposes for which the data was originally collected;
  • data collection must be limited to what is necessary for a specific purpose;
  • processors must keep users’ personal data up to date and correct any inaccuracies “without delay”;
  • processors may not store personally identifying data longer than necessary for the originally stated purpose; 
  • processors must ensure “appropriate” security and confidentiality;
  • at any given time, data processors must be able to prove compliance with all GDPR requirements.

The GDPR’s terminology is somewhat nebulous by design. The EU itself describes the regulation as “fairly light on specifics” and justifies this ambiguity as a hedge against obsolescence. With technology evolving so quickly, some generality is needed in order for the law to remain applicable, according to the EU.

The law’s vagueness also grants broad discretion to regulators. The Irish DPC, for example, has stood out as particularly zealous in its enforcement of the GDPR. With Meta’s European headquarters located in Dublin, Irish regulators have led the charge against the tech giant as evidenced by this month’s move. According to its 2023 annual report, the Irish DPC was responsible for 87 percent of all GDPR fines across the EU, most of which were aimed at Meta for privacy infractions. 

This contentious dynamic is likely to endure. In a June 14 statement, the DPC declared that it “will continue to engage with Meta on this issue” to enforce the GDPR alongside its fellow EU data protection authorities. Meta, meanwhile, expressed “disappointment” with the DPC’s request, claiming its LLMs need to access the public content shared on its social media platforms in order to “accurately understand important regional languages, cultures or trending topics on social media.” Meta noted that several competitors—including Google and OpenAI—still train their LLMs on data from users in the EU and stressed that it does not use private posts or messages to train its software.

Though the DPC’s move has paused development of Meta’s AI assistant in Europe, negotiations between the company and regulators are still underway. “We remain highly confident that our approach complies with European laws and regulations,” Meta said in a statement.

What does the EU AI Act do?

The EU AI Act, passed in March 2024, is considered the world’s first comprehensive AI regulation. It restricts certain forms of AI with the aim of ensuring that “AI systems respect fundamental rights, safety, and ethical principles,” according to the European Commission. 

The act primarily targets AI developers but also covers individuals and organizations using AI systems in a professional capacity—for example, websites with customer service chatbots or personalized shopping recommendations. Like the GDPR, the AI Act applies to entities operating in the EU, regardless of location. In contrast to the GDPR’s decentralized, country-by-country enforcement, however, the AI Act is enforced centrally by the European AI Office.

The EU AI Act classifies AI into four categories of risk. 

  • “Unacceptable risk”: banned. This refers to AI used for manipulation, biometric categorization, creation of facial recognition databases, or social scoring by public authorities to assess a person’s trustworthiness, similar to the social credit system implemented by the Chinese Communist Party.
  • “High risk”: tightly regulated. This applies to AI that profiles individuals—for example, a resume scanner for job applicants—and AI used in critical areas, such as infrastructure, education, employment, law enforcement, and the judiciary.
  • “Limited risk”: minimal transparency requirements. This covers general-purpose AI systems (GPAI), such as chatbots, as well as deepfakes. Providers of such systems must make people aware they are interacting with an AI system or viewing AI-generated content.
  • “Minimal risk”: largely unregulated. This includes spam filters and AI-enhanced video games. 

Most of the EU AI Act addresses “high risk” AI systems, requiring providers to develop guardrails to monitor risk, ensure accuracy, enable human oversight, and help “downstream providers” who integrate these systems into other platforms comply with the AI Act’s requirements. Providers must also keep detailed records demonstrating compliance to enforcement authorities at the European AI Office.

For GPAI systems designated as “limited risk,” the regulatory burden is significantly lighter. Still, providers must document the training processes for their GPAI systems, abide by the EU’s Copyright Directive, and inform “downstream providers” about the system’s capabilities and limitations so they may comply with the AI Act.

The EU AI Act is scheduled to take effect in stages based on each level of risk. “Unacceptable risk” AI systems will be prohibited six months after the act enters into force. “High risk” AI systems, depending on their type, will have between 24 and 36 months to comply with the regulatory requirements, while “limited risk” GPAI systems, like Meta’s Llama 3, will have just 12 months.

Though the law has yet to be enforced, tension between the nascent European AI Office and the tech giants is already brewing. Executives at Amazon and Meta have warned that the regulation could cripple AI research and development in the EU. “We need to make sure that innovation continues to happen and that the innovation doesn’t just come outside Europe,” Werner Vogels, Amazon’s chief technology officer, told CNN. “We already have a very long history in Europe of underinvesting in R&D.”

What does this mean for AI regulation going forward?

The standoff between Meta and Ireland’s DPC illustrates the ongoing struggle to balance innovation and ethics in the EU’s complex regulatory landscape. The need for massive amounts of data to train AI systems runs headlong into the EU’s privacy restrictions, leading to a zero-sum battle between regulators and developers. With the GDPR in full force and the AI Act looming on the horizon, more clashes are likely to come. 

Meta isn’t the only tech company sparring with regulators over the development of AI features. Apple recently announced it will withhold Apple Intelligence from the EU, citing concerns about the bloc’s Digital Markets Act, a competition law governing large online platforms.

Michael Frank, founder and CEO of the AI consulting company Seldon Strategies, doubts the new AI Act will truly establish the “global standard” EU regulators have proclaimed. “I don’t think it will be extraterritorial,” Frank said. “Either the EU waters down the regulation in the implementation phase, or AI providers will exit the market.”

Anna Kriebel is an intern at The Dispatch, based in Washington, D.C. She attends the University of Virginia and has previously contributed to National Review, The Virginia Review of Politics, The Virginia Undergraduate Law Review, and The Richmond Times-Dispatch. When Anna is not writing, she is probably gardening or getting creative in the kitchen.