Breaking Down the Risks: EU Demands More Information from Tech Giants on Generative AI

The European Commission has issued a series of formal requests for information (RFIs) to Google, Meta, Microsoft, Snap, TikTok, and X regarding their approach to managing the risks associated with generative AI.

The European Commission is taking no chances when it comes to the potential dangers of generative AI. In a move that has drawn attention across the tech industry, the Commission has formally requested information from several major players, including Google, Meta, Microsoft, Snap, TikTok, and X, about how they are handling the risks associated with this fast-moving technology.

What’s the Big Deal?

The European Commission’s demands come under the umbrella of the Digital Services Act (DSA), the EU’s rulebook for e-commerce and online governance. The Commission has designated these tech giants as very large online platforms (VLOPs), a status that carries additional obligations beyond baseline compliance with the rules.

The Commission is particularly interested in how these platforms are mitigating risks related to generative AI. These include the generation of false information, the viral spread of deepfakes, and the manipulation of services in ways that could mislead voters. The regulators are leaving no stone unturned, also requesting information on generative AI’s impact on electoral processes, the dissemination of illegal content, the protection of fundamental rights, gender-based violence, the protection of minors, and mental well-being.

Stress Testing AI for Political Deepfakes

The European Commission is not stopping at information requests. It is also planning a series of stress tests to assess the platforms’ readiness to handle generative AI risks, such as a flood of political deepfakes. With the June European Parliament elections approaching, the EU is determined to ensure that the platforms are fully prepared to detect and respond to any incidents that could compromise the integrity of the vote.

“We want to push the platforms to tell us whatever they’re doing to be as best prepared as possible… for all incidents that we might be able to detect and that we will have to react to in the run-up to the elections,” said a senior Commission official, speaking on condition of anonymity.

Building an Ecosystem of Enforcement

Election security is one of the Commission’s top priorities, and it is producing formal guidance for VLOPs on this front. The forthcoming Election Security Guidelines aim to go “much further” than the recent tech industry accord on combating deceptive use of AI during elections. The EU plans to leverage a triple whammy of safeguards: the DSA’s due diligence rules, the Code of Practice Against Disinformation, and the transparency labeling/AI model marking rules under the incoming AI Act.

The goal? Building “an ecosystem of enforcement structures” that can be tapped into when it comes to election security.
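
The AI Act’s transparency rules are still being finalized, so as a purely hypothetical illustration, the sketch below shows what a machine-readable “AI-generated” provenance label attached to a piece of content might look like. All field names are invented for this example; they are not drawn from the AI Act or from any existing labeling standard.

```python
# Hypothetical sketch: attaching a machine-readable provenance label to
# AI-generated content before publication. Field names are illustrative,
# not taken from the AI Act or any real standard.
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content_bytes: bytes, model_name: str) -> str:
    """Return a JSON provenance label for AI-generated content (illustrative)."""
    label = {
        "ai_generated": True,  # the core transparency flag
        "model": model_name,   # which system produced the content
        # Hash ties the label to the exact bytes it describes.
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label)

print(label_ai_content(b"synthetic image bytes", "example-image-model-v1"))
```

In practice, a label like this would only be useful if it travels with the content and can be verified downstream, which is exactly the kind of interoperability question the EU’s layered approach will have to answer.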

The Broader Spectrum of Generative AI Risks

While the EU’s focus is on election security, they are well aware that generative AI poses risks beyond voter manipulation. Harms related to deepfake porn and other malicious synthetic content generation, regardless of the type of media produced, are also on the Commission’s radar. In addition to electoral risks, the DSA enforcement on VLOPs covers issues such as illegal content (including hate speech) and child protection.

The Commission has set a deadline of April 24 for the platforms to respond to its requests for information on these other generative AI risks.

Going Beyond VLOPs

The EU is concerned not only about the major platforms but also about smaller players that can distribute misleading or malicious deepfakes. While these smaller platforms and AI tool makers do not fall under the DSA’s explicit oversight, the Commission plans to apply indirect pressure through the larger platforms that act as their amplifiers or distribution channels, and it will also rely on self-regulatory mechanisms such as the Code of Practice Against Disinformation and the AI Pact.

Q&A: What You Might Want to Know

Q: What is generative AI?

A: Generative AI refers to artificial intelligence technologies that can create new content or information autonomously. This can include generating text, images, videos, or other forms of media.
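
To make that concrete, here is a minimal Python sketch that produces synthetic text using the open-source Hugging Face transformers library and the small GPT-2 model, assuming the library (and a backend such as PyTorch) is installed. The prompt is illustrative, and the output will vary from run to run.

```python
# A minimal example of generative AI in the text domain:
# a small language model continues a prompt with newly generated text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("The European Parliament elections", max_new_tokens=30)
print(result[0]["generated_text"])
```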

Q: What are deepfakes?

A: Deepfakes are realistic, manipulated videos or images created using AI algorithms. They can make it seem like someone said or did something that never actually happened, leading to potential misinformation and manipulation.
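
Detecting deepfakes at scale is an open problem, but one simple building block platforms can use is perceptual hash matching, which flags re-uploads of media already identified as manipulated. The sketch below is a simplified illustration, not how any specific platform works; it assumes the third-party Pillow and imagehash packages, and the file paths are placeholders.

```python
# Sketch: flag images that are near-duplicates of known manipulated media
# by comparing perceptual hashes (small Hamming distance = visually similar).
from PIL import Image
import imagehash

# In reality this would be a large database of hashes of known deepfakes.
KNOWN_DEEPFAKE_HASHES = [imagehash.phash(Image.open("known_deepfake.png"))]

def looks_like_known_deepfake(path: str, max_distance: int = 8) -> bool:
    """Return True if the image closely matches a known manipulated image."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_DEEPFAKE_HASHES)

print(looks_like_known_deepfake("uploaded_image.png"))
```

Note that this only catches copies of already-identified fakes; spotting a brand-new deepfake requires far more sophisticated detection models.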

Q: How do deepfakes affect elections?

A: Deepfakes can be used to spread false information about political candidates or to manipulate the public’s perception of a political event or statement. This can undermine the integrity of elections by misleading voters and creating confusion.

Q: What are the potential risks of generative AI beyond election security?

A: Generative AI can pose risks such as the creation of malicious synthetic content, including deepfake porn, hate speech, or other harmful media. It can also have implications for child protection and the dissemination of illegal content.

The Future of Generative AI Regulation

The European Commission’s actions highlight the growing recognition of the risks that come with generative AI. As the technology’s capabilities develop, clear guidelines and enforceable regulations are essential to ensure it is used responsibly and ethically. The EU’s focus on building an ecosystem of enforcement structures demonstrates its commitment to staying ahead of the curve and safeguarding both elections and the wider online environment.

With the impending stress tests and the forthcoming Election Security Guidelines, the EU is sending a clear message to tech giants: The time for enhanced responsibility and proactive measures is now. Let’s hope their efforts result in a safer, more transparent digital landscape.
