What is the EU AI Act?
The EU AI Act is a European Union law regulating AI as part of the EU's broader AI strategy. It classifies AI systems by the level of risk they pose and subjects them to commensurate regulation:
- Non-high risk: Systems like those used in spam filters and video games will continue to be minimally regulated. However, companies will be required to make it clear when users are interacting with an AI, such as a chatbot.
- High risk: Critical systems, including those used in transportation, education, and law enforcement, will be required to undergo extensive risk assessment and review to test for robustness, security, and accuracy. Companies will also be required to maintain detailed documentation to show compliance.
- Unacceptable risk: Systems that directly threaten people's lives and fundamental rights will be completely banned. Such systems include governmental social scoring and children's toys that encourage dangerous behavior.
There have been a number of analyses of the act, including recommendations to improve it. Critiques include concerns that the regulation will hamper Europe's ability to compete in developing and deploying AI, concerns about how it will be enforced, and, conversely, concerns that it will be insufficient to prevent major harms, including existential risks. There are also concerns that the framework is not flexible enough to respond quickly to unexpected risks arising from new AI technologies.
As of May 2024, the EU AI Act has been ratified by the European Parliament and is expected to enter into force next month, with the act's obligations phased in gradually over the following 36 months.