The Internal Market Committee and the Civil Liberties Committee endorsed on Thursday a draft negotiating mandate for the initial set of regulations known as the "AI Act." It received 84 votes in favor, seven against, and 12 abstentions, indicating broad support.
The primary objective of the "AI Act" is to establish rules that promote transparency and effective risk management for AI systems; to that end, the committees amended the Commission's initial 2021 proposal.
Under the proposed regulations, AI systems deemed to pose an unacceptable level of risk to public safety would be explicitly banned.
This includes systems that employ subliminal or manipulative techniques, exploit vulnerabilities, or facilitate social scoring—classifying individuals based on their behavior, personal characteristics, and socio-economic status. The aim is to prevent the deployment of AI systems that could harm individuals or unfairly discriminate against specific groups.
In addition to the prohibition on real-time remote biometric facial recognition systems (with exceptions for law enforcement agencies investigating serious crimes, subject to judicial warrants), the ban extends to several other categories of AI systems:
Biometric categorization systems that rely on sensitive characteristics like gender, religion, race, ethnicity, and citizenship status.
Predictive policing systems that rely on profiling, location data, or past criminal behavior.
AI systems that attempt to analyze and interpret people's emotions or mental states, when used in law enforcement, border control, workplaces, and educational institutions.
The untargeted collection of biometric data from social media and CCTV footage to create facial recognition databases.
The proposal also includes the creation of a "high-risk AI" list, encompassing AI systems that can cause harm to people's health, safety, fundamental rights, or the environment. Additionally, AI systems used to influence voters and social media recommender systems with over 45 million users would be categorized as high-risk.
To protect copyrighted material and prevent the production of illegal content, generative AI tools built on foundation models (e.g., ChatGPT) would have to comply with additional transparency requirements, including disclosing when content has been generated by AI.
The proposed regulations represent a significant advancement in safeguarding individual rights and privacy, especially with the ban on live remote biometric identification systems.
However, Mher Hakobyan, an Advocacy Advisor at Amnesty International, has raised concerns that the draft law might allow providers to sidestep the rules by self-classifying their systems as low-risk. This potential loophole could undermine the effectiveness of the AI Act and risk implicating the EU in human rights abuses beyond its borders.
Before final negotiations with the Council on the "AI Act" can begin, the draft mandate must be approved by the full Parliament, with the vote scheduled for the 12-15 June plenary session.