
Europe Has Established AI Regulatory Framework

April 6, 2024

On March 13, 2024, the European Parliament approved the EU Artificial Intelligence Act (the “AI Act”), which will be the world’s first comprehensive set of rules for artificial intelligence. “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential,” Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote.

This law will not apply only within the European Union (EU); its reach extends broadly across borders. Although it is an EU law, it was initially aimed at protecting consumers. The law works on a risk-based scale: the riskier the AI use, the higher the level of scrutiny required. The definition of AI covers both predictive AI and generative AI (such as ChatGPT).

The AI Act defines an “AI system” broadly as a:

“machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Most AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.

High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users.

Some AI uses are banned outright because the practices pose an “unacceptable risk.” Examples include social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces. China, for example, has employed a system that rates its citizens; on the basis of that rating, citizens are permitted to do, or prohibited from doing, certain things, such as traveling.

Other prohibited uses include police scanning faces in public using AI-powered remote “biometric identification” systems, except for serious crimes like kidnapping or terrorism.

As identified by Littler, some of the main requirements for high-risk AI systems include:
  • Establishment, implementation, documentation, and maintenance of a “risk management system,” which must be a continuous, iterative process planned and run throughout the entire lifecycle of the high-risk AI system. A one-off risk assessment would not, therefore, fulfill this obligation.
  • Training, validation, and testing datasets used to develop AI models must comply with the quality criteria set out in the AI Act and be relevant, sufficiently representative, and as free of errors as possible.
  • The systems must be designed and developed in such a way as to ensure that their operation is sufficiently transparent for deployers to interpret the system’s output and use it appropriately.
  • A documented quality management system must be established to ensure compliance with the AI Act, including written policies, procedures, and instructions.

These requirements, among others, generally apply to the creators of AI systems.

Users of AI systems, for their part, must:
  • Ensure that the AI system is being used in accordance with its instructions for use.
  • Assign human oversight to individuals who have the necessary competence, training, authority, and support.
  • Ensure, to the extent they control the input data, that it is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
  • Monitor the AI system on the basis of its instructions for use and, where relevant, inform the provider of any issues.

Again, this list is not exhaustive.

The penalties for non-compliance with the AI Act run up to the higher of EUR 35 million (USD 38 million) or 7% of the company’s global annual turnover in the previous financial year. These penalties are almost double the maximum penalty for GDPR breaches.
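To put that cap in concrete terms, it is simply the higher of two numbers. The short Python sketch below illustrates the calculation; the EUR 35 million floor and the 7% rate come from the figures above, while the turnover amounts and the fine_cap name are hypothetical, for illustration only.

    def fine_cap(global_annual_turnover_eur: float) -> float:
        # Illustrative upper bound for the most serious AI Act violations:
        # the higher of EUR 35 million or 7% of global annual turnover
        # in the previous financial year.
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # Hypothetical turnover figures, for illustration only.
    for turnover in (100_000_000, 500_000_000, 2_000_000_000):
        print(f"Turnover EUR {turnover:,} -> maximum fine EUR {fine_cap(turnover):,.0f}")

For a company with EUR 500 million in annual turnover, 7% is exactly EUR 35 million, so the two prongs coincide; above that level, the percentage prong controls.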

The next step is formal approval of the law by the Council of the European Union, expected in June. The law will generally take effect two years later, in 2026. American employers with global, and especially European, reach will be subject to the new law whenever their AI systems, or the output of those systems, are used in the European Union. Employers should work with legal counsel now to set up compliance parameters for the use of AI: AI is being introduced into many HR tools without HR’s knowledge, and vendors are not providing indemnity provisions in their contracts.

Source: Littler, 3/18/24; Time.com, 3/13/24

 

By Anthony Kaylin, courtesy of SBAM-approved partner, ASE.
