Artificial Intelligence has seamlessly integrated into various aspects of our lives, often without our conscious recognition. Whether it’s the AI-powered features on our smartphones or the autonomous capabilities of modern vehicles, we have embraced AI’s presence. However, the emergence of Generative AI (GenAI) systems like ChatGPT has profoundly impacted all domains. With the soaring popularity of GenAI, there is a pressing need for comprehensive regulatory guidance and intervention to safeguard consumers and foster innovation simultaneously.
Leading the global effort, the European Union (EU) proposed the EU AI Act in April 2021. Following extensive deliberations and amendments spanning two years, the EU Parliament’s leading parliamentary committees have passed the EU AI Act. Given AI’s intricate and sensitive nature, the approval process and subsequent enactment into law may extend until the end of 2023 or even into 2024. Once it becomes legally binding, the Act will mark a significant milestone as the first comprehensive regulatory framework for artificial intelligence.
Major Highlights of the EU AI Act
- The objective of the Act is to provide a legal and regulatory framework that balances the safe use of AI with innovation in the sector.
- The regulations define AI as “a machine-based system designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
- The rules follow a risk-based approach, establishing obligations and penalties for providers and users according to the level of risk an AI system poses.
- The regulations apply to all organizations, whether based in the EU or not, that provide AI systems or services to EU citizens.
Risk-Based Approach
Under the regulatory framework, restrictions and obligations depend on the level of risk an AI system poses. The Act divides AI uses into the four risk categories below; a schematic code sketch follows the list.
- Unacceptable Risk AI:
This category covers uses of AI that are banned outright because they threaten people’s safety or fundamental rights. Initially, the category included activities such as social scoring and mass surveillance systems; the list has since been extended to include AI systems for biometric classification, predictive policing, and biometric data scraping.
- High-Risk AI:
This category includes activities that can significantly impact people’s health, safety, fundamental rights, or the environment.
AI systems in the following eight areas (Annex III) fall under the High-Risk AI category:
- Biometric and biometrics-based systems
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, worker management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Assistance in legal interpretation and application of the law
The category also includes AI systems used in products under the EU’s product safety legislation, such as toys, aviation, cars, medical devices, and lifts.
- Limited Risk AI:
AI systems that generate or manipulate image, audio, or video content, such as deepfakes, fall under this category. These systems must comply with transparency requirements that allow users to make informed decisions; users should be made aware that they are interacting with AI.
- Minimal Risk AI:
AI systems not covered by the three categories above pose minimal risk and can operate in the EU without major restrictions.
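To make the tiering concrete, the sketch below shows one way the four tiers and their obligations could be represented in code. It is purely illustrative: the use-case labels and their tier assignments are hypothetical examples, not legal classifications under the Act.

```python
# Illustrative only: a toy mapping from example use cases to the Act's four
# risk tiers. The assignments are assumptions, not legal classifications.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment + ongoing obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no major restrictions"

EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "predictive_policing": RiskTier.UNACCEPTABLE,
    "cv_screening_for_hiring": RiskTier.HIGH,  # employment (Annex III)
    "exam_grading": RiskTier.HIGH,             # education (Annex III)
    "deepfake_generator": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```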
Regulations/Requirements for High-Risk AI
High-Risk AI systems must comply with the requirements listed below. A conformity assessment (CA) is conducted before deployment to verify compliance with these requirements.
- Data and Data Governance – Appropriate techniques should be deployed to ensure that training, validation, and testing data are relevant, representative, complete, and free of errors and bias.
- Risk Management – Providers should identify the associated risks and develop a robust risk management system that meets the requirements of the EU AI Act.
- Record Keeping – Automated recording of events, such as resource and energy usage and environmental impact, throughout the life cycle.
- Robustness, Accuracy, and Cybersecurity – The system should maintain appropriate levels of accuracy, robustness, and cybersecurity throughout its life cycle.
- Transparency – Ensure an appropriate degree of information transparency.
- Technical Documentation – Maintain detailed technical documentation and update it regularly.
- Human Oversight – Provide a human interface to identify anomalies and, if necessary, shut the system down.
Additionally, the CA is not a one-time exercise: providers must develop a robust post-deployment monitoring plan to ensure continued compliance with the EU AI Act. A minimal checklist sketch of these requirements follows.
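As a rough illustration, the seven requirements above could be tracked as a simple pre-deployment checklist. This is a minimal sketch under stated assumptions: the field names and readiness logic are illustrative, not part of any official conformity-assessment tooling.

```python
# Hypothetical, simplified pre-deployment checklist for the seven high-risk
# requirements above. Field names are illustrative assumptions only.
from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    data_governance: bool = False      # relevant, representative, error/bias-checked data
    risk_management: bool = False      # documented risk management system
    record_keeping: bool = False       # automated event/usage logging
    robustness_security: bool = False  # accuracy, robustness, cybersecurity controls
    transparency: bool = False         # appropriate information transparency
    technical_docs: bool = False       # detailed, regularly updated documentation
    human_oversight: bool = False      # human interface with shutdown capability

    def gaps(self) -> list[str]:
        """Return the requirements that are not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = ConformityChecklist(data_governance=True, risk_management=True)
print("Open items before deployment:", checklist.gaps())
```

A provider would proceed to the CA only once `gaps()` returns an empty list, and would keep re-evaluating the same items as part of post-deployment monitoring.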
Regulations/Requirements for GenAI Models
The requirements for foundation models are covered in Article 28b of the EU AI Act:
- Demonstrate mitigation of risks to health, safety, fundamental rights, the environment, democracy, and the rule of law.
- Demonstrate compliance with the requirements defined for high-risk AI systems.
- Involve independent experts at various levels.
In addition to these, providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“Generative AI”) need to comply with the following (a minimal disclosure sketch follows this list):
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
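The disclosure obligation, in particular, lends itself to a concrete illustration: generated output can carry a machine-readable label identifying its AI origin. The sketch below is a hypothetical example; the wrapper function and provenance fields are illustrative assumptions, not a schema defined by the Act.

```python
# Hypothetical sketch of the disclosure obligation: attaching a machine-readable
# "AI-generated" label to model output before it reaches users. The field names
# and wrapper are illustrative assumptions, not an official schema.
import json
from datetime import datetime, timezone

def wrap_generated_content(text: str, model_name: str) -> str:
    """Bundle generated text with a provenance record disclosing its AI origin."""
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,  # explicit disclosure flag
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, indent=2)

print(wrap_generated_content("Draft summary of Q2 results...", "example-foundation-model"))
```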
Penalties under the Act
To drive implementation and compliance, the EU AI Act establishes a sanctions structure penalizing infringements. Penalties can be a fixed sum or a percentage of the offender’s total worldwide annual turnover. Factors such as the type and severity of the offense and the profile and conduct of the offender will be assessed to determine the amount of the fine.
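As a purely numerical illustration, and assuming a GDPR-style rule where the applicable fine is the higher of a fixed sum and a turnover percentage, the calculation might look like this. The figures are placeholders, not the Act’s actual amounts.

```python
# Hypothetical illustration of the penalty structure described above.
# The fixed sum and percentage are placeholders, NOT the Act's actual figures,
# which vary with the type and severity of the infringement.
def penalty(fixed_sum_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    """Return the applicable fine, assumed here to be the higher of the two bases."""
    return max(fixed_sum_eur, pct_of_turnover * turnover_eur)

# Placeholder example: 10M EUR fixed sum vs. 2% of a 1B EUR worldwide turnover.
print(f"Fine: EUR {penalty(10_000_000, 0.02, 1_000_000_000):,.0f}")
```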
How Evalueserve Can Help
With the EU AI Act, the European Union will present the world’s first comprehensive regulatory framework for AI. The framework’s objective is to safeguard users while promoting innovation and competition in the sector. Through these guidelines, the EU expects providers to identify significant risks within their frameworks and put robust risk management structures in place.
Evalueserve’s risk experts have the domain knowledge to help financial institutions navigate the regulatory landscape and understand and implement the guidelines. We have experience working with regulatory guidelines published by European regulators such as the EBA. Evalueserve can support the model life cycle from testing to documentation, and our pre-defined testing suites and thresholds can be tailored to the nature and size of the activity. With Evalueserve accelerators like AIRA, MRMOne, and InsightFirst™, we have helped clients in KYC/AML and Model Risk Governance, and we can also provide solutions to automate your risk management framework.