Module 7: Existing and Emerging AI Laws and Standards: The EU AI Act (Flashcards)
What is the EU AI Act and what are the aims of the act?
The EU AI Act is the world’s first comprehensive AI regulation. It aims to:
1) Ensure that AI systems in the EU are safe with respect to fundamental rights and EU values
2) Stimulate AI investment and innovation in Europe by providing legal certainty
How does the EU AI Act define “AI Provider”?
An entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
How does the EU AI Act define “AI Deployer”?
An entity that uses an AI system under its authority, except where the use is personal and non-professional.
To whom does the EU AI Act apply?
The EU AI Act has extraterritorial scope. It can apply to AI providers and deployers outside of the EU in some cases (e.g., if the AI system is placed on the market in the EU, or if the output generated by the AI system is used in the EU).
What are the exemptions to the applicability of the EU AI Act?
AI used in:
- A military, defense or national security context
- Scientific research and development, including pre-market research, testing and development of products
What does the EU AI Act require of AI Providers (and in some cases AI Deployers)?
- Manage AI use in accordance with its risk level
- Document AI use
- Audit AI use
What are the 4 classifications of risk under the EU AI Act?
1) Unacceptable risk
2) High risk
3) Limited risk
4) Minimal or no risk
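The four tiers form an ordered classification, from prohibited outright down to essentially unregulated. As a minimal sketch of that ordering (the tier names come from the Act; the example mapping below is purely illustrative and hypothetical):

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """The four risk tiers of the EU AI Act, least to most restricted."""
    MINIMAL = 1       # minimal or no risk: largely unregulated
    LIMITED = 2       # limited risk: transparency obligations
    HIGH = 3          # high risk: strict conformity requirements
    UNACCEPTABLE = 4  # unacceptable risk: prohibited outright

# Purely illustrative (hypothetical) mapping of example systems to tiers
examples = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "social credit scoring": RiskTier.UNACCEPTABLE,
}

# IntEnum makes the tiers comparable, so the most restricted example
# can be found by ordinary ordering:
print(max(examples, key=examples.get))  # social credit scoring
```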
What are the two subcategories of high risk AI?
1) Product safety - AI systems that are safety components of a product, or are themselves products covered by EU product safety laws.
2) Systems that pose a significant risk of harm to health, safety or fundamental rights.
Which techniques, systems and uses are deemed to have an unacceptable risk level under the EU AI Act?
- Social credit scoring systems
- Emotion recognition systems in the workplace and in education institutions
- AI that exploits a person’s vulnerabilities, such as age or disability
- Behavioral manipulation and techniques that circumvent a person’s free will
- Untargeted scraping of facial images to use for facial recognition
- Biometric categorization systems using sensitive characteristics
- Specific predictive policing applications
- Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
What are the 8 high risk areas set forth in Annex III of the EU AI Act?
1) Biometric identification and categorization of natural persons
2) Management and operation of critical infrastructure (such as gas and electricity)
3) Education and vocational training
4) Employment, worker management and access to self-employment
5) Access to and enjoyment of essential private services and public services and benefits (e.g., emergency services dispatching)
6) Law enforcement
7) Migration, asylum and border control management
8) Administration of justice and democratic processes (e.g., assistance in legal interpretation and application of the law)
What are the requirements for Providers and Deployers of Limited Risk AI Systems?
- Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots).
- Deployers must:
- Inform and obtain the consent of those exposed to permitted emotion recognition or biometric categorization systems
- Disclose and clearly label visual or audio deepfake content that was manipulated by AI
The requirements for Limited Risk AI Systems apply to which techniques, systems, and uses?
- Systems designed to interact with people (e.g., chatbots)
- Systems that can generate or manipulate content
- Large language models (e.g., ChatGPT)
- Systems that create deepfakes
Provide some examples of minimal or no risk AI systems.
- Spam filters
- AI-enabled video games
- Inventory management systems
What are the data governance requirements for Providers of high risk AI systems under the EU AI Act?
Data quality is critical to the accuracy and fairness of high-risk AI systems. Providers must ensure that they source, clean and process data in ways that mitigate bias and maintain the integrity of the system’s outputs by:
- Ensuring input data is relevant for the purpose, free of errors, representative and complete.
- Implementing robust data management practices: collection, annotation, labelling, cleaning; examination for biases (providers may process special category personal data to monitor, detect and correct bias).
- Monitoring performance and safety; taking corrective steps for nonconforming systems.
What are the data governance requirements for Users/Deployers of high risk systems under the EU AI Act?
- Users must follow the instructions for use
- Users must monitor high risk AI systems and suspend the use of them if there are any serious issues
- Users must update the Provider about serious incidents or malfunctioning
- Users must keep automatically generated logs
- Users must assign human oversight to the appropriate individuals
- Users must cooperate with regulators
What are the data governance requirements for Importers/Distributors of high risk systems under the EU AI Act?
- Ensure the conformity assessment is completed and marked on the product
- Ensure all technical documentation is available
- Refrain from putting a product on the market that does not conform to requirements
What are the registration and notification requirements for Providers under the EU AI Act?
Registration:
- Register the system in the EU-wide database for high risk AI systems which is owned and operated by the European Commission (includes contact info, conformity assessment, and instructions)
Notification:
- Establish and document a post-market monitoring system
- Report any incidents or malfunctioning that could affect fundamental rights to their local market surveillance authority within 15 days of discovery
What is the definition of General Purpose AI (GPAI)?
An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is placed on the market
- Can be integrated into a variety of downstream systems or applications
- A new categorization was created: high-impact GPAI models with systemic risk, and all other GPAI
What are the obligations of GPAI models with systemic risk (a classification based on computing power, which carries substantial compliance requirements)?
- Assessing model performance
- Assessing and mitigating systemic risks
- Documenting and reporting serious incidents and action(s) taken
- Conducting adversarial testing of the model (also known as “red teaming”)
- Ensuring security and physical protections are in place
- Reporting the model’s energy consumption
What are the obligations of “all other” GPAI?
- Maintaining technical documentation
- Making information available to downstream providers who integrate the GPAI model into their AI systems
- Complying with EU copyright law
- Providing summaries of training data
What are the key elements of the EU AI Act governance?
- All relevant EU laws still apply
- European AI Office and AI Board established centrally at the EU level
- Sectoral regulators will enforce the AI Act for their sector
- Providers can combine or embed AI Act requirements in existing oversight where possible, to prevent duplication and ease compliance
What are the penalties for noncompliance with the EU AI Act?
- The highest penalty applies to using prohibited AI: up to €35,000,000 or up to 7% of total worldwide annual turnover for the preceding financial year, whichever is higher
- A lower penalty applies to noncompliance that does not involve using prohibited AI: up to €15,000,000 or up to 3% of total worldwide annual turnover for the preceding financial year, whichever is higher
- More proportionate caps on fines for startups and small- or medium-sized enterprises
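Under the Act, the applicable cap for each tier is the fixed amount or the percentage of total worldwide annual turnover, whichever is higher. A minimal arithmetic sketch (the function name and the sample turnover figures are hypothetical; the caps and percentages are from the Act):

```python
def fine_cap_eur(fixed_cap: float, turnover_pct: float,
                 worldwide_turnover: float) -> float:
    """Maximum possible fine for a tier: the fixed cap or the percentage
    of total worldwide annual turnover, whichever is higher."""
    return max(fixed_cap, turnover_pct * worldwide_turnover)

# Prohibited-AI tier (up to €35m or 7%) for a company with €1bn turnover:
print(fine_cap_eur(35_000_000, 0.07, 1_000_000_000))  # 70000000.0

# For a smaller company (€100m turnover) the fixed cap dominates:
print(fine_cap_eur(35_000_000, 0.07, 100_000_000))  # 35000000
```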
What is the timeframe for the EU AI Act?
The EU AI Act entered into force on August 1, 2024. It will be fully applicable two years later, with some exceptions:
- 6 months out (February 2, 2025): prohibitions on AI in the “unacceptable risk” category apply
- 12 months out (August 2, 2025): rules apply that relate to:
- Notified bodies (Chapter III, Section 4)
- GPAI models (Chapter V)
- Governance (Chapter VII)
- Confidentiality (Article 78)
- Penalties (Articles 99 and 100)
- 24 months out (August 2, 2026): the remainder of the AI Act applies, including obligations for most high-risk AI systems, except Article 6(1) (classification rules for high-risk AI systems that are products, or safety components of products, covered by EU product safety law)
- 36 months out (August 2, 2027): Article 6(1) and its corresponding obligations apply
What is the intention of the EU AI Pact?
A voluntary commitment of industry to begin complying with EU AI Act requirements before legal enforcement begins.