Module 7: Existing and Emerging AI Laws and Standards: the EU AI Act/Module 6 V 2.0 Flashcards

1
Q

What is the EU AI Act and what are the aims of the act?

A

The EU AI Act is the world’s first comprehensive AI regulation. It aims to:
1) Ensure that AI systems in the EU are safe with respect to fundamental rights and EU values
2) Stimulate AI investment and innovation in Europe by providing legal certainty

2
Q

How does the EU AI Act define “AI Provider”?

A

An entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

3
Q

How does the EU AI Act define “AI Deployer”?

A

An entity that uses an AI system under its authority, except where the system is used in the course of a personal, non-professional activity.

4
Q

To whom does the EU AI Act apply?

A

The EU AI Act has extraterritorial scope. It can apply to AI providers and deployers outside of the EU in some cases (e.g., if the AI system is placed on the market in the EU, or if the output generated by the AI system is used in the EU).

5
Q

What are the exemptions to the applicability of the EU AI Act?

A

AI used in:
- A military context (national security and defense)
- Research, testing and development activities prior to the AI system being placed on the market or put into service (including private-sector R&D)

6
Q

What does the EU AI Act require of AI Providers (and in some cases AI Deployers)?

A
  • Process AI use in accordance with the risk level
  • Document AI use
  • Audit AI use

7
Q

What are the 4 classifications of risk under the EU AI Act?

A

1) Unacceptable risk
2) High risk
3) Limited risk
4) Minimal or no risk

8
Q

What are the two subcategories of high risk AI?

A

1) Product safety - AI systems that are safety components of a product, or are themselves products covered by EU product safety laws.
2) Systems that pose a significant risk of harm to health, safety or fundamental rights.

9
Q

Which techniques, systems and uses are deemed to have an unacceptable risk level under the EU AI Act?

A
  • Social credit scoring systems
  • Emotion recognition systems in workplaces and education institutions
  • AI that exploits a person’s vulnerabilities, such as age or disability
  • Behavioral manipulation and techniques that circumvent a person’s free will
  • Untargeted scraping of facial images to use for facial recognition
  • Biometric categorization systems using sensitive characteristics
  • Specific predictive policing applications
  • Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations

10
Q

What are the 8 high risk areas set forth in Annex III of the EU AI Act?

A

1) Biometric identification and categorization of natural persons
2) Management and operation of critical infrastructure (such as gas and electricity)
3) Education and vocational training
4) Employment, worker management and access to self-employment
5) Access to and enjoyment of essential private services and public services and benefits (e.g., emergency services dispatching)
6) Law enforcement
7) Migration, asylum and border control management
8) Assistance in legal interpretation and application of the law

11
Q

What are the requirements for Providers and Deployers of Limited Risk AI Systems?

A
  • Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots).
  • Deployers must:
    • Inform and obtain the consent of those exposed to permitted emotion recognition or biometric categorization systems
    • Disclose and clearly label visual or audio deepfake content that was manipulated by AI

12
Q

The requirements for Limited Risk AI Systems apply to which techniques, systems, and uses?

A
  • Systems designed to interact with people (e.g., chatbots)
  • Systems that can generate or manipulate content
  • Large language models (e.g., ChatGPT)
  • Systems that create deepfakes

13
Q

Provide some examples of minimal or no risk AI systems.

A
  • Spam filters
  • AI-enabled video games
  • Inventory management systems

14
Q

What are the data governance requirements for Providers of high risk AI systems under the EU AI Act?

A

Data quality is critical to the accuracy and fairness of high-risk AI systems. Providers must ensure that they source, clean and process data in ways that mitigate bias and maintain the integrity of the system’s outputs by:

  • Ensuring input data is relevant for the purpose, free of errors, representative and complete.
  • Implementing robust data management practices: collection, annotation, labelling, cleaning; examination for biases (providers may process special category personal data to monitor, detect and correct bias).
  • Monitoring performance and safety; taking corrective steps for nonconforming systems.

15
Q

What are the data governance requirements for Users/Deployers of high risk systems under the EU AI Act?

A
  • Users must follow the instructions for use
  • Users must monitor high risk AI systems and suspend their use if there are any serious issues
  • Users must update the Provider about serious incidents or malfunctioning
  • Users must keep automatically generated logs
  • Users must assign human oversight to the appropriate individuals
  • Users must cooperate with regulators

16
Q

What are the data governance requirements for Importers/Distributors of high risk systems under the EU AI Act?

A
  • Ensure the conformity assessment is completed and marked on the product
  • Ensure all technical documentation is available
  • Refrain from putting a product on the market that does not conform to requirements

17
Q

What are the registration and notification requirements for Providers under the EU AI Act?

A

Registration:
- Register the system in the EU-wide database for high risk AI systems which is owned and operated by the European Commission (includes contact info, conformity assessment, and instructions)

Notification:
- Establish and document a post-market monitoring system
- Report any incidents or malfunctions that could affect fundamental rights to their local market surveillance authority within 15 days of discovery

18
Q

What is the definition of General Purpose AI (GPAI)?

A

An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is released on the market
- Can be integrated into a variety of downstream systems or applications
- A new categorization was created: high-impact GPAI models with systemic risk, and all other GPAI

19
Q

What are the obligations for GPAI models with systemic risk (a classification based primarily on computing power, carrying substantial compliance requirements)?

A
  • Assessing model performance
  • Assessing and mitigating systemic risks
  • Documenting and reporting serious incidents and action(s) taken
  • Conducting adversarial testing of the model (also known as “red teaming”)
  • Ensuring security and physical protections are in place
  • Reporting the model’s energy consumption
20
Q

What are the obligations of “all other” GPAI?

A
  • Maintaining technical documentation
  • Making information available to downstream providers who integrate the GPAI model into their AI systems
  • Complying with EU copyright law
  • Providing summaries of training data
21
Q

What are the key elements of the EU AI Act governance?

A
  • All relevant EU laws still apply
  • European AI Office and AI Board established centrally at the EU level
  • Sectoral regulators will enforce the AI Act for their sector
  • Providers can combine or embed AI Act requirements in existing oversight where possible, to prevent duplication and ease compliance
22
Q

What are the penalties for noncompliance with the EU AI Act?

A
  • The highest penalty applies to using prohibited AI: up to 35,000,000 € or up to 7% of global turnover for the preceding fiscal year, whichever is higher
  • A lower penalty applies to noncompliance that does not involve using prohibited AI: up to 15,000,000 € or up to 3% of global turnover for the preceding fiscal year, whichever is higher
  • More proportionate caps on fines apply to startups and small- or medium-sized enterprises
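The fine caps work as a "greater of" rule. A minimal arithmetic sketch (the function name and example figures are this sketch's own; it assumes Article 99's "whichever is higher" formula for undertakings and encodes only the two tiers on this card):

```python
def penalty_cap(turnover_eur: float, prohibited_ai: bool) -> float:
    """Illustrative upper bound on an EU AI Act fine for an undertaking.

    Encodes the two tiers above as a "greater of" rule: the fixed
    amount or the percentage of global annual turnover, whichever
    is higher.
    """
    if prohibited_ai:
        # Prohibited-AI tier: 35,000,000 EUR or 7% of global turnover
        return max(35_000_000, 0.07 * turnover_eur)
    # Other noncompliance: 15,000,000 EUR or 3% of global turnover
    return max(15_000_000, 0.03 * turnover_eur)

# A firm with 1 billion EUR turnover using prohibited AI:
print(penalty_cap(1_000_000_000, True))  # → 70000000.0 (7% exceeds the 35 M€ floor)
```

For small firms the fixed amount dominates; for large firms the turnover percentage does, which is why the caps scale with company size.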
23
Q

What is the timeframe for the EU AI Act?

A

The EU AI Act entered into force on August 1, 2024. It becomes fully applicable two years later, with some exceptions:
- 6 months out (February 2, 2025): prohibitions on AI in the “unacceptable risk” category apply
- 12 months out (August 2, 2025): rules apply that relate to:
  - Notified bodies (Chapter III, Section 4)
  - GPAI models (Chapter V)
  - Governance (Chapter VII)
  - Confidentiality (Article 78)
  - Penalties (Articles 99 and 100)
- 24 months out (August 2, 2026): the remainder of the AI Act applies, including obligations for most high-risk AI systems, except Article 6(1), Classification Rules for High-Risk AI Systems (point 1)
- 36 months out (August 2, 2027): Article 6(1) and the corresponding obligations apply

24
Q

What is the intention of the EU AI Pact?

A

A voluntary commitment of industry to begin complying with EU AI Act requirements before legal enforcement begins.

25
Q

How can organizations prepare for the EU AI Act?

A

1) Identify which of your AI systems will likely be classified as high risk by the act
2) Determine whether the AI systems are within the territorial scope of the act
3) Determine whether your organization is a Provider or User/Deployer
4) Consider the AI procurement policies and processes used
5) Perform a gap analysis comparing your existing AI policies, processes and standards with the act’s requirements
6) Keep up to date on technical standards from international and European standards organizations
26
Q

What are the requirements for high risk AI systems laid out by Chapter III, Section 2 of the EU AI Act?

A

- Implementation of a continuous risk management system
- Data and data governance
- Technical documentation
- Record keeping
- Transparency and provision of information to deployers
- Human oversight
- Accuracy, robustness and cybersecurity
27
Q

What are the requirements of the risk management system for high risk AI systems?

A

- Identify and analyze risks of the AI system; add measures to minimize and mitigate risks.
- Provide technical documentation of the risks and mitigation processes.
- Maintain and update documentation even after product release.
- Conformity assessments; post-market monitoring; serious incident reporting (within 2–15 days, depending on severity).
28
Q

What are the technical documentation requirements for high risk AI systems?

A

- Event logging (period of use, databases referenced, input data, identity of human verifiers, etc.), with logs retained for a minimum of six months.
- Specifying technical and functional applications in relation to the AI system.
- Documenting and ensuring quality-management procedures, or following additional documentation obligations.
29
Q

What are the record keeping requirements for high risk AI systems?

A

Keep logs in an automatic, documented manner (e.g., inputs and outputs should be traceable).
30
Q

What are the requirements for transparency and provision of information to deployers for high risk AI systems?

A

- Information must be clear, concise and relevant.
- It must cover how to use the system safely: system maintenance, capabilities and limitations, and how to implement human oversight.
31
Q

What are the requirements for human oversight for high risk AI systems?

A

- Humans must be able to oversee processes, understand how the AI system works, and understand and interpret output.
- Human operators must have the ability to intervene and override AI systems when necessary, especially in high-stakes scenarios like health care or law enforcement.
32
Q

What are the requirements for accuracy, robustness and cybersecurity for high risk AI systems?

A

Ensure the system performs consistently to achieve its intended purpose:
- Test regularly for accuracy and robustness
- Build with resilience to cybersecurity threats
33
Q

What are the Provider obligations for high risk AI systems?

A

- Compliance: comply with Chapter III, Section 2 of the EU AI Act.
- QMS: implement a Quality Management System covering all aspects of the AI system’s lifecycle.
- Documentation: maintain comprehensive documentation, including technical specs, risk management processes, and changes made to the system.
- Automated logs: design AI systems to automatically generate logs that record their operational outputs and decision-making processes.
- Corrective actions and duty of information: implement corrective actions for any system malfunctions or violations of the EU AI Act (including informing regulators and users of any significant malfunctions or risks posed by the system).
- Conformity assessments: conduct them prior to placing the AI system on the market.
- Registration: register high risk systems in a publicly accessible EU-wide database before they are placed on the market.
- Serious incident reporting: report any serious incidents or malfunctions that could result in harm or breach fundamental rights.
34
Q

What is a Conformity Assessment and when is one required per Article 43 of the EU AI Act?

A

Definition: the process of verifying or demonstrating compliance with the requirements for high-risk AI systems as set out in Chapter III, Section 2 of the Act. This requirement ensures that AI systems are thoroughly evaluated for compliance before being made available to the public, reducing the likelihood of deploying unsafe or non-compliant AI technologies.

CAs are particular to high-risk AI systems and must take place before the system is put on the market, as well as over the life cycle of the system. They must be performed depending on the AI system or technology’s risk to the health, safety and fundamental rights of individuals.

CAs apply to the use of AI in recruitment, biometric identification surveillance systems, safety components (e.g., medical devices), access to essential private and public services (e.g., creditworthiness, life insurance) and safety of critical infrastructure (e.g., energy, transport). The requirement is not limited to cases where personal information is being processed.
35
Q

How could a Deployer become a Provider?

A

- If the Deployer makes a substantial modification to a high risk AI system, or
- If the Deployer makes a modification to an AI system that causes it to become high risk, or
- If the Deployer alters the original intended purpose of the AI system, causing it to become high risk

In any of these cases, the Deployer takes on the additional obligations of Providers/Developers.
36
Q

What are the obligations of Deployers of high risk AI systems?

A

- Conduct a Fundamental Rights Impact Assessment (prior to using the AI system)
- Ensure proper use of the AI system
- Implement adequate human oversight
- Monitor AI system performance (and report any issues to the Provider)
- Maintain logs and documentation
37
Q

What is the primary obligation of Importers of high risk AI systems?

A

To ensure foreign tools meet EU standards before entering the market.
38
Q

What is the primary obligation of Distributors of high risk AI systems?

A

To ensure conformity and proper handling of AI systems within the supply chain.
39
Q

What are the requirements for Importers and Distributors of high risk AI systems?

A

- Ensuring compliance before placing the AI system on the market (importers)
- Verifying that the AI system has been registered (importers)
- Checking for compliance with import requirements (importers)
- Providing documentation and support to authorities (importers and distributors)
- Ensuring no modifications affect compliance (distributors)
40
Q

What are the key elements of EU AI Act governance?

A

- All relevant EU laws still apply
- European AI Office and AI Board established centrally at the EU level
- Advisory forum to provide technical expertise to the AI Board and the Commission
- Sectoral regulators will enforce the AI Act for their sector
- Providers can combine or embed AI Act requirements in existing oversight where possible, to prevent duplication and ease compliance