AI Act Flashcards
(12 cards)
Ex post <=> ex ante regulation
Ex ante: regulating before something happens, to prevent harm
Ex. transparency obligations
Ex post: regulating after something happens, so that there is liability and accountability
Predecessors of the AI Act
1) 2018: first official EU approach to AI and robotics; highlights:
* the need to stay ahead of technological developments and to encourage uptake by the public and private sector
* the need to prepare for the socio-economic changes that will come
* explores the need for a legal framework
=> EU strategy: making the EU a world-class hub for AI
=> focus on preserving, protecting and respecting human rights
2) 2018: High-Level Expert Group on AI
Important step in the regulatory process
=> the group of experts created a set of non-binding recommendations and the Ethics Guidelines for Trustworthy AI
=> the recommendations were turned into an Assessment List for Trustworthy AI
3) EU regulatory approach to AI: White Paper (2020), focused on 2 pillars:
1) Excellence: strengthening AI research, investment and innovation in the EU
2) Trust: establishing a risk-based regulatory framework to protect fundamental rights
=> goal: set the stage for AI regulation by proposing the risk-based approach: assess whether we need a new legal framework (and whether it could be enforced), or whether we should instead adjust existing laws so that they cover AI systems
=> ensure compliance with EU values: AI should be fair, human-centered and accountable
==> already discussed banning AI applications that violate fundamental human rights
When did the AI Act happen?
2021: proposal for the AI Act issued by the European Commission; the legislative process starts here (everything before the proposal consisted only of ideas and visions)
The European Commission issues a proposal; the European Parliament and the Council each prepare their own version and then negotiate
Final step: vote (2024); the text was adopted
What does the AI Act consider as AI?
1) A machine-based system
2) designed to operate with varying levels of autonomy
3) that may exhibit adaptiveness after deployment
4) that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions
5) that can influence physical or virtual environments
What are the 5 important exceptions to the AI Act?
1) Open-source and free software, unless it creates high risks
2) Research and development before the AI is put on the market
3) Systems used for personal, non-professional activities
4) AI systems for military use
5) AI for national security purposes
IMPORTANT Q: what is the core of the AI Act?
A risk-based approach to regulation: the definition of AI is so broad that not all AI systems can be regulated in the same way
=> classifies AI systems into 4 risk categories with different levels of regulation, based on their risk of potential harm to the health, safety and fundamental rights of individuals and society
1) Unacceptable risk: some risks are considered so extreme that they are unacceptable, so these systems are prohibited
2) High risk: allowed, but they have to comply with many obligations => they pose a significant risk of harm to the health, safety and fundamental rights of natural persons
3) Specific transparency obligations: deepfakes, genAI, chatbots; allowed, but with additional transparency obligations so that the user is aware
4) Minimal or no risk: allowed without restrictions under the AI Act, but they still have to comply with other laws that apply to them, such as the GDPR or consumer protection law
=> goal: balance innovation with the protection of fundamental rights and safety
What are the prohibited (unacceptable-risk) AI systems?
- Manipulative systems
- Systems that exploit human vulnerabilities (e.g. age, disability, social or economic situation)
- Biometric categorisation systems that categorise individuals based on sensitive characteristics
- Real-time biometric identification systems in publicly accessible spaces for law enforcement (subject to narrow exceptions)
- Profiling to predict criminal behavior
What are the types of high-risk AI systems?
The biggest part of the AI Act focuses on high-risk systems
=> they have a significant impact on safety, security and fundamental rights
=> they must comply with transparency, oversight and risk-management requirements before deployment
1) AI systems that are products, or components of products, that already need to go through independent third-party certification => conformity assessment: whether these products contain AI or not, they must receive a label of conformity
=> types of products that already fall under other sectoral legislation
2) AI systems used in specific sectors for specific uses (both conditions need to apply)
What are examples of high-risk AI systems: type 1?
And which are excluded?
- Medical devices
- Machinery
- Elevators
- Protective equipment
=> because these products already need to go through a conformity assessment, they are considered high risk when they contain AI
Excluded: bikes, trains, boats, planes (they are subject to other legislation outside the AI Act)
What are examples of high-risk AI systems: type 2?
Specific sector for a specific use
* AI as a safety component in critical infrastructure: road traffic, or the supply of water, gas and electricity
* AI used to determine admission, to assign natural persons to educational institutions, or to evaluate learning outcomes
* AI used to evaluate the creditworthiness of natural persons
* AI used by a judicial authority to assist in interpreting facts and the law
The 2 most important actors in the AI ecosystem
- Provider: the developer; the most important category, with the largest number of obligations
- Deployer: the professional user, e.g. a company that buys an AI system and uses it for professional purposes
Examples of AI systems with specific transparency obligations? (P = obligation on the provider, D = obligation on the deployer)
- interactive AI systems (P)
- synthetic content (P)
- deepfakes (must be marked as artificially generated or manipulated) (D)
- emotion recognition or biometric categorisation systems (D)