Week 4: AI Act Flashcards

(26 cards)

1
Q

Why should the use of AI be regulated?

A

1. Data protection + right to privacy

2. Discrimination and biases

Ex: NL childcare benefits scandal
□ A parliamentary report showed tax authorities unfairly targeted poor families over childcare benefits -> the PM and government resigned

Ex: racism and AI
□ Risk assessment systems flagging Black offenders as high risk and white offenders as low risk, even though the white offenders' crimes were much more severe

Ex: Amazon gender discrimination
□ Amazon scrapped a secret recruiting tool that showed bias against women

**3. Misinformation/fakes**

Deepfakes
□ porn
□ announcements/news from important figures

AI-generated images and news
□ Ex: fake images of political figures (e.g. Trump)
□ Ex: AI-generated nudes from childhood photos

On average, 74% of respondents across 29 countries think AI is making it easier to generate very realistic fake news, stories, and images

**4. Technological singularity**

The idea that once AGI is created, it might quickly improve itself, becoming much smarter than humans

**Artificial General Intelligence (AGI):** a type of AI that can think, learn, and solve problems like a human across many different areas, not just one task
- could lead to rapid and unpredictable changes in society, as machines become more intelligent and powerful than we can control/understand

Brain rot? AI art? Idk what this slide is trying to say

2
Q

AI potential

A

AI has a lot of potential

AIA Recital 4:
□ AI is a fast-evolving family of technologies that contributes to a wide array of economic, environmental, and social benefits
□ (…) AI can provide key competitive advantages to undertakings and support socially and environmentally beneficial outcomes
□ (…)

But…
AIA Recital 5:
AI may generate risks and cause harm to public interests and fundamental rights

3
Q

AIA Aim

A

Aims to harmonise Member States' national legislation to eliminate potential obstacles to trade on the internal AI market and to protect citizens and society against AI's adverse effects
- Legal basis: TFEU Art. 114 - functioning of the internal market

AI Act Art. 1(1):
The purpose of the regulation is to:
1. improve the functioning of the internal market,
2. promote the uptake of human-centric and trustworthy AI,
3. while ensuring a high level of protection of health, safety, and fundamental rights

4
Q

Context of AIA globally:

A

The US and China dominate the ‘global AI race’ while Europe lags behind
Response -> Regulate!

□ 2018: Commission published the European strategy for AI -> promote and increase investment in the ambition to become a global AI powerhouse

□ Turning perceived weakness into opportunity by making a virtue of political ideals and creating a unique brand of AI with European values (distinct from the US and Chinese approaches)

(Intended) outcome?
□ Brussels effect: the EU spreading its regulatory standards through soft coercion enabled by its strong internal market, even if trade partners don’t favour those standards
- De facto: influence the manufacturing of AI
- De jure: influence regulation in other jurisdictions

5
Q

AIA scope vs. GDPR territorial scope

A

Who the AI Act applies to:
**AI Act Art. 2(1):**

(a) Providers placing on the market or putting into service AI systems in the EU, whether located inside or outside the EU
(b) Deployers of AI systems within the Union
(c) Providers and deployers outside the EU, if the output is used in the Union
(d) Importers and distributors of AI systems
(e) Product manufacturers placing on the market/putting into service an AI system together with their product under their own name/trademark
(f) Authorised representatives of providers not established in the Union
(g) Affected persons located in the Union

Compare to GDPR territorial scope:
Art. 3 GDPR: Territorial scope
1. The Regulation applies to the processing of personal data in the context of the activities of an establishment of a controller or processor in the Union, regardless of whether the processing takes place in the Union or not
2. The Regulation applies to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities relate to:
(a) the offering of goods/services to such data subjects in the Union, irrespective of whether a payment by the data subject is required; or
(b) the monitoring of their behaviour as far as their behaviour takes place within the Union

-> AI Act:
Focuses on AI systems’ **use/impact in the EU**
□ applies broadly to developers, deployers, and even those outside the EU if the AI affects people in the EU

-> GDPR:
Focuses on **personal data of individuals in the EU**
□ applies to anyone processing such data, inside or outside the EU

6
Q

Timeline of Europe’s AI strategy

A

Spring 2018:
- Commission adopts Communication on AI
- Starts pilot project on explainable AI

End 2018:
- Commission creates and operates the European AI Alliance
- Develops a plan on AI with MS
- Drafts AI ethics guidelines for MS

Mid 2019:
- Commission publishes report on implications for and potential gaps in AI liability and safety frameworks

End 2020:
- Commission increases AI investment from €500 million (2017) to €1.5 billion (2020)
- Develops ‘‘AI-on-demand platform’’ to encourage uptake of AI by private sector

Beyond 2020:
- Commission strengthens AI research centres, supports digital skills and creates data sharing centre

High-Level Expert Group on AI (HLEG) - institution set up by the Commission
○ Published ethics guidelines for trustworthy AI
○ Published an assessment list for trustworthy AI

7
Q

AI Act birth - timeline

A

○ Pressured by Parliament
○ Based on the HLEG reports, the Commission issued:
1. The White Paper on AI (2020)
2. The Proposal for the AI Act (2021)

Timeline AI Act:
- 2021: Commission Proposal
- 2022: Council adopts general approach
- 2023: Parliament’s negotiating position + (Dec): provisional agreement between Council and Parliament
- 2024: Act officially adopted
□ 12 July: publication in the Official Journal
□ 1 Aug: entry into force (none of the requirements apply yet)

8
Q

Entry into force and application AIA

A

AI Act Art. 113:
- Shall apply from 2 Aug 2026, except:
(a) Chapters I and II (II is on prohibited AI practices) shall apply from 2 Feb 2025
(b) Chapter III Section 4, Chapters V, VII, XII and Art. 78 shall apply from 2 Aug 2025, with the exception of Art. 101
(c) Art. 6(1) (on classification rules for high-risk AI systems) shall apply from 2 Aug 2027

9
Q

Material scope AIA

A

‘AI system’

AI Act Art. 3: Definitions
AIA Art. 3(1): AI system
□ Machine-based
□ Operates with varying levels of autonomy
□ May exhibit adaptiveness
□ Infers from the input it receives how to generate outputs

AIA Rec. (12):
□ The notion of ‘AI system’ should be based on key characteristics that distinguish it from simpler traditional software systems/programming approaches
□ It should not cover systems based on rules defined solely by natural persons to automatically execute operations

10
Q

Exceptions to AIA Scope

A

AI Act Art. 2(3)-(10) (approximately):
1. areas outside the scope of Union law

2. AI systems for military, defence, or national security purposes, regardless of the type of entity carrying out those activities

3. public authorities in third countries or international organisations using AI systems for international cooperation or agreements for law enforcement and judicial cooperation (must provide adequate safeguards for fundamental rights)

4. AI systems/models/their output specifically developed and put into service for the sole purpose of scientific research and development

5. research/testing/development activity regarding AI systems/models prior to their being placed on the market/put into service (unless tested in real-world conditions)

6. deployers who are natural persons using AI systems in the course of a purely personal, non-professional activity
11
Q

AIA: Risk-based approach

A

AI systems and practices are classified into a series of graded tiers,

with proportionately more demanding legal obligations that vary with the EU's perception of the severity of the risks they pose (a sketch of the mapping follows the list):

1. Unacceptable risk (prohibited)
2. High risk (conformity assessment + requirements)
3. GPAI (obligations on transparency, intellectual property protection, systemic risk mitigation)
4. Limited risk (transparency requirements)
5. No significant risk (no new legal requirements)
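
As a study aid, here is a minimal sketch of how the tiers map to their legal consequences (the names are hypothetical shorthand; article references follow this deck's numbering):

```python
# Minimal illustrative sketch: mapping the AIA risk tiers to their main
# legal consequences as summarised on this card. Names are hypothetical.
RISK_TIERS = {
    "unacceptable": "prohibited (Art. 5)",
    "high": "conformity assessment + requirements (Art. 6 et seq.)",
    "gpai": "transparency, IP protection, systemic-risk mitigation (Arts. 51-55)",
    "limited": "transparency obligations (Art. 50)",
    "minimal": "no new legal requirements (voluntary codes, Art. 69)",
}

def obligations(tier: str) -> str:
    """Look up the main legal consequence attached to a risk tier."""
    return RISK_TIERS.get(tier.lower(), "unknown tier")

print(obligations("high"))  # conformity assessment + requirements (Art. 6 et seq.)
```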
12
Q

AIA: Unacceptable risk

A

Unacceptable risk -> Prohibited

AI Act Art. 5: these practices are banned because they violate fundamental rights, autonomy, or dignity
- e.g. manipulating human behaviour to circumvent free will

Types:
- Social scoring
□ AI systems that evaluate or rank people’s trustworthiness/behaviour based on personal characteristics or actions -> discrimination, bias

- Facial recognition
□ Real-time remote biometric identification in public spaces for law enforcement, except in strictly defined exceptions (e.g. searching for a missing child, preventing terrorist threats)

- Dark-pattern AI
□ Design interfaces, especially online, that trick users into taking actions they otherwise would not
□ Examples:
- Making it hard to cancel a subscription
- Hiding opt-out buttons or consent options

- Manipulation
□ AI systems that exploit vulnerabilities (e.g. age, mental disability, financial distress) to distort human behaviour
□ Example:
- An AI chatbot that pressures children into spending money in a game
13
Q

Prohibited AI Practices AI Act

A

AI Act Art. 5(1)

(a): subliminal or manipulative/deceptive techniques

Dark-pattern AI, manipulation

Example: exploiting subconscious triggers, AI subtly influencing user’s purchasing decision without them realizing

(b): exploitation of vulnerabilities that materially distort behaviour and cause significant harm

Manipulative AI (especially affecting vulnerable groups)

Example: a toy using AI to emotionally pressure a child into making in-app purchases

(c): social scoring, leading to unfavourable treatment in unrelated contexts or in a disproportionate and unjustified manner

Social scoring

Example: penalising someone in education/housing because of behaviour in a different context (political opinions, social media activity)

**(d): predictive policing for criminal risk assessment** (human-in-the-loop exception)

Facial recognition (in some predictive systems), bias in policing

Example: predicting someone will commit a crime just because of where they live or who they associate with

(e): untargeted scraping of images to create facial recognition databases

Facial recognition

Example: a company scraping social media photos/CCTV to train a facial recognition system without consent

(f): emotion recognition in the workplace or education institutions (except for medical/safety reasons)

Manipulation, privacy concerns

Example: AI monitoring student’s face during exam to assess focus/stress levels

(g): biometric categorisation to infer certain sensitive characteristics

Facial recognition, profiling

Example: AI identifying someone’s sexual orientation based on facial structure

(h): ‘real-time’ remote biometric identification for law enforcement
(exceptions for protection, prevention and prosecution)

Facial recognition

Example: real-time scanning of all faces in a crowd to match against a watchlist (except under strict conditions like an imminent terrorist threat or a missing child)

14
Q

High risk (AIA) - types

A
High risk -> conformity assessment

AI Act Art. 6 et seq.
○ Types:
- Education
- Employment
- Justice
- Immigration
- Law enforcement

15
Q

AIA High Risk Systems (categories)

A

AI Act Art. 6:

High-risk AI systems are split into 2 categories:

**1. Safety component**

Means the AI system is part of a larger regulated product as a safety component
◊ (e.g. medical device, toy, machinery)

These products are already covered by existing EU product laws
(e.g. Machinery Directive, Medical Devices Regulation, etc.)

If the AI is part of that product and affects safety:
◊ Must meet Annex I (essential requirements of the AI Act)
◊ Must undergo conformity assessment to make sure it’s safe and compliant
- ✅ This is where CE marking comes in: showing it meets EU safety standards.

2. Self-standing AI:

Means the AI system is not embedded in a regulated product, but is still used in high-risk areas

Types listed in Annex III:
1. Biometrics and biometrics-based systems
2. Management of critical infrastructure like road, water, gas, electricity, internet
3. Educational and vocational training
4. Employment, workers management and access to self-employment tools
5. Access to public and private services, including life and health insurance
6. Law enforcement
7. Migration, asylum and border control management tools
8. Administration of justice and democratic processes, including AI systems intended to be used for influencing elections, and recommendation engines of Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), as defined by the Digital Services Act (DSA)

Also must:
◊ Comply with Annex I requirements, and
◊ Undergo conformity assessment
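
To make the two routes concrete, here is a rough sketch of the classification logic described above (illustrative only: the class and field names are hypothetical simplifications, not the legal test):

```python
# Rough illustrative sketch of the two routes into the high-risk
# category (Art. 6), as described above. Names are hypothetical
# simplifications, not the legal test.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    is_safety_component: bool       # embedded in a regulated product?
    covered_by_product_law: bool    # product covered by EU product legislation?
    annex_iii_area: Optional[str]   # e.g. "employment", "education", or None

def is_high_risk(system: AISystem) -> bool:
    # Route 1: safety component of a product covered by EU product laws
    if system.is_safety_component and system.covered_by_product_law:
        return True
    # Route 2: self-standing AI used in an Annex III high-risk area
    return system.annex_iii_area is not None

hiring_tool = AISystem(False, False, "employment")
print(is_high_risk(hiring_tool))  # True -> conformity assessment required
```

A hiring tool, for instance, is not embedded in any product but falls in the Annex III employment area, so it enters the high-risk category via the second route.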

16
Q

Conformity assessment (AIA)

A

Conformity assessment -> required for both types of high-risk AI

Requirements for high-risk AI (Title III Ch. 2):
**1. Foundational requirements (happen alongside/before):**

◊ Art. 8: establish and implement **risk management processes** (continuous system)

◊ Art. 9: risk management system defined in light of the AI system's **intended purpose**

2. Technical and operational

◊ Art. 10: use high-quality **training, validation and testing data** (relevant, representative, etc.)

◊ Art. 11 + 12: establish documentation and design logging features (traceability and accountability)

◊ Art. 13: ensure an appropriate degree of **transparency** and provide users with info (on how to use the system)

◊ Art. 14: ensure **human oversight** (measures built into the system and/or to be implemented by users)

◊ Art. 15: ensure **robustness, accuracy, and cybersecurity**

17
Q

AIA high-risk AI: legal obligations of the 2 key groups involved with high-risk AI

both in testvision

A

Legal obligations of the 2 key groups involved with high-risk AI (operators) (Title III Ch. 3):

1. Provider obligations:
Art. 16:
◊ Establish and implement a quality management system in its organisation
◊ Draw up and keep up to date technical docs
◊ Logging obligations to enable users to monitor the operation of the high-risk AI system
◊ Undergo conformity assessment and potentially re-assessment of the system (in case of significant modifications)
◊ Register the AI system in the EU database
◊ Affix CE marking and sign a declaration of conformity
◊ Conduct post-market monitoring
◊ **Collaborate** with market surveillance authorities

2. User obligations
Art. 26:
◊ Operate the AI system in accordance with the instructions of use
◊ Ensure human oversight when using the AI system
◊ Monitor operation for possible risks
◊ Inform the provider/distributor about any serious incident or any malfunctioning
◊ Existing legal obligations continue to apply (e.g. under GDPR)

18
Q

General-purpose AI models

A

AI Act Art. 51-55 (test vision)

GPAI has its own separate category with tailored obligations because:
- These models can power multiple applications (including high-risk ones)
- Even if used in minimal- or limited-risk contexts, their scale and influence can pose systemic risks

Examples: GPT-4, Claude, LLaMA, Gemini

Definition (Art. 3(63)) (testvision):
- ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market

GPAI models:
- Limited obligations:
□ Transparency
□ IPR protection
□ Mitigation of systemic risks
-> AI Act §2 (obligations for providers of GPAI)
-> AI Act §3 (obligations for providers of GPAI with systemic risk)

19
Q

AIA Limited risk

A
Limited risk -> Transparency

AI Act Art. 50: transparency obligations

1. for AI system Provider:

Direct interactions (e.g. chatbots)
- Inform that an AI system is being interacted with (Art. 50(1))
◊ Exception: use of AI system is obvious

Generation of synthetic content
- Marking that content is artificially created (e.g. watermark) (Art. 50(2))
◊ Exception: supporting function for standard processing

2. for AI system Deployer:

Emotion recognition
- Info to natural person concerned (Art. 50(3))
◊ Exception: certain AI systems w safeguards

Deep fakes (Art. 3(60))
- Disclosure that artificially created (Art. 50(4))
◊ Exception: limited disclosure (e.g. obviously artistic, satirical)

Text generation/manipulation
- Disclosure of artificially generated text if published to inform the public on matters of public interest (Art. 50(4))
◊ Exception: Human review, editorial responsibility

Types:
- Chat bots
- Deep fakes
- Emotion recognition systems
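
As a compact summary, the Art. 50 duties above can be condensed into a small lookup (my own study shorthand, not statutory language):

```python
# Illustrative shorthand for the Art. 50 transparency duties above.
# Keys and wording are study shorthand, not statutory language.
TRANSPARENCY_DUTIES = {
    # (actor, trigger): (duty, main exception)
    ("provider", "direct interaction"): (
        "inform that an AI system is being interacted with (Art. 50(1))",
        "use of the AI system is obvious",
    ),
    ("provider", "synthetic content"): (
        "mark content as artificially created, e.g. watermark (Art. 50(2))",
        "supporting function for standard processing",
    ),
    ("deployer", "emotion recognition"): (
        "inform the natural person concerned (Art. 50(3))",
        "certain AI systems with safeguards",
    ),
    ("deployer", "deep fake"): (
        "disclose that the content is artificially created (Art. 50(4))",
        "obviously artistic or satirical works",
    ),
    ("deployer", "public-interest text"): (
        "disclose that the text is artificially generated (Art. 50(4))",
        "human review / editorial responsibility",
    ),
}

duty, exception = TRANSPARENCY_DUTIES[("provider", "direct interaction")]
print(duty, "| exception:", exception)
```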

20
Q

AIA Minimal risk

A

AI Act Art. 69
○ Types:
- Spam filters
- Video games
○ These fall under existing legal frameworks

21
Q

You are working as an intern at a major law firm. Your task is to bring the team up to speed on the Artificial Intelligence Act

Write a memo explaining the aims of the AI Act and the mechanisms through which it seeks to achieve them.

A

📝 Memo: Aims and Mechanisms of the AI Act

🎯 **Aims of the AI Act**

The Artificial Intelligence Act (AI Act) is the EU’s first comprehensive legal framework for AI. Its primary aims are:

  1. Ensure the protection of fundamental rights and public interests
    Including privacy, non-discrimination, and human dignity.
    Triggered by real-world harms like the Dutch childcare benefits scandal, biased policing tools, and gender-discriminatory hiring systems.
  2. Promote the development and use of trustworthy, human-centric AI
    Encourages innovation that aligns with European values.
    Aims to build trust in AI technologies by ensuring they are safe and explainable.
  3. Harmonise legislation across EU Member States
    Prevents fragmented AI regulations that could hinder the internal market.
    Legal basis: TFEU Article 114 – focuses on internal market functioning.
  4. Make the EU globally competitive in AI
    Establishes Europe as a global AI rule-setter.
    Leverages the Brussels Effect: using EU regulation to influence global standards.

⚙️ **Mechanisms to Achieve These Aims**

The AI Act uses a risk-based approach to regulate AI systems, matching legal obligations to the level of risk posed:

  1. Prohibited AI Systems (Unacceptable Risk – Art. 5)
    AI uses that violate fundamental rights are banned, such as:

- Social scoring (like China’s system)
- Dark-pattern AI (trick interfaces)
- Emotion recognition in schools/workplaces
- Biometric surveillance in public spaces (with limited exceptions)

  2. High-Risk AI Systems (Art. 6 & Annex III)
    AI in critical areas (e.g. education, employment, health, justice) must:

- Pass conformity assessments
- Meet strict technical and legal requirements, such as:
  - Risk management (Art. 8–9)
  - High-quality training data (Art. 10)
  - Human oversight (Art. 14)
  - Transparency and traceability (Art. 13)
- Providers must:
  - Register the system
  - Affix a CE mark
  - Monitor post-market performance
- Users must:
  - Use the system properly
  - Report malfunctions or serious incidents

  3. General-Purpose AI (GPAI – Arts. 51–55)
    GPAI models like GPT-4 must meet tailored obligations:
    Transparency
    IP protection
    Mitigation of systemic risks if applicable
  4. Limited Risk AI (Art. 50)
    Applies to systems like chatbots and deepfakes:
    Must disclose AI use (e.g. “You are interacting with an AI”)
    Label synthetic content and deepfakes unless artistic or satirical
  5. Minimal Risk AI
    E.g. spam filters, video game AI
    No legal requirements
    Providers encouraged to follow voluntary codes of conduct (Art. 69)

📅 **Application Timeline**
Entry into force: 1 August 2024
Prohibited systems rules (Art. 5) apply: 2 February 2025
Most rules (e.g. for high-risk systems) apply from 2 August 2026
Some specific provisions (e.g. Art. 6(1)): 2 August 2027

🔚 **Summary**
The AI Act balances innovation with regulation. It protects rights, builds trust, and creates a level playing field through a tiered system of obligations based on the AI system’s risk level. By doing so, it positions the EU as a leader in ethical AI governance.

22
Q

Brussels effect requirements

A

Brussels effect requirements:
1. Stringency
➔ The EU’s rules must be strict and demanding, meaning companies cannot easily meet them with their existing practices; they have to upgrade.
2. Regulatory Capacity
➔ The EU must have the knowledge, technical expertise, and resources to design and enforce detailed, high-quality regulations.
3. Inelastic Target
➔ The regulated product or service must be something companies cannot easily avoid offering under EU rules (e.g. consumer products for the EU market).
4. Non-Divisibility
➔ It must be too costly or complicated for companies to maintain two separate standards (one for Europe and one for elsewhere), so they apply the EU rules globally.

Background: The European Union (EU) sees artificial intelligence (AI) as a crucial area for its economy and society. The AI Act is Europe’s big attempt to create laws for AI, with two goals:
1. Make AI systems safe and trustworthy inside the EU.
2. Set a global standard for AI regulation, exporting European values worldwide (this is called the Brussels Effect).

23
Q

What is Brussels effect

A

What is the Brussels Effect? Normally, because Europe is such a huge market, companies around the world adjust their products to meet EU rules, even when selling outside Europe. For example, GDPR (Europe’s data privacy law) influenced many non-European companies to upgrade their privacy practices globally.

24
Q

Brussels side effect

A

The problem: the Brussels side-effect. While the AI Act could trigger the Brussels Effect, the authors argue that it will also cause a side-effect:
The Act is mostly a product safety law (like rules for making safe toys, cars, or phones), not a fundamental rights law (like protecting privacy or fighting discrimination).

As a result, the AI Act might help spread a system that looks “safe” technically, but misses deeper ethical issues.

Instead of raising global standards for democracy, fairness, and human dignity, it could spread only a “safety-first” model, possibly making it harder to defend human rights later.

25
Q

How does the AI Act relate to fundamental rights?

A

How the AI Act works:
- It treats AI systems like products.
- It uses a risk-based approach.
- The safety focus helps companies know what is legally expected.
- However, fundamental rights protections (privacy, fairness, democracy) are squeezed into a product safety framework, making them weaker.

Why this matters:
- Fundamental rights risks (like mass surveillance or discrimination by algorithms) aren't always about clear safety failures.
- These are often slow, invisible, or cumulative harms, not easy to "test" like a broken product.
- The AI Act struggles to regulate such risks well.

Example: an algorithm used to hire workers might not physically harm anyone, but it could still deeply discriminate against minorities or reinforce social inequalities.
26
Q

The global impact of the AI Act

A

The global impact:
- Because of the Brussels Effect, countries outside Europe might copy the AI Act.
- But if they copy a version that is too focused on technical safety, they might neglect human rights protections even more.
- Worse, Europe might lose the chance to later create stronger global rules if the AI Act becomes the default model everywhere.