Week 6: Analysis of AI use cases Flashcards

(26 cards)

1
Q

AIA Scope

A

AIA Art. 1 : subject matter
1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation
2. This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
(b) prohibitions of certain AI practices;
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
(d) harmonised transparency rules for certain AI systems;
(e) harmonised rules for the placing on the market of general-purpose AI models;
(f) rules on market monitoring, market surveillance, governance and enforcement;
(g) measures to support innovation, with a particular focus on SMEs, including start-ups.

2
Q

AI system definition

A

Definition given in Art. 3(1):
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments

Further clarified in recently published Commission Guidelines, issued as per AIA Art. 96(1)(f) and (2)
□ Agility: the guidelines can be updated
□ In line with Recital 12: ‘the notion of AI system in this Regulation should be clearly defined (…) while providing flexibility to accommodate rapid technological developments in this field’

Guidelines on the AI system definition were adopted in parallel to the Guidelines on Art. 5 (prohibited practices)
This makes sense in light of the AIA’s phased applicability

The final definition (AIA Art. 3(1)) deviates significantly from the AIA proposal
Article 3(1) Proposal AI Act:
“‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”

3
Q

Aim and legal status of the Commission Guidelines on AIA

A

Para 3:
‘By issuing these Guidelines, the Commission aims to assist providers and other relevant persons, including market and institutional stakeholders, in determining whether a system constitutes an AI system within the meaning of the AI Act, thereby facilitating the effective application and enforcement of that Act’

Para 6:
already signals hurdles in achieving this aim
“The definition of AI system should not be applied mechanically; each system must be assessed based on its specific characteristics”

Para 9:
Guidelines are non-binding
‘Any authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the European Union (CJEU)’

4
Q

AI system elements (AIA)

A

The AI system definition (Art. 3(1)) comprises 7 elements. An AI system is:
1. A machine-based system
2. Designed to operate with varying levels of autonomy
3. That may exhibit adaptiveness after deployment
4. And that, for explicit or implicit objectives,
5. Infers, from the input it receives, how to generate outputs
6. Such as predictions, content, recommendations, or decisions
7. That can influence physical or virtual environments

5
Q

AI system: autonomy

A
Element 2: Designed to operate with varying levels of autonomy

-> Recital 12:
□ AI systems are designed to operate with varying levels of autonomy,
□ ‘meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention’

Para 15 Guidelines:
□ the notions of autonomy and inference go hand in hand:
- the inference capacity of an AI system (i.e., its capacity to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments) is key to bring about its autonomy

-> Key to bring about its autonomy:
- Inference capability (the ability to generate outputs from input data) is what makes an AI system autonomous: the core feature that lets the system act without step-by-step human instructions

Analogy: a calculator isn’t ‘autonomous’, it does exactly what you type. But an AI system that takes job applications and recommends candidates based on patterns is autonomous: it infers and outputs decisions or suggestions without direct human input at every step

6
Q

AI systems : adaptiveness

A
Element 3: May exhibit adaptiveness after deployment

Adaptiveness ≠ autonomy

Para 22 Guidelines:
□ Recital 12 AI Act clarifies that ‘adaptiveness’ refers to self-learning capabilities,
- allowing the behaviour of the system to change while in use
□ ‘the new behaviour of the adapted system may produce different results from the previous system for the same inputs’
-> ‘From the previous system’:
the previous system = the same AI system in its earlier, pre-learning state (before it learned/adapted)

Conceptually challenging:
- Because we are used to systems behaving consistently (predictably)
- But with adaptive AI, consistency is no longer guaranteed:
it can evolve based on new data, feedback, and real-world interaction (see the sketch below)
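A minimal, hypothetical Python sketch (not from the Guidelines; all names and numbers invented) of adaptiveness after deployment: the same input yields a different output once the system has self-updated on feedback gathered while in use.

```python
# Hypothetical illustration of 'adaptiveness after deployment' (Recital 12 /
# para 22 of the Guidelines): the system's behaviour changes while in use,
# so the same input can produce different results before and after learning.

class AdaptiveScorer:
    """Toy model that keeps learning from feedback after deployment."""

    def __init__(self) -> None:
        self.weight = 1.0  # the 'previous system': its pre-learning state

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, observed: float, lr: float = 0.1) -> None:
        # Self-learning step: adjust the weight based on real-world feedback.
        error = observed - self.predict(x)
        self.weight += lr * error * x


scorer = AdaptiveScorer()
print(scorer.predict(2.0))           # output of the pre-learning state: 2.0
scorer.update(x=2.0, observed=6.0)   # the system adapts while deployed
print(scorer.predict(2.0))           # same input, different output: 3.6
```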

7
Q

AI system : Infers from the input it receives, how to generate outputs

A

Para 26 Guidelines:
□ ‘a key, indispensable condition that distinguishes AI systems from other types of systems’

Recital 12:
‘the capability to infer refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments’
-> Para 28: this refers to the capability, predominantly in the use phase, to generate outputs based on inputs
‘and to a capability of AI systems to derive models or algorithms, or both, from inputs or data’
-> Para 28: this refers primarily, but is not limited, to the building phase of the system, and underlines the relevance of the techniques used for building the system

The paragraphs meant to clarify what inference entails rely on unclear concepts -> in practice it remains hard to answer: do I run an AI-powered platform?
Para 30:
- techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives,
- and logic- and knowledge-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved
Para 39:
- ‘logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved’
- (…) ‘Based on the human experts’ encoded knowledge, these systems can “reason” via deductive or inductive engines or using operations such as sorting, searching, matching, chaining’
- By using logical inference to draw conclusions, such systems apply formal logic, predefined rules or ontologies to new situations
- approaches include, for instance:
◊ (…) search and optimisation methods

Para 40:
‘some systems have the capacity to infer in a narrow manner but may nevertheless fall outside of the scope of the AI system definition because of their limited capacity to analyse patterns and adjust autonomously their output’
(A hypothetical sketch contrasting the two inference families follows below.)
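To make the distinction in paras 30 and 39 more concrete, here is a minimal, hypothetical Python sketch (the data, rules and function names are invented for illustration) contrasting a machine learning approach, where the decision rule is derived from example data, with a logic- and knowledge-based approach, where predefined, expert-encoded rules are applied to new situations.

```python
# Hypothetical contrast between the two inference families named in the
# Guidelines: a rule learned from data vs. rules encoded by a human expert.

# 1. Machine learning approach: the decision threshold is derived from data.
positive_scores = [0.7, 0.9]   # scores of past suitable candidates
negative_scores = [0.2, 0.4]   # scores of past unsuitable candidates
threshold = (max(negative_scores) + min(positive_scores)) / 2  # learned: 0.55

def ml_decide(score: float) -> str:
    """Infer an output from the input using the learned threshold."""
    return "recommend" if score >= threshold else "reject"

# 2. Logic- and knowledge-based approach: predefined rules, encoded by a human
#    expert, are applied to new situations in priority order.
RULES = [
    (lambda applicant: applicant["experience_years"] >= 3, "recommend"),
    (lambda applicant: True, "reject"),  # default rule
]

def rule_decide(applicant: dict) -> str:
    """Infer an output by applying the first rule whose condition holds."""
    for condition, outcome in RULES:
        if condition(applicant):
            return outcome
    return "reject"

print(ml_decide(0.8))                        # "recommend" (learned inference)
print(rule_decide({"experience_years": 5}))  # "recommend" (rule-based inference)
```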

8
Q

AI system: can influence physical/virtual environments

A

Para 60 Guidelines:
□ the seventh element of the definition of an AI system is that the system’s outputs ‘can influence physical or virtual environments’
□ ‘That element should be understood to emphasise the fact that AI systems are not passive, but actively impact the environments in which they are deployed’

Art. 3(1): ‘can influence physical or virtual environments’
□ A facultative, not decisive, condition for determining whether a system qualifies as an AI system (like adaptiveness)
- This optionality seems to be left out of the Guidelines:
◊ AI systems should be understood as actively impacting their environments

The Guidelines thus emphasise the idea that AI systems do influence the world around them

9
Q

Case Study 1: non-identifying emotional AI

Context: What is Non-identifying emotional AI

A

What is non-identifying emotional AI:
○ AI that knows how we feel without knowing who we are:
○ Detects emotions (happy, sad, angry, etc.) anonymously (doesn’t link feelings to identity)

Possible contexts where non-identifying emotional AI might be deployed:

○ In-store experience enhancement (retail)
- Purpose: adjust the environment based on shoppers’ moods
□ music, lighting, or promotional content
- Non-identifying: retailers don’t need to know who you are, only your emotional state
□ Example: a mall screen showing different ads based on the mood of groups passing by

Retail:
Pizza Hut using mood detection to suggest pizzas

Healthcare:
Smart glasses helping autistic children recognize emotions

Emotion-aware automated vehicles:
- a growing real-world application of non-identifying emotional AI
- can adapt driving behaviour, comfort settings, or emergency responses based on detected emotions
□ e.g., how they drive, or changing seat temperature based on passenger emotions
- all without necessarily knowing who the passenger is

10
Q

Case Study 1: non-identifying emotional AI

What kind of data is this (scope)

A

Emotional AI uses biometric data: information about physical or behavioural traits

Scope of analysis: Soft biometrics

Emotion data processed by emotional AI either amounts to:

1. Soft biometric data

term coined by A.K. Jain et al. (2004):

Soft biometrics are characteristics that provide some information about an individual,

but lack the distinctiveness and permanence to sufficiently differentiate any two individuals -> they don’t uniquely identify

OR

2. Hard biometric data
E.g. fingerprints, facial recognition -> can uniquely identify

11
Q

Case Study 1: non-identifying emotional AI

Biometric Data legal definitions - Does AIA present new definition of biometric data compared w GDPR

A

**Biometric data definitions:**

GDPR Art. 4(14): biometric data
□ ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data
-> Biometric data = identifying data

AIA Art. 3(34): biometric data
□ ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data

-> Key difference to GDPR Art. 4(14): it doesn’t explicitly say that biometric data must be used for identification

**Why the definitions still end up the same:**
Upon closer inspection, however, AIA biometric data is also limited to hard biometrics:
□ AIA Art. 3(34): biometric data means ‘personal data’
□ GDPR Art. 4(1): personal data
◊ ‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person

-> So if the biometric data doesn’t identify someone, then by definition it’s not personal data under the GDPR/AIA -> and therefore not biometric data

**Where it gets particularly vague:**

AIA Recital 14:
□ ‘The notion of “biometric data” used in this Regulation should be interpreted in light of the notion of biometric data as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 and Article 3, point (13) of Directive (EU) 2016/680’
□ ‘Biometric data can allow for the authentication, identification or categorisation of natural persons and for the recognition of emotions of natural persons’

This is vague, because:
- it mentions emotion recognition, even though emotion data doesn’t always identify someone

• it suggests biometric data might include non-identifying data (like moods)
• but this contradicts the legal definition that it must be personal data (i.e., about an identifiable person)
12
Q

Case Study 1: non-identifying emotional AI

How does the definition of biometric data (AIA and GDPR) relate to emotion-detecting AI?

A

Because emotional AI often relies on things like facial expressions or voice tone, which:
- Might be biometric in nature, but
- Don’t always identify a person

-> Legal gap: emotional data might not be covered by ‘biometric data’ protections if it doesn’t identify you, even though it’s still sensitive

KEY POINT:
If emotional data doesn’t identify a person, it’s probably not protected the same way under GDPR or AIA definitions

13
Q

Case Study 1: non-identifying emotional AI

Emotion recognition system

A

Key point: it only applies when emotions are tied to identifiable people
○ ‘Emotion recognition system’ only encompasses identifying emotional AI (AIA Art. 3(39))

AIA Art. 3(39): emotion recognition system:
‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data

-> On the basis of biometric data:
biometric data under the AIA must be personal data, i.e. it relates to someone identifiable (either directly or indirectly)

So:
If an AI system analyses facial expressions, tone of voice or body language, but does not link them to an individual, that is not biometric data → therefore not an ‘emotion recognition system’ under this definition

-> Inferring emotions/intentions:
includes systems that try to read feelings/intent (e.g., angry, bored, interested)

BUT:
only if tied to an identifiable person

→ Left out:
any AI system that detects emotions without identifying people is not covered by this legal definition
- Example: a smart billboard that reads group mood (e.g., “this group looks bored”) and shows a different ad ❌ not covered

Remarkable/noteworthy:
the term ‘emotion recognition system’ is defined in AIA Art. 3 but subsequently barely referred to in the AIA
- Recitals aside, it is mentioned only in Art. 50(3) (on transparency obligations for providers and deployers of certain AI systems)

14
Q

Case Study 1: non-identifying emotional AI

Issue with emotion recognition system definition

A

The AIA defines “emotion recognition systems” in a way that leaves out a lot of real-world systems that still impact people, even if they don’t know your name

Yet:
□ these systems can still manipulate, profile, or influence people
□ they often rely on sensitive behavioural signals

But they escape the specific transparency, risk, and prohibition rules for “emotion recognition systems”

That’s why this narrow definition is seen as a regulatory blind spot

In short: this definition only includes identifying emotional AI (i.e., it links feelings to people), not the non-identifying kind used in stores or cars

15
Q

Case Study 1: non-identifying emotional AI

Where emotional AI is Strictly prohibited

A

**AIA Art. 5(1)(f): prohibits AI systems that infer emotions of a natural person in certain areas (workplaces, schools)**

AIA Art. 5(1)(f): Prohibited AI practices
The following AI practices shall be prohibited:
(f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;

AIA Art. 3(39) :
Emotion recognition system
‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.

Recital 44:
‘therefore, the placing on the market, the putting into service, or the use of AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and education should be prohibited’

-> Banned in workplaces and schools, unless for safety/medical purposes

Why: These are sensitive environments where emotional manipulation or surveillance could:
□ Violate privacy
□ Increase stress or discrimination
□ Undermine autonomy or fairness
Only allowed if: The emotional AI system is used for safety or medical purposes

16
Q

Case Study 1: non-identifying emotional AI

Other emotional AI may be banned if it crosses these lines

A

The prohibitions in AIA Art. 5(1)(a) and Art. 5(1)(b) are opaque to such an extent that it is hard to anticipate which emotional AI systems will meet all the thresholds:

**1. Subliminal manipulation:**
AIA Art. 5(1)(a): Prohibited AI practices
(a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm;

-> Subliminal manipulation:
If AI manipulates people unconsciously and harms decision-making, it’s banned
□ Example: an AI subtly plays emotionally manipulative music or visuals to push a user to buy something they wouldn’t otherwise, without them knowing

**2. Exploiting vulnerable people:**
AIA Art. 5(1)(b): Prohibited AI practices
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

-> Exploiting vulnerable people: if AI targets children, the elderly, poor or disabled people, etc., and manipulates them, it’s banned
□ Example: an AI game targeting kids that detects sadness and pushes in-app purchases to “cheer them up” -> banned

**Note:**
These bans don’t target emotional AI specifically, but emotional AI can fall under them if it:
- Works unconsciously
- Manipulates vulnerable groups
- Leads to harm

17
Q

Case Study 1: non-identifying emotional AI

Emotional AI classified as high risk

A

AIA Art. 6(2) in conjunction with AIA Annex III (1)(c):
certain emotional AI systems are high-risk:

AIA Annex III (1)(c):
- High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:
(1) Biometrics, in so far as their use is permitted under relevant Union or national law:
(c) AI systems intended to be used for emotion recognition

-> Emotional AI systems used for biometric purposes (like emotion recognition in security settings) are considered high-risk

High-risk doesn’t mean forbidden
□ These systems must meet stricter legal requirements
- subject to extra oversight, transparency, risk assessment

Applies when:
□ AI used for emotion recognition
□ In biometric contexts (e.g., security, border checks, surveillance, etc.)
□ But only when permitted under EU or national law

Example: security system at airport that detects anger or stress to flag suspicious behaviour → high-risk

**Note:**
scope differs from ‘emotion recognition system’ under AIA Art. 3(39)
- AIA Art. 3(39) defines “emotion recognition system” narrowly (only identifying systems)
- But Annex III (1)(c) seems to apply to a broader range of emotional AI (even non-identifying ones if used in biometric contexts)

So there’s a legal grey area:
Some emotional AI systems might be high-risk even if they don’t meet the definition of “emotion recognition system.”

18
Q

Case Study 1: non-identifying emotional AI

In summary : non-identifying emotional AI

A
• Non-identifying emotional AI is mostly used in low-risk, anonymous settings (e.g., retail, vehicles)
• The AI Act (AIA) mostly regulates identifying emotional AI
• Emotional data is legally fuzzy: it’s biometric in nature, but often not personal data if it doesn’t identify someone
• Some uses are outright prohibited, others are high-risk and heavily regulated
• Key problems / criticism:
○ The definition of “emotion recognition system” is too narrow (it excludes non-identifying AI)
○ Many real-world emotional AI systems (e.g., in retail or cars) aren’t covered
○ The precautionary principle (especially for children) is not fully applied
○ Emotional AI can still be used in risky ways without adequate safeguards
19
Q

Case Study 1: non-identifying emotional AI

Issues with non-identifying Emotional AI (reading)

A

🧠 **Main Argument**
Emotional AI that uses soft biometric data is not well regulated under current EU laws (GDPR + AI Act),

even though it can manipulate people and violate fundamental rights like:
○ Human dignity (Charter Art. 1)
○ Privacy (Charter Art. 7, ECHR Art. 8)
○ Data protection (Charter Art. 8)

**Legal Gaps and Problems**
🛑 GDPR:
Only applies to personal (identifying) data → soft biometrics are excluded

📜 AI Act:
Defines biometric data as personal data → soft biometrics still excluded

The definition of emotion recognition system only covers systems using biometric (i.e., identifying) data

Some emotional AI banned in workplaces/schools, but only if the purpose is to infer emotions → vague and narrow

🧩 **Fundamental Rights at Risk**
Human Dignity (Charter of Fundamental Rights Art. 1):
○ Using emotional AI to manipulate behaviour (e.g., in retail) treats people as objects, even if they’re not identified.

Privacy & Data Protection (Charter Art. 7 and 8, ECHR Art. 8):
○ Emotional states are deeply personal; even if anonymized, tracking them can still violate privacy.

⚠️ **Major Critiques of the AI Act**

Too narrow:
Doesn’t clearly regulate non-identifying emotional AI

Unclear terms:
No distinction between “detect”, “identify”, “infer” emotions

Weak impact assessments:
Fundamental rights not meaningfully protected

High-risk loopholes:
Only covers AI intended for emotion recognition — companies could easily sidestep this

🛠️ **Recommendations by Authors**

Include soft biometric data in the legal definition of biometric data

Expand bans to cover both inferring and identifying emotions

Lower legal thresholds for banning manipulative AI

Remove the “intent” loophole from high-risk classification

Strengthen fundamental rights impact assessments

✅ Conclusion
EU law currently fails to protect citizens against emotional AI that manipulates people using non-identifying soft biometric data. These systems are increasingly common, can undermine dignity and privacy, and should be more clearly and robustly regulated.

20
Q

Case 2: AI-mediated profiling of children for commercial purposes

What is AI-mediated profiling of children for commercial purposes

A

**AI-mediated profiling of children for commercial purposes:**
when AI systems automatically analyse children’s personal data to learn about them in order to target or influence them for commercial gain
○ such as their interests, behaviour, preferences

**Profiling:**

GDPR Art. 4(4): profiling
‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements

-> Profiling means using personal data to predict or evaluate someone’s behaviour, interests, health, economic situation, etc.

In this case, children are being profiled, often without their understanding or consent, to influence what they see or buy online

21
Q

Case 2: AI-mediated profiling of children for commercial purposes

Children’s rights

A

The UN Convention on the Rights of the Child (UNCRC) and General Comment 25:

UNCRC:
- the most widely ratified human rights treaty in history; ratified by all UN Member States except the US

General Comment 25:
- General Comments:
□ non-binding but highly authoritative recommendations issued by the UNCRC Committee
□ on any issue relating to children to which the Committee believes the state parties should devote more attention

GC25 (2021):
on children’s rights in relation to the digital environment
□ Para 22:
Opportunities for the realization of children’s rights and their protection in the digital environment require a broad range of legislative, administrative and other measures, including precautionary ones

-> i.e. governments must take precautionary measures to protect children online (even if it is not yet 100% certain that something is harmful: better safe than sorry)

22
Q

Case 2: AI-mediated profiling of children for commercial purposes

Types of commercial profiling of children

A

A proposed taxonomy (classification) distinguishing 6 types of commercial profiling of children:

1. For direct commercial purposes (types 1-4):

Profiling that directly leads to monetisation of the data on which the profiling is based, either through personalisation of the digital service to increase the child’s out-of-pocket spend while engaging with that service, or through trading in children’s profiles
-> directly leads to money for the company

1. To offer personalised advertising clearly distinguished as such
2. To engage children with ads that are not distinguishable as such, disguised as content (e.g. via interactive formats such as games embedded with ads)
3. To increase children’s expenditure within the online environment in which they are profiled (encouraging kids to spend money within apps/games)
4. To sell and transmit children’s profiles to third parties

2. For indirect commercial purposes (types 5-6):
no direct source of revenue is associated with the profiling itself, but such profiling still advances the economic interests of the company deploying it, either as part and parcel of the user proposition and/or by boosting the service’s stickiness
-> No immediate money, but still helps the business

5. As a prerequisite for the delivery of a service (e.g., EdTech or health apps)
6. To increase the stickiness of a service through personalisation (keeping kids on the app/site longer)
23
Q

Case 2: AI-mediated profiling of children for commercial purposes

Children’s rights at risk from profiling

A

A myriad of UNCRC rights is likely to be violated by most manifestations of children’s commercial profiling:

1. UNCRC Art. 3: child’s best interests principle
- Issue: profiling often prioritises profit over wellbeing
2. UNCRC Art. 16: privacy (incl. data protection)
- Issue: profiling involves collecting personal data
3. UNCRC Art. 14: freedom of thought
- Issue: influencing kids’ ideas/choices
4. UNCRC Art. 6 and Art. 24: right to development and health
- Issue: manipulative content may harm mental health
5. UNCRC Art. 32: protection against economic exploitation
- Issue: using kids’ data for profit
6. UNCRC Art. 31: right to play
- Issue: digital manipulation may restrict free, healthy play
24
Q

Case 2: AI-mediated profiling of children for commercial purposes

Precautionary principle

A

**Precautionary principle:**
take action to prevent harm, even if we don’t have full proof yet

-> In the field of children’s rights, application of a strong precautionary principle is the norm

3 levels:
1. Weak precaution (Rio Declaration) -> warn but wait for proof
2. Moderate precaution (Commission Communication) -> act if there’s a reasonable risk
3. Strong precaution (Wingspread Statement) -> ban unless proven safe
§ Wingspread Statement (1998):
□ ‘Where an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not established scientifically’
-> Prompts regulators to prohibit a potentially harmful practice until the proponent of the practice can prove its safety (better safe than sorry)

**To what extent does the AIA seek recourse to the precautionary principle to safeguard children and their rights?**

AIA Art. 5(1)(b):
Prohibited AI practices
(b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm;

-> Protects children, because their age makes them vulnerable to manipulation

Problem with the Commission Guidelines:
Commission Guidelines on Art. 5:
Para 115:
□ the need for precaution is explicitly accounted for: ‘What may be considered an acceptable risk of harm for adults often represents an unacceptable harm for children and these other vulnerable groups’

-> (i.e. children need extra protection, because risks that might be ‘OK’ for adults can seriously harm kids)
- ‘A precautionary approach is therefore particularly warranted in case of uncertainty and potential for significant harms’

BUT, Article 5(1)(b) merely prohibits AI systems with:
- a substantial impact amounting to materially distorting children’s behaviour, setting a higher threshold than children’s rights law prescribes
□ it only bans profiling that causes serious harm, not all potentially harmful profiling

So:
the AIA threshold is too high compared to what children’s rights law demands:
some harmful AI profiling might still be legal under the AIA even though it is problematic for kids

25
Q

Case 2: AI-mediated profiling of children for commercial purposes

AI-mediated profiling of children for commercial purposes - In summary

A

AI profiling of children for commercial purposes is widespread and may violate many of their rights, including privacy, development, and protection from exploitation.
The UN and children’s rights law recommend strong precaution:
○ don’t allow risky AI systems unless they’re proven safe
The AI Act does include protections, but they might not go far enough to meet the higher standards of children’s rights law.
26
Q

Article Summary: "Dissecting the Commercial Profiling of Children" (Reading)

A

🎯 **Main Purpose of the Article**
Explores how children are commercially profiled online, and whether current EU laws (the GDPR, DSA, and AI Act) protect children well enough, especially when we apply the precautionary principle (better safe than sorry).

🔍 **Key Issues**
- Children are routinely profiled online for ads, purchases, and data trade.
- This profiling risks violating several of their rights under the UN Convention on the Rights of the Child (UNCRC).
- EU laws exist, but do not go far enough, and enforcement is weak.

🧩 **Taxonomy of Commercial Profiling (6 Types)**
The article creates a 6-part taxonomy:
Direct commercial purposes (generate revenue directly):
- M1: Personalized ads that are clearly labelled
- M2: Ads disguised as content (e.g., games, influencers)
- M3: Profiling to push in-game or in-app purchases
- M4: Selling children's data to third parties
Indirect commercial purposes (support profit indirectly):
- M5: Profiling needed to deliver the service (e.g., EdTech, health apps)
- M6: Profiling to increase app "stickiness" (time spent on the platform)

⚖️ **Children’s Rights at Risk (UNCRC)**
1. Privacy & data protection (Art. 16): violated in almost all types of profiling
2. Development & health (Art. 6 & 24): can be harmed by profiling-induced stress and pressure
3. Freedom of thought (Art. 14): undermined by manipulation and algorithmic targeting
4. Protection against economic exploitation (Art. 32): profiling monetizes children’s behaviour without benefit to them
5. Best interests of the child (Art. 3): most commercial profiling fails this standard

🛑 **Key Findings**
- Profiling for direct profit (M1–M4) is clearly harmful and likely violates multiple rights.
- Profiling for indirect purposes (M5–M6) is more context-dependent; it may be helpful in health or education, but risks still exist.
- Children's lack of understanding and consent makes legal protections like GDPR consent requirements ineffective.

🧠 **The Precautionary Principle**
- Core idea: if there’s a risk to children, regulators should act, even if scientific proof is incomplete.
- The article argues for strong precaution, especially since profiling harms can be long-term and hard to measure.
- Cites UNCRC General Comment 25 (2021) as support.

🏛️ **Assessment of EU Laws**
- GDPR: strong on paper, but poorly enforced, especially around child consent
- UCPD: covers unfair practices, but not tailored to children's profiling
- DSA: prohibits profiling children for ads; a step forward, but limited in scope
- AI Act: bans emotion AI in schools/work, but does not cover profiling broadly

✅ **Conclusion & Recommendations**
- Profiling children should be tightly regulated or prohibited, especially direct profiling (M1–M4).
- The precautionary principle must guide policymaking.
- Laws should explicitly protect against profiling practices even without full proof of harm.
- Enforcement and clarity need to improve across all legal instruments.