Week 8: Harms of AI-generated deepfakes Flashcards
(12 cards)
What is a deepfake? Legislative definitions
**Legislative definitions:**
AI Act (2024): Article 3(60)
‘Deepfake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful
Digital Services Act (2022): Article 35(1)(k)
An item of information, whether it constitutes a generated or manipulated image, audio or video, that appreciably resembles existing persons, objects, places or other entities or events and falsely appears to a person to be authentic or truthful
Proposal for Directive on combating violence against women and domestic violence (2022):
Recital 19: […] ‘deepfakes’, where the material appreciably resembles an existing person, objects, places or other entities or events depicting sexual activities of another person and would falsely appear to others to be authentic or truthful
What is a deepfake?
Deepfake:
a specific application of generative AI
Generative AI: AI systems that can generate new content based on patterns learned from existing data
GenAI techniques to create deepfakes include:
1. Face swapping
□ Putting someone’s face onto another person’s body
- Attribute manipulation
- Identity swap
- Body puppetry (reenactment)
2. Lip syncing
□ Altering lip movements to match audio
3. Voice cloning
□ Generating synthetic speech in someone's voice
4. Text-to-image systems
□ Converting written descriptions into realistic or artistic images (full image synthesis); see the code sketch below
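To make category 4 concrete, here is a minimal text-to-image sketch using the Hugging Face diffusers library; the model ID and prompt are illustrative assumptions, not part of the course material, and any comparable checkpoint with the same pipeline interface would behave the same way:

```python
# Minimal text-to-image sketch (technique 4): a written description
# is converted into a fully synthetic image.
# NOTE: the model ID and prompt are illustrative assumptions.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # use "cpu" if no GPU is available (much slower)

image = pipe("an oil painting of a lighthouse at dawn").images[0]
image.save("synthetic.png")  # fully synthesised, no camera involved
```

The low barrier to entry of pipelines like this is part of why the harmful uses discussed below have become so widespread.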
What are some ‘good’ and not-so-good uses of deepfakes?
**Some ‘good’ uses**
Unique opportunities for entertainment industry and art world
- Bringing historical figures back to life
- Creating Da Vinci-meets-Van Gogh crossover artwork
- Singing a duet with your favourite singer
-> (is this good?)
**Some not-so-good uses**
Deepfake porn
- sometimes even involving minors
- Examples:
□ Teen questioned after explicit AI deepfakes of dozens of schoolgirls shared online
□ ‘Nudify’ apps that use AI to undress women are increasing in popularity
□ Case: Clarkson v OpenAI – DALL-E enabled production of CSAM using non-consensual child imagery.
Political deepfakes (disinformation)
Deepfake porn
Definition:
□ ‘Deepfake porn’ refers to “the use of artificial intelligence (AI) to create hyper-realistic digital impersonations, either by superimposing the victim’s face and likeness into an already existing pornographic video, or using generative AI to create an entirely new video with the victim’s likeness” (McGlynn & Toparlak 2025).
□ An estimated 98% of deepfake content is pornographic, and an estimated 99% of that depicts women and girls
**ISSUE:**
□ Violates sexual integrity, right to private life, human dignity
□ Victims may suffer serious mental health issues
□ Objectification of (all) women (?) (‘Frankenporn’)
-> Exacerbation of (toxic) machismo culture; reinforcement of negative gender stereotypes; potential spill-over effects (-> gender-based violence)
Political deepfakes (disinformation)
Political deepfakes:
deepfake content with an overwhelmingly political dimension, often serving far-right/alt-right politics and rhetoric
Depictions of political officials or other events ‘backing up’ disinformation or conspiracy theories linked to certain political ideologies
ISSUE:
□ Further exacerbates issues relating to (political) disinformation
□ May foster global conspiracy theories
□ Undermines trust in democracy and its institutions
-> spill-over effects (-> radicalisation)
Legal interventions for deepfakes
Legal instruments:
DSA?
- Art. 9 on illegal content
- Art. 34 on risk management
National criminal codes?
- E.g. Dutch CC Art. 254ba on ‘visual depictions of a sexual nature’
GDPR?
Directive on combating violence against women?
Regulation on the transparency and targeting of political advertising?
AI Act: Deepfakes
Definition:
Article 3(60):
□ ‘Deepfake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful
Recital 134:
Further to the technical solutions employed by the providers of the AI systems, deployers who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (deepfakes) should also clearly and distinguishably disclose that (…)
Deepfakes: obligations for providers and deployers
Article 50: transparency obligations for ‘certain’ AI systems
Provider:
generation of synthetic content -> machine-readable marking that the content is artificially generated or manipulated ->
exception: assistive function for standard editing (Article 50(2))
(Article 50(2) mandates marking; Recital 133 points to techniques such as watermarks and cryptographic provenance, e.g. C2PA; see the toy sketch after the quoted provisions below.)
Art. 50:
2. Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate or prosecute criminal offences.
◊ Recital 133: (…) watermarks, metadata identifications, cryptographic methods for providing provenance and authenticity of content, logging methods, fingerprints or other techniques, as may be appropriate
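As a toy illustration of what Article 50(2)'s machine-readable marking could look like at its simplest, the sketch below embeds a provenance tag in PNG text metadata with Pillow; the key names and filenames are invented for this example, and a compliant solution would rely on the robust techniques Recital 133 lists (e.g. C2PA-style cryptographic provenance) rather than plain metadata:

```python
# Toy sketch of machine-readable marking (Art. 50(2)) via PNG metadata.
# NOTE: the "ai_generated"/"generator" keys and filenames are invented
# for illustration; plain metadata is trivially stripped, so real
# compliance needs the robust techniques of Recital 133 (e.g. C2PA).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("synthetic.png")
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model-v1")  # hypothetical model name
img.save("synthetic_marked.png", pnginfo=meta)

# Platforms or detectors can read the tag back programmatically:
print(Image.open("synthetic_marked.png").text.get("ai_generated"))  # "true"
```

That fragility is exactly why Article 50(2) demands solutions that are ‘effective, interoperable, robust and reliable as far as this is technically feasible’.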
Deployer:
deepfakes (Article 3(60)) -> disclosure that the content is artificially generated or manipulated (Article 50(4)) -> unless: authorised by law for law enforcement, or for fiction, satire or art (Recital 134); see the toy sketch after the quoted provision below.
Art. 50(4):
Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work
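Where the provider's duty targets machines, the deployer's Article 50(4) duty targets human viewers. A minimal sketch of a visible disclosure overlay with Pillow (label wording, box size and placement are arbitrary illustrative choices, and the filenames continue the earlier sketch):

```python
# Toy sketch of deployer-side disclosure (Art. 50(4)): overlay a
# human-visible notice on the content. Label text, box size and
# position are arbitrary illustrative choices.
from PIL import Image, ImageDraw

img = Image.open("synthetic_marked.png")
draw = ImageDraw.Draw(img)
draw.rectangle([(0, 0), (230, 28)], fill="black")
draw.text((8, 7), "AI-generated content", fill="white")
img.save("synthetic_disclosed.png")
```

For artistic or satirical works, Article 50(4) only requires that such disclosure happen ‘in an appropriate manner that does not hamper the display or enjoyment of the work’.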
Risk of loopholes in satire exemptions (Ajder & Glick 2021).
Are deepfakes ‘limited’ risk? And what about non-professional, private use of deepfakes?
Limited to transparency obligations - how effective is this?
○ What counts as ‘satire’?
○ What about non-professional, private use?
Art. 2(10):
This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of a purely personal non-professional activity.
Can political deepfakes potentially be classified as high-risk?
Political deepfakes potentially high-risk (unlikely):
Annex III:
8. Administration of justice and democratic processes:
(a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;
(b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.
“Generative AI and deep fakes: a human rights approach to tackling harmful content” (reading)
The paper critiques the EU AIA's current deepfake provisions, arguing they risk infringing rights under Articles 8 and 10 ECHR and the GDPR.
It proposes:
1. Mandatory use of structured synthetic data for deepfake detection.
2. Reclassifying malicious deepfake applications (e.g. sexual abuse, extortion, electoral disinformation) as high-risk AI.
Background
Deepfakes emerged in 2017 and now pose threats through voter manipulation, blackmail, and CSAM creation.
The EU AI Act defines ‘deepfake’ (Art. 3(60)) and imposes transparency obligations (Art. 50).
Cases:
Clarkson v OpenAI (US): Deepfakes used for misinformation, sextortion, and CSAM.
Pornographic deepfakes of Taylor Swift triggered public outrage.
Conclusion
The EU AI Act’s deepfake provisions require urgent revision to:
- Clarify classification and platform duties.
- Enforce penalties.
- Prioritize structured synthetic data.
- Mandate fundamental rights assessments.
- Ensure proportionality in tracking and disclosure.
Only with these changes can the AIA align with Articles 8 & 10 ECHR and the GDPR, and effectively protect democracy, safety, and human dignity.
Deepfakes and the GDPR
Article 6 GDPR
Legal bases: consent (Art. 6(1)(a)) or legitimate interest (Art. 6(1)(f)).
Article 9 GDPR
Deepfakes may involve processing special category data (e.g., sexual orientation).
No ‘legitimate interest’ fallback under Art. 9; only explicit consent applies.
Articles 22, 17 and 16 GDPR
Art. 22: Right to human review in automated decision-making.
Art. 17: Right to erasure (‘right to be forgotten’).
Art. 16: Right to rectification.