Domain 2: applying frameworks Flashcards

(97 cards)

1
Q

Fair Information Practices (FIPs)

A

guidelines for handling data with privacy, security, and fairness in mind

*** Part of the OECD Guidelines (1980)

Also known as FIPPs
NOT FIPS (Federal Information Processing Standards)

2
Q

FIPs Common principles (Mnemonic)

A

At = Access / individual participation
Paradise = Purpose specification
Dalmatian = Data minimization
Dogs = Data quality and relevance
Snooze = Safeguards and security
Near = Notice and openness
Aerial = Accountability
Unicorns = Use limitations

3
Q

Privacy by Design principles (Mnemonic)

A

Robot = Respect for users
Pigs = Proactive and Preventative
Devour = Default setting
Enormous = Embedded in design
Purple = Positive sum, not zero sum
Eggplant = End-to-end security
Tacos = Transparent

4
Q

Privacy laws to know

A

GDPR: EU
CCPA: California (2018)
CPRA: California (2020)
Biometric Info Privacy Act (BIPA): Illinois (2008)

5
Q

Privacy requirements for AI Operators (8)

A

PbDD
PIAs and DPIAs
Human oversight
Data governance
Data disposition
Safeguards
Documentation
Authorities

6
Q

GDPR AI-related articles

A

Art 22: Automated Decision Making (ADM)
Art 35: DPIA for high-risk and important processing
Recital 26: data pseudonymization and anonymization
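
A note on the Recital 26 distinction: pseudonymized data is still personal data (it can be re-identified with additional information held separately), while truly anonymized data falls outside the GDPR. A minimal illustrative sketch of pseudonymization, assuming a keyed-hash approach and hypothetical field names (the GDPR does not mandate any particular technique):

```python
# Illustrative pseudonymization sketch (hypothetical data and key handling).
# The key must be stored separately; whoever holds it can re-identify the
# data subject, so the record stays "personal data" under GDPR (Recital 26).
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-somewhere-else"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g., an email) with a stable pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "data.subject@example.com", "purchase": "..."}
record["email"] = pseudonymize(record["email"])
print(record)  # the email field now holds a pseudonym, not the raw identifier
```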

7
Q

GDPR Art 22: ADM

A
  • prohibits decisions based solely on automated processing that produce legal or similarly significant effects
  • right to human intervention

Exceptions:
- necessary for fulfillment of a contract
- explicit consent
- authorized by Union or Member State law

8
Q

Examples of sensitive data

A

Race, ethnicity
Political opinion
Religious, philosophical beliefs
Trade union membership
Genetic, biometric data
Health data
Sexuality, sexual orientation

9
Q

NYC Local Law 144

A

regulates automated employment decision tools (AEDTs)

requires an independent bias audit before an AEDT is used, publication of audit results, and notice to candidates and employees
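
The LL 144 bias audit centers on selection rates and impact ratios calculated per demographic category. A minimal sketch of that arithmetic, with hypothetical group labels and counts:

```python
# Hypothetical LL 144-style impact-ratio calculation.
# Counts and category labels are illustrative only.

selected = {"group_a": 45, "group_b": 30, "group_c": 12}     # candidates the AEDT advanced
applicants = {"group_a": 100, "group_b": 80, "group_c": 40}  # candidates assessed

# Selection rate per category = selected / assessed
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio = category's selection rate / highest category's selection rate
highest = max(rates.values())
impact_ratios = {g: rate / highest for g, rate in rates.items()}

for g in rates:
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {impact_ratios[g]:.2f}")
```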

10
Q

GDPR obligations for special categories

A
  • prohibited unless exceptions apply
  • Art 6: lawful basis for all personal data
  • Art 9: special categories of data
11
Q

GDPR Art 6: Lawful basis (Mnemonic)

A

Crazed = Consent
Clowns = Contract
Vandalize = Vital interest (protect lives)
Long = Legal obligation
Purple = Public interest (gov agencies)
Limo = Legitimate interest (flexible)

12
Q

GDPR Art 9: Special categories exceptions

A

Publicly available information
Research and archiving
Non-profit

13
Q

Processing best practices (3)

A

Collect directly from data subject with consent

Infer insights from less sensitive data (proxies)

Commercially available info (CAI)

14
Q

Controller obligations (8)

A

PIAs/DPIAs
Third-party processor assessment
Cross-border data transfers
Data subject rights
Appropriate safeguards
Incident management
Breach notification
Record keeping

15
Q

Data subject rights (GDPR) (7)

A

Restrict processing
Not subject to ADM
Data portability
Erasure
Access and rectification
Informed of processing
Object to processing

16
Q

Appropriate safeguards types (3)

A

Administrative
Technical
Physical

17
Q

Thaler v Vidal (2023)

A

US Court of Appeals for the Federal Circuit held that only natural persons (humans) can be named as inventors on a patent

18
Q

European Patent Office (2020)

A

Inventors must have a legal personality

19
Q

Silverman v OpenAI

A

ChatGPT did not violate copyright of books because it trained on them but did not reproduce them. Summarizing books does not constitute copyright infringement

20
Q

Thomson Reuters v ROSS Intelligence

A
  • legal summarization AI built off of TR's data set (Westlaw)
  • ROSS argued its AI merely studied language patterns and stored relationships
  • ROSS claimed fair use because the use was transformative
21
Q

Thaler v Perlmutter

A

US Copyright Office denied a copyright registration by Thaler for a work created autonomously by his AI, the "Creativity Machine"; the court upheld the denial because human authorship is required

22
Q

UK: Copyright Designs and Patents Act

A

allows copyright protection for computer-generated works without a human author (the author is deemed to be the person who made the arrangements for the work's creation)

23
Q

Japan copyright law

A

permits AI use of copyrighted works without the author's permission (e.g., for text and data mining / model training)

24
Q

United States Patent and Trademark Office (USPTO)

A

issues regulations and grants patents and trademarks

25
Types of patents (3)
Utility: new and useful
Design: ornamental designs
Plant: new varieties of plants
26
Pannu v Iolab (1998)
established the "Pannu factors" for determining a "significant contribution": the contribution must be significant, of sufficient quality measured against the full invention, and add something beyond well-known concepts
27
Executive Order (EO) 14110 patent principles
- A natural person can be a joint inventor if their contribution is significant
- A significant contribution may be shown through construction of the prompt
- A significant contribution to the output may make a proper invention
- Planning, design, and development of the AI system may be a significant contribution
- Intellectual domination over the AI system alone does not make a natural person an inventor
28
Exceptions to infringement indemnification (3)
- Modification
- Unauthorized combination with other software
- Use beyond the authorized scope and agreement
29
Categories of AI-based products (2)
Old function, new use: existing laws and regulations apply (e.g., credit scoring)
New function: need to determine how laws apply (e.g., synthetic content generation)
30
Equal Employment Opportunity Commission (EEOC)
Established by the Civil Rights Act of 1964
Administers and enforces civil rights laws in the workplace, including the Americans with Disabilities Act (ADA)
31
Civil Rights Act (1964): Title VII
prohibits employment discrimination based on:
- Race
- Color
- National origin
- Religion
- Sex
Related laws enforced by the EEOC also cover disability (ADA), age 40+ (ADEA), and genetic info (GINA)
32
EEOC: AI and Algorithmic Fairness Initiative (2021)
ensures that technology used in hiring decisions complies with federal civil rights laws
33
Federal Trade Commission (FTC)
protects the public from deceptive or unfair business practices:
- antitrust
- consumer protection
- unfair or deceptive acts
34
FTC's Use of Unfairness Authority (2003)
codified in 1994; injury from unfairness must be:
1. Substantial
2. Not outweighed by offsetting benefits
3. One that consumers cannot reasonably avoid
35
FTC Act (1914): Section 5
regulates unfair and deceptive acts or practices
Most important privacy legislation
Does not apply to non-profits
36
Fair Credit Reporting Act
regulates the consumer reporting industry and all Consumer Reporting Agencies (CRA)
37
Big 3 Consumer Reporting Agencies (CRAs)
TransUnion
Equifax
Experian
38
Equal Credit Opportunity Act (ECOA)
unlawful for creditors to discriminate against any applicant on basis of protected characteristics
39
Consumer Financial Protection Bureau (CFPB)
independent bureau within the Federal Reserve responsible for consumer protection in the financial sector
40
FTC v Rite Aid (2023)
Rite Aid deployed facial recognition without reasonable safeguards and was banned from using it for five years
41
Federal Reserve SR 11-7 (2011)
regulatory standard for all banking organizations supervised by the Federal Reserve
Provides guidance on model risk management associated with advanced statistical models
42
Occupational Safety and Health Administration (OSHA)
assures safe and healthy working conditions
In 2022, expanded the OSHA technical manual to include robotics safety
43
Food and Drug Administration (FDA)
monitors the safety of medical devices, including Software as a Medical Device (SaMD)
44
FDA risk levels for medical devices
Class I: low risk (e.g., glucose monitor)
Class II: moderate risk, e.g., software that analyzes MRIs or X-rays (undergoes 510(k) review)
Class III: high risk, life-supporting or life-sustaining systems
45
ACA Section 1557
Prohibits discrimination in covered health programs or activities, including biased impacts of AI
46
21st Century Cures Act
not AI specific, but increases accessibility and transparency of health data
47
NAIC Model Law (2020)
guidelines for responsible AI usage in the insurance market (state level)
48
EEOC Guidance on AI and Hiring (2021)
Hiring and recruitment AI tools must comply with existing laws
49
California Generative AI: Training Data Transparency (AB 2013)
requires developers to publicly disclose training dataset information, including sources and types
50
California AI Transparency Act (SB 942)
mandates disclosure and labeling of AI-generated content and a free AI detection tool
51
California BOT Act
requires the disclosure of bot use in commercial or political communications
52
Colorado AI Act (SB 24-205)
Comprehensive legislation for developers and deployers of high risk systems
53
Utah AI Policy Act (SB 149)
creates liability for deceptive AI usage under consumer protection laws
established the Office of AI Policy and AI Learning Lab Program
54
Consumer Product Safety Act (1972)
created the Consumer Product Safety Commission to:
- Protect consumers against risk of injury
- Enable product evaluation
- Establish safety standards
- Promote research into the causes and prevention of product-related deaths and injuries
55
EO 14091
identify and remove bias from the design and use of technology throughout the federal government
56
Theories of liability
- Governed by state law
Types:
- Strict liability: defective product caused harm
- Negligence: failure to exercise due care led to unintended harm
- Breach of warranty: product promises not met and led to harm
57
CT Fair Housing Center v CoreLogic Rental Property Solutions (2019)
- tenant-screening software vendor was subject to Fair Housing Act nondiscrimination provisions; the software provided criminal records that led to discrimination
- vendor has a duty not to sell products that allow customers to knowingly or unknowingly violate the law
58
DOJ v Meta Platforms (2022)
Meta developed advertising tools that targeted housing ads based on protected characteristics (Fair Housing Act violation)
59
Rogers v Christie
US District Court of NJ ruled that AI generates information, which does not qualify as a product under NJ law
60
Two liability regimes (EU)
Fault liability: victim must prove an action caused harm through noncompliance or negligence
Strict liability: no-fault liability; need only prove the product was defective and the defect caused the harm
61
EU Liability Reform (2022)
The revised Product Liability Directive and the AI Liability Directive do not overlap, so victims must choose which route to pursue
62
General Product Safety Regulation (2024)
modernizes current regime to address product safety and digitization
63
Reformed Product Liability Directive (PLD) (2024)
applies to AI; imposes strict liability and shifts the burden of proof toward defendants:
- Presumption of defectiveness (noncompliance)
- Presumption of causation (damage consistent with defect)
- Defendants must disclose evidence upon request
64
EU Digital Services Act (DSA) (2023)
prevents illegal and harmful activities online, covering intermediaries and platforms
Targets recommender systems and online advertising
65
EU AI Act (2024)
extraterritorial law that ensures the development and deployment of AI that is safe, trustworthy, transparent, and respects the fundamental rights of individuals
66
EU AI Act - Operators (Mnemonic)
Professor = Providers (Art 16-22)
Duck = Deployers (Art 26)
Diligently = Distributors (Art 24)
Investigates = Importers (Art 23)
Peculiar = Product Manufacturers (Art 25(3))
Artifacts = Authorized reps (Art 22)
67
EU AI Act - risk based approach
identify, assess, and mitigate risk based on prioritization, with continuous monitoring
Art 3(2): risk = severity of harm x probability of occurrence
Art 79: procedure for products presenting a risk
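
The Art 3(2) definition is just a product of two factors. A hypothetical scoring sketch (the 1-5 scales and priority bands are illustrative, not taken from the Act):

```python
# Illustrative risk scoring based on the Art 3(2) definition:
# risk = severity of harm x probability of occurrence.
# The 1-5 scales and band thresholds below are assumptions, not from the Act.

def risk_score(severity: int, probability: int) -> int:
    """Combine severity (1-5) and probability (1-5) into a single score."""
    return severity * probability

def priority_band(score: int) -> str:
    """Map a score onto an illustrative prioritization band."""
    if score >= 15:
        return "high priority"
    if score >= 8:
        return "medium priority"
    return "low priority"

print(priority_band(risk_score(severity=4, probability=4)))  # -> high priority
```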
68
EU Market Surveillance Regulation (2019/1020)
establishes the national bodies responsible for monitoring and enforcing product compliance and safety regulations
69
EU AI Act: risk levels
1. Unacceptable: banned
2. High risk: potential harm to safety; mandatory requirements
3. Limited risk: transparency requirements
4. Minimal risk: low risk; voluntary requirements
70
EU AI Act: prohibited risks (Mnemonic)
Six = Social credit scoring
Mummies = Manipulative behavior
Eagerly = Emotion recognition in education/work
Pat = Predictive policing
Elephant's = Exploitative (age, disability)
Under = Untargeted scraping of facial images
Belly = Biometric categorization (sensitive traits) and biometric ID (in public)

Exceptions (law enforcement use of real-time biometric ID):
- Targeted search for victims
- Prevention of specific, imminent threats
- Detection and prosecution of serious crimes
71
EU AI Act: High risk systems
Product safety components
Systems with significant risk to health, safety, or fundamental rights
Annex III list (Mnemonic: BC MEEEAL)
- Biometric ID
- Critical infrastructure
- Migration
- Education
- Employment
- Essential private or public services (insurance, credit)
- Administration of justice or democratic processes
- Law enforcement
72
EU AI Act: High risk exceptions
Exceptions:
A: administrative support systems (sort by qualifications but no ADM)
B: quality control systems (suggestions, but final decision with a person)
C: monitoring systems (alerts, but people make the decision)
D: pre-screening tools (gather data but do not rank or evaluate)
73
EU AI Act: Limited risk systems
Art 50: transparency obligations of providers and deployers of certain systems:
- direct human interaction (chatbots)
- content generation (ChatGPT)
- biometric systems (think fingerprint reader)
Transparency requirements:
- inform users
- mark AI-generated outputs
- follow GDPR
74
EU AI Act High-risk AI Providers requirements (Mnemonic)
Art 9-15:
Ross = Risk management
Draws = Data governance
Tripping = Technical documentation
Robots = Record keeping
Triggering = Transparency
Happy = Human oversight
Accidents = Accuracy, robustness

Art 17-22:
Queen = Quality management
Dislikes = Document keeping
Luxurious = Logs
Cactus = Corrective actions
Cake = Cooperation with authorities
Really = Authorized representatives
75
EU AI Act: Art 4. AI Literacy (5)
Topics:
- Understanding AI
- Technical foundations
- Practical skills
- Critical evaluation
- Ethical considerations
76
EU AI Act: GPAI Provider Obligations
Technical documentation
Transparency information
EU copyright law compliance
Summary of training data
EU representative
Exception: does not apply to open-source GPAI
77
EU AI Act: Enforcement
National level:
- Market surveillance authorities
- Sector-specific authorities
EU level:
- European Data Protection Supervisor (EDPS)
- EU AI Office
Individual rights
Fines
78
EU AI Act: Authorized Representative Obligations
Mandate and documentation
Due diligence
Compliance
Record keeping
Reporting
Cooperation with authorities
79
EU AI Act: Importer Obligations
Due diligence
Compliance
Record keeping
Reporting
Cooperation with authorities
80
EU AI Act: Distributor Obligations
Due diligence
Compliance
Reporting
81
EU AI Act: Deployer Obligations
AI literacy
Due diligence
Compliance
Human oversight
Incident reporting
Transparency
Record keeping
Cooperation with authorities
Fundamental rights impact assessment (FRIA)
82
Fundamental Rights Impact Assessment
Applies to:
- BC MEEEAL deployers
- Public law entities
- Private operators of public services (education, health)
- Private deployers that evaluate credit, life/health risk and pricing
Includes:
- purpose and use
- frequency and period
- persons or groups affected
- specific risks and mitigation
- human oversight measures
- governance and compliance
83
Examples of principles
OECD AI Principles
FIPs (Fair Information Practices)
UNESCO Recommendations on the Ethics of AI
84
Examples of frameworks
ISO 42001, 22989
NIST AI RMF
IEEE 7000-2021
HUDERAF
85
OECD AI Classification Framework
AI classification framework dimensions:
- People and planet
- Economic context
- Data and input
- AI model
- Task and output
86
OECD AI Principles (2019)
** First intergovernmental AI standard
Principles:
- Inclusive growth, sustainable development, well-being
- Human rights, democratic values, fairness, privacy
- Transparency and explainability
- Robustness, security, safety
- Accountability
87
NIST AI RMF
*** Required by the National AI Initiative Act (2020)
- Voluntary
- Rights-preserving
- Sector- and use-case-agnostic
88
NIST AI RMF - Foundation
Characteristics of trustworthy AI:
- Valid and reliable
- Safe
- Secure and resilient
- Accountable and transparent
- Explainable and interpretable
- Privacy-enhanced
- Fair, with harmful bias managed
89
NIST AI RMF - Core (4)
Govern
Map
Measure
Manage
(Map, Measure, and Manage form a cycle with Govern in the middle)
90
NIST Assessing Risks and Impacts of AI (ARIA 0.1)
evaluation environment to assess models and systems
Satisfies EO 14110
Three levels:
- Model testing
- Red teaming
- Field testing
91
ISO 42001:2023
Requirements:
- Context (org's role)
- Leadership (individual roles)
- Planning (assessments)
- Support (training)
- Operations (risk management)
- Evaluation (audit)
- Improvement
92
ISO 22989
AI concepts and terminology
- Standardized terms
- Conceptual framework
- Ethical and societal considerations
93
ISO 31000:2018
Risk management guidelines
8 principles:
- Continuous improvement
- Customized
- Inclusive
- Integrated
- Dynamic
- Best available information
- Human and cultural factors
- Structured and comprehensive
Framework (6 components to guide leadership):
- Leadership and commitment
- Integration
- Design
- Implementation
- Evaluation
- Improvement
94
IEEE 7000-21
Addressing ethical concerns during the design process
Human values:
- Transparency
- Sustainability
- Privacy
- Fairness
System values:
- Efficiency
- Effectiveness
System life cycle stages:
- Concept exploration
- Development
95
HUDERAF: Human rights, Democracy, and Rule of law Assurance Framework
Risk-based approach that focuses on human rights to assess and grade the likelihood of harm
96
HUDERIA: Human rights, Democracy, and Rule of law for Impact Assessments
impact assessments for determining risk under HUDERAF
97
Risk management process for HUDERAF (6)
1. Identify human rights impacted
2. Assess
3. Mitigate
4. Remedy
5. Accountability
6. Monitor