
2: AI Act Flashcards

(12 cards)

1
Q

Ex post vs. ex ante regulation

A

Ex ante: regulation that acts before something happens, to prevent harm
E.g. transparency obligations

Ex post: regulation that applies after something happens, establishing liability and accountability

2
Q

Predecessors of the AI Act

A

1) 2018: first official EU approach to AI and robotics; highlights:
* the need to stay ahead of tech developments and to encourage uptake by the public and private sectors
* the need to prepare for the socio-economic changes that will come
* explores the need for a legal framework
=> EU strategy: making the EU a world-class hub for AI
=> focus on preserving, protecting and respecting human rights

2) 2018: High-Level Expert Group on AI
Important step in the regulatory process
=> the group of experts created a set of non-binding recommendations and Ethics Guidelines for Trustworthy AI
=> the recommendations were transformed into an Assessment List for Trustworthy AI

3) EU regulatory approach to AI: White Paper (2020), focused on 2 pillars:
1) Excellence: strengthening AI research, investment and innovation in the EU
2) Trust: establishing a risk-based regulatory framework to protect fundamental rights
=> goal: set the stage for AI by proposing the risk-based approach: assess whether we need a legal framework and whether it is possible to enforce it, or whether we should instead adjust existing laws so that they cover AI systems
=> ensuring compliance with EU values: AI should be fair, human-centered and accountable
==> discussed banning AI applications that violate fundamental human rights

3
Q

When did the AI Act come about?

A

2021: proposal for the AI Act issued by the European Commission; the legislative process starts here (everything before the proposal consisted of ideas and visions)
The European Commission issues a proposal; the European Parliament and the Council each prepare their own version and then negotiate together

Final step: vote (2024), the text was adopted

4
Q

What does the AI Act consider to be AI?

A

1) Machine-based system
2) designed to operate with varying levels of autonomy
3) that may exhibit adaptiveness after deployment
4) that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions
5) that can influence physical or virtual environments

5
Q

What are the 5 important exceptions to the AI Act?

A

1) Open-source and free software, unless it creates high risk
2) Research and development before AI is put on the market
3) Systems that are used for personal non-professional activities
4) AI systems for military use
5) AI for national security purposes

6
Q

IMPORTANT Q: What is the core of the AI Act?

A

Risk-based approach to regulation: the definition of AI is so broad that not all AI systems can be regulated in the same way
=> Classifies AI systems into 4 risk categories with different levels of regulation, based on their risk of potential harm to the health, safety or fundamental rights of individuals and society
1) Unacceptable risk: some risks are considered so extreme that they are unacceptable, so these systems are prohibited
2) High risk: allowed, but they have to comply with many obligations => they pose a significant risk of harm to the health, safety and fundamental rights of natural persons
3) Specific transparency obligations: deepfakes, genAI, chatbots; allowed but with additional transparency obligations so that the user is aware
4) Minimal or no risk: allowed without restrictions under the AI Act, but they still have to comply with other laws that apply to them, such as the GDPR or consumer protection law

=> goal: balance innovation with the protection of fundamental rights and safety
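The four-tier structure above can be sketched as a simple lookup. This is an illustrative memory aid only: the example systems and the Python names are my own, not the Act's legal tests, though each example is commonly cited for its tier (social scoring is prohibited; CV screening is an Annex III high-risk use; chatbots carry transparency duties; spam filters are minimal risk).

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories, from most to least regulated."""
    UNACCEPTABLE = "prohibited"
    HIGH = "allowed with strict obligations"
    TRANSPARENCY = "allowed with transparency obligations"
    MINIMAL = "allowed; only general laws (e.g. GDPR) apply"

# Hypothetical example systems mapped to tiers, following the card's descriptions.
examples = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in examples.items():
    print(f"{system}: {tier.value}")
```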

7
Q

What are the prohibited (unacceptable-risk) AI systems?

A
  • Manipulative systems
  • Systems that exploit human vulnerabilities (age, gender, disability)
  • Biometric categorisation systems that categorise individuals based on sensitive characteristics
  • Real-time biometric identification systems in public spaces for law enforcement
  • Profiling to predict criminal behavior
8
Q

What are the types of high-risk AI systems?

A

The biggest part of the AI Act focuses on the high-risk systems
=> they have a significant impact on safety, security and fundamental rights
=> they must comply with transparency, oversight and risk-management requirements before deployment

Type 1) AIS as a product, or a component of a product, that must go through independent third-party certification => conformity assessment: whether or not these products contain AI, they must receive a label of conformity
=> types of products that already fall under other sectoral legislation

Type 2) AIS used in specific sectors for specific uses (both conditions must apply)

9
Q

What are examples of high-risk AI systems: type 1?

And which are excluded?

A
  • Medical devices
  • Machinery
  • Elevators
  • Protection equipment

=> AI or not: because these products already need to go through conformity assessment, they are considered high risk when they contain AI

Excluded: bikes, trains, boats, planes (they are subject to other legislation outside the AI Act)

10
Q

What are examples of high-risk AI systems: type 2?

A

Specific sector for specific use
* AI as a safety component in critical infrastructure: road traffic, supply of water, gas, electricity
* AI to determine admission to education, assign natural persons to educational institutions, or evaluate learning outcomes
* AI to evaluate the creditworthiness of natural persons
* AI used to assist a judicial authority in interpreting facts and the law

11
Q

The 2 most important actors in the AI ecosystem

A
  • Provider: the developer; the most important category, with the largest number of obligations
  • Deployer: the professional user; a company that buys an AI system and uses it for professional purposes
12
Q

Examples of AI systems with specific transparency obligations?

A
  • Interactive AI systems (P)
  • Synthetic content (P)
  • Deepfakes (must be marked as artificially generated or manipulated) (D)
  • Emotion recognition or biometric categorisation systems (D)

(P = obligation on the provider, D = obligation on the deployer)