7. Existing and Emerging Laws Flashcards
What is the world's first comprehensive regulation for AI?
The EU AI Act, which reached provisional agreement on December 8, 2023.
What is the EU AI Act?
- Is a risk-based regulation: the higher the risk, the stricter the rules.
- Has far-reaching provisions for organizations that use, design or deploy AI systems. (Like the GDPR's impact on the processing of personal data, the Act is expected to have a global impact.)
- Aligns with the approach proposed by the OECD to ensure the definition of an AI system provides clear criteria for distinguishing AI from simpler software systems.
What is the scope of the EU AI Act?
The regulation applies to all systems placed in the EU market or used in the EU, including
those from providers who are not located in the EU.
What is the purpose of the EU AI Act?
- Regulate AI
- Address potential harms
- Ensure AI systems reflect EU values and fundamental rights
- Ensure legal certainty to promote investment and innovation
- Align organizations’ use of AI with EU core values and rights of individuals:
- Protect individuals from harm
- Provide organizations with legal bases for using AI in its current state and as the
technology advances
What is the EU AI Act's applicability?
Applies to:
- All providers and users situated in EU member states
- Providers not located in the EU but providing products for use in the EU
- Operators located outside of the EU producing output to be used in the EU
What is the differentiation between “providers” and “deployers” under the EU AI Act?
Providers:
- Develop AI systems (usually to place on the market or put into service)
- Sell AI systems for use or make available through other means
- Majority of compliance obligations and requirements will apply to providers
Deployers:
- Organizations, individuals or other entities that use AI systems for specific purposes or goals
- AI system is considered "under the user's authority," except where the system is used for personal, non-professional activities
- May also be referred to as "users"
What are exemptions to the EU AI Act?
Exemptions to the Act include:
- AI used in a military context, including national security and defense
- AI used in research and development, including in the private sector
- AI used by public authorities in third countries and international organizations under international agreements for law enforcement or judicial cooperation
- AI used by people for non-professional reasons
- Open-source AI (in some cases)
What are the four risk categories classified by the EU AI Act?
- Unacceptable risk
- High risk
- Limited risk
- Minimal or no risk
What are unacceptable risks under the EU AI Act?
- Social credit scoring systems
- Emotion-recognition systems used in law enforcement, border patrol and educational institutions
- AI that exploits a person’s vulnerabilities, such as age or disability
- Behavioral manipulation and techniques that circumvent a person’s free will
- Untargeted scraping of facial images to use for facial recognition
- Biometric categorization systems using sensitive characteristics
- Specific predictive policing applications
- Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
What is important to know about the EU AI Act’s risk categories?
- Each risk level has a different level of compliance obligation
- Provides flexibility and adaptability for the Act
- Provides clear guidance for organizations
- Providers and, in some cases, users/deployers, will be required to:
- Process AI use in accordance with the risk level
- Document AI use
- Audit AI use
What are high risks under the EU AI Act?
Majority of the Act will apply to AI that falls into the high-risk category
- Specific articles within the Act outline requirements
- Will require conformity assessments (CAs), among other obligations, to ensure the system is safe prior to it going on the market or into use
What are two high-risk subcategories under the EU AI Act?
- Product safety
  - AI used as a safety component of a product covered by EU legislation, such as toys, machinery, medical devices, aviation, vehicles and railways
- Systems that pose a significant risk of harm to health, safety or fundamental rights
  - Biometric identification and categorization of natural persons
  - Management and operation of critical infrastructure (such as gas and electricity)
  - Education and vocational training
  - Employment, worker management and access to self-employment
  - Access to and enjoyment of essential private services and public services and benefits (e.g., emergency service dispatching)
  - Law enforcement
  - Migration, asylum and border control management
  - Administration of justice and democratic processes
What are the provider requirements for managing high-risk under the EU AI Act?
- Implement a risk management system
  - Identify and analyze risks posed by the AI system, and add measures to minimize and mitigate risks
  - Provide technical documentation of the risks and mitigatory processes
  - Maintain and update documentation even after product release
- Manage data and data governance
  - Ensure input data is relevant for the purpose, free of errors, representative and complete
  - Robust data management: collection, annotation, labelling, cleaning; examination for biases (providers may process special category personal data to monitor, detect and correct bias)
- Monitor performance and safety; take corrective steps for nonconforming systems
- Register in the public EU database of high-risk AI systems before placing them on the market
- Keep logs in an automatic, documented manner (e.g., inputs and outputs should be traceable)
- Comply with transparency measures about the provider and how the system was built, and provide instructions for use
  - Must be clear, concise and relevant
  - How to use the system safely: system maintenance, capabilities and limitations, how to implement human oversight
- Develop the system in a way that allows for human oversight
  - Humans must be able to oversee processes and understand how the system works, understand and interpret output, and intervene to stop or override the AI outputs
- Ensure the system performs consistently to achieve its intended purpose
  - Test regularly for accuracy and robustness
  - Build with resilience to cybersecurity threats
- Create a quality management system and undertake a conformity assessment
  - Quality management: strategy for regulatory compliance, build standards, post-market monitoring
  - Fundamental rights impact assessment (CA: demonstrate compliance prior to marketing; may be self-assessed, or may require third-party assessments, depending on various factors)
- Report serious incidents and malfunctions that lead to breach of fundamental rights
What are high-risk areas requiring registration under the EU AI Act?
EU-wide database for high-risk AI systems
- Public, accessible by anyone
- Operated and owned by the European Commission
- Data provided by providers
- Providers must register prior to placing system on the market
What are high-risk areas requiring notification under the EU AI Act?
Providers must establish and document a post-market monitoring system
- Track how the AI system is performing (What the AI system is doing after it has been sold)
- Report any serious incident or malfunctioning which is, may be, or could become a breach of the obligations to protect fundamental rights (If an incident occurs: required to report to local market surveillance authority)
What requirements apply to deployers, importers and distributors for high-risk systems under the EU AI Act?
- Complete an FRIA (fundamental rights impact assessment) before putting the AI system into use (for services of general interest like banks, schools, hospitals and insurers, for high-risk systems)
- Verify compliance with the Act and ensure required documentation is available, including instructions for use
- Communicate with the provider and regulator as required
- Ensure CA has been completed and is marked on the product
- Do not put the product on the market if there is reason to believe it does not conform with the provider’s requirements
- Deploy in accordance with the instructions for use
- Monitor AI systems and suspend use if any serious issues occur (as defined in the Act)
- Update provider or distributor about serious incidents or malfunctions
- Maintain automatically generated logs
- Assign human oversight to appropriate individuals
- Cooperate with regulators as necessary
- Comply with GDPR where relevant
- Ensure input data is relevant to the use of the system
- Inform people when they might be subject to the use of high-risk AI
What are EU AI Act requirements for limited risks?
Primary compliance focuses on transparency:
- Providers must inform people from the outset that they will be interacting with an AI system (e.g., chatbots)
- Deployers must:
- Inform, and obtain the consent of, those exposed to permitted emotion recognition or biometric categorization systems
- Disclose and clearly label visual or audio deepfake content that was manipulated by AI
- Applies to the following techniques, systems and uses:
- Systems designed to interact with people (e.g., chatbots)
- Systems that can generate or manipulate content
  * Large language models (e.g., ChatGPT)
  * Systems that can create deepfakes
What are EU AI Act requirements for minimal or no risk?
Examples include:
- Spam filters
- AI-enabled video games
- Inventory management systems
Codes of conduct may eventually be created by industry/specific use; these would be voluntary
What are penalties under the EU AI Act?
- Highest penalty is reserved for using prohibited AI (Up to tens of millions of euros or a certain percentage of global turnover for the preceding fiscal year, whichever is higher)
- Penalty for most instances of noncompliance will be lower than for the use of prohibited AI, but penalties can still go up to tens of millions of euros or a certain percentage of global turnover for the preceding fiscal year, whichever is higher
- More proportionate caps on fines for startups and small/medium-sized enterprises
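The fine caps above all follow the same "whichever is higher" rule: the fixed euro amount or the percentage of the preceding fiscal year's global turnover, whichever is greater. A minimal sketch of that rule, with entirely hypothetical figures (the Act's actual amounts and percentages are not restated here):

```python
# Sketch of the "whichever is higher" penalty-cap rule described above.
# The euro amounts and percentages below are HYPOTHETICAL placeholders,
# not the EU AI Act's actual figures.

def penalty_cap(fixed_cap_eur: float, pct_of_turnover: float,
                global_turnover_eur: float) -> float:
    """Return the maximum applicable fine: the fixed cap or the given
    percentage of the preceding fiscal year's global turnover,
    whichever is higher."""
    return max(fixed_cap_eur, pct_of_turnover * global_turnover_eur)

# Made-up example: a 30M EUR fixed cap vs. 6% of 1B EUR turnover.
cap = penalty_cap(30_000_000, 0.06, 1_000_000_000)
print(cap)  # 6% of 1B EUR is 60M EUR, which exceeds the 30M EUR fixed cap
```

Note the turnover-based cap dominates only for large firms; for a small enterprise the fixed cap is usually the binding number, which is one reason the Act provides more proportionate caps for SMEs and startups.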
When will the EU AI Act come into effect?
The provisional EU AI Act agreement provides that the Act should apply two years after it comes into effect, with some exceptions for specific provisions.
What are foundation/general purpose AI models and systems under the EU AI Act?
- Usually referred to as GPAI (for General Purpose AI)
- An AI model that displays significant generality and can perform a wide range of distinct tasks, regardless of how the model is released on the market (Can be integrated into a variety of downstream systems or applications)
- A new categorization was created: “High-impact GPAI models with systemic risk” and all other GPAI
- For “all other” GPAI, obligations include:
- Maintaining technical documentation
- Making information available to downstream providers who integrate the GPAI model into their AI systems
- Complying with EU copyright law
- Providing summaries of training data
- For GPAI "with systemic risk," whose definition is based on computing power, compliance requirements are more substantial: the four obligations listed above, plus:
- Assessing model performance
- Assessing and mitigating systemic risks
- Documenting and reporting serious incidents and action(s) taken
- Conducting adversarial training of the model (also known as "red-teaming")
- Ensuring security and physical protections are in place
- Reporting the model’s energy consumption
What are the approaches of emerging AI regulations?
- Specific areas of focus:
* Automated decision-making
* Industry-based: e.g., health care, finance, transportation
* Employment
- Overarching regulations: e.g., the EU AI Act
- Amending existing laws and regulations; e.g., Brazil
What do proposed AI regulatory frameworks build on?
Proposed regulatory frameworks often build off existing data protection and privacy laws:
- Requiring similar risk assessments and auditing processes
- Transparency is a primary concern
What countries have emerging AI legislation?
Australia, Canada, China, the EU, South Africa and the UK.