1
Q

Is AI currently regulated?

A
  • While there are no UK laws that were explicitly written to regulate AI, it is partially regulated through a patchwork of legal and regulatory requirements built for other purposes which now also capture uses of AI technologies.
  • For example, UK data protection law includes specific requirements around ‘automated decision-making’ and the broader processing of personal data, which also covers processing for the purpose of developing and training AI technologies.
2
Q

Why is AI ethics not enough?

A

Not all uses of AI are savoury or built on palatable values.

AI could become ‘god-like’ in nature: left to self-proclaimed ethical safeguards alone, AI has been shown to be discriminatory and subversive (unfair algorithms; biased data produce biased results).

Imposing mandatory rules on AI would help prevent the technology from infringing human rights. Regulation has the potential to ensure that AI has a positive, not a negative, effect on lives.

3
Q

Transformer language model used to output text-based or image-based content

How adaptive is it?

How autonomous is it?

What are the potential AI-related regulatory implications?

A

Adaptive: transformer models have a large number of parameters, often derived from data from the public internet. This harnesses the collective creativity and knowledge present online, enabling the creation of stories and rich, highly specific images on the basis of a short textual prompt.

Autonomous: These models generate their output automatically, based on the text input, and produce impressive multimedia with next to no detailed instruction or ongoing oversight from the user.
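
To make the ‘next to no detailed instruction’ point concrete, here is a minimal sketch of prompt-driven text generation. It assumes the Hugging Face transformers library and the small public gpt2 model; neither is named in the card, and any transformer language model would illustrate the same behaviour:

```python
# A short prompt is the only instruction the model receives; the continuation
# is generated automatically, with no ongoing oversight from the user.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small public model, assumed here
result = generator("Once upon a time in a quiet village,", max_new_tokens=40)
print(result[0]["generated_text"])
```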

Regulatory implications: security and privacy concerns from inferred training data; inappropriate or harmful language or content output; reproduction of biases or stereotyping in training data.

4
Q

Self-driving car control system

How adaptive is it?

How autonomous is it?

What are the potential AI-related regulatory implications?

A

Adaptive: these systems use computer vision and iterative learning from real-time driving data to create a model capable of understanding the road environment and deciding what actions to take in given circumstances.

Autonomous: These models directly control the speed, motion and direction of a vehicle.
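
As an illustrative sketch of what direct control means here, the loop below maps perception straight to actuation with no human in the loop. Every class, method, and value is a hypothetical stand-in, not a real vehicle API:

```python
# Hypothetical autonomous control loop: perception -> decision -> actuation.
from dataclasses import dataclass

@dataclass
class Action:
    steering: float   # radians; negative steers left (illustrative convention)
    throttle: float   # 0.0 to 1.0
    brake: float      # 0.0 to 1.0

class DrivingModel:
    """Stand-in for a learned computer-vision model of the road environment."""
    def decide(self, camera_frame) -> Action:
        # A real system would run object detection, lane finding and planning here.
        return Action(steering=0.0, throttle=0.2, brake=0.0)

class Vehicle:
    """Stand-in for the drive-by-wire interface the model directly controls."""
    def apply(self, action: Action) -> None:
        print(f"steer={action.steering:+.2f} throttle={action.throttle:.2f} brake={action.brake:.2f}")

model, vehicle = DrivingModel(), Vehicle()
for frame in range(3):                   # each iteration stands in for one camera frame
    vehicle.apply(model.decide(frame))   # the model's output drives the vehicle directly
```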

Regulatory implications: safety and control risks if presented with unfamiliar input; assignment of liability for decisions in an accident or dispute; opacity regarding decision-making and a corresponding lack of public trust.

5
Q

Who should lead AI standardisation?

A

Technical standardisation is taking the lead on the regulation of AI through associations like the IEEE and ISO, national agencies like NIST in the US, and European bodies such as CEN, CENELEC, AFNOR, Agoria and Dansk Standard.

In these settings, one key issue is the extent of government involvement.
* Are politicians capable enough to understand and make complex decisions about how to regulate technology?

The application and optimisation of technical standards require collaboration between lawmakers, policymakers, academics and engineers, and the support of different stakeholder groups, such as corporations, citizens, and human rights groups. Without this balance, Big Tech lobbyists or geopolitics will have a disproportionate influence.

Who should lead the charge? Probably the EU and US. However, they would have to agree and work together. How likely is this?

6
Q

Why is global regulation a challenge?

A

Complexity and Rapid Advancement

Lack of Consensus: Different countries and regions have diverse perspectives, priorities, and values when it comes to AI regulation

Jurisdictional Issues

Balancing Innovation and Risk

Technical Complexity and Understanding

Cross-Sectoral Impact: AI has applications across many different sectors

Compliance and Enforcement

7
Q

What is the EU’s regulatory framework?

A
  • The proposed rules will:
    – address risks specifically created by AI applications;
    – propose a list of high-risk applications;
    – set clear requirements for AI systems for high-risk applications;
    – define specific obligations for AI users and providers of high-risk applications;
    – propose a conformity assessment before the AI system is put into service or placed on the market;
    – propose enforcement after such an AI system is placed on the market;
    – propose a governance structure at European and national level.
  • The Regulatory Framework defines four levels of risk in AI:
  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal or no risk
8
Q

How does the EU define unacceptable risk?

A
  • Unacceptable risk: all AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourage dangerous behaviour.
9
Q

How does the EU define and mitigate high risk?

A

AI systems identified as high-risk include AI technology used in:
– critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
– educational or vocational training, that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
– safety components of products (e.g. AI application in robot-assisted surgery);
– employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
– essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
– law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
– migration, asylum and border control management (e.g. verification of authenticity of travel documents);
– administration of justice and democratic processes (e.g. applying the law to a concrete set of facts).

Mitigate:

  • High-risk AI systems will be subject to strict obligations before they can be put on the market:
    – adequate risk assessment and mitigation systems;
    – high quality of the datasets feeding the system, to minimise risks and discriminatory outcomes;
    – logging of activity to ensure traceability of results;
    – detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    – clear and adequate information to the user;
    – appropriate human oversight measures to minimise risk;
    – high level of robustness, security and accuracy.
10
Q

How does the EU define and mitigate limited risk?

A
  • Limited risk: limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.

Mitigate:

  • Providers must meet specific transparency obligations, for example making users aware that they are interacting with a machine rather than a human.
11
Q

How does the EU define and mitigate minimal or no risk?

A
  • The proposal allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
12
Q

What was the USA’s initial regulation in 2022?

A
  • Rise of specific AI use cases
    – New York joined a number of states, including Illinois and Maryland, in regulating automated employment decision tools (AEDTs) that leverage AI to make, or substantially assist, candidate screening or employment decisions.
    – Under New York’s law, AEDTs must undergo an annual “bias audit”, and the results of this audit need to be made publicly available (see the sketch below).
  • The Equal Employment Opportunity Commission (EEOC) launched an initiative on “algorithmic fairness” in employment.
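
A bias audit of this kind centres on comparing selection rates across demographic groups. The sketch below shows an impact-ratio calculation of the sort such audits report; the group names and figures are made up for illustration:

```python
# Impact ratio: each group's selection rate divided by the most-selected
# group's rate. Values close to 1.0 suggest comparable treatment.
selected = {"group_a": 50, "group_b": 30}    # candidates the tool advanced (made up)
assessed = {"group_a": 100, "group_b": 100}  # candidates the tool assessed (made up)

rates = {g: selected[g] / assessed[g] for g in assessed}
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")
```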
13
Q

State privacy laws concerning AI in 2023?

A

Consumer Rights for AI-Powered Decisions: essentially, state privacy laws will grant consumers opt-out rights when AI algorithms make high-impact decisions.

AI Transparency: proposed Colorado privacy regulations would require companies to include AI-specific transparency in their privacy policies. Privacy policies would need to list all high-impact “decisions” that are made by AI and subject to opt-out rights.

AI Governance via Impact Assessments: when data processing presents a “heightened risk of harm to consumers,” companies must internally conduct and document a “data privacy impact assessment” (DPIA).

14
Q

Proposed federal AI regulation?

A
  • At the federal level, AI-focused bills have been introduced in Congress but have not gained significant support or interest.
  • The Federal Trade Commission (FTC) has, however, taken an interest, producing several publications on how existing statutes apply to AI. Under the remit of the Fair Credit Reporting Act, the Equal Credit Opportunity Act, and the FTC Act, the following must occur:
  1. Make sure AI is trained using data sets that are representative and do not “miss information from particular populations.”
  2. Test AI before deployment – and periodically thereafter – to confirm it works as intended and does not create discriminatory or biased outcomes.
  3. Ensure AI outcomes are explainable, in case AI decisions need to be explained to consumers or regulators.
  4. Create accountability and governance mechanisms to document fair and responsible development, deployment, and use of AI.

15
Q

NIST released an initial draft AI Risk Management Framework (AI RMF) in 2022; what are its two parts?

A

The first part is a catalogue of characteristics that, if implemented in AI, would enable AI to be considered trustworthy because it minimizes key risks.

The AI RMF’s second part is an action framework designed to help companies identify concrete steps to manage AI risk and make AI trustworthy. It is built around a “Map – Measure – Manage” structure, where each concept covers a different phase in the AI planning, development, and deployment cycle.

16
Q

What is the NIST AI risk management framework?

A

Valid & Reliable: AI is accurate, able to perform as required over time, and robust under changing conditions.

Safe: AI does not cause physical or psychological harm, or endanger human life, health, or property.

Fair & Nonbiased: Bias in results is managed at the systemic, computational, and human levels.

Explainable & Interpretable: The AI’s operations can be represented in a simplified format to others, and outputs from AI can be meaningfully interpreted in their intended context.

Transparent & Accountable: Appropriate information about AI is available to individuals, and actors responsible for AI risks and outcomes can be held accountable.

1. Map: refers to the planning stage for AI – e.g., mapping the intended purpose of AI and its likely context of use – to identify likely risks and build AI to address risk while achieving intended functionality.

2. Measure: occurs during the development stage, when AI is built. It comprises identifying methods for building AI and metrics for measuring its performance – including metrics for evaluating AI’s trustworthy characteristics.

3. Manage: refers to risk management after AI has been deployed. It includes monitoring whether AI is performing as expected, documenting risks identified through AI use, and developing responses to identified risks.

17
Q

The UK’s AI landscape

A
  • Discriminatory outcomes that result from the use of AI may contravene the protections set out in the Equality Act 2010.
  • AI systems are also required by data protection law to process personal data fairly.
  • However, AI can increase the risk of unfair bias or discrimination across a range of indicators or characteristics. This could undermine public trust in AI.
  • Product safety laws ensure that goods manufactured and placed on the market in the UK are safe. Product-specific legislation (such as for electrical and electronic equipment, medical devices, and toys) may apply to some products that include integrated AI.

The Consumer Rights Act 2015 may protect consumers where they have entered into a sales contract for AI-based products and services.

While AI is currently regulated through existing legal frameworks (e.g. the Financial Services and Markets Act), this regulation is not explicit, and some AI risks arise across, or in the gaps between, existing regulatory remits.

18
Q

What approach will the UK take to regulating AI?

A

Pro-innovation: enabling rather than stifling responsible innovation.

Proportionate: avoiding unnecessary or disproportionate burdens for businesses and regulators.

Trustworthy: addressing real risks and fostering public trust in AI in order to promote and encourage its uptake.

Adaptable: enabling us to adapt quickly and effectively to keep pace with emergent opportunities and risks as AI technologies evolve.

Clear: making it easy for actors in the AI life cycle, including businesses using AI, to know what the rules are, who they apply to, who enforces them, and how to comply with them.

Collaborative: encouraging government, regulators, and industry to work together to facilitate AI innovation, build trust and ensure that the voice of the public is heard and considered.

19
Q

What strategy will the UK use?

A

Initially, the principles will be issued by the government on a non-statutory basis and applied by regulators within their remits.

Framework characteristics: pro-innovation, proportionate, trustworthy, adaptable, clear, and collaborative; cross-cutting principles implemented by existing regulators, with centralised support and coordination.

Implementation: Proportionate and adaptable, informed by monitoring and evaluation

Objectives: Drive growth and prosperity, increase public trust, position the UK as a global AI leader