Mod 1 Flashcards

1
Q

What is the definition of AI?

A

Machines performing tasks that normally require human intelligence.

2
Q

What does ML stand for, and what does it involve?

A

ML stands for Machine Learning, which involves training machines to display AI behavior.

3
Q

List the common elements of AI/ML definitions under new and emerging law: TARO.

A
  • Technology
  • Automation
  • Role of humans
  • Output
4
Q

What does it mean that an AI system is a socio-technical system?

A

AI systems are not just technical tools but also have a social impact on the people who use them.

5
Q

Why is cross-disciplinary collaboration important in AI development?

A

To ensure experts from UX, anthropology, sociology, and linguistics are involved and valued.

6
Q

What are the five dimensions of the OECD framework for the classification of AI systems?

A
  • People and planet
  • Economic context
  • Data and input
  • AI model
  • Tasks and output
7
Q

What are some use cases and benefits of AI?

A
  • Recognition
  • Event detection
  • Forecasting
  • Personalization
  • Interaction support
  • Goal-driven optimization
  • Recommendation
8
Q

What is the difference between strong/broad AI and weak/narrow AI?

A

Narrow AI can only perform one task or a narrow set of tasks, while General AI can mimic human thinking and learning.

9
Q

Define supervised learning in machine learning.

A
  • A subset of machine learning where the model is trained on labeled input data with known desired outputs.
  • These two groups of data are sometimes called predictors and targets, or independent and dependent variables, respectively.
  • This type of learning is useful for classification or regression. The former refers to training an AI to group data into specific categories and the latter refers to making predictions by understanding the relationship between two variables.
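Though not part of the source card, the labeled-data idea can be sketched in a few lines of Python. This hypothetical 1-nearest-neighbor classifier (all names are illustrative) shows (features, label) pairs playing the roles of predictors and targets:

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbor classifier.
# The labeled training pairs are the "predictors" (inputs) and
# "targets" (known desired outputs) described above.

def nearest_neighbor(train, query):
    """Return the label of the training point closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Labeled data: (features, target) pairs.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((8.0, 8.0), "dog"), ((7.5, 8.2), "dog")]

print(nearest_neighbor(train, (1.1, 1.0)))  # query near the "cat" cluster
print(nearest_neighbor(train, (7.9, 8.1)))  # query near the "dog" cluster
```

Classification is the case shown here (grouping a query into a category); swapping the labels for numeric values and averaging neighbors would turn it into regression.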
10
Q

What is semi-supervised learning?

A

A subset of machine learning that combines supervised and unsupervised learning, using a small amount of labeled data and a large amount of unlabeled data. This avoids the challenge of finding large amounts of labeled data for training the model.

Generative AI commonly relies on semi-supervised learning.

11
Q

Explain unsupervised learning.

A

A subset of machine learning where the model is trained to find patterns in unclassified data with minimal human supervision.
The AI is provided with preexisting unlabeled datasets and analyzes them for patterns. This type of learning is useful for techniques such as clustering data (outlier detection, etc.) and dimensionality reduction (feature learning, principal component analysis, etc.). It is often the most cost-efficient approach because no labeling is required.
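To make the clustering idea concrete, here is a tiny illustrative sketch (not from the source, and the data is invented): a one-dimensional k-means loop that groups unlabeled numbers into two clusters with no target labels involved.

```python
# Minimal unsupervised-learning sketch: k-means with k=2 on 1-D data.
# No labels are provided; the algorithm discovers the grouping itself.

def two_means(data, iterations=10):
    c1, c2 = min(data), max(data)  # crude initial centroids
    for _ in range(iterations):
        # Assign each point to its nearest centroid, then recompute means.
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
low, high = two_means(data)
print(low)   # the cluster of small values
print(high)  # the cluster of large values
```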

12
Q

What is reinforcement learning?

A

A type of machine learning where agents learn to make decisions through rewards and punishments.
It works like behavioral reinforcement for children: desirable actions earn rewards and undesirable ones earn punishments. Self-driving cars are a common example.
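A hypothetical sketch of the reward/punishment loop (not from the source; the actions, learning rate, and reward values are invented for illustration): a one-state agent whose value estimates drift toward the action that pays off.

```python
# Minimal reinforcement-learning sketch: value estimates updated from
# rewards (+1) and punishments (-1), nudging the agent toward "good".
import random

random.seed(0)
values = {"good": 0.0, "bad": 0.0}  # the agent's estimate per action
alpha = 0.5                          # learning rate

def reward(action):
    return 1.0 if action == "good" else -1.0

for _ in range(50):
    action = random.choice(list(values))              # explore randomly
    values[action] += alpha * (reward(action) - values[action])

best = max(values, key=values.get)
print(best)  # the agent now prefers the rewarded action
```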

13
Q

What is a transformer in the context of AI?

A

A neural network architecture that learns context and maintains relationships between sequence data using attention mechanisms.
It does so by leveraging the technique of attention, i.e., it focuses on the most important and relevant parts of the input sequence. This helps to improve model accuracy. For example, in language-learning tasks, by attending to the surrounding words, the model is able to comprehend the meaning of a word in the context of the whole sentence.

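As an illustrative sketch (assumptions: toy 2-D vectors, a single head, no learned projections), scaled dot-product attention weights each value by how relevant its key is to the query:

```python
# Minimal attention sketch: softmax(q . k / sqrt(d)) weights over values.
import math

def attention(query, keys, values):
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    exps = [math.exp(s) for s in scores]
    weights = [e / sum(exps) for e in exps]     # softmax: focus on relevant keys
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query aligns with the first key (largest dot product), so the
# output lies closest to the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [-1.0, 0.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
print(out)
```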

14
Q

What defines a multimodal model?

A

A model that can process more than one type of input or output data simultaneously.

15
Q

What is generative AI?

A

A field of AI that uses deep learning trained on large datasets to create new content, such as written text, code, images, music, simulations and videos. Unlike discriminative models, which make predictions about existing data, generative models produce novel outputs based on input data or user prompts.

16
Q

Define deep learning.

A

A subfield of AI and machine learning that uses artificial neural networks. Deep learning is especially useful in fields where raw data needs to be processed, like image recognition, natural language processing and speech recognition.

17
Q

What is natural language processing (NLP)?

A

A subfield of AI that enables computers to understand, interpret, and manipulate human language.

18
Q

How does robotics differ from robotic process automation (RPA)?

A

Robotics involves designing machines for tasks without human intervention, while RPA uses machines for repetitive tasks.

19
Q

What is the AI technology stack composed of?

A
  • Platforms and applications (a platform is software used to develop and test models; an application is how the system is used)
  • Model types
  • Compute infrastructure
20
Q

What is compute infrastructure?

A

Democratization of AI: everyone can use AI.
Tuning: customizing the model by changing hyperparameters; the effort varies with complexity.
Data transformation: converting data so it can be ingested into the AI model (data compatibility).
Labeling: enriching data for use in deployment, to a high and consistent standard.

21
Q

What are the model types?

A

Linear and statistical models: model the relationship between two variables; very explainable.
Decision trees: a flowchart of questions and answers; subject to hacks.
Neural networks (ML models): black boxes that lack transparency and explainability; used for vision recognition and speech recognition.
Language models (NLP models): process language and speech.
Reinforcement learning: feedback-based training; the model learns to earn a high score.
Robotics applications: operate with no human intervention.

22
Q

What was significant about the 1956 Dartmouth summer research project?

A

It was when the term 'AI' was coined.
1950s to 1970s: LISP and ELIZA (early NLP).
Mid-1970s to mid-1980s: a slowdown; the Lighthill Report in the UK challenged AI's feasibility and practicality.
Mid-to-late 1980s: expert systems and Japan's AI push.
Late 1980s to 1990s: decline in interest and funding.
Late 1990s to 2011: big data from the internet boom; Deep Blue's chess victory in 1997.
2011 to present: OpenAI and AlphaGo.

23
Q

Understand how the current environment is fueled by exponential growth in computing infrastructure and tech megatrends (cloud, mobile, social, IOT, PETs, blockchain, computer vision, AR/VR, metaverse).

A

Cloud: accessibility.
Mobile: an explosion of data to learn from.
IoT: a wealth of data.
PETs: a viable approach to addressing security and privacy concerns.
Computer vision: efficient and interactive human-machine interactions.
AR/VR
Metaverse

24
Q

What are some core risks and harms posed by AI systems?

A
  • Bias
  • Implicit bias
  • Sampling bias
  • Temporal bias
  • Overfitting
  • Edge cases & outliers
25
Q

What potential harms can AI systems pose to individuals?

A
  • Civil rights: facial-recognition software struggles with some groups (females are harder to recognize; one London deployment had an 80% inaccuracy rate). There are also many privacy issues: your data shared with people who should not have access, re-identification of data, appropriation of data use, inference, and a lack of transparency (use of AI should be labeled)
  • Economic opportunity: job loss
  • Safety: identifying targets for deadly weapons

26
Q

How can AI systems harm groups?

A

Through discrimination against sub-groups.
27
Q

What potential harms can AI systems cause to society?

A
  • Impact on democratic processes
  • Public trust
  • Educational access
  • Job redistribution

28
Q

What are the potential harms to a company from AI systems?

A
  • Reputational
  • Cultural
  • Economic
  • Acceleration risks

29
Q

What are the characteristics of trustworthy AI systems?

A
  • Human-centric: amplifies human life
  • Accountable: organizations are responsible for the output
  • Transparent: understood by the user
  • Explainable
  • Privacy-enhanced
30
Q

What are the five OECD AI principles? How is ethical guidance rooted in FIPs and HUDERIA?

A

Based on the five OECD AI principles:
  • Inclusive growth
  • Human-centered values
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability

31
Q

What are some AI ethics publications?

A
  1. OECD AI Principles
  2. White House AI Bill of Rights
  3. EU High-Level Expert Group on AI
  4. UNESCO
  5. Asilomar AI Principles
  6. CNIL

32
Q

Which U.S. agency is responsible for unfair and deceptive practices?

A

The FTC (Federal Trade Commission).

33
Q

What are some non-discrimination laws for employment?

A

Title VII of the Civil Rights Act, enforced by the EEOC.
34
Q

What are some non-discrimination laws for consumer finance?

A

The Equal Credit Opportunity Act, the Fair Credit Reporting Act, and SR 11-7 (Federal Reserve guidance on model risk management).

35
Q

What are the relevant safety and food laws?

A

Occupational safety: robotics safety and hazard analysis. The FDA has an extensive approval process for software as a medical device.

36
Q

Understand the basic requirements of the EU Digital Services Act (transparency of recommender systems).

A

The DSA overlaps with the EU's General Data Protection Regulation with regard to transparency and increases overall transparency related to online platforms. For instance:
  • Recommender systems (ML that recommends products): online platforms should ensure users are informed about how recommender systems impact the way information is displayed, and how and what information is presented
  • Online advertising: recipients should have information directly accessible from the online interface where an ad is presented, such as the parameters used for determining why an ad was directed to them (the logic used and whether it was based on profiling)
37
Q

What are the relevant product safety and IP laws?

A

The EU AI Act and the U.S. Consumer Product Safety Commission address product safety; intellectual property law also applies.

38
Q

What does Article 22 of the GDPR prohibit?

A

Automated decision-making unless explicit consent is obtained.

39
Q

Which privacy laws relate to the use of data?

A

The GDPR and CCPA; GDPR Article 22 covers automated decision-making.

40
Q

Understand automated decision-making, data protection impact assessments, anonymization and how they relate to AI systems.

A
  • Automated decision-making: Article 22; not permitted unless you get consent
  • DPIA: Article 35; needed for high-risk processing activities
  • Anonymization: Recital 26
41
Q

What is the classification framework of AI systems under the EU AI Act?

A
  • Prohibited:
    - Social credit scoring systems
    - Emotion recognition systems in the areas of workplace and education institutions
    - AI that exploits a person's vulnerabilities, such as age or disability
    - Behavioral manipulation and techniques that circumvent a person's free will
    - Untargeted scraping of facial images to use for facial recognition
    - Biometric categorization systems using sensitive characteristics
    - Specific predictive policing applications
    - Real-time biometric identification by law enforcement in publicly accessible spaces, except certain limited, pre-authorized situations
  • High-risk:
    - Biometric identification and categorization of natural persons
    - Management and operation of critical infrastructure (such as gas and electricity)
    - Education and vocational training
    - Employment, worker management and access to self-employment
    - Access to and enjoyment of essential private services and public services and benefits (e.g., emergency services dispatching)
    - Law enforcement
    - Migration, asylum and border control management
    - Assistance in legal interpretation and application of the law
  • Limited risk:
    - Systems designed to interact with people (e.g., chatbots)
    - Systems that can generate or manipulate content
    - Large language models (e.g., ChatGPT)
    - Systems that create deepfakes
  • No/minimal risk: video games, spam filters, inventory management
42
Q

What is required for high-risk AI systems?

A

A conformity assessment and technical documentation. The EU mandates conformity assessments regardless of whether PII is being processed. The key questions are: 1) How was this developed? 2) What data are you using? 3) What is the impact of your model?

43
Q

Understanding liability reform: what is the EU product liability law?

A

It has two parts: fault-based and no-fault liability. Fault-based: you have to show the manufacturer intentionally caused harm. No-fault: you do not have to prove anyone did anything wrong, just that harm was caused.

44
Q

What is the penalty for noncompliance with the EU AI Act?

A
  • Up to 7% of annual turnover or 35 million euros for prohibited AI
  • 3% of annual turnover or 15 million euros for non-prohibited violations
  • 1% of turnover or 7.5 million euros for providing incorrect information to an authority
  • Proportionate fines for small companies
45
Q

What are deepfakes?

A

Manipulated media that uses AI to create realistic fake content. Deepfakes can pose risks in various sectors, including misinformation and privacy violations.

46
Q

What systems are considered low risk according to the guidelines?

A

Video games, spam filters and inventory management systems. These systems typically do not involve high-stakes decision-making.

47
Q

What are the notification requirements for malfunctioning AI systems?

A

Report the malfunction within 15 days to the local market surveillance authority. This is crucial for maintaining regulatory compliance.

48
Q

What penalties exist for noncompliance with AI regulations?

A
  • 7% of annual turnover or 35 million euros for prohibited AI
  • 3% of annual turnover or 15 million euros for non-prohibited AI
  • 1% of turnover or 7.5 million euros for incorrect information to an authority
Proportionate fines are applied for small companies.
49
Q

What does the EU require for high-risk AI systems?

A

A conformity assessment, with technical documentation available. High-risk systems must meet specific regulatory standards before market release.

50
Q

What is the key focus of Canada's Artificial Intelligence and Data Act (C-27)?

A

Record keeping and a broad definition of AI covering both private and public sectors. Many activities will be classified as high-risk ("high-impact systems") under the law, based on nature and severity, opt-out availability, and how much autonomy and authority people have. The act also establishes a federal AI and Data Commissioner.
51
Q

What is a key component of U.S. state AI laws?

A

Transparency.

52
Q

What is the stance of China's Cyberspace Administration on generative AI?

A

Not a risk-based approach but a rights-based approach: opt-out mechanisms and clear notice.

53
Q

What is the goal of the NIST AI Risk Management Framework (NIST AI RMF)?

A

Practical guidance on AI risk management activities. It emphasizes trustworthy AI principles.
54
Q

What are the NIST principles?

A

Based on seven trust principles of AI (VAPE SSF):
  1. Valid/reliable
  2. Safe
  3. Secure
  4. Accountable
  5. Explainable
  6. Privacy
  7. Fair

55
Q

What is the governance process according to NIST?

A

Govern (the cross-cutting core function), plus Map, Measure and Manage. Key steps: test, evaluate, verify, validate.
56
Q

What are the eight principles of ISO 31000:2018 Risk Management? How is it divided?

A

(ICH BISCD) Divided into principles, process and framework:
  • Inclusive
  • Dynamic
  • Best available information
  • Human and cultural factors
  • Continuous improvement
  • Integration
  • Structured and comprehensive
  • Customized
These principles guide organizations in managing risks effectively.

57
Q

What are the six areas of focus in ISO 31000:2018 Risk Management?

A
  • Leadership
  • Integration
  • Design
  • Implementation
  • Evaluation
  • Improvement
These areas help organizations apply the principles in practice.
58
Q

What is part of the ISO 31000:2018 process?

A
  1. Identify risks
  2. Evaluate their probability
  3. Assess their severity
  4. Reduce the risk (risk is reduced, NOT eliminated!)
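As a hypothetical sketch of steps 2 through 4 (the 1-5 scales, threshold and mitigation fraction are my own illustration, not from ISO 31000), a risk score can be modeled as probability times severity, with mitigation reducing but never eliminating the residual risk:

```python
# Illustrative risk scoring: score = probability x severity (scales 1-5),
# and mitigation that reduces, but never eliminates, the residual risk.

def risk_score(probability, severity):
    return probability * severity  # 1 (negligible) .. 25 (critical)

def mitigate(score, reduction):
    """Reduce risk by a fraction; residual risk never reaches zero."""
    return max(1, round(score * (1 - reduction)))

inherent = risk_score(probability=4, severity=5)   # likely and severe
residual = mitigate(inherent, reduction=0.7)
print(inherent, residual)  # 20 6
```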
59
Q

What does the EU Proposal for Harmonized Rules on AI (the EU AI Act) aim to achieve?

A

Regulation to standardize AI use across member states. This proposal seeks to create uniformity in AI governance.

60
Q

What is the Council of Europe's HUDERIA general guidance?

A

General guidance:
  • Develop impact assessments combining human rights with AI-centric approaches
  • Apply a risk-based approach following specific principles
  • Assess proportionality, contexts and stakeholders
61
Q

What are the HUDERIA principles?

A
  • Human dignity: no algorithmic manipulation of humans
  • Human freedom and autonomy: empower, inform and enrich individuals
  • Prevention of harm: avoid adverse effects on mental, physical and planetary health
  • Non-discrimination: ensure fairness and equity
  • Transparency: AI use must be clear and explainable
  • Data protection: require informed consent for use of personal information
  • Democracy: inclusive and transparent oversight
  • Rule of law: preserve judicial independence and due process

62
Q

What is the HUDERIA process?

A
  1. Identify the human rights impacted
  2. Assess the impacts
  3. Apply governance mechanisms
  4. Always monitor
63
Q

What are the key components of the IEEE 7000-21 standard model?

A
  • Goal: embed ethical values in system design
  • Process: ethical value traceability in operations, requirements and risk-based design, plus communication with stakeholders
  • Two stages: concept exploration and development
  • Focus: balancing ethics with time constraints
It focuses on transparency, sustainability and fairness.

64
Q

What are the safety aspects of ISO/IEC Guide 51?

A
  • Intended audience: standards drafters and other stakeholders
  • Goals: reduce risks in the design, production and disposal of systems/products; achieve tolerable risk levels for people, property and the environment
  • Connection: influences terminology and processes in the EU AI Act

65
Q

What is Singapore's AI governance framework?

A
  • Key elements: transparency, fairness, human-centricity
  • Guidelines: transparency, explainability, repeatability/reproducibility; safety, security, robustness; fairness, data governance, accountability; human agency, oversight, inclusive growth, societal and environmental well-being
66
Q

STEP 1: PLANNING. What are the key steps in the AI system planning phase?

A
  • Determine business objectives and requirements: what is the problem?
  • Determine the scope of the project: prioritizing problems
  • Determine the governance structure and responsibilities: are policies and procedures in place? Are there champions in the organization?
These steps are crucial for effective AI project management.

67
Q

STEP 2: DESIGN. What does a data strategy in AI system design include?

A
  • Data gathering: what data to gather and how much
  • Data wrangling: prepping and formatting the data (the five V's)
  • Data cleansing: removing irrelevant or erroneous data
  • Data labeling: tagging and annotating
  • Applying privacy-enhancing technologies (PETs): anonymization, minimization, differential privacy, federated learning (takes information from multiple locations and aggregates the data) and encryption
Each step ensures the data used is relevant and secure.
68
Q

STEP 3: DEVELOPMENT. What are the key steps in the AI development phase?

A
  1. Build the model: define features (the same for training and test data)
  2. Feature engineering: turn raw data into features with SME input; improves model performance by decreasing cost and increasing efficiency and transparency
  3. Model training: train, test, evaluate, then repeat
  4. Model testing and validation: test against relevant metrics on NEW data
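The train/test/evaluate loop above can be sketched as follows (a toy majority-class "model" on invented data; the names are illustrative, not from the source). The key point is that validation metrics are computed on held-out data the model never saw:

```python
# Minimal sketch of model training vs. testing on held-out (new) data.
labels = ["spam", "ham", "ham", "ham", "spam", "ham", "ham", "ham"]
train, test = labels[:6], labels[6:]   # hold out data the model never sees

# "Training": a trivial model that memorizes the majority class.
majority = max(set(train), key=train.count)

# "Testing/validation": measure accuracy on the held-out examples only.
accuracy = sum(1 for y in test if y == majority) / len(test)
print(majority, accuracy)
```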
69
Q

What is the purpose of performing model testing and validation?

A

To test the model on relevant evaluation metrics and new data. This process ensures the model's effectiveness and reliability.

70
Q

STEP 4: IMPLEMENTATION. What should be assessed during the AI system implementation phase?

A
  • Readiness assessments
  • Model deployment
  • Monitoring and validation (establish a baseline to measure against: is there drift or deviation?)
This ensures the system functions as intended post-deployment.

71
Q

What types of risks should AI governance principles address?

A
  • Security risk
  • Privacy risk
  • Business risk
Each risk type has unique implications for AI systems.
72
Q

What are the types of security risk?

A
  1. Hallucinations: the model fabricates content or does not stay on track
  2. Deepfakes: images or media created by AI
  3. Data poisoning: training data corrupted via an attack
  4. Data leakage: sensitive PII getting exposed
  5. Filter bubbles: intellectual "bubbles"
  6. Overreliance on AI
  7. Adversarial ML attacks: manipulating input data

73
Q

What are the types of privacy risk?

A
  • Data persistence: data persists longer than the person who created it
  • Data repurposing: using data for something besides its original purpose
  • Spillover data: data collected about people other than the original data subject
  • Data collection: opt-out functionality

74
Q

How can organizations promote a culture of ethical behavior regarding AI?

A
  1. Be pro-innovation
  2. Make governance risk-centric
  3. Build planning and design on consensus
  4. Keep teams outcome-focused
  5. Practice self-management: adjustment and evolution
  6. Use a framework that is law-, industry- and technology-agnostic and interoperable among systems
Continuous education is essential for ethical AI practices.
75
Q

How do you establish an AI governance infrastructure?

A
  1. Determine your role: developer, deployer or user
  2. Define the roles and responsibilities of AI governance personnel, including the CPO, chief ethics officer, responsible AI, ethics, architecture and AI product management
  3. Secure AI governance support from the team (there is pressure on technical teams to build AI solutions fast!): understand how data science works, influence change, and establish the organizational risk strategy
  4. Inventory AI/ML applications and algorithms: develop responsible AI policies and incentive structures, and track AI regulatory requirements
  5. Establish a taxonomy
  6. Provide knowledge resources to train the company on ethical AI
  7. Assess AI maturity levels
  8. Use and adapt existing privacy and data governance practices
  9. Set third-party risk management (TPRM) and accountability policies
  10. Account for cultural differences

76
Q

What elements are involved in mapping, planning and scoping an AI project?

A
  • Define the business case and a cost-benefit analysis, including trade-offs: is it good to use AI? Does the model you picked solve the problem?
  • Identify internal and external risks: none, moderate, major or prohibited
  • Construct a probability/severity harms matrix (e.g., HUDERIA)
  • Perform an algorithmic impact assessment (a PIA is a good starting point)
  • Plan for human oversight
These steps help in thorough project analysis and planning.
77
Q

How can stakeholders be involved in mapping, scoping and planning an AI project?

A
  1. Stakeholder salience
  2. Diversity
  3. Positionality exercises
  4. Level of engagement, including the method
  5. AI actors during the design, development and implementation phases
  6. IMPORTANT: create a communication plan for regulators and consumers; this should reflect compliance and disclosure obligations

78
Q

What else is involved in the AI map, scope and plan process?

A
  1. Assess how feasible optionality and redress are
  2. Track data lineage and make sure the data is representative, unbiased and accurate
  3. Get early feedback from impacted people (test, evaluate, verify, validate)
  4. Produce a preliminary analysis report on risk factors and proportionate management

79
Q

When testing and validating the system during DEVELOPMENT, what should you evaluate?

A

The trustworthy principles of AI, using:
  • Edge cases, unseen data and harmful input data (try to break the model)
  • Repeated assessments: is your output the same?
  • Model cards and fact sheets
  • Counterfactual explanations (CFEs)
  • Adversarial testing and threat modeling to find threats
  • Always refer to the OECD principles
  • Multiple layers of risk mitigation
  • Trade-off analysis
80
Q

What are the indicators for enhanced accountability in AI systems?

A
  • Automated decision-making
  • Use of sensitive data
These markers can trigger the need for third-party audits.

81
Q

What are the challenges surrounding AI model and data licensing?

A
  • Ownership of data
  • Limitations on usage rights
  • Confidentiality issues
These challenges can complicate legal frameworks for AI.
82