Week 10 - Flashcards

(18 cards)

1
Q

P(doom) = Probability of Doom

A

The probability that AI destroys humanity or causes catastrophic harm, e.g., by becoming too powerful.

2
Q

P(bloom) = Probability of Bloom

A

This is the idea that AI could greatly benefit humanity and lead to a flourishing future.

i.e., the optimistic scenario

3
Q

“machines of loving grace”

A

Dario Amodei's essay sketching an optimistic but critical vision of what the world could look like if powerful AI goes right within the next 5–10 years.

4
Q

Neuroscience and Mental Health:
major targets

A

Cures for PTSD, depression, schizophrenia, addiction, etc.

Improved behavioral interventions via AI “coaches.”

Enhancing human experiences: focus, creativity, emotional well-being.

5
Q

Economic Development and Poverty

A

Prediction: Potential for 10–20% GDP growth in developing countries with AI help.

6
Q

Opt-Out Problem

A

Fear that people may reject AI-based technologies, widening societal gaps.

7
Q

AI could radically transform health, mental well-being, poverty, and democracy in __ years

A

5–10

8
Q

5 Thinking Traps About AI

A

1. One Thing to Rule Them All ("This one thing will change everything.")
2. The Binary Thing (It's either the best thing or the worst thing.)
3. Big Things, Little Context (Grand claims, no attention to the mess around the thing.)
4. Believing in the Thing (Unclear whether the people selling the thing believe in it.)
5. The Thing Is Always a Revolution (Every new thing needs to be a revolution.)

9
Q

What does P(doom) refer to in discussions about AI?
A. The positive potential of AI
B. A small programming error
C. The risk of AI causing major harm or disaster
D. The likelihood AI will never succeed

A

C

10
Q

What does P(bloom) describe?
A. The probability that AI will never work
B. The chance of AI being shut down
C. The hope that AI will lead to a better future for society
D. A new type of machine learning model

A

C

11
Q

Which of the following is a “thinking trap” about AI?
A. Taking time to understand the context
B. Thinking AI will fix everything on its own
C. Asking critical questions about AI development
D. Studying the social effects of technology

A

B

12
Q

Why is venture capital important in the AI world?
A. It gives startups free software
B. It supports AI education in schools
C. It provides money to build AI systems, usually in hopes of profit
D. It stops companies from making risky tech

A

C

13
Q

What is the main danger of the “Binary Thinking” trap about AI?
A. It helps balance views of AI
B. It makes us think AI is either all good or all bad
C. It encourages teamwork
D. It solves ethical problems in AI

A

B

14
Q

What does STS (Science, Technology & Society Studies) help us understand?
A. How to build faster AI systems
B. How AI works only in labs
C. How technology is shaped by people, society, and power
D. How to design better websites

A

C

15
Q

What does the idea “AI Bloom is not inevitable” mean (from Dario Amodei)?
A. AI will grow on its own
B. The future success of AI depends on how we guide it
C. AI will always fail
D. We should stop working on AI

A

B

16
Q

What is an example of a small thing entangled with AI in daily life?
A. A handwritten letter
B. A paper map
C. Google Maps suggesting a route
D. A candle

A

C

17
Q

What does “The Next Thing Needs to Be Revolutionary” thinking trap mean?
A. New tech should be slow and simple
B. Only big, world-changing inventions matter
C. All new tools must be safe
D. We should only trust old technologies

A

B

18
Q

What does it mean to say that “things can be agents in networks” (Latour’s idea)?
A. Only humans affect society
B. Technology is always passive
C. Objects and tools can shape human behavior
D. Networks are only online

A

C