Reading 10 Flashcards
(11 cards)
Q1: What is a risk associated with the rapid progress of AI?
* Automated warfare.
* Loss of job opportunities and slower economic growth.
* Decreased safety and security for society.
* All of the above.
Automated warfare.
Q2: Why is there a lack of consensus on managing AI risks?
* Society’s response is inadequate despite promising first steps.
* Researchers have not warned about extreme risks.
* AI safety research is progressing rapidly.
* Different governments have different institutions to govern risks.
Society’s response is inadequate despite promising first steps.
Q3: What is the role of governance initiatives in addressing AI risks?
* They effectively prevent misuse and recklessness.
* They do not address autonomous systems.
* They focus on technical research and development (R&D).
* They provide incentives for competition.
They do not address autonomous systems.
Q4: What fundamental reason suggests that AI progress won’t halt at human-level abilities?
* Abundance of computational resources.
* AI systems can act faster, absorb more knowledge, and communicate efficiently.
* AI systems don’t need rest.
* AI systems want to improve constantly and have a built-in desire to do so.
AI systems can act faster, absorb more knowledge, and communicate efficiently.
Q5: What could happen if highly powerful generalist AI systems are developed?
* AI deployment will decrease.
* They may outperform human abilities across critical domains.
* AI will remain specialized in narrow tasks.
* They will be governed by international treaties.
They may outperform human abilities across critical domains.
Q6: How can AI be beneficial if managed carefully?
* By limiting its deployment.
* By focusing on specialized applications only.
* By curing diseases, elevating living standards, and protecting ecosystems.
* By automating all human tasks for efficiency.
By curing diseases, elevating living standards, and protecting ecosystems.
Q7: Which of the following are considered challenges in researching and developing AI?
* Addressing unpredictability in new situations.
* The mitigation of bias.
* Ensuring that AI systems are safe and aligned.
* All of the above.
All of the above.
Q8: Why might autonomous AI systems pursue undesirable goals?
* We lack understanding of the risks of ungovernable AI.
* The larger the scale, the more likely it is for the AI to malfunction.
* The training process might not capture all relevant situations.
* Certain undesirable goals perpetuate social inequality that benefits some entities.
The training process might not capture all relevant situations.
Q9: What best describes how AI developers should be held accountable for the AI they develop?
* AI developers should be responsible only for what they declare publicly.
* AI developers should ensure that the interest of scientific progress is upheld.
* AI developers should consult the ACM Code of Ethics and Professional Conduct.
* AI developers should emphasize the anticipation of risks and address them proactively.
AI developers should emphasize the anticipation of risks and address them proactively.
Q10: The current state of AI regulation is considered to be:
* Lagging behind.
* At an acceptable level.
* Beyond a point of no return.
* Controversial and highly debated.
Lagging behind.
Q11: What proportion of their AI R&D budget is it recommended that major tech companies allocate to the safe and responsible use of AI?
* 1/3 of a company’s AI R&D budget.
* There is no fixed value and it depends on each company.
* 1/3 of a proposed increased AI R&D budget.
* An entirely separate budget is recommended.
1/3 of a company’s AI R&D budget.