Session 9 Flashcards
(37 cards)
Q: What is the main critique Lucy Suchman presents in “Algorithmic Warfare and the Reinvention of Accuracy”?
A: Suchman critiques the U.S. military’s use of AI and automated systems in warfare, arguing that claims of precision and accuracy obscure political responsibility, rely on discriminatory profiling, and result in indiscriminate violence.
Q: What is Project Maven?
A: Project Maven is a U.S. Department of Defense initiative launched in 2017 to use AI and machine learning to analyze drone surveillance footage and automate the identification of potential targets.
Q: How does Suchman describe the concept of “situational awareness”?
A: She critiques situational awareness as a military construct that promises perfect knowledge of threats but is fundamentally flawed, relying on discriminatory apparatuses of recognition and dehumanizing classifications.
Q: What does Suchman mean by the “reinvention of accuracy”?
A: The reinvention of accuracy refers to how military technologies conflate weapon precision with legitimate target identification, masking the political and ethical violence embedded in the targeting process.
Q: What role does feminist and critical security studies theory play in Suchman’s analysis?
A: Suchman uses feminist and critical security studies to reveal how algorithmic warfare is embedded in racialized, gendered, and political logics that determine who becomes “targetable” and “killable.”
Q: What solution does Suchman propose to the dangers of algorithmic warfare?
A: She calls for rejecting technological solutionism and redirecting resources away from automated warfare toward diplomacy, social justice, and accountable forms of global security.
Q: What are loitering munitions and why are they controversial?
A: Loitering munitions are expendable drones that autonomously search for and attack targets. They are controversial because of the uncertainty around the level of human control and the potential for fully autonomous targeting without human oversight.
Q: What is a drone swarm in military terms?
A: A drone swarm is a coordinated group of uncrewed aerial vehicles (UAVs) that communicate and operate collectively, often without direct human intervention, to perform surveillance, reconnaissance, or offensive military operations.
Q: What are the three main drivers behind the development and proliferation of autonomous drones?
A: Strategic (great-power competition), operational (efficiency, speed, and precision in warfare), and economic (lower costs compared to manned systems).
Q: What are the key legal concerns associated with autonomous drones?
A: Autonomous drones challenge fundamental principles of International Humanitarian Law (IHL), including distinction between civilians and combatants, proportionality, and the requirement for meaningful human control.
Q: What ethical issues are raised by the use of autonomous drones?
A: Ethical concerns include the delegation of life-and-death decisions to machines, undermining human dignity, and removing moral responsibility from human operators.
Q: What is the main technological risk associated with the use of AI in autonomous drones?
A: AI systems used in drones are brittle and error-prone, can be manipulated or hacked, and often struggle to accurately distinguish between legitimate targets and civilians, particularly in complex environments.
Q: What is the main argument of Elke Schwarz’s article “From Blitzkrieg to Blitzscaling”?
A: Schwarz argues that the logic and practices of venture capital (VC) investment are reshaping military norms, procurement processes, and defense strategies, prioritizing rapid growth, profit, and disruption over ethical and democratic accountability.
Q: What is “blitzscaling” and how is it applied in the defense sector?
A: Blitzscaling refers to the VC-driven strategy of prioritizing rapid, exponential growth over efficiency and accountability. In the defense sector, it pushes startups to scale quickly by disrupting traditional procurement processes and accelerating the adoption of new military technologies.
Q: Name two key defense startups that embody VC influence in the military domain.
A: Anduril Industries and Palantir Technologies are two prominent VC-backed defense startups that have secured major U.S. defense contracts and shaped military innovation toward AI-enabled systems.
Q: What are the primary narratives used by VC-backed defense companies to legitimize their growing influence?
A: Narratives of urgency and crisis (e.g., competition with China), the need for bureaucratic reform, technological inevitability, and patriotism/democratic defense are used to justify rapid adoption of their technologies.
Q: What ethical risks does Schwarz associate with the influx of VC in the defense sector?
A: Schwarz warns that VC logics prioritize speed, profit, and market dominance over ethical considerations, democratic accountability, and long-term security. This can erode public oversight and increase the risk of conflict escalation.
Q: How has the U.S. defense sector structurally adapted to accommodate VC interests?
A: Through procurement reforms like the Adaptive Acquisition Framework (AAF) and Other Transaction Agreements (OTAs), which reduce oversight and allow faster, more flexible contracting tailored to startup timelines.
Q: What is Denise Garcia’s main argument in the introduction of Artificial Intelligence to Benefit Humanity?
A: Garcia argues that the militarization and weaponization of AI threaten global peace, human dignity, and international stability, and she calls for urgent global governance and cooperation to prevent autonomous killing.
Q: What does Garcia refer to as the “third revolution in warfare”?
A: The development and deployment of autonomous weapon systems, following the first (gunpowder) and second (nuclear weapons) revolutions in warfare.
Q: What is “transnational networked cooperation” according to Garcia?
A: A collective effort by states, scientists, civil society, and private actors to create global governance frameworks for military AI, beyond traditional state-centric models.
Q: What ethical and legal concerns does Garcia associate with autonomous weapons?
A: They risk violating international humanitarian law principles (distinction, proportionality, precaution), delegating life-and-death decisions to machines, and lowering the threshold for war.
Q: What is “common good governance” as introduced by Garcia?
A: A governance model aimed at creating global public goods (such as peace and security) through inclusive and cooperative efforts involving states, civil society, scientists, and other actors.
Q: Why does Garcia argue that the regulation of military AI is urgent?
A: Because the speed of AI militarization is outpacing the development of international legal norms, increasing global instability, and threatening the future of human-centered security.