Autonomous Weapon Systems and Military AI

(This cause area report is viewable as a Google Doc here.)

I would like to thank Anthony Aguirre, Stephen Clare, Sjir Hoeijmakers, Emilia Javorsky, Matt Lerner, Carl Robichaud, and Shaan Shaikh for their helpful comments and advice on earlier drafts, and to thank Paul Scharre and Michael Horowitz, whose research and insights on AI-enabled military systems are foundational to much of this report.

The use and proliferation of autonomous weapon systems appears likely in the near future, but the risks of AI-enabled warfare are under-studied and under-funded. Autonomous weapons and military applications of AI more broadly (such as early-warning and decision-support systems) have the potential to increase the risk factors for a variety of issues, including great power war, nuclear stability, and AI safety. Several of these issues are potential pathways towards existential and global catastrophic risks. Autonomy in weapon systems therefore affects both the long-term future of the world and the lives of billions of people today.

This report intends to advise philanthropic donors who wish to reduce the risk from autonomous weapon systems and from the military applications of AI more broadly. We argue that much of the problem arises from strategic risks that affect the likelihood of great power conflict, nuclear war, and risks from artificial general intelligence. These risks include the increased speed of decision-making in a world with autonomous weapons, automation bias, increased complexity leading to a higher risk of accidents and escalation, and the possibility of AI-related military competition and its implications for long-term AI safety.

Although "killer robots" feature in the popular imagination, and some prominent organizations have taken up and promoted the cause, autonomous weapons remain a neglected issue for three reasons. First, the largest organizations focus mostly on humanitarian issues, leaving strategic threats relatively neglected. Second, those who do study risks beyond "slaughterbots" (such as automation bias and strategic stability) are few and receive even less funding; there is both a talent shortage and room for more funding. Third, the most widely-advocated solution - formal treaty-based arms control or a "killer robot ban" - is not the most tractable one. Thus, philanthropists have an opportunity to have an outsized impact in this space and reduce the long-term risks to humanity's survival and flourishing.

In addition to outlining the risks from autonomous weapon systems and the military applications of AI, we also evaluate potential interventions to mitigate these risks. We argue that philanthropists can have an outsized impact in this space by following two guiding principles to choose their interventions:

- Focus on strategic risks, including the effects of autonomous systems on nuclear stability.
- Focus on key actors, rather than multilateral inclusiveness, and prioritize those states most likely to develop and use autonomous systems, possibly starting with bilateral dialogues.

Using these guiding principles, we argue that effective philanthropists should focus not on a legally binding ban of autonomous weapons, but on researching strategic risks and on working on a set of so-called confidence-building measures, or CBMs, which have a track record of regulating militarily useful technologies.