The Use of Military AI Tools vs. “Traditional” Decision-Making in Targeting Processes
PIs: Yahli Shereshevsky (University of Haifa, Law), Michael L. Gross (University of Haifa, Political Science), Ryan Shandler (Georgia Institute of Technology, Cybersecurity and Privacy)
This research project examines the growing role of AI in military targeting decisions and compares it to traditional intelligence-based targeting. While AI tools promise increased efficiency and precision in target selection, they also raise critical legal and ethical concerns regarding human oversight, automation bias, and accountability for errors. Recent reports highlight how AI-assisted targeting systems, such as those used in the Israel-Gaza conflict, influence military decision-making, sparking debates on trust, accuracy, and compliance with international law.
The study investigates these concerns empirically through an experimental survey of military personnel. Participants will be placed in simulated decision-making scenarios in which they assess target intelligence generated either by an AI system or by human analysts. A key question is whether decision-makers show greater deference to AI-generated intelligence, given its perceived objectivity and efficiency, or rely more heavily on traditional human assessments out of concern over AI’s limitations and accountability gaps. Understanding these dynamics is crucial for evaluating the implications of AI integration in military operations and its potential to reshape targeting protocols.
This research contributes to ongoing debates by offering empirical data on the intersection of law, ethics, and AI-driven military operations. The findings will help refine policies on AI usage in warfare and inform future regulations to ensure responsible AI integration in combat settings.