4. Malicious Actors & Misuse

Type 6: State Weaponization

AI deployed by states in war, civil war, or law enforcement can easily yield societal-scale harm

Source: MIT AI Risk Repository (mit06)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit06

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Institute binding national and international legal frameworks that mandate meaningful human control ("human on the loop") over all lethal autonomous weapon systems and high-impact AI used in law enforcement. These frameworks must clearly delineate lines of responsibility and accountability across the entire system lifecycle to address the "responsibility gap" in the application of force.

2. Require mandatory, independent, and rigorous risk assessment, testing, and evaluation protocols for all state-deployed AI systems to validate performance, reliability, and bias mitigation in realistic operational contexts. This includes establishing minimum practices such as completing an AI impact assessment, testing for equity and nondiscrimination, and specifying a precise threshold for human review and intervention.

3. Advance international cooperative governance by engaging the civilian AI community, the defense industry, and policymakers to develop global norms and best practices for managing dual-use AI risks and military applications. Proactive measures must also be developed to counter state-sponsored misuse of AI for information harms, such as digital harassment, censorship, and voter suppression.

ADDITIONAL EVIDENCE

Tools and techniques addressing the previous section (weaponization by criminals) could also be used to prevent weaponization of AI technologies by states that lack strong AI research labs of their own. But what about more capable states? The elephant in the room here is that AI can be used in war. Some argue that, ideally, mechanical drones could be pitted against one another in casualty-free battles, allowing nations to determine who would win a war of lethal force without actually killing any human beings. If taken no further, this would be a major improvement over current warfare practices. However, these capabilities are not technologically far from enabling the mass killing of human beings by weaponized drones. Escalation of such conflicts could lead to unprecedented violence and death, as well as widespread fear and oppression among populations targeted by mass killings.