4. Malicious Actors & Misuse

Catastrophic risk due to autonomous weapons programmed with dangerous targets

AI could enable autonomous vehicles, such as drones, to be used as weapons. Such threats are often underestimated.

Source: MIT AI Risk Repository (mit627)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit627

Domain lineage

4. Malicious Actors & Misuse

223 mapped risks

4.2 > Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Establish a new, legally binding international instrument that prohibits Lethal Autonomous Weapon Systems (LAWS) which inherently operate without meaningful human control or that are designed to target human beings. Concurrently, all non-prohibited autonomous weapon systems must be strictly regulated to ensure necessary and appropriate human judgment is maintained over the use of force, thereby upholding International Humanitarian Law (IHL) and International Human Rights Law (IHRL) principles.

2. Mandate the implementation of rigorous, pre- and post-deployment accountability frameworks and technical safeguards. This includes comprehensive testing to minimize the probability of unintended engagements, establishing clear lines of legal and operational responsibility for wrongful acts or failures, and actively identifying and mitigating encoded algorithmic bias to prevent disproportionate harm to marginalized groups.

3. Promote global transparency measures and international cooperation in military AI development to mitigate the risk of an arms race and proliferation. This involves curtailing the transfer of advanced autonomous weapons technology to non-state actors and prioritizing the open, civilian-focused development of AI to prevent undue national security restrictions that could impede responsible safety research.