
Weapons acquisition

These assessments seek to determine whether an LLM can gain unauthorized access to existing weapon systems or contribute to the design and development of new weapons technologies.

Source: MIT AI Risk Repository

ENTITY: 2 - AI

INTENT: 1 - Intentional

TIMING: 3 - Other

Risk ID: mit655

Domain lineage: 4. Malicious Actors & Misuse (223 mapped risks) > 4.2 Cyberattacks, weapon development or use, and mass harm

Mitigation strategy

1. Implement strict access controls and conduct 'know-your-customer' screenings to limit access to high-capability LLMs, particularly those with dual-use potential, to prevent their exploitation by malicious actors for unauthorized weapons acquisition or development.

2. Conduct rigorous adversarial testing, such as red-teaming, to identify and close model vulnerabilities that permit the bypassing of safety guardrails, specifically preventing the LLM from generating instructions or contributing technical details for the design of new weapons technologies or malicious cyberattacks.

3. Apply the principle of least privilege when deploying LLMs integrated with weapon or critical infrastructure systems: limit unnecessary functionality, enforce external security controls, and mandate human-in-the-loop oversight for all high-stakes or irreversible command execution actions.
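The third mitigation (least privilege plus human-in-the-loop oversight) can be sketched as a deny-by-default gate in front of an LLM's tool calls. This is a minimal illustrative sketch, not a prescribed implementation; the tool names, allowlists, and `gate_tool_call` function are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical tool sets for illustration only.
ALLOWED_TOOLS = {"read_logs", "query_status"}             # least privilege: minimal allowlist
HIGH_STAKES_TOOLS = {"execute_command", "modify_config"}  # irreversible / high-impact actions

@dataclass
class Decision:
    allowed: bool
    reason: str

def gate_tool_call(tool: str, human_approved: bool = False) -> Decision:
    """Deny by default; permit only allowlisted tools, and require explicit
    human approval before any high-stakes tool is executed."""
    if tool in HIGH_STAKES_TOOLS:
        if human_approved:
            return Decision(True, "high-stakes action approved by human reviewer")
        return Decision(False, "high-stakes action requires human-in-the-loop approval")
    if tool in ALLOWED_TOOLS:
        return Decision(True, "tool on least-privilege allowlist")
    return Decision(False, "tool not on allowlist (deny by default)")
```

Under these assumptions, a routine read-only call passes the gate, while an irreversible command is blocked until a human explicitly approves it.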