7. AI System Safety, Failures, & Limitations

AI Development

LLMs can build new AI systems from scratch, adapt existing ones for extreme risks, and improve productivity in dual-use AI development when used as assistants.

Source: MIT AI Risk Repository (mit663)

ENTITY

2 - AI

INTENT

1 - Intentional

TIMING

3 - Other

Risk ID

mit663

Domain lineage

7. AI System Safety, Failures, & Limitations

375 mapped risks

7.2 > AI possessing dangerous capabilities

Mitigation strategy

1. Implement Extreme Security Protocols

Institute advanced, forward-looking security research and development to defend model weights and confidential development environments against sophisticated, well-resourced attacks, treating these digital assets as critical geopolitical resources.

2. Restrict Access and Enforce Legal Liability for Dangerous Capabilities

Limit access to foundation models exhibiting high-consequence dual-use capabilities (e.g., biological or cyber) by requiring controlled interactions through cloud services and mandatory know-your-customer screenings. Additionally, enforce strict legal responsibility or a strict liability regime on developers for potential misuse or failure, to incentivize safer practices.

3. Mandate Proactive Dual-Use Risk Assessment and Capability Removal

Conduct pre-deployment evaluations using standardized benchmarks and independent adversarial testing (red-teaming) to assess foundation models for susceptibility to intentional abuse. Proactively remove or restrict identified dangerous capabilities (such as biological research or certain code-generation functions) from models intended for general or open-source deployment.