2. Privacy & Security

Security

Encompasses vulnerabilities in AI systems that compromise their integrity, availability, or confidentiality. Security breaches could result in significant harm, ranging from flawed decision-making to data leaks. Of special concern is leakage of AI model weights, which could exacerbate other risk areas.

Source: MIT AI Risk Repository (mit166)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit166

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement confidential computing utilizing Trusted Execution Environments (TEEs) and runtime encryption to secure AI model weights and proprietary data during inference, thereby mitigating the risk of intellectual property theft and unauthorized data exfiltration from the core system.

2. Establish and enforce robust, fine-grained access controls, such as Role-Based Access Control (RBAC) and the principle of least privilege, across all AI assets, including training data repositories, model artifact storage, and inference logs, to strictly limit exposure to sensitive information.

3. Institute mandatory and continuous adversarial testing (red teaming and penetration testing) across the entire AI system lifecycle to proactively identify and fortify defenses against specific AI-centric attack vectors, including model inversion, data poisoning, and evasion/prompt injection vulnerabilities.
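
As an illustration of the access-control mitigation, the following is a minimal sketch of deny-by-default RBAC over AI assets. The roles, permission names, and helper function are hypothetical examples for this sketch, not part of any real framework or of the repository's guidance.

```python
# Minimal RBAC sketch: each role is granted only the permissions it
# strictly needs (least privilege); anything not granted is denied.
# Roles and permissions below are illustrative assumptions.

from enum import Enum, auto

class Permission(Enum):
    READ_TRAINING_DATA = auto()
    WRITE_MODEL_ARTIFACTS = auto()
    READ_INFERENCE_LOGS = auto()

# Explicit grant table: a role's set contains only its needed permissions.
ROLE_PERMISSIONS = {
    "data_engineer": {Permission.READ_TRAINING_DATA},
    "ml_engineer": {Permission.READ_TRAINING_DATA,
                    Permission.WRITE_MODEL_ARTIFACTS},
    "auditor": {Permission.READ_INFERENCE_LOGS},
}

def is_allowed(role: str, permission: Permission) -> bool:
    """Return True only if the role was explicitly granted the permission.
    Unknown roles fall through to an empty set, so access is denied."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Deny by default: ungranted permissions and unknown roles are rejected.
print(is_allowed("ml_engineer", Permission.WRITE_MODEL_ARTIFACTS))  # True
print(is_allowed("auditor", Permission.WRITE_MODEL_ARTIFACTS))      # False
print(is_allowed("intern", Permission.READ_INFERENCE_LOGS))         # False
```

The key design choice is that the grant table is the only source of authority: there is no wildcard or fallback role, so adding a new asset or role exposes nothing until a permission is explicitly granted.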