2. Privacy & Security > 2 - Post-deployment

Harmful code generation

Models might generate code that causes harm or unintentionally affects other systems.

Source: MIT AI Risk Repository, mit1305

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1305

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement security-by-design at the point of generation. Enforce secure-by-default system prompts and configurations in AI coding assistants that explicitly define security constraints, such as requiring input validation, using parameterized queries, and adhering to the principle of least privilege, thereby steering the model toward safer code output.

2. Integrate continuous, automated security validation across the software development lifecycle (SDLC). Embed automated application security testing (AST) tools, including static application security testing (SAST) and software composition analysis (SCA), directly into the IDE and the CI/CD pipeline so that vulnerabilities and non-compliant patterns in AI-generated code are detected in real time and immediately flagged for remediation.

3. Enforce mandatory human oversight and governance for critical code. Establish clear policies that restrict AI-generated code in high-risk areas (e.g., identity, authentication, cryptography, core business logic) and mandate peer review by experienced developers, who must treat all AI-generated code as untrusted until it has been reviewed and validated against established security and compliance standards.
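To make the first mitigation concrete, the sketch below contrasts how a parameterized query treats attacker-controlled input as data rather than SQL. It is a minimal illustration using Python's standard-library sqlite3 module; the table name, schema, and sample rows are hypothetical, not drawn from the repository entry.

```python
import sqlite3

def find_user(conn, username: str):
    # Parameterized query: the driver binds `username` as a value,
    # so it can never alter the structure of the SQL statement.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

print(find_user(conn, "alice"))        # legitimate lookup: one matching row
print(find_user(conn, "' OR '1'='1"))  # classic injection payload: matches nothing
```

This is exactly the kind of constraint a secure-by-default system prompt can impose on a coding assistant (e.g., "always use bound parameters, never string-format SQL"), and it is also a pattern that SAST tools from the second mitigation can check mechanically.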