7. AI System Safety, Failures, & Limitations

Models generating code with security vulnerabilities

Models can generate code, or coding suggestions, that contain security vulnerabilities. This occurs across LLM-based model families; notably, in more advanced models with superior coding performance, the tendency to produce insecure code is even more pronounced [26].

Source: MIT AI Risk Repository, risk mit1194

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 2 - Post-deployment

Risk ID: mit1194

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks) > 7.3 Lack of capability or robustness

Mitigation strategy

1. Establish continuous, automated security testing throughout the Software Development Lifecycle (SDLC). This includes integrating Static Application Security Testing (SAST) and Software Composition Analysis (SCA) into developer workflows (e.g., pull requests and continuous-integration pipelines) to detect and flag generated vulnerabilities and insecure dependencies in real time.

2. Employ security context injection via prompt engineering and model guardrails. This means augmenting the Large Language Model's (LLM) prompts with explicit security cues, organizational policies, and context-aware guidelines to steer the model toward generating code that adheres to secure coding practices and avoids common weaknesses.

3. Institute a mandatory human-in-the-loop (HITL) validation process. Developers treat all AI-generated code as untrusted input, ensuring rigorous human oversight, peer review, and validation of all generated logic, particularly input validation, authentication mechanisms, and critical system components.
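The second mitigation, security context injection, can be sketched in code. The following is a minimal illustration, not part of any specific LLM library's API: the guideline text and the `build_secure_prompt` helper are assumptions introduced here to show how organizational secure-coding policy might be prepended to a code-generation request before it is sent to a model.

```python
# Hedged sketch of security context injection via prompt engineering.
# SECURITY_GUIDELINES and build_secure_prompt are hypothetical names,
# not taken from the MIT AI Risk Repository or any vendor SDK.

SECURITY_GUIDELINES = """\
- Validate and sanitize all external input.
- Use parameterized queries; never build SQL by string concatenation.
- Do not hard-code credentials or secrets.
- Prefer vetted cryptographic libraries over custom implementations.
"""


def build_secure_prompt(task_description: str) -> str:
    """Prepend secure-coding policy to a code-generation request."""
    return (
        "You are a coding assistant. Follow these secure-coding rules:\n"
        f"{SECURITY_GUIDELINES}\n"
        f"Task: {task_description}\n"
        "Flag any requirement that cannot be met securely."
    )


if __name__ == "__main__":
    prompt = build_secure_prompt("Write a function that looks up a user by email.")
    print(prompt)
```

The augmented prompt would then be passed to the model in place of the raw task description, so that every generation request carries the organization's security context by construction rather than relying on individual developers to remember it.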