2. Privacy & Security / 2 - Post-deployment

Software Vulnerabilities

Programmers routinely use code-generation tools such as GitHub Copilot during development, and these tools may introduce vulnerabilities into the resulting program.

Source: MIT AI Risk Repository (mit18)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit18

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Establish Mandatory Human Review and Governance Controls
Institute a policy requiring rigorous human review of all AI-generated code, particularly for components related to identity, authorization, and cryptographic logic. This process must treat all AI-suggested code as untrusted input. Additionally, define and enforce clear organizational policies that limit the use of generative AI tools in sensitive code areas, thus managing the attack surface and controlling tool sprawl.

2. Integrate Continuous and Comprehensive Vulnerability Scanning
Deploy continuous Static Application Security Testing (SAST) and Software Composition Analysis (SCA) across the entire Software Development Life Cycle (SDLC), from the developer's Integrated Development Environment (IDE) through the build pipeline. This ensures that vulnerable code patterns and dependencies potentially introduced by the AI model are detected and remediated before they can be committed to production.

3. Implement Developer Training on Secure Prompting and AI Risk
Develop and administer specialized training programs to enhance developer awareness of AI-specific security risks, such as insecure code generation and data leakage. The training must emphasize the practice of secure prompting—embedding security requirements and trust boundaries into the query—and reinforce the critical need for developers to validate and comprehend any code snippet before its acceptance or deployment.
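
To illustrate the idea of treating AI-suggested code as untrusted input, the following is a minimal, hypothetical sketch of a pre-review check: it parses a Python snippet with the standard-library `ast` module and flags a few well-known risky patterns (dynamic evaluation calls and `shell=True` arguments). It is not a substitute for a real SAST tool such as Bandit or Semgrep, which cover far broader rule sets; the function name, the rule list, and the sample snippet are all illustrative assumptions.

```python
import ast

# Illustrative rule list only; a production SAST tool applies many more rules.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[str]:
    """Return warnings for risky call patterns found in an untrusted snippet."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Direct calls to dynamic-evaluation builtins
        if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
            findings.append(f"line {node.lineno}: call to {func.id}()")
        # shell=True is a classic command-injection risk in subprocess calls
        for kw in node.keywords:
            if (kw.arg == "shell"
                    and isinstance(kw.value, ast.Constant)
                    and kw.value.value is True):
                findings.append(f"line {node.lineno}: shell=True argument")
    return findings

# Hypothetical AI-suggested snippet submitted for review
snippet = (
    "import subprocess\n"
    "subprocess.run(cmd, shell=True)\n"
    "result = eval(user_input)\n"
)

for warning in flag_risky_calls(snippet):
    print(warning)
```

A check like this could run as a pre-commit hook or IDE plugin, so flagged snippets are routed to the mandatory human review described above rather than merged directly.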