7. AI System Safety, Failures, & Limitations

Implementation

This is the risk of system failure due to code implementation choices or errors.

Source: MIT AI Risk Repository (mit194)

ENTITY: 1 - Human

INTENT: 2 - Unintentional

TIMING: 1 - Pre-deployment

Risk ID: mit194

Domain lineage

7. AI System Safety, Failures, & Limitations (375 mapped risks)

7.0 > AI system safety, failures, & limitations

Mitigation strategy

1. **Implement Automated Code and Quality Assurance Testing**: Mandate the deployment of rigorous technical controls, including static and dynamic analysis tools and comprehensive unit/integration testing, integrated directly into the Continuous Integration/Continuous Deployment (CI/CD) pipeline to detect and prevent semantic, memory-related, and security vulnerabilities prior to deployment.
2. **Establish Continuous Open-Source Software (OSS) Security Governance**: Institute automated software composition analysis (SCA) and dependency monitoring to identify all third-party components, scan for known Common Vulnerabilities and Exposures (CVEs), and ensure timely application of security patches and updates for all OSS packages to mitigate risks associated with external dependencies.
3. **Enforce Secure Coding Practices and Peer Review Mechanisms**: Standardize secure coding practices, such as strict input validation, proper bounds checking, and the use of memory-safe functions/languages. Complement this with required, high-quality peer code reviews to proactively identify and rectify implementation flaws and deviations from design specifications.
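The secure-coding practices in item 3 can be illustrated with a minimal Python sketch. The function names here are hypothetical examples, not part of the repository or any specific codebase; they simply show strict input validation and explicit bounds checking applied before untrusted data reaches the rest of the system:

```python
def parse_confidence_score(raw: str) -> float:
    """Validate an externally supplied confidence score.

    Rejects non-numeric input and enforces range bounds rather than
    trusting the caller, per the strict-input-validation practice.
    """
    try:
        value = float(raw)
    except ValueError as exc:
        raise ValueError(f"not a number: {raw!r}") from exc
    if not (0.0 <= value <= 1.0):
        raise ValueError(f"out of range [0, 1]: {value}")
    return value


def safe_lookup(items: list, index: int):
    """Bounds-checked list access: fail loudly on an invalid index
    instead of relying on implicit behavior such as negative indexing."""
    if not 0 <= index < len(items):
        raise IndexError(f"index {index} out of bounds for length {len(items)}")
    return items[index]
```

Checks like these are exactly the kind of implementation detail that static analysis and peer review (items 1 and 3) are meant to catch when they are missing.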

ADDITIONAL EVIDENCE

A design may be imperfectly realized due to the organization's coding, code review, or code integration practices, leading to bugs in the system's implementation. Additionally, the rise of open-source software packages maintained by volunteers (e.g., PyTorch) brings with it a non-trivial chance of bugs being introduced into the system without the developers' knowledge.
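The dependency risk described above is commonly addressed with software composition analysis. As a hedged sketch (the helper below is hypothetical, not a repository tool), a first step is simply detecting third-party requirements that are not pinned to an exact version and therefore can change silently between builds:

```python
def find_unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines not pinned to an exact version (==).

    Unpinned dependencies can be upgraded implicitly, so a bug in an
    upstream release can enter the system without developer knowledge.
    """
    unpinned = []
    for line in requirements:
        spec = line.split("#", 1)[0].strip()  # drop trailing comments
        if spec and "==" not in spec:
            unpinned.append(spec)
    return unpinned
```

In practice, dedicated SCA tools (e.g., pip-audit for Python) go further and match pinned versions against known CVE databases, as outlined in mitigation item 2.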