2. Privacy & Security (Pre-deployment)

Deep Learning Frameworks

LLMs are built on deep learning frameworks, and numerous vulnerabilities in these frameworks have been disclosed in recent years. Among those reported in the past five years, the three most common types are buffer overflows, memory corruption, and inadequate input validation.
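As an illustration of the input-validation class of flaw, the sketch below checks an untrusted tensor shape before it reaches native framework code. All limits and the NHWC layout assumption are hypothetical, not taken from the repository entry; the point is that unchecked negative or oversized dimensions are a common trigger for overflow and out-of-bounds bugs inside framework internals.

```python
# Hypothetical limits for an image-style model input (illustrative only).
MAX_DIM = 4096             # reject absurdly large height/width
ALLOWED_CHANNELS = {1, 3}  # grayscale or RGB
MAX_ELEMENTS = 50_000_000  # guard against memory exhaustion

def validate_input_shape(shape):
    """Return True only if an untrusted tensor shape is safe to allocate.

    Skipping checks like these is the root cause of many disclosed
    input-validation CVEs: a negative or huge dimension passed straight
    into native code can cause integer overflow or out-of-bounds writes.
    """
    if len(shape) != 4:  # expect an NHWC batch (assumed layout)
        return False
    n, h, w, c = shape
    if any(not isinstance(d, int) or d <= 0 for d in shape):
        return False     # non-integer, zero, or negative dimension
    if h > MAX_DIM or w > MAX_DIM or c not in ALLOWED_CHANNELS:
        return False
    if n * h * w * c > MAX_ELEMENTS:
        return False     # total allocation too large
    return True
```

A caller would run this gate before any framework call, rejecting the request rather than trusting the framework to handle malformed shapes gracefully.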

Source: MIT AI Risk Repository, risk mit21

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

1 - Pre-deployment

Risk ID

mit21

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

- Priority 1: Pre-deployment Vulnerability Assessment and Remediation. Conduct comprehensive cybersecurity risk assessments, vulnerability scanning, and focused penetration testing on the deep learning framework and the LLM implementation to proactively identify and mitigate known software flaws, such as buffer overflows, memory corruption, and inadequate input validation, prior to system deployment.
- Priority 2: Rigorous Framework and System Configuration Management. Establish a strict process for configuration management and patch deployment so that all deep learning frameworks, dependencies, and underlying operating systems run the latest securely configured and patched versions, addressing disclosed Common Vulnerabilities and Exposures (CVEs).
- Priority 3: Architecturally Embed Strong Technical Security Controls. Implement robust technical controls, including granular input validation and data sanitization routines at all interfaces, and adopt Zero Trust Architecture (ZTA) principles to reduce the attack surface through least-privilege access, micro-segmentation, and continuous monitoring of network activity.
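The patch-management step in Priority 2 can be sketched as a minimal version gate: compare an installed framework version against a minimum known-patched version and fail closed otherwise. The package names and version floors below are placeholders, not real CVE remediation data.

```python
# Hypothetical minimum patched versions (placeholders, not real CVE data).
MIN_PATCHED = {
    "torch": (2, 2, 0),
    "tensorflow": (2, 15, 1),
}

def parse_version(version):
    """Parse 'X.Y.Z' into a comparable tuple.

    Strips common suffixes such as '+cpu' or 'rc0'; a full deployment
    would use a proper version parser instead of this simplification.
    """
    core = version.split("+")[0].split("rc")[0]
    return tuple(int(p) for p in core.split(".")[:3] if p.isdigit())

def is_patched(package, installed_version):
    """True only if the installed framework meets its patched-version floor."""
    minimum = MIN_PATCHED.get(package)
    if minimum is None:
        return False  # unknown package: fail closed rather than assume safe
    return parse_version(installed_version) >= minimum
```

A CI job could run such a check over the resolved dependency set and block deployment when any framework falls below its patched floor; real pipelines would source the floors from an advisory feed such as the NVD rather than a hard-coded table.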