2. Privacy & Security · 1 - Pre-deployment

Software Supply Chains

The software development toolchain for LLMs is complex, and compromised components within it can introduce threats into the resulting model.

Source: MIT AI Risk Repository (mit22)

ENTITY: 2 - AI

INTENT: 2 - Unintentional

TIMING: 1 - Pre-deployment

Risk ID: mit22

Domain lineage: 2. Privacy & Security (186 mapped risks)

Subdomain: 2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Rigorous Component and Data Provenance Assurance: Enforce strict vetting protocols for all external dependencies (pre-trained models, adapters, and third-party libraries) and training data sources. This includes mandatory cryptographic integrity checks (e.g., file hashes or digital signatures) and detailed provenance logging to ensure components originate from verified suppliers and have not been subjected to unauthorized tampering or data poisoning prior to system integration.

2. Mandatory Supply Chain Inventory and Vulnerability Management: Establish and maintain a comprehensive Software Bill of Materials (SBOM) for all integrated software and model artifacts, detailing version and patch status. Implement automated dependency scanning and patching policies to proactively identify and mitigate vulnerable or outdated components within the LLM ecosystem, aligning with established security frameworks.

3. Isolated Staging and Continuous Adversarial Vetting: Mandate the deployment of all external or newly updated components within isolated, sandboxed environments for pre-production security and behavioral validation. This must be complemented by continuous monitoring for anomalous activity and scheduled adversarial robustness testing (AI Red Teaming) to detect latent backdoors or emergent malicious functionality that may manifest during runtime.
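The integrity-check step in the first strategy can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the provenance log, artifact names, and payloads are hypothetical, and a real pipeline would verify digital signatures from the supplier in addition to digests.

```python
import hashlib
import hmac

# Hypothetical provenance log: artifact name -> expected SHA-256 digest,
# as recorded at ingestion from the verified supplier. Entries here are
# illustrative placeholders, not real model artifacts.
EXPECTED_DIGESTS = {
    "adapter-v1.bin": hashlib.sha256(b"trusted adapter bytes").hexdigest(),
}

def verify_artifact(name: str, payload: bytes) -> bool:
    """Return True only if the artifact's digest matches the provenance log."""
    expected = EXPECTED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown component: reject by default
    actual = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes via timing.
    return hmac.compare_digest(actual, expected)

# The original payload passes; a tampered (e.g., poisoned) payload fails.
print(verify_artifact("adapter-v1.bin", b"trusted adapter bytes"))   # True
print(verify_artifact("adapter-v1.bin", b"poisoned adapter bytes"))  # False
```

Rejecting unknown artifact names by default reflects the "verified suppliers only" requirement: a component absent from the provenance log is treated as untrusted rather than waved through.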