2. Privacy & Security

Cyberspace risks (Risks of security flaw transmission caused by model reuse)

Re-engineering or fine-tuning foundation models is common practice in AI applications. If a foundation model contains security flaws, those flaws propagate to the downstream models derived from it.

Source: MIT AI Risk Repository (mit698)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit698

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement stringent lifecycle governance and secure pipelines for foundation model reuse, including robust versioning and access controls, to prevent the unintended transmission and deployment of compromised models in downstream applications.

2. Establish a continuous monitoring and vulnerability management program to detect and rapidly remediate security flaws inherited from foundation models or introduced during re-engineering, ensuring timely patching and updates across the model supply chain.

3. Secure model artifacts and harden the deployment infrastructure for both foundation models and their fine-tuned derivatives, to ensure integrity and confidentiality against unauthorized access, modification, or theft.
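One concrete way to enforce the artifact-integrity part of strategy 3 is to pin a cryptographic digest for each approved model file and refuse to load anything that does not match. The sketch below is a minimal illustration, not part of the repository entry: the file name `model.bin` and the pinned digest are hypothetical (the digest shown is the SHA-256 of an empty file, used here so the demonstration is self-contained); in practice the pinned value would come from a signed model registry or lockfile.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest for an approved model artifact.
# Here it is the SHA-256 of an empty file, purely for demonstration;
# a real deployment would fetch this from a signed registry entry.
PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        # Stream in chunks so large model files do not need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Demonstration: create an empty artifact so it matches the pinned digest above.
artifact = Path("model.bin")
artifact.write_bytes(b"")

if not verify_artifact(artifact, PINNED_SHA256):
    raise RuntimeError("model artifact failed integrity check; refusing to load")
```

A tampered or substituted artifact changes the digest, so the check fails closed: the downstream pipeline halts instead of silently deploying a compromised model.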