2. Privacy & Security

3 - Other

Risks from models and algorithms (Risks of stealing and tampering)

Core algorithm information, including parameters, structures, and functions, faces risks of theft, tampering, inversion attacks, and even backdoor injection. These can lead to infringement of intellectual property rights (IPR) and leakage of business secrets, as well as unreliable inference, erroneous decision outputs, and even operational failures.

Source: MIT AI Risk Repository (mit684)

ENTITY

3 - Other

INTENT

3 - Other

TIMING

3 - Other

Risk ID

mit684

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Prioritize the deployment of **API Rate Limiting** and **Output Obfuscation (e.g., Differential Privacy)** to restrict the data harvesting necessary for model extraction and inference attacks.
2. Mandate the use of **Adversarial Training** during the model development lifecycle to improve intrinsic robustness against evasion and, when combined with **Automated Data Validation**, mitigate poisoning and backdoor injection risks.
3. Enforce stringent **Role-Based Access Controls (RBAC)** for access to model parameters, code repositories, and inference endpoints, complemented by **Model Watermarking** to enable forensic tracing of intellectual property theft.
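As an illustration of the first mitigation, API rate limiting can be implemented with a token-bucket scheme that caps how many inference queries a client can issue, constraining the query volume model-extraction attacks depend on. The sketch below is a minimal, hypothetical example (the class name, capacity, and refill rate are illustrative assumptions, not part of the repository entry):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch).

    capacity: maximum burst of requests allowed at once.
    refill_rate: tokens (requests) replenished per second.
    """

    def __init__(self, capacity: int, refill_rate: float):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# A client bursting 100 requests against a bucket of 5 is throttled,
# limiting the data an attacker can harvest from the inference endpoint.
bucket = TokenBucket(capacity=5, refill_rate=1.0)
granted = sum(bucket.allow() for _ in range(100))
```

In a real deployment, per-client buckets (keyed by API token) and server-side enforcement would be needed; this sketch only shows the core admission logic.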