2. Privacy & Security
2 - Post-deployment

Insufficient Security Measures

Malicious actors can exploit weaknesses in AI algorithms to alter results, potentially causing tangible real-world harm. It is also vital to safeguard privacy and handle data responsibly, particularly given AI's significant data needs. Balancing the extraction of valuable insights with the preservation of privacy is a delicate task.

Source: MIT AI Risk Repository (mit474)

ENTITY

1 - Human

INTENT

1 - Intentional

TIMING

2 - Post-deployment

Risk ID

mit474

Domain lineage

2. Privacy & Security

186 mapped risks

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Embed Privacy by Design (PbD) principles into the entire AI system lifecycle, emphasizing data minimization, robust encryption, and role-based access controls for all data, to proactively ensure adherence to privacy regulations and uphold user data integrity.

2. Implement a multi-layered defense strategy against adversarial manipulation, including adversarial training during model development to enhance model robustness, and input validation and anomaly detection at the inference stage to neutralize malicious inputs.

3. Utilize Explainable AI (XAI) frameworks and post-hoc explainability techniques to ensure model transparency, enabling the auditing of algorithmic decisions and the timely detection of unintended consequences, biases, or unauthorized model alterations post-deployment.
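The inference-stage input validation described in point 2 can be illustrated with a minimal sketch. This is a hypothetical example (the class name, threshold, and z-score approach are illustrative assumptions, not part of the repository entry): a simple statistical gate that flags inputs deviating sharply from the training distribution before they reach the model.

```python
import numpy as np

class InputAnomalyGate:
    """Illustrative inference-stage gate: flags inputs that deviate
    sharply from the training data distribution (per-feature z-score check)."""

    def __init__(self, train_data: np.ndarray, z_threshold: float = 4.0):
        # Per-feature statistics computed from trusted training data
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-8  # avoid division by zero
        self.z_threshold = z_threshold

    def is_anomalous(self, x: np.ndarray) -> bool:
        # Flag the input if any feature is an extreme outlier
        z_scores = np.abs((x - self.mean) / self.std)
        return bool(np.any(z_scores > self.z_threshold))

# Usage: screen inputs before passing them to the deployed model
rng = np.random.default_rng(0)
gate = InputAnomalyGate(rng.normal(0.0, 1.0, size=(1000, 8)))
print(gate.is_anomalous(np.zeros(8)))      # typical input -> False
print(gate.is_anomalous(np.full(8, 50.0)))  # extreme input -> True
```

Real deployments would pair such statistical checks with schema validation and learned anomaly detectors; this sketch only shows the basic pattern of rejecting out-of-distribution inputs at inference time.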

ADDITIONAL EVIDENCE

Guaranteeing adherence to privacy regulations and upholding user data privacy constitutes a vital component of AI security management. Incorporating privacy-by-design principles and employing anonymization techniques can aid in safeguarding sensitive personal information.