2. Privacy & Security

Data-related (Difficulty filtering large web scrapes or large-scale web datasets)

Large-scale scraping of web data for training datasets increases vulnerability to data poisoning, backdoor attacks, and the inclusion of inaccurate or toxic data [76, 28, 48]. At this scale, filtering out these quality issues is very difficult, or trades off against significant data loss.

Source: MIT AI Risk Repository (mit1094)

ENTITY

1 - Human

INTENT

2 - Unintentional

TIMING

1 - Pre-deployment

Risk ID

mit1094

Domain lineage

2. Privacy & Security

2.2 > AI system security vulnerabilities and attacks

Mitigation strategy

1. Implement data validation and sanitization protocols that use statistical anomaly detection and clustering algorithms (e.g., DBSCAN) to identify and remove adversarial or spurious data points from the large-scale training corpus before model ingestion.

2. Establish data provenance tracking (data lineage) and use version control tooling (e.g., DVC) to maintain reproducible, clean baseline states of the training dataset, enabling immediate rollback to an uncompromised state once an integrity breach is detected.

3. Apply adversarial training during model development: strategically introduce synthesized adversarial examples into the training regimen so the model learns to correctly classify intentionally manipulated inputs.
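The clustering step in mitigation 1 can be sketched with a minimal pure-Python DBSCAN: points labelled as noise fall in no dense cluster and become candidates for removal from the corpus. The toy feature vectors and the `eps`/`min_pts` values below are illustrative assumptions, not part of the repository entry; a production pipeline would use a library implementation over real embedding features.

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise/outlier)."""
    labels = [None] * len(points)          # None = unvisited
    def region(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster_id = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = region(i)
        if len(seeds) < min_pts:           # too sparse: flag as noise
            labels[i] = -1
            continue
        labels[i] = cluster_id
        k = 0
        while k < len(seeds):              # expand the cluster outward
            j = seeds[k]; k += 1
            if labels[j] == -1:            # border point: absorb into cluster
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            more = region(j)
            if len(more) >= min_pts:       # j is a core point: keep expanding
                seeds.extend(more)
        cluster_id += 1
    return labels

# Toy "document embeddings": four similar records plus one anomalous point.
features = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
labels = dbscan(features, eps=0.5, min_pts=3)
suspect = [p for p, lab in zip(features, labels) if lab == -1]
```

Here the lone point far from the dense cluster is labelled noise and surfaced in `suspect` for review before ingestion.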
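Mitigation 3 can be illustrated with a toy sketch under simplifying assumptions: a one-feature logistic model trained on both clean inputs and FGSM-style perturbed copies (input nudged in the direction that raises the loss). The dataset and hyperparameters are invented for illustration; real adversarial training operates on full models via framework autograd.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def train(xs, ys, eps=0.3, lr=0.1, epochs=300):
    """Logistic regression fit on clean + adversarially perturbed inputs."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            # FGSM step: dL/dx for log-loss is (p - y) * w; move along its sign.
            x_adv = x + eps * sign((p - y) * w)
            for xi in (x, x_adv):          # fit clean and adversarial copies
                p = sigmoid(w * xi + b)
                w -= lr * (p - y) * xi     # dL/dw for log-loss
                b -= lr * (p - y)          # dL/db
    return w, b

# Toy separable data: negative inputs are class 0, positive are class 1.
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train(xs, ys)
```

Training on the perturbed copies pushes the decision boundary away from the data, so inputs shifted by up to `eps` are still classified correctly.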