Privacy
Face recognition technologies and related systems pose significant privacy risks [47]. For example, we must consider ethical questions such as: what data is stored, for how long, who owns the stored data, and can it be subpoenaed in legal cases [42]? We must also consider whether a human will remain in the loop when decisions that rely on private data are made, as in the case of loan decisions [37].
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit109
Domain lineage
2. Privacy & Security
2.1 > Compromise of privacy by leaking or correctly inferring sensitive information
Mitigation strategy
1. Establish a Comprehensive Regulatory and Legal Framework: Enact federal or international legislation that supersedes existing patchwork regulations, mandating strict accountability, purpose limitation, and transparency for all Facial Recognition Technology (FRT) deployments. This includes requiring judicial approval for law enforcement surveillance and a mandatory licensing system for FRT providers.
2. Mandate Algorithmic Fairness and Independent Bias Audits: Require all FRT systems to be trained on demographically representative datasets to mitigate documented racial and gender biases. Implement mandatory, independent, third-party audits and continuous fairness testing across all demographic subgroups before and after deployment to prevent discriminatory outcomes, particularly in high-stakes contexts such as law enforcement and credit decisions.
3. Enforce Data Minimization, Opt-in Consent, and Robust Security: Adopt the principle of data minimization, strictly limiting the collection, use, and retention of biometric data to what is necessary for a specific, stated purpose. Standardize and require clear, informed, opt-in consent for all non-essential and private-sector FRT applications. Simultaneously, fortify security by implementing end-to-end encryption and irreversible privacy-preserving techniques for all stored biometric data.