Discrimination
Discrimination - Unfair or inadequate treatment of, or arbitrary distinction against, a person based on their race, ethnicity, age, gender, sexual orientation, religion, national origin, marital status, disability, language, or other protected characteristics.
ENTITY
3 - Other
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit958
Domain lineage
1. Discrimination & Toxicity
1.1 > Unfair discrimination and misrepresentation
Mitigation strategy
1. **Implement Rigorous Data Curation and Balancing Protocols.** Institute comprehensive data governance to ensure that all training and testing datasets are demonstrably diverse and representative of the full population intended to be served. This includes proactively identifying and augmenting data for underrepresented demographic groups through techniques such as oversampling, synthetic data generation, or reweighting to mitigate historical and selection biases prior to model training.
2. **Integrate Fairness-Aware Algorithmic Design.** Apply fairness-by-design principles by incorporating specialized algorithmic techniques to enforce non-discrimination during model development. This involves utilizing methods such as adversarial debiasing, fair representation learning, or imposing mathematical fairness constraints (e.g., equalized odds) directly into the optimization process to prevent the algorithm from basing predictions on protected characteristics, even those indirectly encoded.
3. **Establish Continuous Audit and Governance Mechanisms.** Establish a robust governance structure, including regular, independent legal and technical audits, to continuously monitor the AI system's outputs post-deployment. This framework must track key fairness metrics (e.g., statistical parity, disparate impact) and ensure human-in-the-loop oversight is maintained for high-stakes decisions, providing a mechanism for prompt identification, explanation (XAI), and remediation of any emergent discriminatory outcomes.
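The reweighting mentioned in strategy 1 can be sketched minimally as inverse-frequency sample weighting, where each record's training weight is inversely proportional to the size of its demographic group (function name and data are illustrative, not from the source):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency so that
    every demographic group contributes equal total weight (n / k)
    to the training loss."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

# Hypothetical data: group "B" is underrepresented 3:1
weights = inverse_frequency_weights(["A", "A", "A", "B"])
# Each "A" sample gets weight 2/3; the single "B" sample gets 2.0,
# so both groups sum to the same total weight.
```

In practice these weights would be passed to a training routine (e.g., a `sample_weight` argument) rather than used on their own.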
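The equalized-odds criterion named in strategy 2 requires that true-positive and false-positive rates match across groups. While enforcing it during optimization needs a constrained training method, the criterion itself can be checked post hoc with a sketch like this (all names and data are hypothetical):

```python
def _rate(preds, labels, groups, group, label_value):
    """Mean prediction for one group, restricted to one true label
    (label_value=1 gives TPR, label_value=0 gives FPR)."""
    vals = [p for p, y, g in zip(preds, labels, groups)
            if g == group and y == label_value]
    return sum(vals) / len(vals)

def equalized_odds_gap(preds, labels, groups, a, b):
    """Largest disparity in TPR or FPR between groups a and b;
    equalized odds holds when this gap is close to zero."""
    tpr_gap = abs(_rate(preds, labels, groups, a, 1)
                  - _rate(preds, labels, groups, b, 1))
    fpr_gap = abs(_rate(preds, labels, groups, a, 0)
                  - _rate(preds, labels, groups, b, 0))
    return max(tpr_gap, fpr_gap)
```

A nonzero gap indicates the model's error rates differ by group, which a fairness constraint in training would aim to drive toward zero.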
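The audit metrics listed in strategy 3 (statistical parity, disparate impact) are straightforward ratios of per-group selection rates. A minimal monitoring sketch, assuming binary favorable/unfavorable outcomes (function names are illustrative):

```python
def selection_rate(outcomes, groups, group):
    """Fraction of favorable outcomes (1s) within one group."""
    sel = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(sel) / len(sel)

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of selection rates; values below 0.8 flag potential
    disparate impact under the common four-fifths rule."""
    return (selection_rate(outcomes, groups, protected)
            / selection_rate(outcomes, groups, reference))

def statistical_parity_diff(outcomes, groups, protected, reference):
    """Difference in selection rates; 0 means statistical parity."""
    return (selection_rate(outcomes, groups, protected)
            - selection_rate(outcomes, groups, reference))
```

A post-deployment audit would compute these on a rolling window of production decisions and escalate to human review when thresholds are breached.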