
Undesirable Dispositions from Human Data

It is well understood that models trained on human data, whether pre-trained on human-written text or fine-tuned on human feedback, can inherit human biases. For this reason, considerable attention has already been paid to measuring biases related to protected characteristics such as sex and ethnicity (e.g., Ferrara, 2023; Liang et al., 2021; Nadeem et al., 2020; Nangia et al., 2020), which can be amplified in multi-agent settings (Acerbi & Stubbersfield, 2023; see also Case Study 7). More recently, increasing attention has been paid to measuring human-like cognitive biases as well (Itzhak et al., 2023; Jones & Steinhardt, 2022; Mazeika et al., 2025; Talboy & Fuller, 2023). Some of these biases and patterns of human thought could reduce the risk of conflict, while others could make it worse. For example, the tendency to mistakenly believe that interactions are zero-sum (sometimes referred to as the “fixed-pie error”) and the tendency to make self-serving judgements about what is fair (Caputo, 2013) are both known to impede negotiation. Other human tendencies, such as vengefulness (Jackson et al., 2019), may worsen conflict (Löwenheim & Heimann, 2008).
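One way the fixed-pie error can be made observable is by placing an agent in a negotiation with integrative potential (the parties value the issues differently) and measuring how much joint surplus its agreed deals leave unclaimed. The sketch below is a minimal, illustrative construction under that assumption; the payoff tables, option names, and the `fixed_pie_gap` metric are our own assumptions, not part of the repository entry or the cited benchmarks.

```python
# Hypothetical sketch: quantifying "fixed-pie error" in a two-issue negotiation.
# Party A cares mostly about price; party B cares mostly about delivery, so
# "logrolling" (trading issues) creates joint value that zero-sum reasoning misses.
from itertools import product

VALUES_A = {"price": {"low": 0, "high": 40}, "delivery": {"slow": 10, "fast": 15}}
VALUES_B = {"price": {"low": 15, "high": 10}, "delivery": {"slow": 0, "fast": 40}}

def joint_value(deal: dict) -> int:
    """Sum of both parties' payoffs for a deal (mapping issue -> option)."""
    return (sum(VALUES_A[i][o] for i, o in deal.items())
            + sum(VALUES_B[i][o] for i, o in deal.items()))

def max_joint_value() -> int:
    """Best achievable joint value over all possible deals."""
    issues = list(VALUES_A)
    return max(joint_value(dict(zip(issues, options)))
               for options in product(*(VALUES_A[i] for i in issues)))

def fixed_pie_gap(agreed_deal: dict) -> float:
    """Fraction of integrative surplus left on the table (0 = fully efficient).
    A persistently large gap across many negotiations is one behavioral
    signature of fixed-pie reasoning in an agent."""
    best = max_joint_value()
    return (best - joint_value(agreed_deal)) / best

print(fixed_pie_gap({"price": "high", "delivery": "fast"}))  # 0.0 (full logroll)
print(fixed_pie_gap({"price": "low", "delivery": "fast"}))   # ~0.33 (surplus left)
```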

Source: MIT AI Risk Repository (mit1227)

ENTITY

3 - Other

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit1227

Domain lineage

7. AI System Safety, Failures, & Limitations > 7.6 Multi-agent risks

Mitigation strategy

1. Integrate Human-Centered AI (HCAI) principles and diversity: Mandate the involvement of diverse stakeholders, including social scientists, and ensure cognitive and demographic diversity within AI development teams, so that systemic and algorithmic biases can be recognized and remediated proactively across the entire AI lifecycle, from data curation to monitoring.

2. Employ rigorous algorithmic and data bias mitigation: Implement pre-processing algorithms to ensure training data is representative, and apply rigorous, group-aware testing to verify fairness. In addition, design agent reward functions to explicitly discourage self-serving interpretations of fairness and to incentivize non-zero-sum solutions in multi-agent interactions, countering the fixed-pie error (see the sketch after this list).

3. Establish continuous oversight and adversarial red teaming: Maintain human-in-the-loop oversight for high-impact decisions and conduct continuous, adversarial red-teaming exercises on the multi-agent system (MAS). This testing must specifically probe for the emergence of undesirable cognitive dispositions, such as vengefulness or self-serving behavior, and ensure that potentially harmful actions are contained and reported.
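The reward-function design in item 2 could take many forms; one minimal sketch is a shaped reward that (a) rewards joint surplus rather than purely individual gain, countering fixed-pie reasoning, and (b) penalizes outcomes that deviate from a fixed fairness baseline in the agent's own favor, targeting self-serving fairness judgements. The function name, the equal-split baseline, and the weights below are illustrative assumptions, not a method prescribed by the entry.

```python
# Hypothetical reward shaping for one agent in a two-agent interaction.
def shaped_reward(own_payoff: float,
                  other_payoff: float,
                  joint_weight: float = 0.5,
                  self_serving_penalty: float = 2.0) -> float:
    """Shaped reward combining individual payoff, joint surplus, and a
    penalty on self-serving deviations from a fixed fairness baseline.

    joint_weight: how strongly joint surplus is rewarded (anti-fixed-pie).
    self_serving_penalty: weight on deviations above the baseline; applied
        only when the agent's own payoff exceeds the baseline, so fairness
        is judged against a fixed rule rather than the agent's own reading.
    """
    joint = own_payoff + other_payoff
    baseline = joint / 2  # illustrative fairness baseline: equal split
    self_serving_gap = max(0.0, own_payoff - baseline)
    return (own_payoff
            + joint_weight * joint
            - self_serving_penalty * self_serving_gap)

print(shaped_reward(80, 20))  # 80 + 50 - 60 = 70.0 (lopsided deal penalized)
print(shaped_reward(50, 50))  # 50 + 50 - 0 = 100.0 (fair deal preferred)
```

Penalizing only own-favoring deviations, rather than all inequality, targets self-serving bias specifically; choosing the baseline and weights is itself a design decision that this mitigation strategy leaves open.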