Other ethical risks
Although we have discussed a number of common risks posed by ML systems, we acknowledge that many other ethical risks exist, such as the potential for psychological manipulation, dehumanization, and exploitation of humans at scale.
ENTITY
1 - Human
INTENT
1 - Intentional
TIMING
2 - Post-deployment
Risk ID
mit204
Domain lineage
4. Malicious Actors & Misuse
4.1 > Disinformation, surveillance, and influence at scale
Mitigation strategy
1. Implement robust regulatory and legal frameworks that mandate algorithmic transparency, data minimization, and the operationalization of Privacy by Design (PbD) principles to systematically realign commercial incentives away from pervasive surveillance and psychological exploitation.
2. Develop and integrate hybrid content governance mechanisms, combining advanced technological detection methods with human oversight, to identify and mitigate the widespread proliferation of manipulative, polarizing, and outrageous content designed to increase engagement at the expense of public well-being.
3. Mandate and enforce comprehensive human rights due diligence across the ML lifecycle to prevent the exploitation and dehumanization of human labor. This includes ensuring fair working conditions, adequate compensation, and the right to organize for data workers ("ghost workers"), and restricting the use of algorithmic management systems that undermine worker autonomy.
ADDITIONAL EVIDENCE
This is aligned with the notion of surveillance capitalism, in which humans are treated as producers of data that are mined for insights into their future behavior [205]. These insights are often used to sell advertising exposures. This incentive mismatch between the public and companies can lead to design choices that are detrimental to the former but beneficial to the latter [206]. Examples include the fanning of religious tensions that increased offline violence [84, 193] and the encouragement of outrageous content to increase engagement [56].