Type 4: Willful indifference
As a side effect of pursuing a primary goal such as profit or influence, AI creators may willfully allow their systems to cause widespread societal harms, including pollution, resource depletion, mental illness, misinformation, and injustice.
ENTITY
1 - Human
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit04
Domain lineage
6. Socioeconomic and Environmental
6.4 > Competitive dynamics
Mitigation strategy
1. Establish a formal, mandatory AI governance framework led by executive leadership and the Board of Directors, tasked with defining and enforcing ethical principles, risk tolerance thresholds, and internal compliance checks throughout the AI lifecycle.
2. Implement a "Safety-by-Design" mandate, prioritizing AI alignment mechanisms such as conservative deployment policies and technical safeguards (e.g., interpretability and "deep ignorance") to proactively minimize the risk of widespread societal harm.
3. Support and comply with regulatory regimes that impose clear legal liability, particularly strict liability, on AI providers for failing to adopt state-of-the-art preventative design measures, thereby introducing external pressure and financial disincentives against willful indifference.
ADDITIONAL EVIDENCE
All of the potential harms in the previous sections become more likely if the creators of AI technology are unconcerned about its moral consequences. Even if some employees of the company detect a risk of impacts that are bigger than expected (Type 2) or worse than expected (Type 3), it may be quite difficult to institute a change if the company is already profiting greatly from its current strategy. Reform may only come if there is some chance of exposure or intervention from outside the company.