6. Socioeconomic and Environmental (Post-deployment)

Mobility

Despite the promise of streamlined travel, AI also raises concerns about who is liable in the event of an accident and which ethical principles autonomous transportation agents should follow when making decisions with a potentially dangerous impact on humans.

Source: MIT AI Risk Repository (mit621)

ENTITY

2 - AI

INTENT

2 - Unintentional

TIMING

2 - Post-deployment

Risk ID

mit621

Domain lineage

6. Socioeconomic and Environmental

262 mapped risks

6.5 > Governance failure

Mitigation strategy

1. Establish a clear and comprehensive legal and policy framework to address liability in autonomous vehicle (AV) accidents. This framework must move beyond traditional tort principles (negligence) to formally assign responsibility, particularly for Level 4 and 5 systems, which may shift liability to the manufacturer, software developer, or fleet operator, and define the role of product liability and owner maintenance obligations.

2. Develop and mandate ethical design standards and a transparent assurance framework for AV decision-making algorithms. This requires programming AVs to adhere to a duty of care and legally permissible risk-balancing principles, ensuring that decisions in unavoidable dilemma situations are non-discriminatory and align with the public's social contract around driving.

3. Require specialized insurance mechanisms and enhanced data forensics to support liability resolution. This includes mandating appropriate insurance coverage (e.g., product liability, cyber insurance, or Passenger Autonomy Coverage) and necessitating hyper-detailed, tamper-proof data logging from the Original Equipment Manufacturer (OEM) to precisely document the vehicle's operational mode and system performance during an incident.
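The tamper-proof logging called for in point 3 is commonly achieved by hash-chaining log entries, so that altering any past record invalidates every hash after it. Below is a minimal sketch of that idea; the `TamperEvidentLog` class and the record fields (`mode`, `speed_kph`) are illustrative assumptions, not any OEM's actual logging format.

```python
import hashlib
import json


class TamperEvidentLog:
    """Append-only log where each entry stores a hash over the previous
    entry's hash plus its own payload. Editing any earlier record breaks
    the chain, which verify() detects. Illustrative sketch only."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append(
            {"record": record, "prev_hash": prev_hash, "hash": entry_hash}
        )

    def verify(self) -> bool:
        """Recompute every hash in order; any mismatch means tampering."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True
```

In a real deployment the chain head would additionally be signed and anchored externally (e.g., written to secure hardware or a third party), since an attacker who can rewrite the whole log could otherwise recompute every hash.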