Liability and negligence
Liability and negligence are legal gray areas in artificial intelligence. If you leave your children in the care of a robotic nanny and it malfunctions, are you liable, or is the manufacturer [45]? This gray area can be further clarified through legislation at the national and international levels. For example, making the manufacturer responsible for defects in operation may give manufacturers an incentive to take safety engineering and machine ethics into consideration, whereas a failure to legislate in this area may result in negligently developed AI systems with greater associated risks.
ENTITY
2 - AI
INTENT
3 - Other
TIMING
2 - Post-deployment
Risk ID
mit121
Domain lineage
6. Socioeconomic and Environmental
6.5 > Governance failure
Mitigation strategy
1. Prioritize the enactment of comprehensive national and international legislation to explicitly define and apportion legal liability (e.g., manufacturer, deployer, or user) for harm caused by autonomous AI systems, which serves to incentivize proactive safety engineering and adherence to machine ethics.
2. Establish a robust organizational AI governance framework, including clear accountability mechanisms, continuous risk assessments, and required human oversight for high-risk applications, to demonstrate reasonable care and mitigate negligence exposure.
3. Mandate stringent transparency requirements for AI systems, ensuring clear and accurate disclosure to users regarding the system's intended function, capabilities, limitations, and the role of AI, thereby mitigating "failure to warn" and negligent misrepresentation claims.