6. Socioeconomic and Environmental

Products Liability Law

Like manufactured products such as soda bottles, mechanized lawnmowers, pharmaceuticals, or cosmetics, generative AI models can be viewed as a new form of digital product developed by tech companies and deployed widely, with the potential to cause harm at scale. ... Products liability evolved because there was a need to analyze and redress the harms caused by new, mass-produced technological products. The situation facing society as generative AI affects more people in more ways will be similar to the technological changes of the twentieth century, with the rise of industrial manufacturing, automobiles, and new, computerized machines. The unsettled question is whether and to what extent products liability theories can sufficiently address the harms of generative AI. So far, the answers are mixed. In Rodgers v. Christie (2020), for example, the Third Circuit ruled that an automated risk model could not be considered a product for products liability purposes because it was not “tangible personal property distributed commercially for use or consumption.”176 However, one year later, in Gonzalez v. Google, Judge Gould of the Ninth Circuit argued that “social media companies should be viewed as making and ‘selling’ their social media products through the device of forced advertising under the eyes of users.”177 Several legal scholars have also proposed products liability as a mechanism for redressing harms of automated systems.178 As generative AI models grow more prominent and sophisticated, their harms—often generated automatically, without being directly prompted or edited by a human—will force courts to consider the role of products liability in redressing those harms, as well as how older notions of products liability, built around tangible, mechanized products and the companies that manufacture them, should be updated for today’s increasingly digital world.179

Source: MIT AI Risk Repository (mit530)

ENTITY: 1 - Human

INTENT: 3 - Other

TIMING: 2 - Post-deployment

Risk ID: mit530

Domain lineage: 6. Socioeconomic and Environmental (262 mapped risks) > 6.5 Governance failure

Mitigation strategy

1. Conduct rigorous safety assessments and employ defect mitigation strategies. Implement a "safety-by-design" methodology encompassing robust pre- and post-deployment safety assessments, verification, and validation testing. This requires considering and documenting reasonable alternative designs, employing engineering controls, and ensuring high-quality training data and model parameters to proactively mitigate both design and manufacturing-type defects, aligning product development with principles of products liability law.

2. Establish comprehensive transparency and failure-to-warn protocols. Systematically provide clear, explicit warnings and disclosures to end-users, detailing the AI system's involvement, specific capabilities, inherent limitations, and associated risks. This also requires transparent documentation of the AI's decision-making processes and clear instruction manuals, thereby proactively mitigating exposure to claims based on failure-to-warn and negligent misrepresentation.

3. Institute continuous risk monitoring and post-deployment governance. Develop and implement a continuous monitoring and auditing framework to track AI system performance, assess evolving risk profiles, and promptly detect and flag anomalies or emergent defects post-deployment. This entails periodic field performance assessments, tracking of use and misuse, and a clear, documented process for timely investigation, remediation, and disclosure of identified discrepancies or errors.