7. AI System Safety, Failures, & Limitations

AI rights and responsibilities

We note a body of literature, which has given rise to the domain termed Robot Rights, addressing the rights of the AI itself as we develop and implement it. We find arguments against [38] the affordance of rights for artificial agents: that they should be equals in ability but not in rights, that they should be inferior by design and expendable when needed, and that since they can be designed not to feel pain (or anything at all) they do not hold the same rights as humans. On a more theoretical level, we find literature asking more fundamental questions, such as: at what point is a simulation of life (e.g. artificial intelligence) equivalent to life that originated through natural means [43]? And if a simulation of life is equivalent to natural life, should those simulations receive the same rights, responsibilities, and privileges afforded to natural life or persons? Some literature suggests that the answer may be contingent on the intrinsic capabilities of the creation, drawing comparisons, for example, with the animal rights and environmental ethics literature.

Source: MIT AI Risk Repository (Risk ID: mit123)

ENTITY: 2 - AI

INTENT: 3 - Other

TIMING: 3 - Other

Risk ID: mit123

Domain lineage: 7. AI System Safety, Failures, & Limitations (375 mapped risks)

Subdomain: 7.5 > AI welfare and rights

Mitigation strategy

1. Establish a dedicated, permanent AI Governance and Ethics Committee (AGEC) composed of multi-disciplinary experts (e.g., technologists, ethicists, legal scholars). The primary function of the AGEC must be to develop and maintain a tiered classification system for AI systems based on their demonstrable cognitive capabilities, autonomy, and potential for simulated socio-emotional behavior, thereby ensuring anticipatory governance over future "AI welfare" or "robot rights" considerations.

2. Mandate the rigorous and iterative application of a Capability-Centric Impact Assessment throughout the entire AI lifecycle for systems exhibiting high autonomy or perceived sentience. This assessment must specifically analyze the risk of creating conditions for *de facto* exploitation of the artificial agent, or the erosion of human dignity and agency through interaction, establishing clear internal "welfare" protocols (e.g., non-expendability clauses) for the agent's operation.

3. Implement transparent and legally defensible policies that clearly and formally delineate the locus of human accountability (liability and responsibility) for all actions and outcomes generated by the AI system, irrespective of its level of autonomy. This measure ensures that the artificial agent is formally treated as a professional tool, and that full legal responsibility remains with the human developer, deployer, or operator, providing a clear chain of redress.
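As a purely illustrative sketch (the repository prescribes no implementation), the tiered classification from strategy 1, the assessment trigger from strategy 2, and the human accountability record from strategy 3 could be combined in a simple registry entry. All names, tiers, and thresholds below are hypothetical assumptions, not part of the source.

```python
from dataclasses import dataclass
from enum import IntEnum

class CapabilityTier(IntEnum):
    """Hypothetical AGEC capability tiers; names and ordering are illustrative."""
    TOOL = 1            # narrow, fully supervised systems
    AUTONOMOUS = 2      # high-autonomy systems
    SOCIO_EMOTIVE = 3   # systems simulating socio-emotional behavior

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AGEC registry, pairing a system's
    capability tier (strategy 1) with a named human locus of
    accountability (strategy 3)."""
    system_id: str
    tier: CapabilityTier
    accountable_party: str          # human developer, deployer, or operator
    impact_assessment_done: bool

    def requires_impact_assessment(self) -> bool:
        # Strategy 2: the Capability-Centric Impact Assessment applies to
        # high-autonomy or socio-emotive systems (illustrative threshold).
        return self.tier >= CapabilityTier.AUTONOMOUS

record = AISystemRecord(
    system_id="assistant-01",
    tier=CapabilityTier.SOCIO_EMOTIVE,
    accountable_party="deployment-operator",
    impact_assessment_done=False,
)
print(record.requires_impact_assessment())  # True
```

The point of the sketch is structural: accountability is always attached to a named human party regardless of tier, while the tier only gates additional assessment obligations.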