5. Human-Computer Interaction

Privacy concerns

Anthropomorphic AI assistant behaviours that promote emotional trust and encourage information sharing, implicitly or explicitly, may inadvertently increase a user's exposure to privacy harms (see Chapter 13). Lulled into a feeling of safety by interactions with a trusted, human-like AI assistant, users may unintentionally hand over their private data to a corporation, organisation or unknown actor. Once shared, access to the data may be impossible to withdraw, and in some cases the act of sharing personal information means losing control over one's own data. Personal data that has been made public may be disseminated or embedded in contexts far beyond the immediate exchange. Interference by malicious actors could also lead to widespread data leakage incidents or, most drastically, targeted harassment or blackmail attempts.

Source: MIT AI Risk Repository (mit398)

ENTITY: 3 - Other
INTENT: 1 - Intentional
TIMING: 2 - Post-deployment
Risk ID: mit398
Domain lineage: 5. Human-Computer Interaction (92 mapped risks) > 5.1 Overreliance and unsafe use

Mitigation strategy

1. Implement Design and Communication Controls to Calibrate Trust: systematically employ 'de-anthropomorphized' linguistic features, such as avoiding first-person pronouns, cognitive verbs (e.g., know, think, understand) and emotional cues, and explicitly disclose the AI's non-human identity in all critical interactions to foster *calibrated trust* based on function, not misplaced emotional or human-like dependence.

2. Establish Granular User Control and Affirmative Consent Mechanisms: institute clear, accessible controls that require affirmative opt-in for all data collection and use beyond essential service provision, and give users the immediate ability to access, correct, delete, or retract consent for any information shared, thereby mitigating the risk of inadvertent data relinquishment and supporting the *right to erasure*.

3. Mandate Data Minimization and Privacy-by-Design: enforce stringent data governance that adheres to the principle of data minimization, including default filtering of sensitive or personally identifiable information (PII) from chat inputs before processing, and use technological safeguards such as anonymization and encryption to limit the scope and severity of potential data leakage incidents.
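The input-filtering step in the third strategy can be sketched as follows. This is a minimal illustration, not the repository's prescribed implementation: the pattern set and placeholder names are hypothetical, and a production system would use a vetted PII-detection library covering many more categories and languages.

```python
import re

# Illustrative patterns only (hypothetical, not exhaustive).
# Order matters: more specific patterns (SSN) are applied before
# broader ones (PHONE) so a match is labelled with the right category.
PII_PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("SSN", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def redact_pii(text: str) -> str:
    """Replace matched PII spans with a category placeholder
    before the text is passed on for processing."""
    for label, pattern in PII_PATTERNS:
        text = pattern.sub(f"[{label}]", text)
    return text
```

Redacting before the model ever sees the input (rather than after generation) is what makes this a data-minimization control: the sensitive spans never enter logs, prompts, or training pipelines.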