Avenues for exploiting user trust and accessing more private information
Anticipated risk: In conversation, users may reveal private information that would otherwise be difficult to access, such as opinions or emotions. Capturing such information may enable downstream applications that violate privacy rights or cause harm to users, e.g. by enabling more effective recommendation of addictive applications. In one study, participants who interacted with a 'human-like' chatbot disclosed more private information than those who interacted with a 'machine-like' chatbot [87].
ENTITY
3 - Other
INTENT
2 - Unintentional
TIMING
2 - Post-deployment
Risk ID
mit224
Domain lineage
5. Human-Computer Interaction
5.1 > Overreliance and unsafe use
Mitigation strategy
1. Implement a zero-retention architecture and differential privacy mechanisms to minimize the collection, and maximize the obfuscation, of sensitive subjective data such as private opinions and emotional states, so that only the minimal, non-identifiable data essential to the immediate function of the service is processed (see the sketch after this list).
2. Calibrate conversational agent (CA) design to explicitly manage user expectations, prioritizing functional clarity and transparency over anthropomorphic cues that have been shown to induce unwarranted trust and involuntary over-disclosure of private information. This includes clear, upfront communication of the CA's identity as a non-human entity and of the limits of data confidentiality.
3. Establish and rigorously enforce a data governance policy that formally prohibits using conversationally disclosed private information for secondary, harmful applications, such as personalized recommendations for addictive services or unconsented data transfer, thereby preventing downstream privacy violations.
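To make the differential-privacy element of mitigation 1 concrete, the sketch below applies the standard Laplace mechanism to an aggregate count of disclosure events, releasing a noised statistic instead of the raw figure. This is a minimal illustration, not an implementation from any cited system; the private_count helper, the epsilon value, and the example data are all hypothetical.

import random


def laplace_noise(scale: float) -> float:
    """Sample zero-centred Laplace noise with the given scale.

    The difference of two i.i.d. exponential variables with rate
    1/scale is Laplace-distributed with that scale.
    """
    rate = 1.0 / scale
    return random.expovariate(rate) - random.expovariate(rate)


def private_count(values: list[bool], epsilon: float) -> float:
    """Release a count satisfying epsilon-differential privacy.

    A counting query has L1 sensitivity 1 (adding or removing one
    user changes the count by at most 1), so Laplace noise with
    scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(values)
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical aggregate: how many sessions contained a self-disclosed
# emotional state. Only the noised count would leave the service.
disclosed = [True, False, True, True, False, True]
print(private_count(disclosed, epsilon=0.5))

Smaller epsilon values give stronger privacy at the cost of noisier counts; a zero-retention architecture would additionally ensure the underlying per-session records are discarded once the aggregate is released.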
ADDITIONAL EVIDENCE
In customer service chatbots, users more often accepted "intrusiveness" from chatbots perceived as more helpful and useful [183], suggesting that higher perceived competence of a CA may lead users to accept greater privacy intrusion. Note that these risks manifest even when users are fully aware that the CA is not human: the particular intersection of seeming human-like while being recognised as an artificial agent can lead people to share intimate details more openly, because they are less afraid of social judgement [139].