Why Consumers Prefer Chatbots' Simulated Empathy: Revisiting the Empathy‐Honesty Trade‐Off

Journal of Consumer Behaviour

Abstract

["Journal of Consumer Behaviour, EarlyView. ", "\nABSTRACT\nAdvances in artificial intelligence have enabled chatbots to simulate empathic responses, yet it remains unclear how users evaluate such expressions compared to humans, particularly when empathy is known or believed to be simulated. We tested competing hypotheses: Participants favor chatbots’ false empathy since they expect chatbots to experience less empathy than humans, or favor humans’ false empathy because they expect chatbots to be more honest and transparent than humans. Across four experiments (N = 1435), participants imagined experiencing positive or negative events in various contexts (e.g., work, education, lottery) and shared these events with either chatbot or human agents. Agents responded with false empathic, neutral, or no reactions, after which participants reported their overall impressions, willingness to interact, and underlying psychological mechanisms. Results consistently showed greater preference for chatbots’ simulated empathy over humans’, driven by participants’ expectations that chatbots experience less empathy rather than by expectations of greater honesty. These findings extend expectancy violations theory and prosocial lie literature to human‐AI interaction and indicate that transparent disclosure of algorithmically generated emotions does not diminish engagement. The study offers actionable insights for designing ethically responsible empathic chatbots in consumer and social contexts.\n"]