Chatbots in Disguise: How Non-Disclosure of Chatbot Identity Spurs Strategic Behavior

As organizations increasingly turn to Artificial Intelligence (AI) to address customer complaints, understanding the dynamics of user interactions with AI and human agents becomes imperative. In this study, we investigate customers’ strategic behavior when they interact with agents carrying different identity cues on a food delivery platform. We conduct two two-round lab experiments in which participants are randomly assigned to a chatbot-based agent, a ChatGPT-based agent, a human agent, or an agent whose identity is not disclosed. We observe significant strategic behavior and decreased satisfaction only in the non-disclosure group. Participants in this group present a unique paradox: although they report experiencing fewer negative emotions, their expressions are more negative and their satisfaction ratings are the lowest of all groups. Our findings contribute to the growing body of research on AI-user interactions and emotion regulation, revealing intriguing dynamics when the agent’s identity is undisclosed. This work offers valuable insights for organizations considering AI adoption in customer service, highlighting potential challenges and implications for user experience, customer satisfaction, and solution acceptance.