High-risk AI system for business? Investigating the passive pathway of stereotype threat in AI-human interaction

Academic discussions and regulations concerning the ethical use of artificial intelligence (AI), such as the Digital Services Act, typically focus on preventing consumers from being exposed to the active pathways of AI-generated harm. This study contributes to the literature by investigating a passive pathway of potential harm, stereotype threat, in which consumers are adversely affected by AI because of their own perceptions, especially when AI agents have personified characteristics. In the context of educational services, we empirically investigate whether and to what extent stereotype threat undermines consumers' goal attainment when they interact with personified AI agents. We find that female consumers suffer from gender stereotype threat when interacting with AI agents personified as male in the STEM (science, technology, engineering, and mathematics) domain, where females are stereotypically expected to perform worse than males. Our findings suggest that companies introducing AI-based services should consider the potential negative effects arising from mismatches between consumers and AI agents along social dimensions. The results are robust across various settings, and we discuss the marketing implications for companies deploying AI agents in customer-facing frontline services.