With growing interest in artificial intelligence (AI) and conversational agents (CAs), verbal abuse of these agents by human users has become a widespread problem. Compared to agents personified with other genders, female-personified CAs are more frequently attacked by their users, often sexually. To address this issue, this study explored possible response strategies for female conversational agents facing sexual harassment from human users. An online questionnaire with a 2 (Agent Type: conversational agent or human agent) x 4 (Response Strategy: normative appeal, guilt appeal, fear appeal, or avoidant message) within-subjects design revealed that fear and normative appeals were perceived as more effective than guilt appeals and avoidant responses. Moreover, the human agent elicited stronger behavioral intentions and greater likability than the conversational agent. Qualitative data were used to interpret the study results. The current study urges future collaboration between academia and industry to encourage research in AI ethics.