|Keywords||Conversational Agent, CAs, Interactive Agent, Chatbot, Chatterbot, Digital Assistant, Dialogue Systems, Virtual Assistant, Robots, Artificial Intelligence, Machine Learning, Natural Language, Natural Language Processing, Siri, Alexa, Google Assistant, Google Dialogflow, Service, Service Science, Service Satisfaction, Service Quality, Innovation, Digital Innovation, Digitization, Experimental Research, Information Systems, Human-Computer Interaction, System Design, Anthropomorphic Design, Human-like Design, Technology Design, Human-like Machines, Anthropomorphism, Computers Are Social Actors, Social Response Theory, Anthropomorphism Bias, Circumplex Model of Affect, Frustration-Aggression Hypothesis, Perceived Humanness, Human-Conversational Agent Interaction, Imperfection, Human-like Errors, Failure, Technical Failure, Response Failure, Flaw, Errors, Response Delay, Misinterpretation, Service Context|
|URL of external homepage||https://www.uni-goettingen.de/en/johannes+riquel/620766.html|
Conversational Agents (CAs) are changing the way people interact in their daily lives. Specifically, CAs such as chatbots or voice assistants increasingly cover services traditionally provided by human employees. CA-based services are available at any time and in any capacity, providing convenience and comfort while overcoming the limitations of human employees. However, many CAs are imperfect and prone to errors, such as frequently misinterpreting users’ requests, which leads to a mismatch between the expected and the delivered service. As a result, some CA-based services have been discontinued in the past. In this context, a human-like design of CAs potentially offers a valuable approach to enhancing the user’s perception of a service. Prior research shows that individuals interact with a human-like CA as if they were interacting with a real person. Furthermore, human-like errors could be considered a social cue, since it is human nature to make errors. To address the overall research area of CA imperfections, four studies were conducted and synthesized in this dissertation. The studies provide novel insights into the design of human-like text-based CAs in relation to the occurrence of CA imperfections, including human-like errors. Through a set of experiments, four major contributions are provided. First, the human-like design of imperfect CAs can mitigate individuals’ negative perceptions, if implemented carefully. Second, the human-like design of imperfect CAs can shift individuals into a positive emotional state, which increases service satisfaction. Third, human-like errors are not perceived as human-like and should not be employed in CA-based service encounters at present. Fourth, not every CA-based service requires a high level of human-like design, as the expectations of a CA-based service may be as varied as the expectations of traditional human-based services.