When human-like chatbots work for consumers, and when they don't

A new study conducted by my colleagues at Oxford University (Felipe Thomaz, Rhonda Hadi, and Andrew Stephen) shows that making chatbots human-like is a double-edged sword. When customers are in a neutral, happy, or merely frustrated state, interacting with humanized chatbots can increase customer satisfaction. When customers are angry, however, interacting with humanized chatbots deepens their dissatisfaction, and angry customers are precisely the ones customer-service channels most often have to handle.

More importantly, this lowered satisfaction is not confined to a single conversation or to the customer's feelings about the chatbot itself; it spills over into negative feelings about the company as a whole and reduces the customer's desire to buy from that company in the future.

Chatbots are becoming increasingly common across many industries, with companies replacing human customer-service agents on their websites, social media pages, and messaging services. Designed to mimic humans, these anthropomorphized chatbots often have human names (such as Amtrak's Julie or Lufthansa's Mildred), human-like voices (for example, Amazon's Alexa or Apple's Siri), and human appearances, in the form of avatars or animated characters. Companies even design their chatbots to have likable or quirky personalities and interests.

Mickey and the Doughboy

In general, this trend toward humanization has helped companies market their brands, products, and technologies, including chatbots. Companies that humanize their brands through anthropomorphized mascots such as the Pillsbury Doughboy, Disney's Mickey Mouse, and M&M's Red, Yellow, and Green characters (among others) build more personal relationships with their customers.

Companies also humanize the products themselves, with advertising depicting a bottle of Gatorade as a heavyweight champion or a BMW as an attractive woman. Beyond advertising, products can be designed to look human, such as the famous British Henry vacuum cleaner and car grilles styled to make cars appear to smile. Previous research has shown that humanization has a positive effect on products: consumers rate humanized products highly, prefer them, and are reluctant to replace these cherished "friends".

Existing research has also shown that, in general, humanized chatbots benefit companies. Anthropomorphized chatbots have been shown to be more persuasive, to enhance enjoyment, and to add a sense of social presence. Consumers may place more trust in humanized technology because they believe it is competent and unlikely to violate that trust. Avatars can make the online shopping experience more enjoyable because interacting with one feels like a shopping trip with a friend. It is not surprising, then, that the broader industry trend is toward humanizing technology, chatbots included.

Not really a person

However, a growing body of research suggests that the effects of humanization are more mixed. For example, research shows that people's liking for human-like robots rises only up to a point, after which it drops dramatically. Robots that are too human-like become "creepy", make people feel uncomfortable, and trigger an avoidance response.

Our research points to another situation in which humanized robots (in this case, chatbots) can backfire. We found that angry customers react negatively to anthropomorphized chatbots because, when companies humanize their chatbots, they implicitly raise consumers' expectations that the chatbot will be able to think, communicate, and act like a human.

That works fine when chatbots perform simple tasks, such as tracking a package or checking an account balance, because the chatbot can usually complete those tasks successfully. But often humanized chatbots cannot meet these elevated expectations, leading to disappointment. Both angry and non-angry customers feel the letdown of unmet expectations, but angry customers are more sensitive to it and more likely to act on it. They hold the humanized chatbot more responsible, react more harshly, and "punish" the chatbot and the company it represents with lower satisfaction ratings and weaker purchase intentions.

Given these findings, companies can use several strategies to deploy chatbots successfully. First, companies should try to determine whether a customer is angry (or not) before the conversation begins, and then deploy the appropriate chatbot. If a company lacks the technical capability to switch chatbots in real time, it can use non-humanized chatbots in customer-service situations where customers tend to be angry, such as complaint centers, and humanized chatbots in more neutral settings, such as product inquiries.

If a company wishes to keep using humanized chatbots across the board for the sake of brand consistency, it should downplay the bot's capabilities at the start of the conversation. By lowering customers' expectations that the chatbot can perform like a person, the company reduces the chance that customers will be disappointed and react negatively. Some companies already use this strategy successfully. For example, Slack's chatbot introduces itself by saying, "I try to be helpful (But I'm still just a bot. Sorry!)." Other companies are less careful and describe their chatbots as "smart" or as having high IQs. In those cases, companies are simply setting their chatbots up to fail, and setting their already-angry customers up for further frustration.
