Chatbots: they’re supposed to make life easier, not inspire a CX faceplant. Yet, over the past few years, a handful of AI interactions have gone so spectacularly wrong they’ve become infamous case studies in “what not to do.”
Let’s revisit two classics that left customers fuming and companies scrambling to explain how, exactly, they handed the reins to a digital loose cannon. In each instance, the company tried to blame the CX disaster on the chatbot. Their logic, in a nutshell: it is artificial “intelligence,” and since the chatbot is “intelligent,” it is responsible for its own decisions. That’s how an airline and a home warranty company tried to dodge honoring financial commitments a chatbot made to customers.
The airline: The chatbot is “a separate legal entity responsible for its own actions.” (Translation: Our AI made the mess, not us.)
The warranty company: Not responsible, because the chatbot was guilty of “miscommunicating.” (Translation: It’s not our fault our chatbot promised to send $3,000 and then ghosted you.)
Here’s what happened. The airline’s chatbot assured a grieving customer that he could purchase a ticket at full price and then apply for an $800 bereavement fare refund afterward. Oops! Too bad the airline’s actual policy – explicitly stated on its website – requires pre-approval for such refunds. Cue a courtroom drama and a loss for the airline.
The warranty company’s chatbot agreed to a customer’s request to be sent a check for $3,000 so he could replace a broken AC unit and install the new one himself. Sorry, said the company when the check never materialized; that was just the chatbot going rogue. (The company changed its tune and made the payment once the local news picked up the story.)
AI Without Perfected Data Is a Recipe for Chaos
At first glance, these stories seem like quirky AI mishaps. But the underlying issue is far from amusing: neither chatbot had a solid foundation of accurate, relevant data or a clear understanding of company policies. Instead, they were left to freelance their way into bad decisions.
When chatbots are trained on incomplete or outdated data—or worse, given vague directives like “make the customer happy”—they’re bound to misfire. And while these rogue decisions might amuse social media, they cost companies money, trust, and goodwill.
Let’s be clear: AI isn’t magic. It’s only as good as the data feeding it. For a chatbot to perform like a star employee instead of a liability, it needs:
- Clean, Accurate Data: To understand customer history, preferences, and relevant policies.
- Training on the Right Data: So it knows when to sympathize and when to escalate.
- Guardrails: To stop it from approving hefty refunds or making promises it can’t keep (a sketch of this idea follows below).
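To make the guardrails point concrete, here is a minimal sketch of a policy-check layer that sits between the chatbot’s draft reply and the customer. The names, categories, and dollar threshold are purely illustrative assumptions, not any vendor’s API; the point is that a monetary commitment is validated against explicit, documented policy before it ever reaches the customer.

```python
from dataclasses import dataclass

# Illustrative policy limits -- in practice these come from the company's
# actual, documented policies, not from the model's training data.
MAX_AUTO_REFUND = 100.00  # anything above this needs a human
PREAPPROVAL_REQUIRED = {"bereavement_fare_refund", "equipment_reimbursement"}

@dataclass
class DraftCommitment:
    """A monetary promise the chatbot wants to make, before it is sent."""
    kind: str        # e.g. "bereavement_fare_refund"
    amount: float    # dollars the bot is about to promise
    customer_id: str

def apply_guardrails(draft: DraftCommitment) -> str:
    """Return the reply the customer actually sees.

    The bot may only restate documented policy or commit to small,
    pre-authorized amounts; everything else is escalated to a person.
    """
    if draft.kind in PREAPPROVAL_REQUIRED:
        return ("This request needs pre-approval under our published policy. "
                "I'm routing it to an agent who can authorize it.")
    if draft.amount > MAX_AUTO_REFUND:
        return ("I can't commit to that amount on my own. "
                "A human agent will follow up within one business day.")
    return f"Approved: ${draft.amount:.2f} will be credited to your account."

# Example: the $3,000 AC-unit promise never leaves the building.
print(apply_guardrails(DraftCommitment("equipment_reimbursement", 3000.00, "c-42")))
```

The design choice is simple: the model can draft whatever it likes, but only replies that pass the policy check are released, and anything outside the rules is escalated rather than improvised.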
The bottom line? AI decisioning must be grounded in trustworthy data. Without it, companies risk repeating these horror stories – or creating new ones.
Companies must start thinking about AI trust and understand that AI-enriched information is only as reliable as the underlying data. As AI-powered tools like chatbots and GenAI expand, the pressure is on for companies to perfect their data. A robust customer data platform (CDP) is no longer optional; it is essential to delivering the precise, real-time insights that AI needs to succeed. AI needs the best data for training, and the fastest, most relevant data for predictions and calculations.
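To illustrate what “grounded in trustworthy data” can look like in practice, here is a small, hypothetical sketch: before the model answers, the system pulls the current customer record and the documented policy text, and instructs the model to answer only from that context. The function names and fields are stand-ins, not a real CDP’s API.

```python
# Hypothetical retrieval step: pull fresh, authoritative context before answering.
# Function names and fields below are illustrative assumptions, not a real CDP API.

def fetch_customer_profile(customer_id: str) -> dict:
    # Stand-in for a real-time CDP lookup of the customer's current state.
    return {"id": customer_id, "tier": "standard", "open_claims": 1}

def fetch_policy(topic: str) -> str:
    # Stand-in for the documented policy text the bot must quote, not invent.
    return "Bereavement fare refunds require pre-approval before purchase."

def build_grounded_prompt(customer_id: str, topic: str, question: str) -> str:
    profile = fetch_customer_profile(customer_id)
    policy = fetch_policy(topic)
    return (
        "Answer using ONLY the policy text and customer profile below. "
        "If the policy does not cover the question, say so and offer to escalate.\n"
        f"Policy: {policy}\n"
        f"Customer: {profile}\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("c-42", "bereavement_fares",
                            "Can I get a refund after I buy the ticket?"))
```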
That’s a good lesson to learn, though it unfortunately may have come too late for the airline and the warranty company.