- Airline held accountable for misinformation provided by its chatbot, setting a precedent for AI liability in the travel industry.
- Ruling emphasizes that companies cannot evade responsibility for the actions of their AI systems, regardless of whether the information comes from a static page or a chatbot.
- Incident underscores the risks associated with heavy reliance on AI in customer interactions and prompts businesses to reconsider their AI strategies to mitigate liabilities.
An airline has been found responsible for misinformation provided by its chatbot, highlighting the potential pitfalls of AI in the travel industry. In the case involving Air Canada, the airline's chatbot gave a passenger incorrect information about a discount for booking a flight. When the passenger tried to claim the discount, the airline refused, arguing that the chatbot was a separate legal entity responsible for its own actions. The Civil Resolution Tribunal rejected that argument and ruled in favor of the passenger, holding Air Canada accountable for the misinformation and ordering the airline to pay damages and fees.
Consumer advocacy groups see the decision as significant, signaling that companies cannot evade responsibility for the actions of their AI systems. Gabor Lukacs, president of Air Passenger Rights, emphasized that relying on AI does not absolve companies of liability for their technology's behavior, calling the ruling an affirmation of a common-sense principle.
The incident involving Air Canada is not isolated; other airlines have also seen their chatbots provide inaccurate information. Such errors, often termed "AI hallucinations," underscore the risks of relying heavily on AI in customer interactions. The ruling is expected to prompt airlines and other businesses to reconsider their use of AI and take precautions to mitigate potential liabilities.