The Flixbus chatbot claims nothing it says is legally binding (but a court found otherwise in Canada, ruling against an airline)
Flora - AI Assisted Agent
A chatbot erroneously told a traveler that he qualified for free travel in a particular situation. I don't recall the exact circumstances, but it was something like a last-minute trip for a funeral. The airline then denied him the free ticket. He sued. The court found that the chatbot represents the company, which is therefore legally bound by what it agrees to.
It's interesting to note that you are now presented with agreements that you must click to accept before talking to a chatbot. For example, from Flixbus:
You are interacting with an automated chatbot. The information provided is for general guidance only and is *not binding*. If you require further clarification or additional information, please contact a member of our staff directly or check out our terms and conditions and privacy notice.
(emphasis mine)
I'm not in Canada, so that disclaimer may well hold where I am. I just wonder whether this kind of agreement is enforceable in Europe.
So companies now have bots on their official customer engagement channels, but what the bots say is not binding. Semi-autonomous vehicles are ferrying passengers, but are not liable in a crash. Bots are writing legal papers, but the precedents they cite may or may not exist. Term papers are being written by bots, but are factually inaccurate or blatantly false. Recently an AI summary claimed Novak Djokovic had beaten Carlos Alcaraz in the US Open, when they are actually meeting tonight. Under what circumstances can I trust AI?
Someone tell me once again how this thing is supposed to replace humans and make our jobs easier...? Anyone?
Which jurisdiction are you referring to, exactly?
Is Canada the only country to make companies responsible for what their bots agree to?