Chatbot chatter: a legal labyrinth?
It has been a particularly busy two weeks in the realm of generative AI. As well as OpenAI's Sora, which Shaun discusses below, Google announced Gemini 1.5 Pro, demonstrating it handling context windows of up to 1 million tokens. Larger context windows allow users to provide the model with more context, reducing the risk of hallucination; for comparison, OpenAI's GPT-4 Turbo handles 128,000 tokens. Google also showcased the model's ability to review video and accurately describe and pinpoint the timestamps of particular events, opening the door to automated monitoring and review of video content.
Not to let an AI announcement week pass without controversy, the spotlight also turned to bias and diversity in AI models as Google faced backlash over its image-generation model's ahistorical treatment of diversity. Before Google disabled the feature, users had posted screenshots on social media of generated images depicting the US Founding Fathers as women and people of colour and Nazi soldiers of varied ethnicities, alongside the model's outright refusal to fulfil any request for images of white people, despite having no qualms doing so for other groups.
Another interesting headline was the case involving Air Canada and customer Jake Moffatt. Moffatt had asked Air Canada's website-embedded chatbot about the airline's bereavement policy. Having been told he could claim a refund up to 90 days after ticket purchase, Moffatt went ahead and purchased flights to attend a family member's funeral. When he later followed the chatbot's advice and requested the refund, Air Canada informed him the chatbot's information was incorrect and that no refund would be given.
Air Canada argued that the correct information was available on its website and that the company "cannot be held liable for information provided by one of its agents, servants, or representatives – including a chatbot." Describing the airline's submission as "remarkable", the Civil Resolution Tribunal of British Columbia ordered Air Canada to partially refund Moffatt, holding that the chatbot was simply another part of the airline's website and that there was no reason a customer should expect one part of a website to be more reliable than another.
This case is just the tip of the iceberg where AI's legal implications are concerned. As AI agents become more capable, they will also become more autonomous, operating in higher-stakes situations and navigating decisions and interactions in ways that may not be entirely predictable. That unpredictability will pose significant challenges for establishing liability. At least in the case of chatbots designed to take humans out of the loop, it would defy common sense to offer companies the combo deal of reduced headcount and reduced liability.
Kieran Lindsay, CMT Research Officer