Think Local  

Whom do I sue?: BC Law Institute’s new Report on Artificial Intelligence and Civil Liability

Can you sue AI?

By Miranda Wardman

As AI technology becomes more and more widespread, one unintended consequence is that AI may occasionally cause harm. In April 2024, the BC Law Institute, a non-profit law reform organization that undertakes research to determine how we can improve laws in B.C., published the Report on Artificial Intelligence and Civil Liability. The report explores the current state of AI and its implications for civil liability, recommending the best way forward for recognizing potential legal harms caused by AI.

Civil liability refers to the area of law where individuals, businesses, and governments bring disputes against others to obtain compensation for legal “harms” to persons or property. Think suing a person for property damage or suing a city, person, or business for negligence causing physical or financial harm. People, corporate entities, and governments are considered legal persons that can commit legal “wrongs” and can be sued, but the status of an AI technology as a “legal person” that can be sued is not entirely clear.

AI is the topic of a lot of conversation these days. As stated in the report, pinning down one definition of AI is difficult. Canada’s Directive on Automated Decision-Making defines AI as: “Information technology that performs tasks that would ordinarily require biological brainpower to accomplish, such as making sense of spoken language, learning behaviours, or solving problems.”

For the most part, AI is meant to optimize and improve certain tasks or systems. As stated in the report, AI is not generally developed with an intent to do harm, but ultimately, AI may be used by humans to do harm. AI may also sometimes display unpredictable original behaviour in pursuit of its objectives, called an “emergence.” Sometimes an emergence will create innovation; other times it will generate harmful results. Certain types of AI, like autonomous systems, are more susceptible to causing harm.

But who is responsible for harms caused by AI? The report provides an overview of recommendations for how to assign liability for harm committed by AI.

AI is not a “person” (yet), and it has no money to pay compensation. Treating AI like a human decision-maker causing harm is challenging, because AI “fails” and causes harm differently from a human. For example, the report draws on the example of a self-driving car involved in a fatal accident. The AI could not determine whether a pedestrian walking a bicycle across a crosswalk was a person or a fixed object, and only identified the pedestrian as a human at the last second. The AI likely would have correctly identified a pedestrian or a person riding a bicycle separately, but when the two were combined, the AI made a fatal error. A human driver would not make this mistake.

After weighing the pros and cons of different ways to assign fault for harm caused by AI, the report ultimately recommends that fault be assigned to the individual or company with managerial decision-making authority over the operation of the AI system (the “operator”). One argument in favour of holding the operator liable is that it would be unfair to always hold the creator of an AI system liable for harms committed by operators using it, as operators could then cause harm with no consequences. At the same time, depending on the particular AI, the creator will sometimes be equally responsible alongside the operator. The report recommends against treating an AI system that oversees other AI systems as the responsible operator—the operator liable for the harms should always be an individual or company.

However, there may still be a large number of potential operators involved in managing an AI system, and this may be complicated further in autonomous systems. There is also sometimes limited explanation or understanding of an AI’s emergence, and the harm the AI causes may not be foreseeable—and foreseeability is an essential principle of civil liability.

Ultimately, there will be challenges and a need for new legal developments in the area of civil liability for harms committed by AI. We are already seeing attempts at regulating the responsible use of AI systems, namely Canada’s Artificial Intelligence and Data Act, tabled in June 2022 but not yet passed. The report provides an extensive overview of the complications of developing the law in this area and well-thought-out recommendations to guide future lawmakers. AI users and Luddites alike should check out the report if they are interested in learning more about AI and the potential impact it can have on their lives.

Pushor Mitchell LLP is a full-service law firm located in Kelowna that can help with any AI issues you might have. It is one of the largest firms outside the Lower Mainland, with a team of more than 35 lawyers and 100 staff serving clients across Western Canada. Its relationship-driven approach and experience across a multitude of practice areas help it identify and service its clients’ unique individual and corporate needs. For more information about Pushor Mitchell LLP, visit its website here.

Miranda Wardman is an associate lawyer with Pushor Mitchell LLP practising in the areas of civil litigation, administrative law and regulatory law. Despite the theme, she did not use ChatGPT to write this article.

This article is written by or on behalf of the sponsoring client and does not necessarily reflect the views of Castanet.
