Liability issues for the use of artificial intelligence (AI) in health care as a home assistance tool
Mélanie Bourassa Forcier, LL.L. (summa cum laude, Ottawa), LL.B. (Montreal), M.Sc. (London School of Economics), D.C.L. (McGill), Professor, Faculty of Law, University of Sherbrooke.
Lara Khoury Ad.E., LL.B. (Sherb.), BCL (Oxon), DPhil (Oxon), Associate Professor, Faculty of Law, McGill University.
Nathalie Vézina B.C.L., LL.B. (McGill), D.E.A. (Strasbourg), Doctorate in Comparative Law (Paris II), Professor, Faculty of Law, University of Sherbrooke.
ABSTRACT: This paper reviews the liability issues surrounding the use of artificial intelligence (AI) as an assistance tool at home and investigates how the Canadian legal framework addresses these issues. It also explores whether an alternative approach to existing liability regimes should be adopted to promote AI innovation based on recognized best practices and, in turn, increase the use of AI technology in healthcare delivery.
SUBMITTED: 22 MAR 2022 | PUBLISHED: 29 APR 2022
DISCLOSURE: None declared.
ACKNOWLEDGMENTS: Professor Bourassa Forcier worked in collaboration with Gaëlle Feruzi Salimata, Mathieu Kiriakos, Siobhan Mullan and Dary-Anne Tourangeau, research assistants. The authors are grateful for the review and helpful comments provided by Adam Allouba, from Dentons, on an earlier draft. This article was made possible by an Age-Well grant for the SMART (Socially Mobile Assistive Robots for Telecare and Daily Activities of Older Adults) project led by Prof. François Michaud. We would also like to thank him warmly for his comments. The research leading to this article was also funded by the Social Sciences and Humanities Research Council of Canada.
CITATION: Bourassa Forcier, Mélanie et al. (2022). Liability issues for the use of artificial intelligence (AI) in health care as a home assistance tool. Canadian Health Policy, APR 2022. ISSN 2562-9492, https://doi.org/10.54194/NSPX3994, www.canadianhealthpolicy.com
INTRODUCTION
Artificial intelligence (AI) technologies show great potential in assisting the elderly, helping them to remain at home longer (including in private residences for the elderly) before turning to institutional care (Ho, 2020). Using AI to delay the need to rely on such care could, in turn, increase the elderly's well-being by promoting their independence, which most of them value. It could also lead to health improvements in this population by allowing health systems to monitor patients' health remotely (Ho, 2020).
It is no wonder, then, that “intelligent” chatbots are rapidly gaining acceptance and popularity in the provision of healthcare. In their simplest form, intelligent chatbots allow an individual to have a conversation with a robot, a functionality that has been shown to have a positive impact on patients (Obayashi, 2021). Chatbots can also be goal-oriented for added benefits (Car, 2020). For instance, a San Francisco start-up has developed Woebot, a mental health support chatbot. Woebot delivers cognitive behavioural therapy through conversations that use, among other elements, empathetic statements and positive reinforcement (Darcy et al., 2021). Studies have shown that conversations with Woebot can significantly reduce anxiety and depression (Fitzpatrick et al., 2017), notably thanks to its ability to develop bonds with users, even though the chatbot frequently reminds users that it is not in fact a real person (Darcy et al., 2021). In England, research has focused on evaluating the impact of a robot’s presence on dementia and loneliness. The result was the development of a companion called “Robbie,” whose monitoring functions make it possible to watch over the person and compile data on their consumption habits and activities (Blanchard, 2019).
Personal assistance robots are likely to be found in an increasing number of private homes given the industry’s goal of bringing their unit cost under $5,000 in the near future (Senate, 2017). However, provincial governments in Canada have not focused significantly on introducing robot care assistants for the elderly in public healthcare establishments such as public residences for the elderly. Nor have they made significant efforts to promote their use at home, for instance through financial incentives. Nevertheless, local initiatives exist, such as the use of cat robots to reduce aggressive tendencies in some individuals in elderly residences in Prince Edward Island (“Des chats”, 2018) or the introduction of Zora, a robot that interacts, sings and dances, in a Quebec City residence for the elderly (“Zora”, 2019).
Given the aging population and the associated increase in chronic diseases, as well as the objective of enabling the elderly to remain at home as long as possible, robot care assistants are an option that policymakers and governments should seriously consider. Japan, known as a pioneer in robot development (D’Ambrogio, 2020, p. 1), has demonstrated the benefits of such governmental involvement. After setting up the Robot Revolution Realisation Council in 2015, Japan adopted a new robot strategy focused on the “silver economy” (i.e., the contributions of aging populations to the economy) (D’Ambrogio, 2020, p. 6-7). Within a few years, thanks to extensive subsidies granted by the Japanese government, most of its healthcare facilities were able to afford robots such as “Paro”, now used globally for robotic pet therapy (Foster, 2018).
The use of robot care assistants at home raises both ethical and legal concerns. Aside from privacy issues, which are not addressed in this paper, there may be consequences in terms of liability if their use causes injuries. This paper explores the Canadian legal framework that may be called upon to address issues of liability for psychological and bodily injuries caused by AI used as an assistance tool at home.
ANALYSIS
AI as assistance tool at home and distinctions between types of AI
AI can be classified according to the “reasoning level” of the technology. “Weak AI”, also referred to as “artificial narrow intelligence” (ANI), merely simulates human intelligence; “strong AI”, also known as “artificial general intelligence” (AGI), “represents computational systems that [actually] have [it]” (Liu, 2021, p. 3); and “artificial super intelligence” (ASI) designates systems that surpass it. Although the last two categories do not yet exist, we are getting closer to AGI thanks to deep learning, “which is essentially a neural network with three or more layers” (IBM, 2022). Input units receive outside information and activate a number of “intermediate” hidden units, which in turn activate output units (Paquette, 2021, p. 188-9). This system is known as black box AI because, even though the input and output are clear, “there is no straightforward way to map out the decision-making process” (Bathaee, 2018, p. 891). This creates a disconnect between the programmer and the output, since the programmer no longer controls the analysis performed by the technology after the initial setup. Therefore, the use of black box AI “may affect the safety and accuracy of treatment and should be carefully monitored and evaluated when used in health care” (Car, 2020). The lack of human understanding of, and control over, black box processes also makes establishing liability a considerable challenge when injuries connected to AI systems occur.
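To illustrate the input-hidden-output flow described above, the following minimal sketch in Python (illustrative only, and not drawn from the cited sources) shows a toy feed-forward network with a single hidden layer; the layer sizes, random weights and variable names are arbitrary assumptions standing in for values that would normally be learned during training.

# Minimal sketch of a feed-forward network with one hidden layer.
# The weights below are random placeholders for values learned during training.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # weights connecting 4 input units to 8 hidden units
W2 = rng.normal(size=(8, 2))   # weights connecting 8 hidden units to 2 output units

def predict(x):
    """Propagate an input vector through the network."""
    hidden = np.tanh(x @ W1)   # hidden activations: no human-readable meaning on their own
    return hidden @ W2         # output units, e.g. scores for possible actions

# The input and output are observable...
x = np.array([0.2, -1.0, 0.5, 0.3])
print(predict(x))
# ...but nothing in W1, W2 or the hidden activations maps onto an explicit,
# human-stated decision rule, hence the "black box" label.

Real deep-learning systems simply stack many more such layers with far more units, which compounds the opacity of the intermediate representations.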
The following analysis, although focused on the elderly, also applies to other people with physical, mental, or social vulnerabilities who seek greater autonomy in their daily lives.
Purchaser’s liability claim
Let us explore the case where an individual buys a robot companion, as if it were any other product. It functions as a chatbot that allows for conversations and also acts as a movement sensor, reminding the user to take his medication. If the robot malfunctions, causing serious injury to the user, one need not look far to determine who could be held liable: an obvious defendant would be the robot manufacturer. Contractual and extra-contractual rules apply, under both civil law and common law, granting the right to sue a manufacturer for damages arising from a product’s safety defect.
The buyer enters into a contractual relationship with the seller upon purchasing the product and is therefore protected, under Quebec contractual rules (CCQ: art. 1458, 1726 ff.; CPA: s. 53), against latent defects through the legal warranty of quality, including with respect to compensation for injuries caused by the product.
Under the general rules of contracts, the buyer benefits from a liability claim not only against the seller, but also against the manufacturer and intermediaries (CCQ: art. 1730). The same applies to a claim against a service provider who uses a good in delivering services to a client (CCQ: art. 2103). The buyer must prove the existence of the latent defect, its seriousness, that it was unknown to the buyer at the time of sale, that it was not apparent, and that it existed prior to the sale (CCQ: art. 1726). In the case of a sale by a professional seller (including a manufacturer), the pre-existence of the defect is presumed if the property malfunctions or deteriorates prematurely (CCQ: art. 1729). Generally, to claim damages, the buyer must demonstrate that the defendant was aware of the defect, but such knowledge may be presumed in the case of a seller who “could not have been unaware of the latent defect” (CCQ: art. 1728), as is the case for a manufacturer or other professional sellers.
It is worth noting that Quebec’s Consumer Protection Act grants the victim even stronger protection in a contractual setting. When the contract of sale, lease or service occurs between a consumer and a merchant, the warranty owed by the merchant, which extends to the manufacturer and some intermediaries (CPA: ss. 1, 34 & 53), allows the defendant to be found liable for damages caused to the consumer if the good has a latent defect or lacks the instructions necessary to protect the user against a risk or danger of which he would otherwise be unaware (CPA: s. 53, paras. 1 & 2). The consumer does not have to demonstrate knowledge on the part of the defendant, and the latter is barred from pleading that he was unaware of the defect or lack of instructions (CPA: s. 53, para. 3), thereby eliminating the possibility of an exoneration based on development risk (Dupoy, 2019).
The relevance of this product liability regime is obvious when an elderly person acts as a consumer, buying or renting a robot as an assistance tool. It is also relevant where the victim hires a service provider, such as a company offering an array of services ranging from personal assistance to occupational therapy, and where the company’s employee uses an assistance tool as part of the contract for services. This very advantageous contractual regime extends to a subsequent purchaser, who can invoke a latent defect or lack of instructions against the manufacturer after buying the product from the initial purchaser (CPA: s. 53, para. 4). Such would be the case if an elderly person decided to sell an assistance tool to a friend or neighbour in order to get a different or more recent model.
Unlike Quebec civil law, product liability in the common law is based mostly on the tort of negligence, which requires demonstrating a breach of the standard of care (Thompson & Allouba, 2019). Therefore, to hold a manufacturer liable for injury caused by a robot companion, the purchaser would have to demonstrate that the manufacturer was negligent in the manufacturing process or in the information provided about the use of the assistance tool. Negligence may also be invoked against other parties who caused the safety defect, such as those who incorrectly assembled or serviced the assistance tool.
Third party liability claims
The victim of an injury caused by an assistance tool could be a third party to the contract between the primary user – who bought or rented the assistance tool – and the manufacturer, supplier, or other intermediaries in the distribution chain.
Under Quebec civil law, the extra-contractual product liability regime would apply to this third party’s recourse (CCQ: art. 1468, 1469 & 1473). Under this regime, to claim damages against a manufacturer, distributor (other than a mere economic intermediary such as a broker) or supplier, the victim must demonstrate that the product presents a safety defect, i.e., that it does not afford “the safety which a person is normally entitled to expect” (CCQ: art. 1468-1469). A safety defect may consist of “a defect in design or manufacture, poor preservation or presentation, or the lack of sufficient indications as to the risks and dangers it involves or as to the means to avoid them” (CCQ: art. 1469), for example, improper programming of the robot by the manufacturer or a failure to provide sufficient indications as to the precautions necessary to avoid a risk of harm when using the robot. Discussing Microsoft’s robot, Tay, Vermeys suggests the notion of a computer safety defect. Tay, a conversational agent, could converse on Twitter and tailor its replies based on data acquired through previous conversations. Although its algorithm functioned perfectly, it started using offensive language after some ill-intentioned individuals bombarded it with offensive tweets. Imagining potential liability toward other users who may have suffered moral injury as a result, Vermeys argues that, despite the absence of a defect in design or manufacture or of a lack of sufficient indications, programmers could still be held liable under the extra-contractual regime for a safety defect; he bases this opinion on the fact that, in this example, the programmers failed to properly remove the risk of a breach from the robot’s algorithm, thereby allowing external agents to influence its behaviour (Vermeys, 2018, p. 864).
As can be seen, the concept of “safety defect” under the extra-contractual regime is very similar to those of “latent defect” and “lack of instructions” under the Consumer Protection Act. Moreover, as in any other civil liability claim, the third party also has the burden of proving that this safety defect caused the injury for which compensation is claimed.
In addition to product liability, the liability regime for the autonomous act of a thing (CCQ: art. 1465) could apply in some circumstances. To succeed, the third party would have to demonstrate that the robot’s user has custody of the thing and that the injury was caused by the robot’s autonomous act. Although fault-based, this regime provides a legal presumption of fault that shifts the onus onto the defendant to prove an absence of fault. While this presumption comes to the assistance of victims of injury, applying this regime to AI systems poses challenges.
The condition that the thing acted autonomously is unlikely to raise difficulties, given that the functioning of an AI tool can easily qualify as an autonomous act. This condition requires showing the dynamism of the thing when it caused the injury, i.e., that the thing was not purely passive. Moreover, the act of the thing must not have been triggered directly by human action or be an immediate extension of human action. The behaviour of AI is the epitome of the autonomous act of a thing because the remoteness of the initial programming makes it reasonable to conclude that there is often no direct human intervention behind the thing’s harmful act, which is also unlikely to be entirely passive.
One of the challenges in invoking this regime lies rather in establishing not only in whose custody the assistance tool is, but also whether there is custody at all. The concept of custody is usually assessed in relation to more traditional objects, and the law enters uncharted territory in the case of deep-learning tools.
The “custodian” of a thing is the person entrusted to take appropriate measures to maintain, restrain, or even dispose of it as necessary. In that sense, custody can be exercised over AI that possesses great decisional autonomy. By default, the owner is usually considered to have custody due to the inherent powers of ownership. Some authors argue that the owner may, in some circumstances, demonstrate a lack of control over the technology due to its deep-learning capacities (Vermeys, 2018, p. 860-861). This argument is open to debate. A thing’s custodian need not have full and detailed knowledge of how the thing operates. It has never been argued that a thing – at least a man-made thing, rather than something wild that has never been appropriated – is under no one’s custody because of its high level of autonomy. Deciding that AI tools are in no one’s custody would create a dangerous situation: no one would have the prerogative and duty to control these tools, and there would be no accountability should they cause harm, given that they have no legal personality of their own. It is worth noting that the owner of a thing who wishes to avoid the presumption may argue that control over the thing was transferred to another custodian. Indeed, custody could be transferred to the manufacturer if there is sufficient evidence that the company retained effective control over the robot after delivery, for instance through an after-sale contract for the robot’s maintenance and regular updates to its programming that entrusts the manufacturer with actual control of the AI tool. However, the mere existence of maintenance or programming follow-up by the manufacturer will not systematically enable the owner to escape the presumption applicable to the custodian by arguing a transfer of custody, or anything beyond shared custody between the owner and the manufacturer. A contract for the maintenance or continued programming of the tool may instead be used to rebut the presumption of fault by demonstrating that the owner acted with prudence and diligence in the custody of the AI tool, which is the next step in the analysis.
Indeed, the possibility for the custodian to rebut the legal presumption of fault may be the most problematic aspect of a claim based on the autonomous act of a thing: even if custody is proven, the custodian can avoid liability by demonstrating that no fault was committed in the exercise of custody. The complexity of an AI tool facilitates such proof. It may be relatively easy to establish that a prudent person could not reasonably have foreseen the danger associated with the AI tool, especially in the case of deep learning, as even a reasonably diligent person could not be expected to understand its complex data processing (Vermeys, 2018, p. 861). However, where the custodian was forewarned of potential problems by prior incidents involving a given AI tool, rebutting the presumption may become more difficult: one could argue that a normally prudent and diligent person would have taken appropriate measures to prevent the tool from causing harm.
The regime established by article 1465 CCQ illustrates the significant limits of a fault-based regime in terms of victim protection, despite a shift in the burden of proof through a legal presumption of fault.
For these reasons, the victim of a defective assistance tool would be better protected under the strict liability regime relating to a product’s safety defect. Under such a regime, as stated above, the third party could invoke the lack of safety of the product without having to demonstrate negligence on the part of the manufacturer, the supplier, or any other intermediary (CCQ: art. 1469), although this assertion relates only to Quebec civil law. It is also important to outline some limitations as to the extent of the protection offered under the extra-contractual regime, compared with the contractual rules stemming from the Consumer Protection Act.
Indeed, while the extra-contractual regime falls within the category of strict liability, which provides better protection than a fault-based regime, the third-party victim who invokes it will be most vulnerable in the event of damage caused by deep-learning AI. This is because the manufacturer, the supplier, or other intermediaries may successfully invoke the development risk defence, i.e., that the defect could not have been known given the state of knowledge at the time the product was manufactured, distributed, or supplied, especially considering the pace of development in this field (Vermeys, 2018, p. 865), provided that users were warned once the defect became known (CCQ: art. 1473). Extra-contractual liability could therefore be less favourable to victims than contractual rules. It remains unclear whether the development risk defence would succeed under the general rules governing the contract of sale, as the manufacturer is considered the most expert type of seller (Levy, 2020, p. 10). It is quite clear, however, that in the case of a contract governed by the Consumer Protection Act, this means of exoneration is unavailable, since the manufacturer or merchant bears the risk generated by scientific or technological advances, as mentioned above (CPA: s. 53, para. 3; Dupoy, 2019). Therefore, the protection given to the victim under contractual rules may not be entirely identical to that provided to third-party victims.
The extent of strict-liability rules for safety defects is somewhat clearer in Quebec civil law than in its common law counterpart. Unlike some other common law jurisdictions where explicit rules were adopted to implement a strict-liability regime, as was the case in the United Kingdom under the influence of a European directive on product liability, Canadian common law still relies on case law to determine the extent of strict-liability rules regarding damage caused by things. For instance, victims may attempt, in appropriate circumstances, to convince a court to extend the rule in Rylands v Fletcher – applicable where a person brings onto his land a thing that was not naturally there and that is likely to cause injury in the event of its escape – to the context of AI robots. Such an advance in modern-day case law would undeniably provide better protection than the fault-based tort of negligence.
DISCUSSION
In light of the liability issues and uncertainty surrounding the use of black box AI assistance tools, some propose the creation of a strict liability regime specific to AI manufacturers or developers. This would create a strong incentive to obtain insurance coverage against the risks associated with their technology, especially if third parties chose to contract only with properly insured manufacturers (Vermeys, 2018, p. 867; Vladeck, 2014). Others have proposed establishing a “no-fault” insurance regime contracted by owners at the point of purchase (Levy, 2020, p. 1). These appealing proposals deserve further reflection. Before designing a strict-liability regime specifically addressing the risks associated with AI technology, one must evaluate whether the existing liability regimes are sufficient to protect victims while mitigating the chilling effect that overly stringent rules may have on the emergence of new technology, for instance through the development risk defence (when available). Quebec civil law seems to have found such a balanced approach through the strict-liability regimes applicable to safety defects in contractual and extra-contractual settings, accompanied by the defence of development risk in the latter. Common law jurisdictions may be tempted to borrow elements of Quebec law or of other legal systems, such as EU law, which inspired Quebec’s extra-contractual product liability regime.
Finally, the proposal to implement no-fault insurance (or, alternatively, a no-fault compensation scheme) designed to replace or complement existing liability regimes may be inspired by the no-fault compensation scheme that exists in Quebec for victims of bodily injuries caused by road accidents. This scheme is managed by a public body, the Société de l’assurance automobile du Québec (SAAQ). In the context of pilot projects for autonomous vehicles, the Highway Safety Code grants the Quebec government the power to allow the SAAQ to recover compensation paid to victims from the vehicle’s manufacturer (Highway Safety Code, s. 633.1). A no-fault compensation fund for victims of medical accidents was also proposed in Quebec a few years ago (Tétreault, 2002; Bourgoignie, 2006) and continues to be called for by both legal and medical experts (Gagnon, 2019). Such a scheme was never adopted, although narrow compensation schemes designed to address specific health-related risks – flowing from vaccination or tainted blood – were created. While there has recently been renewed interest in this type of solution in the province of Quebec (Gagnon, 2019), there is no indication that the legislature is ready at this point to adopt it.
Given the time and hurdles involved in reforming liability regimes or implementing a no-fault compensation scheme, victims of AI tools will have to rely on existing liability regimes for the time being, hoping that the courts will find ways to adapt existing rules that were designed before the advent of AI technology or without having it specifically in mind.
REFERENCES
Bathaee, Y., (2018). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31(2)
Blanchard, S. (2019). “Robbie the Robot” can spot worsening dementia after watching 13 episodes of Emmerdale (and next the scientists want to make it view Friends…). Daily Mail. https://www.dailymail.co.uk/health/article-6673115/Robbie-Robot-spot-worsening-dementia-watching-13-episodes-Emmerdale.html
Bourgoignie, T. (2006). Accidents thérapeutiques et protection du consommateur – vers une responsabilité sans faute au Québec? Cowansville: Éditions Yvon Blais
Car, T.L. et al. (2020). Conversational agents in health care: Scoping review and conceptual analysis. JMIR, 22(8). doi: 10.2196/17158
CCQ: Quebec. Civil Code of Quebec, CQLR c. CCQ-1991.
CPA: Quebec. Consumer Protection Act, CQLR c. P-40.1.
D’Ambrogio, E. (2020). Japan’s ageing society. European Parliament. https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/659419/EPRS_BRI(2020)659419_EN.pdf
Darcy, A., et al. (2021). Evidence of human-level bonds established with a digital conversational agent: Cross-sectional, retrospective observational study. JMIR Formative Research, 5(5). doi: 10.2196/27868
Deep learning. (2022). IBM. https://appen.com/blog/ai-vs-deep-learning-vs-machine-learning-everything-youve-ever-wanted-to-know/
Des chats robots pour réconforter des aînés atteints de démence. (2018, April 17). Radio-Canada. https://ici.radio-canada.ca/nouvelle/1095522/chats-robots-demence-foyer-soins-ile-prince-edouard-summerside-summerset/
Dupoy, D. (2019). La Cour d’appel confirme que la LPC ne s’applique pas à la vente de médicaments sur ordonnance. Norton Rose Fulbright. https://www.nortonrosefulbright.com/fr-ca/centre-du-savoir/publications/1a48d6e6/la-cour-d-appel-confirme-que-la-lpc-ne-s-applique-pas-a-la-vente-de-medicaments-sur-ordonnance
Fitzpatrick, K. K., Darcy, A., & Vierhile, M. (2017). Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): A randomized controlled trial. JMIR Mental Health, 4. doi: 10.2196/mental.7785
Foster, M. (2018). Aging Japan: Robots may have role in future of elder care. REUTERS Healthcare & Pharma. https://www.reuters.com/article/us-japan-ageing-robots-widerimage-idUSKBN1H33AB
Gagnon, K. (2019, September 12). Accidents médicaux : le Collège des médecins réclame l’instauration d’un no-fault, La Presse. https://www.lapresse.ca/actualites/sante/2019-09-12/accidents-medicaux-le-college-des-medecins-reclame-l-instauration-d-un-no-fault#:~:text=Le%20syst%C3%A8me%20de%20no%2Dfault,peut%20opter%20pour%20l’indemnisation.
Holder, C. et al. (2016). Robotics and law: Key legal and regulatory implications of the robotics age (Part I of II). Computer Law & Security Review, 32. doi: 10.1016/j.clsr.2016.03.001
Ho, A. (2020). Are we ready for artificial intelligence in health monitoring of elder care? BMC Geriatrics, 20(1). doi: 10.1186/s12877-020-01764-9
Learner, S. (2017, October 17). Robots to care for elderly and check if they are happy or sad. HomeCare. https://www.homecare.co.uk/news/article.cfm/id/1588943/robots-care-elderly-happy-sad-angry/
Levy, D. (2020). Intelligent no-fault insurance for robots. Journal of Future Robot Life, 23(1). doi: 10.3233/FRL-200001
Liu, B. (2021). “Weak AI” is likely to never become “Strong AI”, so what is its greatest value for us? arXiv preprint, arXiv:2103.15294
Nicholson Price II, W. (2017). Regulating black-box medicine. Michigan Law Review, 116
Obayashi, K., Kodate, N. & Masuyama, S. (2021). Assessing the impact of an original soft communicative robot in a nursing home in Japan: Will softness or conversations bring more smiles to older people? International Journal of Social Robotics. https://doi.org/10.1007/s12369-021-00815-4
Oliveira, S. (2016). La responsabilité civile dans les cas de dommages causés par les robots d’assistance au Québec. Université de Montréal. https://papyrus.bib.umontreal.ca/xmlui/handle/1866/16239/
Paquette, L. (2021). Artificial life imitating art imitating life: copyright ownership in AI-generated works, Intellectual Property Journal, 33(2)
Robots: What is the new Romeo project? (2018). SoftBank Robotics. https://www.softbankrobotics.com/emea/en/robots/romeo/
Rylands v. Fletcher, [1868] UKHL 1
Senate (2017). Integrating robotics, artificial intelligence and 3D printing technologies into Canada’s healthcare systems. Standing Senate Committee on Social Affairs, Science and Technology. https://sencanada.ca/content/sen/committee/421/SOCI/reports/RoboticsAI3DFinal_Web_e.pdf
Tétreault, R. (2002, May 5). Les erreurs médicales et la sécurité des patients: qui a peur du no-fault? Le Devoir.
Thompson, K. & Allouba, A (2019). Use of AI algorithm triggers lawsuit and countersuit. Dentons Data. https://www.dentonsdata.com/use-of-ai-algorithm-triggers-lawsuit-and-countersuit/
Vermeys, N. (2018). La responsabilité civile du fait des agents autonomes. Les Cahiers de propriété intellectuelle. 30(3)
Vladeck, D.C. (2014). Machines without principals: Liability rules and artificial intelligence. Washington Law Review, 89
Zora, le premier robot en résidence privée pour aîné. (2019, May 2). Radio-Canada. https://ici.radio-canada.ca/ohdio/premiere/emissions/c-est-encore-mieux-l-apres-midi/segments/entrevue/116372/robot-zora-residence-aines-levis-lionel-bertorello