Regulation of Artificial Intelligence in Future Jurisprudence: Civil Liability in Autonomous Vehicle Accident Cases

*This is an AI-powered machine translation of the original text in Portuguese

The rapid evolution of artificial intelligence (AI) and its application across various fields have raised fundamental ethical and legal questions about assigning responsibility to the developers and operators of these systems. Recently, a jury in the U.S. state of California rejected a compensation claim filed by the victims of an accident involving an autonomous vehicle.

An autonomous vehicle is one that moves independently, without direct human intervention, its control being performed automatically by an artificial intelligence system. The jury found no evidence of software failure in the vehicle and concluded that the vehicle's developer could not be held liable for the damages suffered by the victims. Among the defense's arguments was the driver's duty to supervise the autopilot.

This case fuels the debate over how to assign civil liability in situations involving artificial intelligence systems and may influence future decisions, including in Brazil. These situations present complex scenarios whose specifics vary with the system's application, so assigning responsibility requires an assessment that goes beyond the mere presence of an artificial intelligence system.

In the United States, there is still no federal legal framework for AI systems, but state-level laws addressing the intersection of AI and data protection are proliferating. As for civil liability for AI, the contours of the matter will only take shape as legal precedents consolidate.

In Brazil, the current version of Bill No. 2,338/2023, which aims to regulate the use of artificial intelligence in the country, adopts a risk-based regulatory approach. Autonomous vehicles are currently classified as a "high-risk" application, requiring a preliminary assessment, an algorithmic impact assessment, and the adoption of governance measures. The Bill also imposes strict (no-fault) liability for damages caused: only the event, the damage, and the causal link need to be established, with no need to prove fault. It nonetheless provides exclusions from liability: AI agents may escape liability by proving the absence of a link between the damage and a potential failure of the AI system, or that the damage results solely from the conduct of the victim or of a third party, including unforeseeable external events.

Given the wording proposed by the Bill, it remains to be determined whether these exclusions can accommodate the nuances of complex cases. Any regulatory approach to AI applications should recognize that different concrete cases will involve particularities that must be resolved according to each context. Defining legislation in this area therefore requires careful consideration to ensure efficient regulation capable of keeping pace with the expansion of AI applications across sectors.
