
Legal implications of the development of artificial intelligence (AI)

Interview with Prof. Dr. Emőd Veress

Prof. Dr. Emőd Veress has recently conducted research into the legal implications of the development of artificial intelligence (AI) systems and applications as a member of an international research group, in the context of the ‘Polish-Hungarian Research Platform 2022' project, which will result in several publications and in proposals for action by national and European legislators. The project was initiated and coordinated by the Institute of Justice (Instytut Wymiaru Sprawiedliwości) in Warsaw, Poland. I asked him about some of the objectives and results of his research.

János Székely: Dr. Veress, you have participated in a research project undertaken by a group of legal scholars to determine the implications of AI and the legislative steps necessary to regulate it, given its foreseen future impact. Could you tell me a bit about the objectives of the project?

E. V.: The project involved a group of experts from several universities and research institutions from Poland and Hungary. Its main objective was to take a holistic view of the regulation of AI and of the norms that ought to be developed in the future to tackle the challenges of an ever-expanding AI presence in commerce, banking, industry and our everyday lives. Within the research team I focused specifically on the known and foreseen implications of artificial intelligence as it stands, as well as those of future, more general and human-like artificial intelligence, from the perspectives of legal personhood for AI, AI as applied to the judicial process (the artificial intelligence judge, for example) and the specific problems of liability for damage caused by AI. I also participated as a presenter in several online seminars in which we discussed, among other topics, the state of private law research in AI-related fields, compared Hungarian regulations that may apply to AI with those of other jurisdictions, especially norms of civil liability, and examined the civil-law implications of the way AI makes decisions, including as a possible judge.

J. Sz.: You spoke about the future regulation of AI. What is the state of the regulatory environment in this field?

E. V.: What gave our project a high degree of significance is precisely the state of the regulation of AI, which is as yet conspicuously absent at the EU level. Apart from vaguely formulated recommendations, until very recently no legislative action specifically targeted AI. This has now changed: in the past months the European Commission has tabled two legislative proposals. One is the Artificial Intelligence Act (AIA), in the form of a directly applicable EU regulation that would form the general framework for AI development and utilisation in Member States. The other, titled the AI Liability Directive, would harmonise national liability regimes and the administration of evidence for proving fault by an AI manufacturer or operator, as well as causation between that fault and the damaging output (or lack of output) of the AI. Aside from these two instruments, which are likely to be widely followed in the future, national legislators have yet to react with specific norms of their own in the civil law field.

J. Sz.: What results did your research yield regarding the AIA?

E. V.: My research, and the project as a whole, found many problems that must be attended to both at the EU level and at the national level, and we also proposed some solutions to them. If I were to single out only the most interesting ones, I would mainly refer to the general approach taken in the AIA, the problem of legal personhood for AI, which in a sense haunted the entire development process of the AIA, and the general approach to gathering evidence in cases of AI liability proposed by the AI Liability Directive.

Regarding the approach the EU legislator took in the AIA, my colleagues and I found that it is not without fault. Very simply put, the EU legislator wants to prevent dangerous results produced by AI systems while encouraging the development of home-grown (EU-based) AI. This is a difficult task. In the AIA proposal a lot of emphasis is put on distinguishing between low-risk AI and high-risk AI, with the latter category including most applications of this technology that are actually important in the economy (from the selection of job applicants to the appraisal of creditworthiness, to robots and self-driving vehicles, as well as the possible future ‘AI judge'). This precautionary approach results in some restrictions for high-risk AI, including something called ‘logging by design', which aims to tackle one of the major problems of AI, so-called ‘opacity': the impossibility of determining why exactly the AI made the decision that it did. Some AI systems (specifically the neural networks used for machine learning, the most developed form of AI today) cannot always give precise reasons for their actions, and the AIA wants to impose a change in this. We found that this approach is imperfect when applied to proving that the AI made a mistake due to the fault of the manufacturer or the operator, because the expectations it imposes are neither very clear nor feasible as technology stands, which may hinder the development of new technologies.
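To make the idea of ‘logging by design' a little more concrete, the following is a minimal, purely illustrative sketch of what such a requirement might translate into in software. The AIA does not prescribe any concrete interface, so the class, method and field names below are hypothetical: the point is simply that every automated decision is recorded together with its inputs and the model version, so that the decision can later be audited even if the model itself remains opaque.

# Hypothetical illustration only; the AIA prescribes no concrete API.
# A decision system that keeps an audit trail of every automated decision.
import json
import uuid
from datetime import datetime, timezone

class LoggedDecisionSystem:
    """Wraps an opaque model and appends one audit record per decision."""

    def __init__(self, model, model_version, log_path="decisions.log"):
        self.model = model              # assumed: any object with a predict() method
        self.model_version = model_version
        self.log_path = log_path

    def decide(self, applicant_features):
        # The prediction step itself may be opaque (e.g. a neural network).
        decision = self.model.predict(applicant_features)
        record = {
            "decision_id": str(uuid.uuid4()),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": applicant_features,   # assumed JSON-serialisable
            "output": decision,
        }
        # Append the record so the decision can be reconstructed and audited later.
        with open(self.log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return decision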

Also, during the development of the AIA, the abstract possibility of giving AI some form of legal personality was proposed and actually made its way into the first draft of the Act. I analysed this as well: the so-called ‘technological person' problem. The proposal generated a lot of debate among legal scholars, some being for and some ardently against this solution. I decided to look at the problem from the perspective of what is required for someone to be a ‘responsible' person under the law. I found that for legal personality to be granted it is not enough for an entity to have a patrimony (assets and liabilities) and to have disposal over that patrimony. These are the criteria mostly looked at by proponents of technological personality. However, today liability for damage caused by a legal person ultimately affects a human being and causes a feeling of guilt in that human (as well as suffering through having to pay damages), which in turn discourages further violations of the law. AI, being unable to feel emotions such as guilt or to suffer, in my view cannot be qualified as a ‘person'. This is even more so since AI today mostly acts based on information fed to it by a human ‘minder' and has a limited ability for autonomous decision-making outside its specific scope. Liability for damage caused by a legal person AI would also affect human stakeholders who are themselves unable to influence or even decode the way the AI made its decisions (again the problem of ‘opacity'). Finally, intentionally under-funding a legal person AI would make it impossible for injured parties to receive sufficient compensation. All this would make granting AI legal personality unfeasible. The current state of the AIA proposal has eliminated the possibility of legal personality for AI, which, based on the arguments I just presented, I consider to be the right approach for the regulation also in the future. The argument basing the effects of liability on its effects on the psyche of the liable person, in particular, has not really been explored in the literature before.


J. Sz.: Are we likely to see an AI judge in the courtroom anytime soon? I'm also asking this from the perspective of attorneys at law...

E. V.: My short answer would be: likely not in the near future... The AIA recitals do speak about this possibility and about it being a high-risk form of AI. I examined the question from several perspectives. The first is, again, opacity. A human judge is, and indeed for a fair trial to take place he or she must be, able to give reasons for the decision rendered. This obligation of the court is also strictly linked to the human dignity of the person standing trial. The opacity of an AI judge would apparently make this impossible. This opacity has two sources. First, the software and hardware used will likely be proprietary (owned by a private company, subject to copyright), which makes the whole thing non-transparent from the outset. The other is that AI, as technology stands, makes decisions in a way that is inherently non-transparent even to itself or to its operators. The problem with giving even correct court decisions without any reasoning is that no judicial review (another right sometimes recognised as fundamental) can be exercised against them on their merits. How can judicial review be exercised without knowing how the decision under review was reasoned? The only way we could have an AI judge is if we gave up treating some elements of the fair trial and of human dignity as fundamental rights.

J. Sz.: How does the EU proposal for liability for damage caused by AI deal with this problem of opacity, according to the results of your research?

E. V.: The solution the EU legislator found for facilitating liability for AI in the AI Liability Directive is imperfect in several respects, one of them being that it tends to rely on presumptions to resolve the opacity issue.

J. Sz.: How do these presumptions work, and what are the main problems you found with them?

E. V.: First I have to talk a bit about liability. There were two forms of civil liability the EU legislator could choose from: fault-based liability for damage (where we must prove misconduct by the party causing the damage, the so-called ‘tortfeasor', and also that the misconduct caused the damage) and strict liability (where it is enough to prove that the damage was caused by something or someone, such as a machine, an animal, an employee or an AI, under the tortfeasor's control). Strict liability would have been much better suited to damage caused by AI, but some (not all) EU Member States favour fault-based liability in their legal systems, and the EU legislator also thought that strict liability would discourage AI development, so fault-based liability was the implicit choice in the AI Liability Directive.

This means that in cases of damage caused by AI both fault and causation must be proven. The AI Liability Directive addresses this by imposing two presumptions: a presumption of fault by the AI operator (and sometimes others) if it failed to provide evidence previously requested by the party who suffered the damage or by the court, and a presumption of causation between the fault and the damaging output of the AI if the AI operator (or others) did not respect the provisions of the AIA, including those intended to avoid opacity.

The problem with this solution is that it creates what is sometimes referred to as a ‘Rube Goldberg machine', that is, a very complicated system for a very simple task. For any of the presumptions to work, the party who suffered the damage must first try to obtain evidence from the operator of the AI and, if refused, must ask the court for such evidence. Only if the AI operator still does not comply may the court apply the presumption of fault. This system of gathering evidence is worse than some of the systems already used in Member States, making liability for AI harder to establish, as those Member States will have to harmonise their laws with a solution inferior to the one used in their own legal systems. Also, the presumption of causation (between fault and damage) is very difficult to operate in practice. I therefore proposed during the research that these solutions be reviewed and that the strict liability model be reconsidered, along with compulsory insurance for damage caused by certain types of AI (in line with some earlier proposals made during the process of developing the AI Liability Directive).

J. Sz.: Professor Veress, thank you very much for summing up your contribution to what is likely to be important new research on AI and civil law. I'd like to wish you a merry Christmas!

E. V.: Thank you! I wish the same to you and all the readers!

 
