Critical aspects of the use of artificial intelligence in the legal profession

Abstract

What is the current and potential relationship between automated processing tools for legally relevant data and the performance of the legal profession? Is there potential for synergy, integration, or alternation between classical human legal activities and procedurally designed algorithmic processes? What definitions and control mechanisms would be required along the potential avenues of inquiry into this subject, given the need to comply with existing principles and guarantees?

Full text

Table of contents:

  1. Artificial intelligence as an aid and a means of creation
  2. Authorship of informatically drafted court documents
  3. Is the human defensor outdated?
  4. Critical issues and rules of accountability

1. Regarding the complex relationship between artificial intelligence and the legal profession, some preliminary questions can be advanced.

Firstly, it must be determined whether automated processing tools aimed at the interpretation of law, which can compare normative data and jurisprudential decisions, can be of assistance to those in the legal profession. If this is the case, it is necessary to identify the conditions under which they should be used. Moreover, if implemented, should legal algorithms be regarded solely as supplementary activities or even as substitute actions for the work of lawyers? In light of these considerations, what assumptions should be made about the extent to which it is permissible for part of legal activity to be computerized? Thus, the question arises whether and to what extent institutional control mechanisms should be implemented with a view to ensuring compliance with existing procedural principles and substantive legal guarantees.

In attempting to provide at least partial answers to these questions, from a reconstructive and exploratory perspective, it can be posited that the professional activities performed by attorneys are, at least in part, defined by their technical foundations and by the creative processes inherent in the practice of law. If the role of the lawyer is to present a particular party’s point of view, thus contributing to the trial dialectic, and to compare that perspective, and have it compared, with even opposing ones contributed by other parties, it is essential that the arguments put forward, whether directly or indirectly based on documentation, be subjected to a rigorous and systematic examination. This examination must take place within the formal legal system and within the decisional framework shaped by the principles of jurisprudence. This interpretive activity, which encompasses both facts and norms, cannot be reduced to a mere act of compilation or verification; rather, it should be regarded as an activity in itself, one that is inherently and manifestly creative. It is crucial to underscore the significance of this assumption, particularly in light of the potentially disruptive implications that may arise from “autonomous” algorithmic computing in new application areas [1]. Accordingly, it is important to highlight that the application of artificial intelligence to the field of jurisdiction is also changing the complex activities carried out by lawyers, thereby affecting the idea and practices of creativity in the legal field.

It is important to note that the gradual consolidation of a legaltech approach to the role played by lawyers can be read as the result of two factors: firstly, the availability of statistical, social, economic, and commercial data, as well as very large bodies of legal and judicial documentation in digital format; and secondly, the usability of applied mathematics and information science techniques, combined with the development of machines of exponentially growing computational power.

2. These findings raise significant questions regarding the recognition of authorship in works created with the aid of artificial intelligence (AI), with respect to which some critical points should be considered. Where a lawyer employs such software to generate legal documents, the human contribution must be duly acknowledged, by recognizing the unique and original elements that human input brings to the specification and personalization of the software’s output. However, under the prevailing norms of copyright, the software itself or, even more so, its programmers may be granted legal protection, inherent in the recognition of ownership of the intellectual work produced. It is also necessary at this juncture to consider the desirability (or necessity) of some sort of contractual agreement between the lawyer, the developer of the technologies in question, and the provider of the software, setting out the conditions under which the program may be used and the authorship of the works generated by its use. In any case, if such algorithmic procedures were to be classified as mere tools, these issues would be largely overcome at the outset. Nevertheless, it cannot be denied that, in the “exciting” frenzy of maximizing the benefits of artificial intelligence in the legal sphere [2] (even at the hypothetical expense of the public maintenance of professional integrity requirements1), such use may come to determine most of the documentary and even procedural content of the required legal activity. At that point, the creative output may be perceived as primarily attributable to the formulation of the questions posed and to the subsequent verification of the electronically expressed findings. Beyond the critical capacity that is consistently present in human acting and thinking (and absent in the artificial, which is flattened into prediction, unable to register implausibility and, above all, unable to evaluate sharply the ethicality of something), it is precisely here that the second locus of full recognition of authorship (of being the true author) of the professional intellectual work can be traced. It is important to note that there is a social risk associated with the perception that these activities can be carried out by anyone: such a perception can obscure the role of responsibility in the recognition of professional expertise and in the professionalization of the activity itself. A certain objectification of responsibility is evident in the reliance on instrumentation that is presented as both powerful and entirely reliable.

3. These issues thus pertain, in some ways, to the very activity of the lawyer carried out within the legislative and jurisprudential framework, and to the progressive substitution of many of the lawyer’s tasks.

In light of the technological opportunities that are currently available, it could be argued that the role of the lawyer is rendered superfluous2, because the lawyer’s work appears to be merely a form of packaging, or even a misappropriation of authorship, of information that already exists as data and has been transformed into a format that can be used to make decisions. In this scenario, responsibility would fall only upon its formal holders and objective referents, due to the profiles of guilt arising from the only theoretically implementable choice and control over what is algorithmically processed; the first and last originator of the proposed legal acts, however, would not be included in this designation.

It should be noted that distrust of fallible human beings and the tendency to view nonhuman entities, including machines, as inherently infallible also play a role in this context.

Consequently, it may be argued that human activity would become almost entirely dependent on computer processing power, to which a capacity for true creativity, and not mere generativity, would be attributed. One might then be inclined to consider the role of the lawyer superfluous, and instead allow a non-legal official to input the data to be processed. At this juncture, it is possible to posit that the administration of justice could be carried out in an “impersonal” manner, no longer necessitating the involvement of all the practitioners of the process, including the judges themselves. It is evident that this outcome is the consequence of a simplistic perspective that fails to acknowledge the nuanced, emotionally charged work carried out by legal professionals and the inherently human aspect of judicial decision-making, which is both law-compliant and socially creative.

Furthermore, if the linguistic and intellectual distinction between humans and computers lies in the former’s ability to utilize finite tools in an infinite manner, while the latter employ finite combinations of seemingly infinite elements, the misunderstanding arises from the assumption that a productive activity of the new can be replaced, point for point, by an activity of comparison, assonance, and arrangement of the preexisting (an assumption resting on a flawed theoretical conception of language and knowledge) [3]. In this context, rather than replicating human thought in its original dynamics, uncritical artificial intelligence invokes patterns of cognitive connection and, above all, the results of prior thought. Consequently, there is a risk that, by employing the statistical basis of judicial precedent in a simplistic manner, the creative element inherent in the evolution of law, which is derived from human thought, will be lost, thereby preventing the innovative effects, whether explicit or implicit, that are produced by jurisprudence [4]. Nevertheless, it is important to recognize that AI tools may reveal previously unidentified and underrepresented aspects that could be beneficial for legal professionals. Nonetheless, the potential risk, in the face of the advantages deriving from the automation of repetitive tasks and large-scale data analysis, may be precisely that of improperly mistaking the necessarily instrumental aid proposals3 coming from this sort of “code-lawyer” for the professional activity itself. This is all the more so considering the immediacy and orality that characterize the procedural rite, especially the criminal one, as well as the constitutionally inescapable human work involved in establishing principles.

4. It is also essential to highlight some critical issues pertaining to the fundamental assumptions underlying the use of the instruments in question.

Firstly, the technical and specialized expertise that lies at the origin of automation and artificial intelligence devices (whether symbolic, statistical, or generative) and that is enhanced through their design and development is held asymmetrically, and largely oligopolistically, by market actors. It would be prudent to consider the potential involvement of the institutional bar in digital design, particularly in the context of “co-design” [5] for program settings that have a direct impact on the jurisdiction. This could involve defining the criteria of quality and security in the analysis and use of the socio-judicial information that becomes operational data, with the aim of ensuring the effectiveness of the guarantees formally provided4. It could also, to some extent, offset the trend towards enhancing work efficiency and reducing costs at the expense of extensive areas of privacy and confidentiality.

Furthermore, legal activity, which has traditionally been characterized by originality and, even more so, by the personalization of its content relative to the subjects it addresses, now encounters patterns of standardization in decisional procedures and a tendency to place their results in a median position. The outcomes in question should be carefully scrutinized and monitored by the bar, not least to ensure that they do not give rise to any potential misuse.

A related topic is the control that can and should be exercised over professional activities conducted through algorithmic tools of this kind5. Notwithstanding the hypothetical variety of ways in which such control could be implemented, it cannot be ruled out that computerized verification procedures could be used for this activity as well, making the control carried out on acts that are themselves the result of generative automation (at least in part) “automated” and generating an informational short circuit that would be difficult to resolve.

Certain cautions can be placed on these points. Indeed, at the European level, bar associations have drawn up guiding canons that are, at least in part, binding (and that bring back the need to reflect on possible integrations and modifications of the deontological codes in force in the various countries) concerning the use of artificial intelligence mechanisms in the legal profession. In June 2023, the New Technologies Commission of the European Bars Federation (Fédération des Barreaux d’Europe) drew up seven guidelines with the objective of ensuring the responsible and informed use of these technologies. In order to safeguard ethical parameters and protect client confidentiality, these guidelines address a number of specific concerns: understanding the technology in question, awareness of its inherent limitations, keeping abreast of relevant regulations, integration with human skills, respect for professional secrecy, protection of personal data, and transparent communication with clients. These indications, however, appear to address only some of the existing problems and do not provide a comprehensive understanding of the creative potential of such tools. It seems prudent to note, despite concerns about job security, that the use of artificial intelligence will not replace the lawyer’s professional judgment, critical capacity, and competence; this reiterates the inescapable specialized evaluative discretion required for resolving both technical and ethical issues. For automated processing systems, by contrast, respect for ethical standards can only be activated through prohibitions that, in effect, prevent any discussion of the matter.

However, it is important to note that the convenience of using certain tools in operational contexts may lead to concrete practices of slavish “accommodation” with respect to the results6, even in probabilistic terms, returned by generative algorithms. This is a matter that warrants careful consideration, particularly given the fact that artificial intelligence algorithms are capable of operating through decision-making processes that are “impenetrable” with respect to their full understanding and explanation. This raises significant concerns about the transparency and interpretability of their outputs, particularly in legal contexts where it is of paramount importance to fully understand the procedures and reasons that underpin any given decision.

Furthermore, it is commonly agreed that, in the event of potential liability, lawyers who utilize such systems do so on their own responsibility, in accordance with the instructions provided by the manufacturer, and without modifying the programs, altering their operation, or introducing different input data. It can be observed that this regulatory framework, rather than potentially relieving the lawyer of responsibility, seeks to hold the producers and suppliers of the relevant computer programs accountable. This may result in a limitation of the lawyer’s autonomy in organizing settings and in expanding the complexity of legal computation. While modifications to a program may indeed compromise its reliability, which gives the aforementioned rules a generic logic, it is nevertheless evident that these programs offer no guarantees of certainty in the face of the increasing constriction of the originality of forensic activity (which remains irreplaceable, even from an ethical standpoint, but is informatically constrained); instead, there is a shift towards ever broader delegations to computational rationality.

Nevertheless, the most crucial challenge lies in establishing procedures to assist a legal profession that confronts these novel developments with essential awareness but still questionable competence.

Even if one resolves, with regard to the distinction between civil law and common law legal systems, the distinction between deductiveness and inductiveness in the elaborative process of artificial intelligence (where only the former could theoretically reduce the risk of socio-cognitive bias), the question of defining and arranging the rules of legitimacy and proceduralism remains.

It can be argued that inductive processing shows a greater tendency to perpetuate distortions (and therefore potential injustices) than what can be deductively derived from the general principles of law and the legislation under consideration. However, it is important to recognize the inescapable aspect of the qualification of reality, which can be equally susceptible to distortion.

In light of these considerations, it seems reasonable to suggest that the aforementioned issues must also be viewed through the lens of a tendency to rely on legal processes of cognitive elaboration that are characterized by a substantial absence of classical responsibility. This is to say that such processes are founded upon a mechanical capacity to will, which is associated with an inability to intend, within an inclination to make technique the expression of a calculating thought that is divorced from emotions and feelings. This, in turn, serves as the engine of institutional functions.

 

1 See, for example, Vaciago G. (ed.). Intelligenza artificiale generativa e professione forense. La sperimentazione dell'Ordine degli Avvocati di Milano. Milano: Giuffrè Francis Lefebvre, 2024.

2 Refer to the “DoNotPay” program, an American start-up that defines itself as “the first robot lawyer in the world”: Roselli T. Debutta in tribunale l'avvocato robot: opportunità o rischio? // Il dubbio. 2023. January 11.

3 In this regard, there appears to be a considerable degree of expectation placed upon lawyers themselves, as reported by Censis, Cassa Forense, Rapporto sull'avvocatura 2024, Il passo della innovazione e una ripresa da consolidare, available at: https://www.censis.it/sites/default/files/downloads/RAPPORTO%20AVVOCATURA_2024_0.pdf, p.54 ss. (see Morelli C. Intelligenza artificiale: per il 58,7% degli avvocati è un'opportunità. https://www.altalex.com/documents/news/2024/05/09/intelligenza-artificiale-per-58-percento-avvocati-opportunita).

4 It is also noteworthy that the Italian bar's institutional representative body is engaged in deliberations regarding the potential establishment of a control system to certify the use of artificial intelligence applications in law firms (see: Si è aperto il G7 delle Avvocature a Roma sull'intelligenza artificiale, https://www.consiglionazionaleforense.it/web/cnf-news/-/24697-273).

5 It is worthwhile to recall the well-known case of the U.S. attorneys who were sanctioned for filing in court a pleading containing nonexistent court records, produced through their use of the “ChatGPT” program; the episode merits further reflection: Bechini U. L'intelligenza artificiale, i notai e l'avvocato Schwartz // Notariato. 2023. No 6. P. 610.

6 It is posited that speed and simplification in themselves represent vehicles of efficient reliability: cf. Varì D. A che servono gli avvocati, ora ci pensa Alexa… // Il dubbio. 2022. August 1.


About the authors

Michelangelo Pascali

University of Naples Federico II

Corresponding author.
Email: michelangelo.pascali@unina.it
ORCID iD: 0000-0003-3490-1651
https://www.docenti.unina.it/michelangelo.pascali

PhD, Associate Professor of “Sociology of law, deviance and social change” at the Department of Social Sciences, University of Naples Federico II, researcher for Interdisciplinary Research Laboratory on Mafia and Corruption (LIRNAC), member of the Bar Association of Naples. 

Italy, Naples

References

  1. Grimolizzi G. Lucilla Gatt: “In Italia finora non si è andati oltre l’uso della digitalizzazione del dato legale” // Il dubbio. 2023. October 23.
  2. Morelli C. Intelligenza artificiale. Essere avvocati nell’era di ChatGPT. Santarcangelo di Romagna: Maggioli, 2024.
  3. Chomsky N., Roberts I., Watumull J. The False Promise of ChatGPT // The New York Times. 2023. March 8.
  4. Grimolizzi G. L’IA non va subita ma governata, o il linguaggio giuridico perderà la propria ricchezza // Il dubbio. 2023. October 23.
  5. Piana D., Viciconte G. Un’intelligenza artificiale attenta ai diritti? Sì, se controllata dagli avvocati // Il dubbio. 2023. October 23.

