Creation and application of artificial intelligence for public purposes: a comparative legal analysis.
- Authors: Atabekov A.R.
- Issue: No 6 (2023)
- Pages: 59-68
- Section: Articles
- URL: https://ogarev-online.ru/2454-0706/article/view/372927
- DOI: https://doi.org/10.7256/2454-0706.2023.6.40848
- EDN: https://elibrary.ru/IIGGCY
- ID: 372927
Abstract
The article presents a comparative analysis of existing approaches to the compliance of artificial intelligence (AI) designed for public purposes in foreign countries and in Russia with current national strategies and regulatory approaches. The study identifies the basic problems in the field of transparency of AI decision-making; specifies challenges of an implicit regulatory nature for AI in the public sphere that arise from the technical design choices made by developers of AI systems; proposes theoretical and practical scenarios involving the use of artificial intelligence that does not comply with the principles of AI design based on fundamental legal norms; and outlines possible compensatory legal measures to ensure the safe integration of artificial intelligence into the Russian public sphere. The subject of the study is the influence of the design of artificial intelligence on its subsequent application in the public sphere. The object of the study comprises the normative documents, recommendations and other instruments regulating the application of artificial intelligence to public legal relations in Russia and foreign countries, as well as judicial practice, academic publications and analytical reports on the issues under study. The research methodology integrates a set of modern philosophical, general scientific and special scientific methods of cognition, including the dialectical, systemic, structural-functional, hermeneutical, comparative legal and formal legal (dogmatic) methods. Special emphasis is placed on a comparative legal examination of the problems of designing artificial intelligence and of its subsequent application within the public law field, viewed in the context of the problems inherent in the basic stage of its creation; potential scenarios for regulating responsibility for AI actions are then identified. The research results in a set of measures that can be applied in the legislative and law enforcement practice of the relevant authorities integrating artificial intelligence into the sphere of public relations in Russia, as well as in the scientific field, in order to determine subsequent vectors for minimizing AI bias resulting from incorrect technology design in violation of basic legal constructs.