Cognitivism as the Basis of Artificial Intelligence

Abstract

The article examines the main issues of cognitivism as the basis of artificial intelligence (AI) in a modern philosophical interpretation. A classification of AI systems is given according to the level of cognitivism of their basic functions, and the evolution of the cognitive capabilities of AI is considered. The problems of predicting the negative impact of AI on society are raised. The article highlights the main cognitive distortions that can arise when AI is used in research, chiefly the illusion of research breadth. The authors provide recommendations for researchers and editors of academic journals on the competent use of AI in scientific experiments. The work also raises the issue of trust in the cybersecurity of AI systems. Finally, the authors examine the hypothesis that chatbots possess consciousness and conclude unambiguously that they do not.

About the authors

Vladimir A. Artamonov

International Academy of Information Technology (IAIT)

Email: artamonov@itzashita.ru
ORCID iD: 0009-0001-4959-3818
Doctor of Engineering, Professor, Full Member of IAIT
Minsk, Belarus

Elena V. Artamonova

International Academy of Information Technology (IAIT)

Email: admin@itzashita.ru
ORCID iD: 0000-0002-7591-6465
Candidate of Engineering, Member of IAIT
Minsk, Belarus

Alexandr S. Milakov

Missoffdesign Studio

Email: 9985585@gmail.com
Project Manager / Information Security Specialist
Moscow, Russia


This work is licensed under a Creative Commons Attribution 4.0 International License.