Optimization of a recurrent neural network using automata with a variable structure
- Authors: Dimitrichenko D.P.
- Affiliations:
- Issue: No 4 (2023)
- Pages: 30-43
- Section: Articles
- URL: https://ogarev-online.ru/2454-0714/article/view/359429
- DOI: https://doi.org/10.7256/2454-0714.2023.4.69011
- EDN: https://elibrary.ru/FEIPTC
- ID: 359429
Abstract
The subject of this study is the set of structural properties shared by recurrent neural networks and stochastic automata whose distinguishing feature is purposeful behavior in dynamic environments. The required commonality of properties is revealed both in how these systems operate and in how they are trained (tuned). The author examines in detail the formalization of purposeful behavior, the designs of the automata under consideration, and a comparative analysis of those designs. From the revealed commonality of operation and the established one-to-one correspondence between the neurons of a fully connected recurrent neural network and the states of a probabilistic automaton with a variable structure, it follows that the structure of a tuned stochastic automaton can serve as a reference for the set of connections of the recurrent neural network. Even at the setup stage, this makes it possible to remove redundant states (neurons) and the connections between them, based on the parameters of the corresponding automaton. The research methodology consists in constructing a one-to-one correspondence between the neurons of a fully connected recurrent neural network and the internal states of an automaton with a variable structure, together with the transition probabilities that remain relevant after tuning. Under this correspondence, the transition probabilities of the automaton correspond to the connection weights between the neurons of the optimal configuration. The main conclusions of the study:
1. Comparing the structures of recurrent neural networks and automata with a variable structure makes it possible to exploit the advantages of a variable-structure automaton in solving the problem of expedient behavior in dynamic environments and to build a recurrent neural network on its basis.
2. The correspondence between the internal structure of a recurrent neural network and an automaton with a variable structure allows the trained recurrent neural network to be freed from redundant neurons and redundant connections already at the training stage.
3. Because, for nonlinear values of the learning rate, an automaton with a variable structure approaches the automaton with linear tactics that is optimal for the given conditions, a logical analysis of the structure of the resulting recurrent neural network becomes possible.
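The article itself contains no code; the following is a minimal sketch of the scheme outlined in the abstract, under explicit assumptions: a linear reward-penalty update rule for the variable-structure automaton, a fixed pruning threshold, and reachability from the start state as the criterion for removing redundant neurons. All class, function, and parameter names (VariableStructureAutomaton, automaton_to_rnn_weights, threshold) are illustrative and are not taken from the article.

```python
# Minimal sketch, not the author's implementation: a variable-structure
# automaton tuned by reward/penalty, whose transition matrix is then used
# as a reference structure for a recurrent network's connections.
import numpy as np


class VariableStructureAutomaton:
    """Stochastic automaton whose state-transition probabilities are
    adjusted by reward/penalty signals from the environment."""

    def __init__(self, n_states, learning_rate=0.1, seed=0):
        self.n = n_states
        self.lr = learning_rate
        self.rng = np.random.default_rng(seed)
        # Start from a uniform transition matrix (maximum uncertainty).
        self.P = np.full((n_states, n_states), 1.0 / n_states)
        self.state = 0

    def step(self, reward_fn):
        """Sample the next state and reinforce or penalize that transition."""
        nxt = self.rng.choice(self.n, p=self.P[self.state])
        row = self.P[self.state]
        if reward_fn(self.state, nxt):       # reward: strengthen the used transition
            row[nxt] += self.lr * (1.0 - row[nxt])
        else:                                # penalty: weaken the used transition
            row[nxt] *= 1.0 - self.lr
        row /= row.sum()                     # keep the row a probability vector
        self.state = nxt


def automaton_to_rnn_weights(P, start=0, threshold=0.05):
    """Use the tuned transition matrix as a reference for the recurrent
    connections: transitions below the threshold are dropped as redundant
    connections, and states unreachable from the start state through the
    remaining transitions are removed as redundant neurons."""
    W = np.where(P >= threshold, P, 0.0)
    reachable, frontier = {start}, [start]
    while frontier:                          # breadth-first reachability search
        s = frontier.pop()
        for t in map(int, np.flatnonzero(W[s])):
            if t not in reachable:
                reachable.add(t)
                frontier.append(t)
    keep = np.array(sorted(reachable))
    return W[np.ix_(keep, keep)], keep


# Toy usage: reward only transitions into a small subset of states, then
# read off the pruned weight matrix for the corresponding recurrent network.
aut = VariableStructureAutomaton(n_states=6)
for _ in range(5000):
    aut.step(lambda s, t: t in (0, 1, 2))
W, kept_neurons = automaton_to_rnn_weights(aut.P)
print(kept_neurons, W.shape)
```

In this sketch the tuned transition matrix plays the role that the abstract assigns to the automaton's structure: thresholding its entries removes redundant connections, and states the pruned automaton can no longer reach correspond to the redundant neurons that can be discarded before the recurrent network is built.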