Two-Point Step Size Gradient Method for Solving a Deep Learning Problem
- Authors: Todorov T.D.¹, Tsanev G.S.²
- Affiliations:
- ¹ Department of Mathematics and Informatics, Technical University
- ² Department of Computer Systems and Technology, Technical University
- Issue: Volume 30, No. 4 (2019)
- Pages: 427-438
- Section: Article
- URL: https://ogarev-online.ru/1046-283X/article/view/247956
- DOI: https://doi.org/10.1007/s10598-019-09468-5
- ID: 247956
Abstract
This paper is devoted to an analysis of the rate of deep belief learning by multilayer neural networks. In designing neural networks, many authors have applied the mean field approximation (MFA) to establish that the states of neurons in hidden layers are active. To study the convergence of the MFA, we transform the original problem into a minimization problem. The object of investigation is the Barzilai–Borwein method for solving the resulting optimization problem. The essence of the two-point step size gradient method is its variable steplength, and the appropriate steplength depends on the objective functional. Original steplengths are obtained and compared with the classical steplength. Sufficient conditions for the existence and uniqueness of a weak solution are established. A rigorous proof of the convergence theorem is presented. Various tests with different kinds of weight matrices are discussed.
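For orientation, the sketch below illustrates the classical Barzilai–Borwein (BB1) steplength α_k = (sᵀs)/(sᵀy), with s = x_k − x_{k−1} and y = ∇f(x_k) − ∇f(x_{k−1}), applied to a strictly convex quadratic. It is a generic NumPy illustration under assumed names and tolerances; it does not reproduce the original steplengths derived in the paper for the deep-learning functional.

```python
import numpy as np

def barzilai_borwein(grad, x0, alpha0=1e-3, tol=1e-8, max_iter=500):
    """Minimize a smooth function with the two-point step size
    (Barzilai-Borwein) gradient method.

    The first step uses the fixed steplength alpha0; afterwards the
    classical BB1 steplength  alpha_k = (s^T s) / (s^T y)  is used,
    where s = x_k - x_{k-1} and y = grad(x_k) - grad(x_{k-1}).
    All defaults here are illustrative assumptions.
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev            # plain gradient step to start
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = x - x_prev                       # change in iterates
        y = g - g_prev                       # change in gradients
        denom = s @ y
        # Fall back to the fixed steplength if s^T y is (near) zero.
        alpha = (s @ s) / denom if abs(denom) > 1e-16 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x

# Usage on f(x) = 0.5 x^T A x - b^T x, whose gradient is A x - b.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
x_star = barzilai_borwein(lambda x: A @ x - b, x0=np.zeros(2))
print(x_star, np.linalg.solve(A, b))        # the two should agree
```

On a quadratic, the BB1 steplength is a Rayleigh-quotient estimate of an inverse eigenvalue of A, which is why the method often converges much faster than steepest descent with a fixed steplength despite using only gradient information.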
About the authors
T. Todorov
Department of Mathematics and Informatics, Technical University
Corresponding author
Email: t.todorov@yahoo.com
Bulgaria, Gabrovo
G. Tsanev
Department of Computer Systems and Technology, Technical University
Email: t.todorov@yahoo.com
Bulgaria, Gabrovo
