67024

Author(s):

Glushchenko A.I., Petrov V.A., Lastochkin K.A.

Number of authors:

3

Publication details

Publication type:

Conference paper

Title:

Method of real time calculation of learning rate value to improve convergence of neural network training

ISBN/ISSN: 

0302-9743

DOI: 

10.1007/978-3-030-61401-0_10

Conference:

  • 19th International Conference on Artificial Intelligence and Soft Computing, ICAISC 2020

Source:

  • Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume:

Vol. 12415

City:

  • Zakopane, Poland

Publisher:

  • Springer

Year of publication:

2020

Pages:

103-113

Abstract
This research addresses the problem of correct initialization and subsequent correction of a neural network's learning rate, one of the main hyperparameters affecting the convergence rate of the training process. Known techniques such as time-based decay, step decay, and exponential decay initialize the learning rate manually and then decrease it in proportion to some value. In contrast, this paper proposes to focus on the excitation level of a regressor, i.e. the output amplitude of the previous network layer. Formulas based on the recursive least squares method are derived to calculate the learning rate for each network layer, and their convergence is proved. With them, the initial learning rate can be chosen arbitrarily, and the rate can not only decrease but also increase when the regressor value becomes lower. Experiments are conducted on an image recognition task using multilayer networks and the MNIST database. For networks of different structures, the proposed method significantly reduces the number of training epochs compared with the backpropagation method with a constant learning rate.
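The abstract's core idea, a per-layer learning rate driven by the regressor's excitation level via recursive least squares, can be sketched as a scalar RLS-style gain update. This is an illustrative assumption, not the paper's exact derivation: the function name, the forgetting factor `lam`, and the closed-form update below are all hypothetical.

```python
import numpy as np

def rls_learning_rate(p_prev, regressor, lam=0.99):
    """One RLS-style update of a scalar learning rate (illustrative sketch).

    p_prev    -- learning rate from the previous step (initial value arbitrary)
    regressor -- output vector of the previous network layer
    lam       -- forgetting factor in (0, 1]; lam < 1 lets the rate grow again
    """
    excitation = float(np.dot(regressor, regressor))  # ||x||^2, regressor amplitude
    # High excitation shrinks the rate; low excitation (with lam < 1) raises it.
    return p_prev / (lam + p_prev * excitation)
```

With `lam = 0.99`, a strongly excited regressor drives the rate down in one step, while a near-zero regressor lets it recover by roughly a factor of `1/lam`, mirroring the abstract's claim that the rate can both decrease and increase depending on the regressor value.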

Bibliographic citation:

Glushchenko A.I., Petrov V.A., Lastochkin K.A. Method of real time calculation of learning rate value to improve convergence of neural network training // Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Zakopane, Poland: Springer, 2020. Vol. 12415. pp. 103-113.