Please use this identifier to cite or link to this item:
|Type:||Conference paper|
|Title:||Complement To The Back-propagation Algorithm: An Upper Bound For The Learning Rate|
|Author:||Cerqueira, Jes J.F.; Palhares, Alvaro G.B.; Madrid, Marconi K.|
|Abstract:||A convergence analysis for learning algorithms based on gradient optimization methods is presented and applied to the back-propagation algorithm. Carried out using Lyapunov's second method, the analysis supplies an upper bound for the learning rate of the back-propagation algorithm. This upper bound is useful for guiding parameter adjustment in the back-propagation algorithm, whose convergence is otherwise handled via empirical methods. The solution presented is based on a well-known stability criterion for nonlinear systems.|
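The abstract's central idea, that gradient-based learning converges only when the learning rate stays below a stability threshold, can be illustrated on a simple quadratic loss. This is a hedged sketch, not the paper's Lyapunov derivation: for gradient descent on f(w) = ½ wᵀAw, a classical result says the iteration is stable only if the rate η is below 2/λ_max(A), where λ_max is the largest eigenvalue of A. The matrix `A` and the `run` helper below are illustrative choices, not from the paper.

```python
import numpy as np

# Illustrative quadratic loss f(w) = 0.5 * w @ A @ w with a symmetric
# positive-definite "Hessian" A; its gradient is A @ w.
A = np.diag([1.0, 4.0])
lam_max = np.max(np.linalg.eigvalsh(A))
eta_bound = 2.0 / lam_max  # classical upper bound on the learning rate

def run(eta, steps=100):
    """Run plain gradient descent and return the final parameter norm."""
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - eta * (A @ w)  # gradient step on the quadratic loss
    return np.linalg.norm(w)

print(run(0.9 * eta_bound))  # below the bound: iterates shrink toward 0
print(run(1.1 * eta_bound))  # above the bound: iterates diverge
```

Running with a rate just under the bound drives the parameters toward zero, while a rate just above it makes them blow up, which is the qualitative behavior an upper bound on the learning rate is meant to guard against.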
|Publisher:||IEEE, Piscataway, NJ, United States|
|Appears in Collections:||Unicamp - Artigos e Outros Documentos|