Fast Back-Propagation Learning Using Steep Activation Functions and Automatic Weight Reinitialization
Author(s): Tai-Hoon Cho; Richard W. Conners; Philip A. Araman
Source: Proceedings, 1991 IEEE International Conference on Systems, Man, and Cybernetics. pp. 1587-1592.
Publication Series: Miscellaneous Publication
Description
In this paper, several back-propagation (BP) learning speed-up algorithms that employ the "gain" parameter, i.e., the steepness of the activation function, are examined. Simulations show that increasing the gain seemingly increases the speed of convergence and that these algorithms can converge faster than the standard BP learning algorithm on some problems. However, these algorithms may also suffer from increased instability, i.e., they frequently fail to converge within a finite time. One potential cause of this instability is an inappropriate choice of initial weights. To overcome instability from this cause, it is proposed that the weights be automatically reinitialized whenever convergence becomes "very slow" due to a local minimum or premature saturation. In the simulations performed, BP algorithms with a larger initial gain (around 2 or 3) and automatic weight reinitialization converged much faster and were more stable than algorithms employing the same gain without automatic weight reinitialization. The simulations covered a diverse set of problems, including exclusive-or (XOR), encoder, and parity problems.
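The abstract's two ingredients can be sketched in code: a sigmoid steepened by a gain factor, and a watchdog that reinitializes the weights when the error stops improving. The sketch below is an illustration only, not the authors' implementation; all hyperparameters (gain 2.0, learning rate, stall window, error threshold) are assumptions chosen for a small 2-2-1 XOR network.

```python
import math
import random

def train_xor(gain=2.0, lr=0.5, max_epochs=50000, patience=300, tol=1e-4, seed=0):
    """Train a 2-2-1 network on XOR with a gain-scaled sigmoid, reinitializing
    the weights when convergence stalls (a sketch of the paper's automatic
    weight reinitialization idea; hyperparameters are illustrative)."""
    rng = random.Random(seed)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def sigma(x):
        # Steep activation: sigmoid of (gain * net input).
        return 1.0 / (1.0 + math.exp(-gain * x))

    def init():
        # Small random weights; each hidden/output unit has 2 weights + a bias.
        return ([[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)],
                [rng.uniform(-1, 1) for _ in range(3)])

    W1, W2 = init()
    best, stall = float('inf'), 0
    epochs_used = max_epochs
    for epoch in range(max_epochs):
        sse = 0.0
        for (x1, x2), t in data:
            # Forward pass.
            h = [sigma(w[0] * x1 + w[1] * x2 + w[2]) for w in W1]
            y = sigma(W2[0] * h[0] + W2[1] * h[1] + W2[2])
            e = t - y
            sse += e * e
            # Backward pass: d/du sigma(gain*u) = gain * y * (1 - y),
            # so the gain multiplies every delta.
            dy = e * gain * y * (1 - y)
            dh = [dy * W2[i] * gain * h[i] * (1 - h[i]) for i in range(2)]
            for i in range(2):
                W2[i] += lr * dy * h[i]
            W2[2] += lr * dy
            for i in range(2):
                W1[i][0] += lr * dh[i] * x1
                W1[i][1] += lr * dh[i] * x2
                W1[i][2] += lr * dh[i]
        if sse < 0.01:
            epochs_used = epoch
            break
        # Automatic reinitialization: if the error has not improved by at
        # least `tol` for `patience` epochs, assume a local minimum or
        # premature saturation and restart from fresh random weights.
        if best - sse > tol:
            best, stall = sse, 0
        else:
            stall += 1
            if stall >= patience:
                W1, W2 = init()
                best, stall = float('inf'), 0

    def predict(x1, x2):
        h = [sigma(w[0] * x1 + w[1] * x2 + w[2]) for w in W1]
        return sigma(W2[0] * h[0] + W2[1] * h[1] + W2[2])

    return epochs_used, predict
```

Note how the gain multiplies both the output and hidden deltas: a larger gain effectively scales the step size, which is what speeds convergence but also makes bad initial weights saturate faster, motivating the reinitialization safeguard.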
- You may send email to firstname.lastname@example.org to request a hard copy of this publication.
- (Please specify exactly which publication you are requesting and your mailing address.)
- We recommend that you also print this page and attach it to the printout of the article, to retain the full citation information.
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Citation
Cho, Tai-Hoon; Conners, Richard W.; Araman, Philip A. 1992. Fast Back-Propagation Learning Using Steep Activation Functions and Automatic Weight Reinitialization. Proceedings, 1991 IEEE International Conference on Systems, Man, and Cybernetics. pp. 1587-1592.