
Training multilayer perceptrons Parameter by Parameter

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In this paper, a new fast training algorithm for multilayer perceptrons (MLPs) is presented. The new algorithm, named the Parameter By Parameter Optimization Algorithm (PBPOA), builds on the idea of the Layer By Layer (LBL) algorithm. The input errors of the output layer and the hidden layer are taken into account, and four classes of solution equations for the network parameters are derived. The presented algorithm does not require computing the gradient of the error function at all: in each iteration step, each weight or threshold is optimized directly, one by one, with all other variables held fixed. The effectiveness of the algorithm is demonstrated on two benchmarks, where it converges faster than both the backpropagation algorithm with momentum (BPM) and the conventional LBL algorithm.
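The paper's exact solution equations are not given in the abstract, but the core idea of optimizing one parameter at a time with the rest fixed can be sketched. The following is an illustrative example only, not the paper's PBPOA: for an MLP with a linear output layer, the squared error is quadratic in each individual output weight, so each weight has a closed-form coordinate-wise optimum and no gradient needs to be computed.

```python
import numpy as np

# Illustrative sketch of parameter-by-parameter optimization (an assumption,
# not the paper's derivation): with a linear output layer, the sum-of-squares
# error is quadratic in each single output weight, so that weight can be set
# to its exact minimizer while all other parameters are held fixed.

rng = np.random.default_rng(0)

# Toy data: one hidden layer of sigmoid units, linear output unit.
X = rng.normal(size=(50, 3))          # 50 samples, 3 inputs
t = rng.normal(size=50)               # scalar targets
W_h = rng.normal(size=(3, 4))         # hidden-layer weights (held fixed here)
H = 1.0 / (1.0 + np.exp(-X @ W_h))    # hidden activations, shape (50, 4)
w_o = rng.normal(size=4)              # output weights, optimized one by one

def optimize_weight_j(H, t, w_o, j):
    """Closed-form update of output weight j with all others fixed.

    Residual excluding weight j: r = t - H @ w_o + H[:, j] * w_o[j].
    E(w) = ||r - H[:, j] * w||^2 is quadratic in w, minimized at
    w* = (H[:, j] @ r) / (H[:, j] @ H[:, j]).
    """
    r = t - H @ w_o + H[:, j] * w_o[j]
    return (H[:, j] @ r) / (H[:, j] @ H[:, j])

def sse(H, t, w_o):
    e = t - H @ w_o
    return float(e @ e)

# One parameter-by-parameter sweep over the output weights.
before = sse(H, t, w_o)
for j in range(w_o.size):
    w_o[j] = optimize_weight_j(H, t, w_o, j)
after = sse(H, t, w_o)
print(after <= before)  # each coordinate update cannot increase the error
```

Because every update is an exact minimization along one coordinate, the training error is monotonically non-increasing across a sweep, which mirrors the abstract's claim of direct one-by-one optimization without gradient computation.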

Original language: English
Title of host publication: Proceedings of 2004 International Conference on Machine Learning and Cybernetics
Pages: 3397-3401
Number of pages: 5
State: Published - 2004
Event: Proceedings of 2004 International Conference on Machine Learning and Cybernetics - Shanghai, China
Duration: 26 Aug 2004 - 29 Aug 2004

Publication series

Name: Proceedings of 2004 International Conference on Machine Learning and Cybernetics
Volume: 6

Conference

Conference: Proceedings of 2004 International Conference on Machine Learning and Cybernetics
Country/Territory: China
City: Shanghai
Period: 26/08/04 - 29/08/04

Keywords

  • Multilayer Perceptrons
  • Parameter By Parameter Optimization Algorithm (PBPOA)
  • Training algorithm

