Community Research and Development Information Service - CORDIS

Abstract

Theoretical and practical aspects of Multi-Layer Perceptron (MLP) learning methods from the Bayesian perspective were first addressed by David MacKay in 1991. In this framework, the learning algorithm is an iterative process that alternates between optimizing the weights and estimating the hyperparameters. Moreover, trained MLPs that generalize better have higher evidence, a probability that quantifies how well an MLP is adapted to a problem. This paper proposes a new methodology that computes the evidence during learning for different MLP configurations. These estimates, together with confidence intervals on a test set, are used to rank MLP configurations and then to stop learning. The learning strategy is illustrated on classification problems.
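The record does not reproduce the paper's algorithm; the following is only a minimal sketch of MacKay's evidence framework that the abstract refers to. A linear-in-parameters Gaussian model stands in for the MLP so that every step has a closed form, and all variable names (alpha, beta, gamma, log_evidence, ...) are illustrative assumptions rather than taken from the paper.

```python
# Sketch of the evidence framework (after MacKay, 1992): alternate
# (1) weight optimization with (2) hyperparameter re-estimation, then
# (3) evaluate the log evidence of the trained model.
import numpy as np

rng = np.random.default_rng(0)
N, W = 50, 3                                  # data points, weights
X = rng.normal(size=(N, W))                   # design matrix
t = X @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.normal(size=N)

alpha, beta = 1.0, 1.0                        # prior / noise hyperparameters
H = X.T @ X                                   # data Hessian (constant here)
for _ in range(20):
    # (1) Optimize weights for fixed (alpha, beta); in a real MLP this
    #     step would be gradient-based training.
    A = beta * H + alpha * np.eye(W)          # Hessian of regularized error
    w = beta * np.linalg.solve(A, X.T @ t)
    E_W = 0.5 * w @ w                         # weight-decay error
    E_D = 0.5 * np.sum((t - X @ w) ** 2)      # data misfit
    # (2) Re-estimate hyperparameters via gamma, the effective number
    #     of well-determined parameters.
    lam = beta * np.linalg.eigvalsh(H)
    gamma = np.sum(lam / (lam + alpha))
    alpha = gamma / (2 * E_W)
    beta = (N - gamma) / (2 * E_D)

# (3) Refit at the converged hyperparameters and evaluate the log
#     evidence ln p(D | alpha, beta); configurations with higher
#     evidence are expected to generalize better.
A = beta * H + alpha * np.eye(W)
w = beta * np.linalg.solve(A, X.T @ t)
E_W, E_D = 0.5 * w @ w, 0.5 * np.sum((t - X @ w) ** 2)
_, logdetA = np.linalg.slogdet(A)
log_evidence = (-alpha * E_W - beta * E_D - 0.5 * logdetA
                + 0.5 * W * np.log(alpha) + 0.5 * N * np.log(beta)
                - 0.5 * N * np.log(2 * np.pi))
print(f"alpha={alpha:.3f}  beta={beta:.3f}  log evidence={log_evidence:.2f}")
```

The linear stand-in is chosen only because its posterior is exactly Gaussian, so the evidence integral is exact; for an MLP the same quantities are computed from a Gaussian approximation around the trained weights.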

Additional information

Authors: PERROTTA D, JRC Ispra (IT)
Bibliographic Reference: Paper presented: European Symposium on Artificial Neural Networks, Brugge (BE), April 22-24, 1998
Availability: Available from (1) as Paper EN 41271 ORA
Record Number: 199810476 / Last updated on: 1998-05-05
Category: PUBLICATION
Original language: en
Available languages: en