Maximum entropy principle in Bayesian backpropagation
Weight decay and pruning are techniques frequently used to control the generalisation ability of neural network models. The Bayesian approach to parameter estimation and model comparison gives a probabilistic interpretation of weight decay in terms of a prior over the weights and defines a framework for choosing one model from a set of candidates. The choice of prior distribution is usually justified only heuristically, whereas an entropy optimisation principle such as maximum entropy (MaxEnt) should be preferred. This paper reviews the first part of MacKay's framework for backpropagation and reassesses his a priori hypotheses using MaxEnt.
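As a minimal illustration of the interpretation the abstract refers to (a sketch under assumed notation, not taken from the paper itself): a zero-mean Gaussian prior over the weights with precision alpha makes the negative log posterior equal to the data misfit plus an L2 penalty, so MAP estimation by gradient descent is exactly training with weight decay. The toy linear model, the values of alpha and beta, and all variable names below are illustrative assumptions.

```python
import numpy as np

# Sketch: Gaussian prior p(w) ∝ exp(-alpha/2 ||w||^2) over the weights
# turns MAP estimation into least squares plus an L2 penalty, i.e.
# weight decay. Toy linear "network": y = X @ w_true + noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=50)

alpha = 1.0    # prior precision = weight-decay strength (assumed value)
beta = 100.0   # noise precision (assumed value)

def neg_log_posterior_grad(w):
    # Gradient of beta/2 ||y - Xw||^2 + alpha/2 ||w||^2;
    # the alpha * w term is exactly the weight-decay force.
    return beta * X.T @ (X @ w - y) + alpha * w

# Plain gradient descent on the negative log posterior.
w = np.zeros(5)
lr = 1e-4
for _ in range(5000):
    w -= lr * neg_log_posterior_grad(w)

# Closed-form MAP solution for comparison:
# w_map = (beta X^T X + alpha I)^{-1} beta X^T y
w_map = np.linalg.solve(beta * X.T @ X + alpha * np.eye(5), beta * X.T @ y)
print(np.allclose(w, w_map, atol=1e-3))
```

In MacKay's evidence framework the hyperparameters alpha and beta are themselves inferred from the data rather than fixed by hand, which is where the choice of prior the abstract questions enters.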
Bibliographic Reference: Paper presented: Neural Networks and their Applications - Neuronimes '94, Marseille (FR), December 15-16, 1994
Availability: Available from (1) as Paper EN 38816 ORA
Record Number: 199510136 / Last updated on: 1995-08-22
Original language: en
Available languages: en