Error rate estimation via cross-validation and learning curve theory
This paper presents three novel estimators of the expected error rate Ln of a neural network trained on n examples. The estimators operate in two steps. First, LKn is estimated by training the model on subsets of size Kn (0 < K < 1) of the original training sample and measuring the error rate on the remaining examples. Second, the LKn estimate is adjusted to Ln using learning curve theory. For K = 0.632, the proposed procedure is closely related to the well-regarded "632" bootstrap estimator. A comparison on a real-life classification task suggests the new estimators are effective and merit further investigation.
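The two-step procedure described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: a simple nearest-class-mean classifier stands in for the neural network, subsampling fractions Ks and the power-law form L(m) = a + b*m^(-1/2) for the learning curve are assumptions chosen for the sketch, and the function names (`nearest_mean_error`, `estimate_Ln`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_mean_error(Xtr, ytr, Xte, yte):
    # Stand-in for the neural network: nearest-class-mean classifier.
    means = np.stack([Xtr[ytr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((Xte[:, None, :] - means[None]) ** 2).sum(-1), axis=1)
    return float((pred != yte).mean())

def estimate_Ln(X, y, Ks=(0.3, 0.5, 0.632), reps=20):
    """Step 1: estimate L_{Kn} by training on random subsets of size Kn
    and testing on the held-out remainder.
    Step 2: extrapolate to L_n with an assumed learning-curve model."""
    n = len(y)
    sizes, errs = [], []
    for K in Ks:
        m = int(K * n)
        e = []
        for _ in range(reps):
            idx = rng.permutation(n)
            tr, te = idx[:m], idx[m:]
            e.append(nearest_mean_error(X[tr], y[tr], X[te], y[te]))
        sizes.append(m)
        errs.append(float(np.mean(e)))
    # Fit L(m) = a + b * m**-0.5 by least squares (assumed curve shape),
    # then evaluate the fitted curve at the full sample size n.
    A = np.column_stack([np.ones(len(sizes)), np.asarray(sizes, float) ** -0.5])
    a, b = np.linalg.lstsq(A, np.asarray(errs), rcond=None)[0]
    return float(np.clip(a + b * n ** -0.5, 0.0, 1.0)), list(zip(sizes, errs))
```

For example, on a synthetic two-Gaussian problem one would call `estimate_Ln(X, y)` and read off the extrapolated full-sample error estimate together with the measured (subset size, error) learning-curve points.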
Bibliographic Reference: Article: Neural Processing Letters (1995)
Record Number: 199511306 / Last updated on: 1995-11-23