Neural networks are essentially statistical models for non-parametric, non-linear inference. Because of their inductive nature and their ability to model non-linear processes, they have attracted considerable attention in the context of decision making and of financial and econometric modelling and forecasting. Non-parametric, non-linear models such as neural networks typically have more parameters than a linear model with the same number of independent variables. With the small, noisy data-sets common in financial and economic modelling, model selection and model specification become crucial issues in avoiding over-fitting. A model that is optimal with respect to out-of-sample performance balances the bias induced by a model too small to completely capture the structure of the data against the large parameter variance caused by a model that is too complex (has too many parameters). This trade-off is commonly referred to as the bias/variance dilemma.

We propose to extend model selection methodologies for neural networks in two directions:

Robust, efficient estimates of out-of-sample performance: For linear models, powerful analytic results are available to calculate the expected out-of-sample performance. For non-linear models such as neural networks, similarly powerful analytic results are not available, and one therefore has to resort to computationally inefficient resampling schemes (bootstrap, cross-validation). We propose to investigate approximations appropriate for neural networks that address the problem of local minima and improve computational efficiency, as well as to extend recent results on analytic approximations for the expected test-set error.

Development of robust model selection techniques: Model selection aims to choose the optimal model for the problem and data-set at hand, based on the expected out-of-sample performance, which must be estimated for each model under consideration.
For neural network models, both the selection of independent variables and the selection among competing neural network architectures must be addressed. We aim to develop a range of methods, including extensions of step-wise regression to neural networks, pruning methods to remove superfluous connections and internal units, and regularization methods.
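As an illustration of the resampling approach to estimating out-of-sample performance, the following minimal sketch uses k-fold cross-validation, with polynomial degree standing in for model complexity (fitting a full neural network would obscure the idea). All function and variable names here are illustrative, not part of the proposed methodology; the point is only that too low a complexity under-fits (bias) and too high a complexity over-fits the noise (variance), and cross-validation exposes the trade-off.

```python
import numpy as np

def kfold_cv_mse(x, y, degree, k=5, seed=0):
    """Estimate the out-of-sample MSE of a degree-`degree` polynomial
    fit by k-fold cross-validation (polynomial degree is a stand-in
    for model complexity, e.g. number of hidden units)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)   # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])                # score on held-out fold
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

# Noisy quadratic data: degree 1 under-fits (bias), high degrees
# over-fit the noise (variance).
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.3, size=x.shape)

cv_scores = {d: kfold_cv_mse(x, y, d) for d in range(1, 10)}
best = min(cv_scores, key=cv_scores.get)
```

The model minimizing the cross-validated error is chosen; the computational cost (k refits per candidate model) is precisely what motivates the analytic approximations discussed above.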
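The pruning idea can be sketched as simple magnitude-based connection pruning; the function name and threshold rule below are illustrative assumptions, not the proposed method (established schemes such as Optimal Brain Damage rank connections by a second-order saliency measure rather than raw magnitude):

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Zero out the `fraction` smallest-magnitude entries of a weight
    matrix -- a minimal sketch of removing superfluous connections."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Prune half the connections of a random 4x8 weight matrix.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
Wp = magnitude_prune(W, 0.5)
```

After pruning, the reduced network would be retrained and its expected out-of-sample performance re-estimated, so that pruning and performance estimation together form one model selection loop.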
The proposed research is a direct continuation of the HCM fellowship "Neural Networks for Advanced Data Analysis and Forecasting" (duration 8.8 months), emphasising the importance of robust model selection methodologies for non-linear, non-parametric models. The contacts with industry already established will greatly aid the testing and benchmarking of the proposed methods under real-world conditions. My own research has already benefited, and is expected to further benefit, from dialogue with practitioners in the financial industry; this dialogue has helped to focus the work on the robustness of the methods (such that they can be applied within a standard software setting without the need to fine-tune the algorithms). In addition, Prof. Refenes' research experience in applying neural networks to financial engineering will continue to have a substantial impact on my work.
We propose to apply these methods to several key problems in financial analysis: Tactical Asset Allocation, Foreign Exchange Rate Prediction, and Option Pricing Models. These projects are part of the Department's ongoing collaboration with local financial firms (Barclays, Citibank, Dresdner, Societe Generale), previously under the Neuroforecasting Club framework and now under the umbrella of the Decision Technology Centre. This greatly aids in focusing on industry-relevant research, as the research problems are embedded in a framework of real-world applications.