CORDIS - EU research results
Content archived on 2024-05-30

Solving dynamic models: Theory and Applications

Final Report Summary - DYNAMIC MODELS (Solving dynamic models: Theory and Applications)

The computation of competitive equilibria in dynamic stochastic general equilibrium models with heterogeneous agents has become increasingly important in finance, macroeconomics and public finance. Unfortunately, economists often use ad hoc computational methods with poorly understood properties that produce approximate solutions of unknown quality. Furthermore, many interesting models cannot be analyzed at all because of a lack of suitable computational methods. The research project had three goals: building theoretical foundations for analyzing dynamic equilibrium models, developing efficient and stable algorithms for the computation of equilibria in large-scale models, and applying these algorithms to finance and to macroeconomic policy analysis.
The first two goals were fully achieved. In the third area I made only partial progress, mainly because the development of algorithms for large-scale models uncovered many exciting new dimensions that needed to be explored in detail.

In the first area, theoretical foundations, I wrote a paper with Brumm and Kryczka (both funded by the project) in which we develop a set of sufficient conditions for the existence of a recursive equilibrium: we prove existence of recursive equilibria in stochastic production economies with infinitely lived agents and incomplete financial markets. We consider a general dynamic model with several commodities that encompasses heterogeneous-agent versions of both the Lucas-Breeden asset pricing model and the Brock-Mirman stochastic growth model as special cases. Our main assumption is that there are atomless shocks to individuals' endowments which have a purely transitory component and a component that does not depend directly on last period's shock.
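To make this assumption concrete, one possible formalization (the notation here is illustrative and need not match the paper's) is the following endowment decomposition:

    % Illustrative formalization; the notation is mine, not the paper's.
    % z_t is the aggregate shock, i indexes individual agents.
    \[
      e^{i}_{t} \;=\; \bar{e}^{\,i}(z_t) \;+\; \eta^{i}_{t} \;+\; \epsilon^{i}_{t},
    \]
    % where \epsilon^{i}_{t} is the purely transitory component (i.i.d. over
    % time, with an atomless distribution), and the distribution of
    % \eta^{i}_{t} may depend on the current shock z_t but not directly on
    % last period's realizations (\eta^{i}_{t-1}, \epsilon^{i}_{t-1}).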

In the second area, algorithms for large-scale problems, a breakthrough was achieved by two post-docs in my group (one of them funded by the project): Johannes Brumm and Simon Scheidegger developed a suite of computational routines to solve large-scale dynamic problems numerically. I believe that this is a path-breaking contribution and I would like to describe and motivate it in some detail; I also urge you to read Brumm and Scheidegger’s (2014) paper, “Using Adaptive Sparse Grids to Solve High-Dimensional Dynamic Models”. Brumm and Scheidegger (2014) introduce adaptive sparse grids in the context of dynamic stochastic economic models. By embedding an adaptive sparse grid algorithm into a time-iteration procedure, they were able to solve models with up to 80 dimensions and occasionally binding constraints. For this, they use piecewise-linear basis functions, which are much better suited than smooth polynomial bases to handling non-differentiabilities. Their method stands in contrast to previous economic modelling, where researchers were only able to deal with models of up to three dimensions that contain non-differentiabilities. Since non-adaptive methods (as, for example, in Krueger and Kubler (2004) and the work on Smolyak’s method that followed it) can only provide a single resolution over the whole domain, they waste resources where high resolution is not needed. This makes adaptive sparse grid algorithms preferable to all other (sparse) grid interpolation methods when kinks or discontinuities have to be handled.
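To convey the core idea, the following is a minimal, self-contained Python sketch of adaptive sparse-grid interpolation with piecewise-linear hierarchical ("hat") basis functions. It illustrates the general technique only and is not Brumm and Scheidegger's code: all names are mine, the grid omits boundary points for simplicity, and refinement uses the simplest possible criterion, the absolute hierarchical surplus.

    import numpy as np

    def hat(l, i, x):
        # 1-D piecewise-linear hierarchical "hat" function of level l,
        # odd index i, centred at i / 2**l on [0, 1].
        return max(0.0, 1.0 - abs(2.0**l * x - i))

    def basis(point, x):
        # d-dimensional basis function: tensor product of 1-D hats.
        b = 1.0
        for (l, i), xd in zip(point, x):
            b *= hat(l, i, xd)
        return b

    def coords(point):
        # Cartesian coordinates of a point given as ((level, index), ...).
        return np.array([i / 2.0**l for (l, i) in point])

    def children(point):
        # Refining a point adds its two children in every coordinate direction.
        kids = []
        for d, (l, i) in enumerate(point):
            for ci in (2 * i - 1, 2 * i + 1):
                kid = list(point)
                kid[d] = (l + 1, ci)
                kids.append(tuple(kid))
        return kids

    def interpolate(surpluses, x):
        # Sparse-grid interpolant: sum of hierarchical surpluses times bases.
        # (A production code would traverse the grid tree instead.)
        return sum(a * basis(p, x) for p, a in surpluses.items())

    def adaptive_sparse_grid(f, dim, tol=1e-3, max_points=2000):
        # Start from the single level-1 point at the centre of [0, 1]^dim.
        root = tuple((1, 1) for _ in range(dim))
        surpluses = {root: f(coords(root))}
        active = [root]
        while active and len(surpluses) < max_points:
            frontier = []
            for p in active:
                if abs(surpluses[p]) <= tol:
                    continue  # interpolant already accurate here: no refinement
                for c in children(p):
                    if c in surpluses:
                        continue
                    xc = coords(c)
                    # Hierarchical surplus: the error of the current
                    # interpolant at the newly added node.
                    surpluses[c] = f(xc) - interpolate(surpluses, xc)
                    frontier.append(c)
            active = frontier
        return surpluses

    # Example: a function with a kink, as induced by an occasionally
    # binding constraint, along the hyperplane x_0 = 0.4.
    f = lambda x: max(0.0, x[0] - 0.4) * np.exp(-np.sum((x - 0.5) ** 2))
    grid = adaptive_sparse_grid(f, dim=4)
    y = np.array([0.41, 0.3, 0.7, 0.55])
    print(len(grid), interpolate(grid, y), f(y))

The refinement loop is what distinguishes the adaptive variant: new points are added only where the hierarchical surplus, i.e. the local interpolation error, is still large, so resolution concentrates around kinks instead of being spread uniformly over the whole domain.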
In order to accelerate the time-consuming computations needed for very high-dimensional models, Brumm, Mikushin, Scheidegger and Schenk (2015, “Scalable High-Dimensional Dynamic Stochastic Economic Modeling”) extend the newly developed framework to contemporary high-performance computing architectures. The developments of this paper include the adaptive sparse grid algorithm combined with a hybrid MPI - Intel TBB - CUDA/Thrust implementation that improves the interprocess communication strategy on massively parallel architectures. This code framework was shown to scale nicely to tens of thousands of cores.
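To illustrate the parallelization pattern, here is a hypothetical sketch in Python with mpi4py (not the authors' MPI/TBB/CUDA implementation; solve_at_point is a stand-in for the real per-point solve): the independent solves at the grid points of one time-iteration sweep are distributed across ranks and the partial results gathered afterwards.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def solve_at_point(x):
        # Stand-in for the expensive nonlinear solve at one grid point
        # (in time iteration: computing policies given the previous iterate).
        return float(np.tanh(x).sum())

    # Identical seed on every rank, so all ranks agree on this sweep's points.
    rng = np.random.default_rng(0)
    points = rng.random((10_000, 8))

    # Static round-robin split of the independent per-point solves.
    my_results = [(i, solve_at_point(points[i]))
                  for i in range(rank, len(points), size)]

    # Every rank contributes its slice and afterwards holds the full
    # solution, ready for hierarchization / refinement in the next sweep.
    gathered = comm.allgather(my_results)
    values = dict(kv for chunk in gathered for kv in chunk)
    assert len(values) == len(points)

Because the per-point solves within a sweep are independent, this pattern parallelizes naturally; in the authors' framework the per-rank work is in turn offloaded to TBB threads and CUDA devices.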