## Final Report Summary - MONFISPOL (Modeling and Implementation of Optimal Fiscal and Monetary Policy Algorithms in Multi-Country Econometric Models)

The MONFISPOL project is the coordinated effort of the MONFISPOL consortium over the last three years to provide the best tools and models for the analysis of optimal fiscal and monetary policy in order to advance macroeconomic policy evaluation and decision-making in a monetary union such as the European Union. From the start the project was structured around two objectives that interact: (1) the development of solid computational tools for solving, simulating and estimating models under optimal policy and (2) the conception of innovative macroeconomic models considering optimal fiscal and monetary policy issues.

The natural way for the consortium to implement the first objective was to make all tools and software routines developed for the project part of a software platform called DYNARE. That platform was originally designed at CEPREMAP, the project coordinator, to offer a generic approach to model solving and estimation with a user-friendly interface that reduces the time and complexity of modelling new policy initiatives and simulating the impact of these policy measures. Over the years, DYNARE has become a widely used platform among policy-makers and academics and has a vibrant community of users. The first objective was implemented through the deliverables of seven workpackages. All objectives in those workpackages have been attained. The main final contributions of those deliverables are the following: (1) new algorithms and routines to compute optimal policy (Ramsey policy, timeless perspective and simple rules); (2) tools to formulate optimal policy in the Linear Quadratic framework; (3) tools to accelerate the computation of model solutions and estimation using block decomposition techniques; (4) tools to accelerate the computation of model solutions and estimation using parallelization techniques; (5) analytical tools to deal with identification issues in the estimation of a model; (6) analytical tools for the solution, simulation and estimation of models under the assumption of partial information; (7) analytical tools to provide benchmark priors for the estimation of vector autoregressions.

The second objective of the project, the conception of innovative macroeconomic models, was achieved with the production of high quality academic scientific papers and a database of macroeconomic models for model comparison and validation. This second objective was implemented through the deliverables of six workpackages. All objectives in those workpackages have been attained. The main final contributions of those deliverables are the following: (1) a model of optimal monetary policy in an open economy with labour market frictions, which discusses the international transmission of shocks, the welfare ranking of different exchange rate regimes, and the optimal monetary policy rule when area members face asymmetric labour market conditions in a currency area, together with a model of the current account dynamics associated with the wave of financial globalization that economies have gone through in the last two decades; (2) a model with matching frictions that explores whether or not a strong fiscal stimulus could help counter the recession and rising unemployment, together with a model of how bank regulation and monetary policy interact in a macroeconomy with a fragile banking system, keeping in mind the traditional question of the optimal taxation of capital, which here takes the form of bank capital requirements; (3) a model that deals with the question of exiting the expansionary monetary and fiscal policies put in place under the extraordinary circumstances of the 2007-2008 crisis; (4) a model that discusses optimal fiscal policy along with the question of the maturity of sovereign debt; (5) a database of macro models to perform model comparison; (6) model validation efforts using the macro model database.


Project Context and Objectives:

The MONFISPOL project is centred on the analysis of optimal fiscal and monetary policy in order to advance macroeconomic policy evaluation and decision-making in a monetary union such as the European Union. The project is structured around two main objectives: (1) the development of solid computational tools for solving, simulating and estimating models under optimal policy and (2) the conception of macroeconomic models considering optimal decisions by policy-makers dealing with fiscal and/or monetary policy. The objectives of the project are well suited to the determination of optimal policy in large multi-country macro-econometric models taking into account the diversity of issues found in the European Union. The developments of the project, computational tools as well as modelling contributions, are added to a public domain platform for the simulation and estimation of dynamic stochastic general equilibrium (DSGE) models called DYNARE. DYNARE is a modular collection of routines that offers a generic approach to model solving and estimation with a user-friendly interface that reduces the time and complexity of modelling new policy initiatives and simulating the impact of these policy measures. The DYNARE platform is now widely used by both institutional policy-makers and academics to solve and estimate a large class of macroeconomic models, and now specifically those with optimal policy features. In the three years that this final report covers, alpha versions of tools and preliminary versions of models were developed before being finalized and made available to the intended audience: academics, policy-makers and, to some extent, the general public. The results achieved by members of the consortium have been disseminated through two workshops and two MONFISPOL Conferences. The summary below provides a short description of those achievements.

The development of solid computational tools for solving, simulating and estimating models under optimal policy is the first main objective of the MONFISPOL project. This aspect is covered by seven workpackages combining the efforts of CEPREMAP (France), London Metropolitan University (U.K.), the Institut d'Analisi Economia (Spain) and the Joint Research Centre (Italy).

The first workpackage corresponding to this main objective was mainly developed at CEPREMAP and deals with the addition to the DYNARE software platform of tools for computing optimal policy in the sense of Ramsey (or optimal policy under commitment) and optimal policy in the timeless perspective. Solutions to the two main problems were found and implemented. A first issue was the computation of the steady state of the model under optimal policy. A second was the elimination of Lagrange multipliers in the representation of optimal policy in a timeless perspective. For optimal simple rules, a generalization of the algorithm to objective functions more general than a quadratic function was sought, and the derivation of the quadratic objective from a welfare function supplied by the user was explored.

The second workpackage deals with LQ approximations and was developed mainly by the London Metropolitan University team. Several results relative to optimal policy can be formulated in the well-known Linear Quadratic (LQ) framework and then be used to explore issues around the robustness of optimal policy rules, taking into account the uncertainty surrounding the parameters of the model. The basic algorithms of the Linear-Quadratic toolbox have been programmed by the team at the University of Surrey and the interface to Dynare has been designed. The LQ framework provides many powerful results that are important to make available to users as part of the DYNARE platform.

The third and fourth workpackages, developed at CEPREMAP and JRC, explore two different solutions to a common issue: speeding up the computation of the solution and estimation of large macro-models. Models where it is assumed that policy-makers follow optimal rules are about twice as large as macroeconomic models where the policy-makers follow ad hoc rules. Finding the solution and estimating such models is thus time-consuming, and it is important to have at our disposal efficient procedures to solve and estimate those larger models. To overcome these issues, on the one hand, a team at CEPREMAP has exploited the block structure of a model. The final developments of this approach show important improvements for both deterministic and stochastic models. Benchmarks for the speed of computation of the solution and the estimation of models using this approach have been very good, and the tools are available to DYNARE users. On the other hand, the second direction to accelerate estimation was pursued at JRC and concerned the development of parallel computing tools for DYNARE called Parallel DYNARE. The final version of Parallel DYNARE completes and significantly enhances the first test version developed in the first stage of the project. The parallel toolbox is now able to manage message passing between processes. The interface developed for DYNARE users allows modellers to exploit the parallel features with a minimal learning effort. Everything is embedded in the standard DYNARE installation. Parallel DYNARE has been made compatible with completely hybrid clusters: multi-platform (UNIX/Windows) and multi-environment (Octave/MATLAB).

The fifth workpackage was developed mainly at JRC and builds analytical tools to deal with identification issues. In the process of estimating a model, the identification of parameters is a central element. A Bayesian approach can mask a lack of identification: estimating a poorly identified model may not lead to a breakdown of the procedure. Nevertheless, the final user of such a procedure needs to be aware of the identification issues and be able to distinguish the effect of priors from the new information contained in the data. Much progress has been made on routines to diagnose lack of local identification. A new approach that relies on analytical derivatives of the reduced form of the model with respect to the estimated structural parameters has been developed. The implementation of the Identification Toolbox for DYNARE has been completed by the team at JRC. The toolbox is fully operational and included in DYNARE releases. It is now being used by economists to test their models.
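The logic behind such a local-identification diagnostic can be illustrated with a small sketch. The toy reduced-form mapping below is deliberately pathological and is purely illustrative (it is not the JRC toolbox); a central-difference Jacobian stands in for the analytical derivatives used by the actual implementation. If the Jacobian of the reduced form with respect to the structural parameters is rank deficient at a point, the parameters are not locally identified there.

```python
import numpy as np

def reduced_form(theta):
    """Toy mapping from structural parameters to reduced-form moments.
    Only the product theta1*theta2 enters the moments, so the two
    parameters are deliberately not separately identified."""
    t1, t2 = theta
    p = t1 * t2
    return np.array([p, p**2, 3.0 * p])

def identification_rank(f, theta, eps=1e-6):
    """Central-difference Jacobian of the reduced form with respect to
    the parameters; full column rank <=> local identification."""
    theta = np.asarray(theta, dtype=float)
    m = f(theta).size
    J = np.zeros((m, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (f(theta + step) - f(theta - step)) / (2.0 * eps)
    return np.linalg.matrix_rank(J, tol=1e-6)

rank = identification_rank(reduced_form, [0.5, 2.0])
print(rank)  # 1 < 2 parameters: locally not identified
```

The Identification Toolbox works with analytical rather than numerical derivatives, which avoids the step-size issues of finite differences on larger models.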

The sixth workpackage was developed mainly by the London Metropolitan University team and builds analytical tools to deal with the solution, simulation and estimation of models under the assumption of partial information. Contrasting with the standard approach that assumes full information on the part of the agents in the model, partial information models assume that agents do not have more information than the econometrician who estimates the model. The only information available is that contained in observed variables. Joseph Pearlman and Paul Levine had already derived the solution to linear rational expectations models under partial information. The task at hand was to reconcile the DYNARE representation and the representation needed for partial information modelling. DYNARE stores all rational expectations models in Sims-McCallum form. However, one cannot apply the general partial information results to models in this form. The final implementation of the proposed software converts the models into the Blanchard-Kahn representation internally. One contribution of the approach described above is a New Keynesian model where the standard approach, which assumes an informational asymmetry between private agents and the econometrician, is confronted with an assumption of informational symmetry that implements the partial information context. The result of this contribution is a significant improvement in the model's fit of the data in terms of model posterior probabilities, impulse responses, second moments and autocorrelations.
Other notable results are that: (a) imperfect information can provide an alternative source of endogenous persistence to mechanisms such as consumption habits and price indexation; (b) symmetrical information with measurement error for the observed macroeconomic series improves the fit still further, although the increase in model probability is not significant; (c) there is little to be gained from the indexation mechanism in terms of model fit.

The seventh and last workpackage concerning the first main objective of the project was developed mainly at IAE/CSIC and provides a benchmark prior for the estimation of vector autoregressions. The contribution starts from the fact that the ordinary least squares (OLS) estimator tends to underestimate persistence in autoregressive models when a small sample is available. This may significantly affect empirical results, especially the impulse responses at longer lags. Many techniques have been designed to estimate autoregressions in small samples using both classical and Bayesian approaches. The contribution of this analytical tool is to provide a widely acceptable procedure for estimating autoregressions with small samples. One result shows that, given the same treatment of the initial condition, Bayesian and classical econometricians agree about the appropriateness or not of OLS. To this end, it is proposed to use an informative prior based on the a priori distribution of the observed series in the first few periods of the sample. This kind of prior has many advantages: (a) it clearly relates initial observations and parameters, (b) it may be a near consensus prior, (c) it is much easier to express an opinion about a prior distribution of observed variables than of VAR parameters, (d) it entirely sidesteps the issue of what is a "truly" uninformative prior in time series. The implementation of this approach raises a technical difficulty: solving a Fredholm integral equation in a very high-dimensional parameter space. An algorithm to overcome this issue is also a contribution of this last analytical tool.
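The small-sample bias that motivates this contribution is easy to reproduce. The following sketch is a minimal Monte Carlo illustration (not the procedure proposed in the workpackage): it simulates short AR(1) samples with true persistence 0.9 and shows that the average OLS estimate falls noticeably below the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true, T, n_rep = 0.9, 30, 2000   # short samples, many replications

estimates = np.empty(n_rep)
for r in range(n_rep):
    # simulate a short AR(1) sample: y_t = rho * y_{t-1} + e_t
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho_true * y[t - 1] + e[t]
    x, z = y[:-1], y[1:]
    estimates[r] = (x @ z) / (x @ x)   # OLS slope (no intercept)

print(estimates.mean())  # noticeably below 0.9: downward small-sample bias
```

The bias grows with the persistence of the series and shrinks only slowly with the sample size, which is why the impulse responses at longer lags are the most affected.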

The conception of innovative macroeconomic models considering optimal policy decisions, focused on European Union issues and building on the tools developed as part of the first objective, was the second main objective of the MONFISPOL project. This aspect is covered by six workpackages involving CEPREMAP (France), the Institut d'Analisi Economia (Spain) and the Goethe University of Frankfurt (Germany).

The first workpackage of this group was developed at CEPREMAP, by a team led by Ester Faia. Two models were developed for this workpackage. The first model is concerned with optimal monetary policy in an open economy with labour market frictions. Two questions are discussed: (1) the international transmission of shocks and the welfare ranking of different exchange rate regimes for a two-country model and (2) the optimal monetary policy rule when area members face asymmetric labour market conditions in a currency area. The comparison among different exchange rate regimes in the model shows that increasing the response to exchange rate fluctuations in the monetary policy rule reduces macroeconomic volatility. Furthermore, the optimal monetary policy rule for a currency area whose members have different labour market institutions in terms of unemployment benefit coverage is computed. The optimal rule is characterized by a positive response to unemployment and assigns different weights to different countries. The second model is concerned with financial globalization, financial frictions and monetary policy and is well suited to capture and explain the current account dynamics associated with the wave of financial globalization that economies have gone through in the last two decades. The main results of this model are as follows: (1) the net asset accumulation in this model is uniquely determined in the steady state and is saddle-path stationary in a neighbourhood of the steady state.
The domestic economy experiences a persistent current account deficit, as in equilibrium domestic residents behave as impatient agents and borrow from the rest of the world; (2) a comparison of alternative exchange rate regimes shows that under high financial liberalization, fluctuations in the exchange rate induce swings in the value of collateral, amplifying fluctuations in consumption, output and CPI; (3) optimal monetary policy might want to deviate from zero inflation and target the exchange rate in order to reduce swings in the wedges.

The second workpackage of this group was mainly developed at CEPREMAP and is built around two models. The first model is concerned with fiscal policy in a model with matching frictions and explores whether or not a strong fiscal stimulus could help counter the recession and rising unemployment. Short and long run multipliers are computed to examine the effectiveness of fiscal stimuli, considering both increases in government spending and hiring subsidies. The main results are as follows: (1) increases in government spending remain ineffective in terms of producing large multipliers; (2) hiring subsidies are very beneficial both in the short run and in the long run. The second model discusses how bank regulation and monetary policy interact in a macroeconomy that includes a fragile banking system, keeping in mind the traditional question of the optimal taxation of capital, which here takes the form of bank capital requirements. The main results are that: (1) a monetary expansion or a positive productivity shock increases bank leverage and risk. The transmission from productivity changes to bank risk is stronger when the perceived riskiness of the projects financed by the bank is low; (2) pro-cyclical capital requirements amplify the response of output and inflation to other shocks, thereby increasing output and inflation volatility, and reduce welfare. Anti-cyclical ratios, requiring banks to build up capital buffers in more expansionary phases of the cycle, have the opposite effect; (3) within a broad class of simple policy rules, the optimal combination includes mildly anti-cyclical capital requirements and a monetary policy that responds rather aggressively to inflation and also reacts systematically to financial market conditions.

The third workpackage of this group was also developed at CEPREMAP and deals with the question of exiting the expansionary monetary and fiscal policies put in place under the extraordinary circumstances represented by the 2007-2008 crisis. In the framework of the model, exiting the post-crisis policy stance is beneficial: almost any exit strategy leads to an improvement in terms of our evaluation criteria relative to the status quo. The gain is greater at long horizons, while in the short run the results are more mixed.

The fourth workpackage was mainly developed at IAE/CSIC and focuses on optimal fiscal policy along with the question of the maturity of sovereign debt. The contribution studies Ramsey optimal fiscal policy under incomplete markets in the case where the government issues only long bonds. The results emphasize that many features of optimal policy are sensitive to the introduction of long bonds, in particular tax variability and the long run behaviour of debt.

The fifth workpackage for this group was developed at the Goethe University of Frankfurt and carries out model comparison, building a database of macro models as part of the model conception effort of the MONFISPOL project. The database now includes 50 macroeconomic models, ranging from small-, medium-, and large-scale DSGE models to earlier-generation New Keynesian models with rational expectations and more traditional Keynesian-style models with adaptive expectations. It includes models of the United States, the Euro Area, Canada and several small open emerging economies. Some of the models explicitly incorporate financial frictions. The models in the database use the solution methods provided by DYNARE in the computational part of the project. Three scientific papers have illustrated the usefulness of this database in comparing available models and their performance with respect to economic policy issues.

The sixth workpackage, also developed at the Goethe University of Frankfurt, builds on the previous workpackage and conducts model validation efforts. Using a subset of six open-economy models from the database, the desirability of monetary policy responses to exchange rate movements was analysed. The key finding is that in the majority of models, monetary policy should respond only modestly to the exchange rate. This holds true whether one considers the level of the real exchange rate, the change in the real exchange rate, or the change in the nominal exchange rate. However, if one considers parsimonious policy rules that can only respond to the rate of inflation, the level of output and an exchange rate measure, then systematic responses of monetary policy to the exchange rate can lead to more significant stabilization improvements in some of the models.

Project Results:

The detailed description of main S&T results of the MONFISPOL project is presented below. This presentation follows the organization in workpackages of the project.

Workpackage 1.1: Optimal policy toolbox

The objective of this workpackage was to implement an optimal policy toolbox in the Dynare software platform. This report describes in a non-technical way the algorithms used by Dynare to compute optimal policy. In this class of problems, the model at hand describes the behavior of private agents and the constraints that they face. The policy maker, on the other hand, tries to define the best policy rule to achieve her objective. In that process, the model of the private economy acts as a set of constraints for the policy maker.

This problem has been extensively discussed in the literature and can be addressed from different angles (see the State of the Art report). The objective function of the policy maker can either be that of a benevolent social planner, aiming at maximizing private agents' welfare, or aim only at stabilizing the economy, in which case the objective is a loss function in the form of a weighted sum of variances of model variables that is to be minimized. Note that the framework discussed here does not permit the analysis of dynamic games where the objective of the policy maker is at odds with the objective of private agents.

The most general procedure approaches the problem with the tools of optimal control. It is known as optimal policy under commitment or Ramsey policy. No postulate is placed on the specification of the policy rule. The solution provides the equilibrium trajectory of the system under optimal policy. A difficulty with this approach is that the expectations of the private agents enter the set of constraints for the policy maker. As a consequence, the optimal policy is not recursive in the original state variables of the model, but history dependent. This dependence takes the form of additional Lagrange multipliers that enter the state vector of the solution. For the same reason, this optimal policy is time inconsistent and is only valid if the policy maker can commit to never re-optimizing in the future; hence its characterization as optimal policy under commitment. When the policy maker cannot commit and re-optimizes in each period, private agents end up anticipating this and the system switches to a Nash equilibrium called the equilibrium under discretion. Current algorithms only permit the computation of the equilibrium under discretion for linear-quadratic problems.

Furthermore, the workpackage approached optimal policy in a timeless perspective, a second-best policy that is time consistent by construction. Optimal simple rules, which were also in the scope of this workpackage, take an alternative route. In this approach, a relatively simple policy rule is specified as part of the model and numerical optimization is used to determine the optimal value of its coefficients.
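The optimal-simple-rule approach can be sketched in a few lines. The one-equation economy below is hypothetical and purely illustrative (it is not a Dynare model): inflation follows pi_t = a*pi_{t-1} - b*i_t + e_t, the rule is i_t = phi*pi_{t-1}, and a grid search picks the coefficient phi minimizing a weighted sum of unconditional variances.

```python
import numpy as np

a, b, lam, sigma2 = 0.9, 0.5, 0.1, 1.0   # illustrative parameter values

def loss(phi):
    """Unconditional loss var(pi) + lam*var(i) under the rule
    i_t = phi*pi_{t-1}, which gives pi_t = (a - b*phi)*pi_{t-1} + e_t."""
    rho = a - b * phi
    if abs(rho) >= 1.0:                  # rule fails to stabilize
        return np.inf
    var_pi = sigma2 / (1.0 - rho**2)
    var_i = phi**2 * var_pi
    return var_pi + lam * var_i

grid = np.linspace(0.0, 5.0, 501)
losses = np.array([loss(p) for p in grid])
phi_star = grid[int(np.argmin(losses))]
print(phi_star)
```

In Dynare the same logic is applied to the full model: the rule coefficients are treated as free parameters and a numerical optimizer replaces the grid search.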

Final summary:

All objectives have been achieved.

Workpackage 1.2: LQ approximation

The objective of this workpackage is to obtain a linear-quadratic (LQ) approximation to an optimal monetary policy problem about the long-run steady state solution. This is obtained by linearizing the constraints and quadratifying the Lagrangian for the Ramsey problem about the steady state as in Levine et al. (2008a). The basic philosophy is that policymakers (central banks in particular) are regarded as having sufficient reputation for the private sector to believe in the deterministic solution to the policy problem, but that the stabilization problem may provide a source of time inconsistency. Thus the LQ approximation is suited to the analysis of the stochastic stabilization problem, and in this context the latter is addressed by calculating and comparing fully optimal, time-consistent and (optimal) simple rules using a single quadratic loss function. The fully optimal solution is regarded merely as a benchmark value, so the optimal rule used to calculate trajectories will not necessarily be saddlepath stable. An application of the software is to be found in Levine et al. (2011a).
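At the heart of any toolbox of this kind sits a linear-quadratic regulator solved through a Riccati equation. The sketch below is a generic textbook iteration on a tiny example (it is not the ACES code): it computes the optimal feedback gain by iterating the discrete-time Riccati recursion to a fixed point.

```python
import numpy as np

def lqr(A, B, Q, R, tol=1e-10, max_iter=10_000):
    """Solve the discrete-time LQ problem min sum x'Qx + u'Ru subject to
    x_{t+1} = A x_t + B u_t by iterating the Riccati equation to a fixed
    point; returns the feedback gain K (with u = -K x) and the value
    matrix P."""
    P = Q.copy()
    for _ in range(max_iter):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_new = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_new - P)) < tol:
            return K, P_new
        P = P_new
    raise RuntimeError("Riccati iteration did not converge")

# scalar example: x_{t+1} = 0.95 x_t + u_t, loss x^2 + 0.1 u^2
A = np.array([[0.95]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[0.1]])
K, P = lqr(A, B, Q, R)
print(K)
```

The rational-expectations structure mentioned in the text makes the actual problem richer than this regulator, since forward-looking variables enter the constraints, but the Riccati fixed point remains the basic computational building block.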

In addition the software calculates robust optimal simple rules as in Levine et al. (2011b), which is done by amalgamating several aspects of the Dynare package. Firstly, the model is estimated using the MCMC algorithm, with the user choosing the number of draws. Secondly, a subset of draws, the number of which is defined by the user, is randomly sampled. Thirdly, for each simple rule, the average welfare loss over these sampled draws is calculated. Finally, the simple rule that minimizes this average is obtained by a grid-search algorithm within the Fortran code. The software is designed so that linear approximations to the dynamics, and quadratic approximations to the welfare function are passed to a stripped-down version of the ACES FORTRAN package (developed by Paul Levine and Joe Pearlman), which produces solutions to these different ways of addressing the optimal problem. The user will be unaware that the ACES package is involved.
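The averaging step behind the robust simple rules can be sketched as follows, reusing the hypothetical one-equation economy pi_t = a*pi_{t-1} - b*i_t + e_t with the rule i_t = phi*pi_{t-1}; random draws of the persistence parameter stand in for the sampled MCMC draws, and a grid search stands in for the Fortran optimizer.

```python
import numpy as np

rng = np.random.default_rng(1)
b, lam, sigma2 = 0.5, 0.1, 1.0
a_draws = rng.normal(0.9, 0.05, size=200)   # stand-in for sampled MCMC draws

def loss(phi, a):
    """Loss var(pi) + lam*var(i) at one parameter draw."""
    rho = a - b * phi
    if abs(rho) >= 1.0:                     # explosive at this draw
        return np.inf
    var_pi = sigma2 / (1.0 - rho**2)
    return (1.0 + lam * phi**2) * var_pi

# for each candidate rule, average the loss over the parameter draws,
# then pick the rule that minimizes the average
grid = np.linspace(0.0, 5.0, 501)
avg_loss = [np.mean([loss(p, a) for a in a_draws]) for p in grid]
phi_robust = grid[int(np.argmin(avg_loss))]
print(phi_robust)
```

Rules that are explosive at even a few draws receive an infinite average loss, so the robust rule automatically stabilizes the model across the whole sampled parameter set.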

The final part of this workpackage is to design all of the above rules in such a way as not to violate the zero lower bound (ZLB) for the nominal interest rate, as in Levine et al. (2008b). This is kept within a linear rules context by (a) selecting a probability level above which violation of the ZLB is not on average permitted; (b) defining a penalty function that has the effect of both shifting the equilibrium interest rate to the right and narrowing its probability distribution. The software iterates over the penalty-function parameters until the ZLB probability bound is satisfied and welfare is maximized under the given type of rule - optimal, time consistent, or optimized simple.
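A minimal sketch of that iteration, again on the hypothetical one-equation economy: the normal approximation to the interest-rate distribution and the multiplicative update of the penalty weight are assumptions of the sketch, not the actual scheme of Levine et al. (2008b).

```python
import numpy as np
from math import erf, sqrt

a, b, sigma2 = 0.9, 0.5, 1.0
i_bar, p_max = 2.0, 0.025   # steady-state rate and allowed ZLB probability

def variances(phi):
    """var(pi) and var(i) under the rule i_t = phi*pi_{t-1};
    every phi on the grid below stabilizes the toy model."""
    rho = a - b * phi
    var_pi = sigma2 / (1.0 - rho**2)
    return var_pi, phi**2 * var_pi

def best_phi(w):
    """Rule coefficient minimizing var(pi) + w*var(i) (grid search)."""
    grid = np.linspace(0.05, 3.0, 300)
    vals = [vp + w * vi for vp, vi in (variances(p) for p in grid)]
    return grid[int(np.argmin(vals))]

def prob_zlb(phi):
    """P(i < 0) under a normal approximation of the rate distribution."""
    _, var_i = variances(phi)
    z = -i_bar / sqrt(var_i)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

w = 0.1                         # initial penalty on interest-rate variance
phi = best_phi(w)
while prob_zlb(phi) > p_max:    # tighten the penalty until the bound holds
    w *= 1.5
    phi = best_phi(w)
print(round(prob_zlb(phi), 4), "<=", p_max)
```

Raising the penalty weight shrinks the variance of the interest rate under the re-optimized rule, which is exactly the narrowing of the probability distribution described in the text.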

The software:

1. This reads in modfile.mod together with the welfare function. The user also needs to supply a Matlab file to calculate the steady state. This is in the standard form required by Dynare, and is named modfile_steadystate.m

2. Finds the first order conditions (which as a side-effect includes evaluating the Jacobian of the dynamic constraints), and finds the steady state of the system under the fully optimal policy, including the instruments and Lagrange multipliers,

3. By evaluating the Hessian of the Lagrangian, it produces a matrix Q that corresponds to the ordering of the variables, including the shocks, such that the approximation to the welfare function is of the quadratic form y'Qy, where y is the vector of these variables. In fact, the presence of rational expectations means that the Q matrix is somewhat more complicated to evaluate, and actually operates on various linear combinations of y.

Constraints on the user:

There are two minor constraints that are imposed on users of the package: (a) all variables that appear with lags and leads n greater than 1 must have new variables defined up to lag or lead (n-1); (b) if users require lags of instruments when defining simple rules, then a new state variable equal to the instrument must be defined. Both of these constraints will be lifted in subsequent versions of the software.

Testing the software:

The software has been tested by comparing the results from an analytic DSGE example:

1. Using a nonlinear model,

2. By further eliminating some of the variables using contemporaneous relationships.

It has also been tested by writing a standard LQ problem in a way such that the linear constraints appear in a nonlinear form. All calculations are accurate to six significant figures.

Summary of software issues:

1. There is no need to integrate Fortran with Matlab via MEX-files because the programs are not interactive. A slight inefficiency is introduced by the creation of text-files in Matlab that are subsequently read by Fortran, but this is minor,

2. The constraints typically involve agents who have rational expectations about the future so that, as a consequence, both the structure of the linear approximation and that of the quadratic approximation are non-standard; a working paper discussing these issues has therefore been produced,

3. Model reduction, to ensure that the system is controllable and observable. This ensures that the system has no unstable modes which would prevent the fully optimal solutions from being produced. It also reduces the size of the matrices, thereby saving computational time for the fully optimal and time-consistent cases.

Final summary:

All objectives for monetary policy rules - optimal, time consistent, optimized simple, robust simple, and ZLB - have been achieved. A user guide has also been produced.

Workpackage 2.1: Estimating optimal policy

The contributions of this workpackage consist of the implementation of structural decomposition methods for DSGE models in Dynare. These new features are available in the current version of Dynare and, for medium- and large-scale models, they reduce the computational burden of deterministic and stochastic simulations and of the estimation process. The structural decomposition methods and their implementations are described in Mihoubi (2011), in the official DYNARE documentation (Adjemian et al., 2011) and on the DYNARE wiki.

Structural decomposition of DSGE models - block decomposition to speed up simulation and estimation:

The Dynamic Stochastic General Equilibrium (DSGE) models built in central banks or in public institutions often contain several hundred equations. Their estimation using Bayesian methods is extremely expensive in CPU time. The computation of the posterior distribution mode and of the posterior distribution using the MCMC algorithm generally requires at least several thousand evaluations of the kernel distribution. During each kernel evaluation the following four steps are performed:

1. Solve the deterministic steady-state,

2. Check the Blanchard-Kahn conditions,

3. Compute the rational expectation solution of the model,

4. Compute the likelihood and the kernel distribution.

Even in the simplest configuration - a rational expectations solution computed using a first-order perturbation method and an evaluation of the likelihood using the Kalman filter - these steps may require several hours of computing time.
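Step 4 is where much of that time goes. A compact scalar version of the Kalman-filter likelihood evaluation makes the step concrete; this is a generic textbook filter, shown for illustration only (Dynare implements the multivariate analogue).

```python
import numpy as np

def kalman_loglik(y, T, Z, Q, H, P0):
    """Log-likelihood of a scalar linear state-space model via the
    Kalman filter:
       alpha_t = T*alpha_{t-1} + eta_t,  eta ~ N(0, Q)
       y_t     = Z*alpha_t + eps_t,      eps ~ N(0, H)."""
    a, P = 0.0, P0
    ll = 0.0
    for yt in y:
        a_pred = T * a                  # state prediction
        P_pred = T * P * T + Q
        F = Z * P_pred * Z + H          # prediction-error variance
        v = yt - Z * a_pred             # prediction error
        ll += -0.5 * (np.log(2.0 * np.pi * F) + v * v / F)
        K = P_pred * Z / F              # Kalman gain
        a = a_pred + K * v              # state update
        P = (1.0 - K * Z) * P_pred
    return ll

rng = np.random.default_rng(2)
y = rng.standard_normal(50)
print(round(kalman_loglik(y, T=0.9, Z=1.0, Q=0.5, H=0.1, P0=1.0), 2))
```

In the estimation loop this evaluation is repeated at every parameter draw, which is why the block and parallel techniques described in this report target it directly.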

This contribution investigates ways to reduce the computational time devoted to the simulation and the estimation of large-scale DSGE models. Most DSGE models have a block-recursive structure. As an obvious example, AR(1) shocks can be solved independently of the remaining variables of the model. This recursive block structure is also found in models with nominal rigidities where potential GDP is computed with the same model without nominal rigidities, in multi-country models composed of a large country and of several small countries with no feedback effects from the small countries to the large one, or in overlapping generations models without intergenerational altruism.
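The block structure can be recovered mechanically from the dependency graph of the variables: the strongly connected components are the blocks of simultaneous variables, and a topological order of the components gives the order in which the blocks can be solved. The sketch below applies Kosaraju's algorithm to a hypothetical four-variable model; it illustrates the idea, not Dynare's actual preprocessor.

```python
def solve_order(deps):
    """Group variables into simultaneous blocks (strongly connected
    components) and return the blocks in a recursive solve order.
    deps[v] lists the variables v depends on. Kosaraju's algorithm is
    run on the 'influences' graph (edge u -> v iff v depends on u)."""
    nodes = set(deps)
    for vs in deps.values():
        nodes |= set(vs)
    fwd = {v: [] for v in nodes}       # u -> v : u influences v
    rev = {v: [] for v in nodes}       # v -> u : v depends on u
    for v, us in deps.items():
        for u in us:
            fwd[u].append(v)
            rev[v].append(u)
    finish, seen = [], set()
    def dfs1(v):
        seen.add(v)
        for w in fwd[v]:
            if w not in seen:
                dfs1(w)
        finish.append(v)
    for v in sorted(nodes):
        if v not in seen:
            dfs1(v)
    blocks, assigned = [], set()
    def dfs2(v, block):
        assigned.add(v)
        block.append(v)
        for w in rev[v]:
            if w not in assigned:
                dfs2(w, block)
    for v in reversed(finish):
        if v not in assigned:
            block = []
            dfs2(v, block)
            blocks.append(sorted(block))
    return blocks

# z: AR(1) shock (no contemporaneous dependencies); y and pi are
# simultaneous; c is a small recursive block fed by y.
deps = {"z": [], "y": ["pi", "z"], "pi": ["y"], "c": ["y"]}
print(solve_order(deps))
```

On this example the shock block {z} is solved first, the simultaneous block {pi, y} second, and the purely recursive block {c} last; only the middle block requires a genuine simultaneous solve.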

Several papers have examined ways to exploit this block structure to improve deterministic simulations (Gilli and Pauletto (1998), van 't Veer (2006)), but few contributions have considered stochastic simulations or likelihood evaluation (Strid and Walentin (2009)).

This contribution investigates how to speed up the stochastic simulation and estimation of DSGE models using block decomposition. More precisely, the gains from block decomposition are assessed in each of the four steps involved in the DSGE likelihood evaluation.

First, this contribution addresses the block decomposition of DSGE models itself: the decomposition method is carefully described, with attention to ways of reducing block size. Then the gains from block decomposition in the first two steps of the likelihood evaluation are examined, with particular attention to the reduction step in the computation of the rational expectations solution. To take advantage of the block decomposition for stochastic simulation and estimation, the first-order approximation of the block-decomposed model is derived. Finally, this contribution describes the implementation of the block Kalman filter proposed by Strid and Walentin (2009), in order to make the most of the block structure.

Final summary:

All objectives have been achieved.

Workpackage 2.2: Parallel computing

The objective of this workpackage is to develop a parallel computation toolbox for DYNARE. The toolbox has been completed, is operational and is available in the official distribution and installation of DYNARE. The main outcomes of the development work performed are summarized next.

The parallel package within DYNARE has been developed from two different perspectives: the "user perspective" and the "developer perspective". The fundamental requirement of the user perspective is to let DYNARE users invoke the parallel routines easily, quickly and appropriately; the interface developed for DYNARE users allows modellers to exploit the parallel features with minimal learning effort. Under the developer perspective, on the other hand, we have built a core of parallelizing routines that are sufficiently abstract and modular for DYNARE software developers to use them easily as a sort of "parallel paradigm", applicable to any DYNARE routine or portion of code containing computationally intensive loops that will be suitable for parallelization in the future development of DYNARE. Parallel DYNARE ships with the official DYNARE installation package, so the preprocessor support required to interpret the cluster definition is built into the standard DYNARE installation. Other important features of the software are that (i) it manages message passing between processes, and (ii) it is compatible with fully hybrid clusters: multi-platform (Unix/Windows) and multi-environment (Octave/MATLAB).

The solution implemented for Parallel DYNARE can be synthesized as follows (Ratto (2010), Ratto et al. (2011)):

When a portion of code is to be executed in parallel, instead of running it inside the active MATLAB session the following steps are performed:

1. The control of the execution is passed to the operating system (Windows/Linux) that allows for multi-threading;

2. Concurrent threads (i.e. MATLAB instances) are launched on different processors/cores/machines;

3. When the parallel computations are concluded, control is given back to the original MATLAB session, which collects the results from all parallel "agents" involved and coherently continues along the sequential computation.
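The three steps above can be sketched with ordinary process-level parallelism. The toy example below (Python's multiprocessing standing in for the MATLAB/Octave agents; the target density, tuning and function names are our own) launches independent random walk Metropolis-Hastings chains in separate processes and collects the draws back in the master session:

```python
import math
import multiprocessing as mp
import random

def rwmh_chain(args):
    """One random walk Metropolis-Hastings chain targeting a standard normal."""
    seed, n_draws = args
    rng = random.Random(seed)          # each agent gets its own seed
    x, draws = 0.0, []
    for _ in range(n_draws):
        proposal = x + rng.gauss(0.0, 1.0)
        # log acceptance ratio for the target pi(x) proportional to exp(-x^2 / 2)
        if math.log(rng.random()) < 0.5 * (x * x - proposal * proposal):
            x = proposal
        draws.append(x)
    return draws

if __name__ == "__main__":
    # step 2: launch concurrent agents; step 3: collect results in the master
    with mp.Pool(processes=4) as pool:
        chains = pool.map(rwmh_chain, [(seed, 5000) for seed in range(4)])
    print([sum(c) / len(c) for c in chains])   # per-chain means
```

Because the chains are independent, dispatching n of them to n processors gives close to the ideal factor-n speed-up, which is the property exploited by Parallel DYNARE.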

The DYNARE components that are parallelized and tested in DYNARE are listed below:

1. The Random Walk Metropolis-Hastings algorithm (and the analogous Independent Metropolis-Hastings) with multiple parallel chains;

2. A number of procedures performed after the completion of Metropolis, that use the posterior MC sample:

a. The diagnostic tests for the convergence of the Markov Chain (McMCDiagnostics.m);

b. The function that computes posterior IRF's (posteriorIRF.m);

c. The function that computes posterior statistics for filtered and smoothed variables, forecasts, smoothed shocks, etc. (prior_posterior_statistics.m);

d. The utility function that loads matrices of results and produces plots for posterior statistics (pm3.m)

The parallel package has been tested and debugged. In terms of computational gain, we focused on the parallelization of the Random Walk Metropolis-Hastings algorithm, which is the most expensive block in the Bayesian estimation of DSGE models. For all models analysed, and for a standard number of Metropolis iterations (> 100,000), the cost of running n parallel chains on n processors is reduced by almost a factor of n with respect to the equivalent serial execution, i.e. very near the maximum theoretical speed-up. The official DYNARE distribution includes tests for parallel execution: the test model ls2003.mod, available in the folder tests\parallel, allows running parallel examples.

Final summary:

All objectives have been achieved.

Workpackage 3.1: Identification

The objective of this workpackage is to develop an identification toolbox for DYNARE. The toolbox has been completed, is operational and is available in the official distribution and installation of DYNARE. The main outcomes of the development work performed are summarized next.

In developing the identification software, we took into consideration the most recent developments in computational tools for analysing identification in DSGE models. The toolbox provides a wide set of diagnostics of the identification strength of a model. Advanced options allow the user to inspect identification patterns, helping the analyst track the weakest elements of the model parameter set. Moreover, a Monte Carlo option allows the user to study how identification features vary across the entire prior space of model parameters.

The methodological basis of the identification toolbox in DYNARE has been designed around two pillars (Ratto and Iskrev, 2011a,b). The first comprises the local identification methods à la Iskrev (2010a,b), in conjunction with an efficient analytic derivation engine developed at the JRC, to obtain point-wise identification diagnostics (JRC, 2010; Ratto and Iskrev, 2010). The second is a Monte Carlo shell that repeatedly analyses local identification across the prior space of model parameters; the global sensitivity analysis tests of Ratto (2008) are then used to trace how identification properties change in different regions of the prior space. The analytic derivation engine has been extensively developed and includes the analytic derivatives of the model solution, of its theoretical moments and of the likelihood (scores and Hessian). The local identification algorithms fully implement the Iskrev (2010b) and Andrle (2010) methods: the rank tests based on Jacobians, the evaluation of the information matrix to measure identification strength, and the identification patterns. Moreover, the Monte Carlo shell has been combined with the Monte Carlo filtering tests for global sensitivity analysis of Ratto (2008), in order to trace a measure of weak identification (the condition number of the Jacobian) against the model parameters.

Main features of the software:

The simple keyword identification triggers the identification diagnostics developed within MONFISPOL. The toolbox tackles the following fundamental question: "How well are parameters in DSGE models identified?" In doing so, the identification toolbox addresses the main elements of local identification problems, is easy to use (the single command identification) and helps to better understand the inner workings of the model.

For this assessment, the identification toolbox exploits the state equations, the first and second moments of the model, analytical derivatives and efficient computation procedures. It covers the following issues:

* Non-identification based on rank deficiencies;

* Under-identification: a parameter does not affect the moments;

* Partial identification: parameters are collinear and cannot be identified separately;

* Identification of the parameter(s) that lead to a rank deficiency.

The code provides a measure of identification strength based on:

* the information matrix;

* weak identification through sensitivity of moments or near multi-collinearity.
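The core computation behind these diagnostics can be sketched as follows (our illustration of the Jacobian-based rank test and collinearity check, not the toolbox code): stack the derivatives of the model moments with respect to the parameters into a Jacobian J, then inspect its singular values and the correlations between its columns:

```python
import numpy as np

def identification_diagnostics(J, tol=1e-10):
    """J[i, k] = derivative of moment i w.r.t. parameter k."""
    sv = np.linalg.svd(J, compute_uv=False)
    rank = int(np.sum(sv > tol * sv[0]))       # rank test: identified directions
    cond = sv[0] / sv[-1]                      # large value => weak identification
    Jn = J / np.linalg.norm(J, axis=0)         # normalize columns (assumes no zero column)
    collin = np.abs(Jn.T @ Jn)                 # near-1 off-diagonals => partial identification
    return rank, cond, collin

# toy Jacobian: the third parameter's column is the sum of the first two,
# so the three parameters cannot be identified separately
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
rank, cond, collin = identification_diagnostics(J)
print(rank)   # 2: one direction of the parameter space is not identified
```

A zero column of J would signal under-identification (the parameter does not affect the moments), and the condition number is the weak-identification measure traced against the parameters in the Monte Carlo shell.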

A library of test routines is also provided in the official DYNARE test folder. These tests implement some of the examples described in the MONFISPOL deliverable Ratto and Iskrev (2011a).

* Kim (2003): the DYNARE routines for this example are placed in the folder dynare_root/tests/identification/kim;

* An and Schorfheide (2007): the DYNARE routines for this example are placed in dynare_root/tests/identification/as2007.

Final summary:

All objectives have been achieved.

Workpackage 3.2: Partial information and estimation

There are two main strands to this work package:

1. To set up software for estimation under partial (imperfect) information, and also for producing impulse response functions and second moments;

2. To compute optimal and robust policy under partial information.

Estimation and Impulse Response Software

The purpose of this part of the work package was to produce software that in effect mimics the Dynare software already in place for the case of perfect information on the part of agents.

The software assumes a linear model; the only difference between a modfile under usual Dynare and under partial information is a declaration, either at the beginning of a line or in the stochsimul command, that partial information is being used. In terms of the computations involved, the main difference is the assumption of symmetry of the information set: the variables declared in the setobs command are also the only variables observed by agents. By incorporating into the model equations new variables that include additive shocks, one can also allow for measurement error.

Dynare stores all rational expectations models in Sims-McCallum form, as detailed in the Dynare manual. However, the general partial information results cannot be applied to models in this form; internally, the software converts models into Blanchard-Kahn form in the manner detailed by Levine and Pearlman (2011). For the stochsimul command, the output produced under partial information (impulse response functions, second moments, autocorrelations) is exactly that produced under perfect information, with the exception of the variance decomposition. The latter is not meaningful under partial information because the variances of the shocks have nonlinear effects on the variances of the variables.

For the estimation software, the output is identical to that for perfect information, with mode calculations followed by the MCMC algorithm. The only difference is that missing observations cannot yet be handled by the software. Applications of this estimation software are to be found in Levine et al. (2010) and Levine et al. (2011).

Testing the Software:

This has been done in several ways:

1. Using a very simple model with all likelihood values calculated on an Excel spreadsheet;

2. Using a slightly larger model in which it is easy to see analytically that partial information is equivalent to having perfect information;

3. Using a larger model for the case when the number of measurements is equal to the number of shocks (and shocks have an immediate impact on measurements).

In all of these cases the two pairs of likelihood values should have been identical; in fact they matched up to at least six significant figures.

Extending the Software:

The aim was to generalize the software to allow agents and econometricians to have differing information sets. However, it was only appreciated very late in the project that the final test above was definitive. As a consequence, testing of the case of differing information sets stalled, and it became too late to incorporate it into the software.

However, to compensate, the software has instead been extended to cover DSGE-VAR calculations as well. The idea behind DSGE-VAR is to start a VAR estimation from priors generated by the DSGE estimation procedure. This is already done under normal Dynare; the new software uses the partial information second-moment routines to extend it to the partial information case.

Summary:

All the main objectives - calculation of impulse response functions, second moments, MCMC estimation - have been achieved. A subsidiary objective - estimation for differing information sets - has not; but to compensate, the software has been extended to include DSGE-VAR, which was not in the original specification. A short user manual has also been produced.

Optimal and Robust Policy under Partial Information

The objective here was to produce software analogous to that of work package 1.2. Because of certainty equivalence, the fully optimal and time-consistent rules have exactly the same structure as under perfect information, except that they apply to the current best estimates of the states rather than to their actual values. The only important differences in the computed output are the value of the utility function and the equilibrium variances of the variables.
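In symbols (a standard certainty-equivalence statement in our own notation, not output of the software): if the full-information optimal rule is u_t = -F x_t, then under partial information the same feedback matrix is applied to the filtered state,

```latex
u_t \;=\; -F\,\hat{x}_{t|t},
\qquad
\hat{x}_{t|t} \;=\; \mathbb{E}\!\left[x_t \mid m_1,\ldots,m_t\right],
```

where the estimate \hat{x}_{t|t} is produced by the Kalman filter from the observed measurements m_t, and only the attained utility and the equilibrium variances differ from the full-information case.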

For optimized simple rules, certainty equivalence no longer holds; once again, however, the main differences are in utility and variances. Nevertheless, since the optimized simple rules themselves differ, it is useful to generate impulse response functions, which can be done by recording the optimized simple rule and using the stochsimul command.

Robust simple and zero lower bound rules for each of fully optimal, time-consistent and optimized simple rules may also be obtained.

Final summary:

All objectives have been achieved. The user manual is incorporated into that of work package 1.2.

Workpackage 3.3: Bayesian priors

An area where it is possible to instil more realism into our modelling approach is the specification of priors. It is often easy to make statements about growth rates of variables, such as GDP not growing above 5% or below -1% in developed economies, but translating such statements into probability statements about parameters can be extremely complex. This workpackage, carried out at the Institut d'Anàlisi Econòmica (CSIC), automates this translation process to render the estimation of complex VAR and DSGE models more straightforward, with more realistic and precise priors on parameters.

The analytical tool, developed by the team at CSIC, provides a benchmark prior for the estimation of vector autoregressions. The contribution starts from the fact that the ordinary least squares (OLS) estimator tends to underestimate persistence in autoregressive models when only a small sample is available. This may significantly affect empirical results, especially the impulse responses at longer lags. Many techniques have been designed to estimate autoregressions in small samples, using both classical and Bayesian approaches. The contribution of this analytical tool is to provide a widely acceptable procedure for estimating autoregressions with small samples. One result shows that, given the same treatment of the initial condition, Bayesian and classical econometricians agree on the appropriateness (or not) of OLS. To this end, it is proposed to use an informative prior based on the a priori distribution of the observed series in the first few periods of the sample. This kind of prior has many advantages: (a) it clearly relates initial observations and parameters; (b) it may be a near-consensus prior; (c) it is much easier to express an opinion about a prior distribution of observed variables than about one of VAR parameters; (d) it entirely sidesteps the issue of what is a "truly" uninformative prior in time series. The implementation of this approach raises a technical difficulty: solving a Fredholm integral equation in a very high-dimensional parameter space. An algorithm to overcome this issue is a further contribution of this analytical tool.
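The small-sample bias that motivates this tool is easy to reproduce. A minimal Monte Carlo sketch (our own, using only the standard library): simulate a stationary AR(1) with persistence 0.9 and T = 50 observations, and average the OLS estimates over many replications; the mean comes out well below the true value:

```python
import random

def simulate_ar1(rho, T, rng):
    """One draw of length T from y_t = rho * y_{t-1} + e_t, e_t ~ N(0, 1)."""
    y = [rng.gauss(0.0, 1.0) / (1.0 - rho ** 2) ** 0.5]   # stationary start
    for _ in range(T - 1):
        y.append(rho * y[-1] + rng.gauss(0.0, 1.0))
    return y

def ols_rho(y):
    """OLS estimate of rho in a no-intercept AR(1) regression."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

rng = random.Random(12345)
rho, T, reps = 0.9, 50, 2000
mean_est = sum(ols_rho(simulate_ar1(rho, T, rng)) for _ in range(reps)) / reps
print(round(mean_est, 3))   # systematically below the true value 0.9
```

The downward bias shrinks as T grows, which is why it matters mainly for the short samples typical of macroeconomic data.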

Final summary:

All objectives have been achieved.

Workpackage 4.1: Optimal policy in open economy

This workpackage consists of two models that led to two academic papers.

In the first paper, we analyze the optimal choice of exchange rate regime in a two-country model with price stickiness and matching frictions. Labor market flows and labor costs are important determinants of the international transmission of shocks, hence the optimal choice of the exchange rate target cannot neglect the impact that relative movements in unemployment, wages and job flows have on currency fluctuations. Furthermore, it is well known that both asymmetric shocks and exchange rate fluctuations have a significant impact on labor market dynamics, as they affect the dynamics of relative marginal costs across countries. In this context the optimal degree of exchange rate stabilization is therefore an important determinant of fluctuations in the labour market.

We use a DSGE two-country model with matching frictions and price rigidity. The labor market is characterized by endogenous job destruction, and wages are determined through efficient Nash bargaining. These elements allow us to characterize the dynamics of unemployment and labor market participation in response to external shocks and exchange rate fluctuations. The model also features sticky prices. Using this model we focus on two questions. First, for the two-country model we analyze the international transmission of shocks and the welfare ranking of different exchange rate regimes. This question is very much inspired by the policy debate on whether the European Central Bank should target the exchange rate with the dollar. Second, we consider a currency area and ask what the optimal monetary policy rule is when area members face asymmetric labour market conditions. The latter question also carries a policy flavour, as much discussion exists in Europe on whether the European Central Bank should target unemployment alongside inflation, and on whether it should assign different weights to countries with different degrees of market frictions.

The comparison among different exchange rate regimes in the model (floating versus pegged) shows that increasing the response to exchange rate fluctuations in the monetary policy rule reduces macroeconomic volatility. As agents are risk averse, the fall in macroeconomic volatility implies an increase in welfare. The figure below shows this result for the welfare of domestic and foreign agents:

Furthermore, we compute the optimal monetary policy rule for a currency area, whose area members have different labor market institutions in terms of unemployment benefit coverage. The optimal rule is characterized by a positive response to unemployment and assigns different weights to different countries.

In the second paper, the main question is whether monetary policy should change in an environment with a high degree of financial globalization. The last two decades have been characterized by an extraordinary wave of financial globalization, often accompanied by persistent current account imbalances. For many countries, current account imbalances have been negatively related to booms in house prices, mortgages and consumer credit, and in the demand for durable goods such as residential property. For many countries the boom in durable and housing investment has been financed mainly through direct or indirect international lending. See the figures below, which show the unprecedented growth in foreign lending even for industrialized countries such as the US and the UK:

As a result, current account dynamics have often been determined by, and correlated with, swings in house prices. Additionally, as foreign lenders have weaker abilities to redeploy assets, lending standards have been quite loose and often tied to collateral values. To study this, we lay down a DSGE small open economy model in which agents consume durable and non-durable goods, supply labor services and finance consumption with foreign lending. The rest of the world is populated by infinitely lived agents whose behavior accords with the consumption-smoothing hypothesis. Total (net) foreign lending is constrained by a borrowing limit and is secured by collateral in the form of the durable stock, as the latter can be seized by lenders in the event of default. Due to imperfect monitoring, only a fraction of this collateral can be pledged. Firms in this economy are monopolistically competitive and face quadratic adjustment costs.

Three main results arise. First, we show that net asset accumulation in this model is uniquely determined in the steady state and is saddle-path stationary in a neighborhood of the steady state. In this case the domestic economy experiences a persistent current account deficit, as in equilibrium domestic residents behave as impatient agents and borrow from the rest of the world. Second, we compare alternative exchange rate regimes and show that, under high financial liberalization, fluctuations in the exchange rate induce swings in the value of collateral, thereby affecting the availability of foreign lending and amplifying fluctuations in consumption, output and the CPI, as shown in the model-generated figure below.

Finally, we analyze optimal policy via Pareto efficient allocations and Ramsey monetary policies. We show that optimal monetary policy may want to deviate from zero inflation and target the exchange rate in order to reduce swings in the wedges.

Final summary:

All objectives have been achieved.

Workpackage 4.2: Optimal policy with labor and financial market frictions

This workpackage consists of two models that led to two academic papers.

The first paper is titled "Fiscal Policy in a Model with Matching Frictions". The endorsement of expansionary fiscal packages has often been based on the idea that large multipliers can counteract rising and persistent unemployment. Following the 2007-2008 crisis, various national governments around the globe passed expansionary fiscal packages, arguing that, with nominal interest rates at the zero lower bound, only a strong fiscal stimulus could help to counter the recession and rising unemployment. In the United States the fiscal stimulus involved, alongside pure increases in government spending, incentives to hiring. With the Hiring Incentives to Restore Employment (HIRE) Act, enacted on March 18, 2010, new tax benefits were made available to employers who hire previously unemployed workers and retain them for a certain period of time. Recent and past literature shows that fiscal multipliers for demand stimuli are generally small in business cycle models with various nominal and real frictions, and none of the previous studies considered fiscal stimuli in the form of hiring subsidies. We compute short run and long run multipliers to examine the effectiveness of fiscal stimuli, considering both increases in government spending and hiring subsidies. We do so using a New Keynesian model with Hosios inefficiency and endogenous workers' participation. These elements induce inefficient and involuntary unemployment, alongside a discouraged-worker effect, thereby providing scope for policy intervention. However, not all policy interventions are effective. We show that increases in government spending remain ineffective in terms of producing large multipliers, even more so under frictional labor markets. On the contrary, hiring subsidies are very beneficial both in the short run and in the long run, as they provide incentives for vacancy posting and increase labor market participation.
A short summary of the results for the aggregate demand multiplier in the model with endogenous participation is reported in the table below, for a fiscal stimulus in the form of hiring subsidies and for different model assumptions.

| Model variant | Lump sum taxation, short run | Lump sum taxation, long run | Distortionary taxation, short run | Distortionary taxation, long run |
| --- | --- | --- | --- | --- |
| Baseline | 1.23 | 1.75 | 1.40 | 1.96 |
| Bargaining power, high | 3.48 | 4.15 | 3.84 | 4.83 |
| Bargaining power, low | 0.6280 | 1.1361 | 0.7252 | 1.2279 |
| Labor elasticity, high | 1.3478 | 1.6454 | 1.5679 | 1.9659 |
| Labor elasticity, low | 1.1728 | 1.7994 | 1.2880 | 1.9531 |
| Real wage rigidity | 1.5702 | 1.7055 | 1.6716 | 1.8904 |
| With interest rate peg | -1.3859 | 1.0961 | -0.7931 | 1.2800 |
| Non-linear cost of posting vacancies | 1.2928 | 1.6620 | 1.4413 | 1.8519 |

The second paper is titled "Capital Regulation and Monetary Policy with Fragile Banks". The financial crisis is producing, among other consequences, a change in the perception of the roles of financial regulation and monetary policy. The traditional question of the optimal taxation of capital has been rephrased partly in terms of an optimal Pigouvian taxation of bank capital, which in practice takes the form of the bank capital requirements outlined in the various Basel agreements, including the latest, Basel III. There is a call for the design of macro-prudential policies and for coordination between those and monetary policy. In this paper we study how bank regulation and monetary policy interact in a macroeconomy that includes a fragile banking system. We incorporate a state-of-the-art banking theory into a general equilibrium macro framework, together with some key elements of the financial fragility experienced in the recent crisis. In our model, banks have special skills in redeploying projects in case of early liquidation. Uncertainty in project outcomes injects risk into bank balance sheets. Banks are financed with deposits and capital; bank managers optimize the bank capital structure by maximizing the combined return of depositors and capitalists. Banks are exposed to runs, with a probability that increases with their deposit ratio or leverage. The relationship between the bank and its outside financiers (depositors and capitalists) is disciplined by two incentives: depositors can run the bank, forcing early liquidation of the loan and depriving bank capital of its return; and the bank can withhold its special skills, forcing a costly liquidation of the loan. The desired capital ratio is determined by trading off balance sheet risk against the ability to obtain higher returns for outside investors in "good" states (no run), returns which increase with the share of deposits on the bank's liability side.

Inserting this banking core into a standard DSGE framework yields a number of results. A monetary expansion or a positive productivity shock increases bank leverage and risk. The transmission from productivity changes to bank risk is stronger when the perceived riskiness of the projects financed by the bank is low. Pro-cyclical capital requirements (akin to those built into the Basel II capital accord) amplify the response of output and inflation to other shocks, thereby increasing output and inflation volatility, and reduce welfare. Conversely, anti-cyclical ratios, requiring banks to build up capital buffers in the more expansionary phases of the cycle, have the opposite effect. To analyse alternative policy rules we use second order approximations, which in non-linear models allow us to account for the effects of volatility on the means of all variables, including welfare. Within a broad class of simple policy rules, the optimal combination includes mildly anti-cyclical capital requirements (i.e. requirements that make banks build up capital in cyclical expansions) and a monetary policy that responds rather aggressively to inflation and also reacts systematically to financial market conditions, either to asset prices or to bank leverage.

Final summary:

All objectives have been achieved.

Workpackage 4.3: Optimal policy and game theory

This workpackage led to one scientific paper. The paper deals with the question of exiting the expansionary monetary and fiscal policies put in place under the extraordinary circumstances of the 2007-2008 crisis. In all industrial countries, public sector deficits have expanded sharply since the second half of 2008, owing to the combined effect of automatic stabilizers, on both the expenditure and revenue sides, and of discretionary measures to support the financial, corporate and household sectors. The extent and nature of the official support varied across countries, but the overall effect was impressive by all standards: budget deficits increased by about 5 percent of GDP between 2008 and 2009 in both the US and the euro area. For this purpose we use an adapted version of the model proposed by Angeloni and Faia (2009), henceforth AF, which integrates a risky banking sector, modelled as in Diamond and Rajan (2000), into a standard DSGE macro framework. In addition to the usual channels of monetary policy, there is also a "risk-taking" channel, affecting macroeconomic outcomes via the extent of risk present in bank balance sheets.

Our model starts from a crisis scenario, triggered by a number of financial and banking shocks, which generates a recession and endogenously brings the interest rate, set by the monetary authority, to zero. While the monetary policy exit is generally an endogenous move away from the zero lower bound for the interest rate, the fiscal exit is modelled as a change in the fiscal rule, in the direction of a faster consolidation of public debt. We examine a variety of such rules, which differ in the speed of debt consolidation, the information provided to economic agents, the composition of the fiscal adjustment, and so on. This approach permits us to pose questions common to many current discussions of exit strategies, such as gradualism versus preemptive action, sequencing and delay, and communication policy. Our main conclusions can be summarized as follows. First, exiting the post-crisis policy stance is beneficial: almost any exit strategy leads to an improvement in terms of our evaluation criteria (intertemporal changes in output, inflation and bank risk) relative to the status quo, i.e. the indefinite continuation of the post-crisis accommodative policy course. The gain is greater at long horizons, while in the short run (the first 20 quarters) the results are more mixed. Active fiscal strategies, geared to an ambitious debt consolidation target and credibly communicated in advance, dominate gradual, unannounced ones. The composition of fiscal policy matters: spending-based fiscal strategies are superior to tax-based ones in most cases.

Final summary:

All objectives have been achieved.

Workpackage 4.4: Optimal debt

This workpackage led to one scientific paper. In the paper, optimal fiscal policy is reconsidered with respect to the maturity of sovereign debt. As the current European sovereign debt crisis emphasizes, the maturity structure of government debt is a key variable. Deciding fiscal policy independently of funding conditions in the market is a doomed concept: taxes, public spending and public deficits should take into account the funding conditions in the market for bonds. Debt management should therefore not be subservient to fiscal policy and simply in charge of "minimizing costs"; fiscal policy and debt management should be studied jointly. Any theory of debt management needs to explain the costs and benefits for fiscal policy of varying the average maturity.

The contribution studies Ramsey optimal fiscal policy under incomplete markets in the case where the government issues only long bonds of maturity N > 1. The results emphasize that many features of optimal policy are sensitive to the introduction of long bonds, in particular tax variability and the long run behaviour of debt. When the government is in debt, it is optimal to respond to an adverse shock by promising to reduce taxes in the distant future, as this achieves a cut in the cost of debt. Hence, debt management concerns about the cost of debt override typical fiscal policy concerns such as tax smoothing. When the government leaves bonds in the market until maturity, two additional sources of tax volatility arising from debt management concerns have to be reported: debt has to be brought to zero in the long run, and there are N-period cycles. The contribution formulates the equilibrium recursively, applying the Lagrangian approach for recursive contracts. Even with this approach the dimension of the state vector is very large; a flexible numerical method to address this issue, the "condensed PEA", which substantially reduces the required state space, is proposed. This technique has a wide range of applications. To explore issues of policy coordination and commitment, an alternative model in which the monetary and fiscal authorities are independent is also developed.

Final summary:

All objectives have been achieved.

Workpackage 5.1: Macro-econometric models database

The following deliverables were part of this workpackage:

* 5.1.1 Report on the collection of models and their translation into a common file structure. Formal exposition of a systematic approach to model comparison. Deliverable submitted as Wieland, Volker, Tobias Cwik, Gernot Mueller, Sebastian Schmidt, and Maik Wolters, "A New Comparative Approach to Macroeconomic Modeling and Policy Analysis," Working Paper, Goethe University Frankfurt, 2011.

* 5.1.2 Documentation of the creation of the computational platform "Macroeconomic Model Database" and user manual on how to conduct model comparisons and policy evaluations using the Modelbase software. Protocol how to include additional models to the database. Deliverable submitted as Appendix A and B of Wieland, Volker, Tobias Cwik, Gernot Mueller, Sebastian Schmidt, and Maik Wolters, "A New Comparative Approach to Macroeconomic Modeling and Policy Analysis," Working Paper, Goethe University Frankfurt, 2011.

* 5.1.3 Academic paper summarizing conclusions of model comparisons. The conclusions are presented in three papers:

o Wieland, Volker, and Maik H. Wolters, "The Diversity of Forecasts from Macroeconomic Models of the U.S. Economy," Economic Theory 47: 247-292, May 2011.

o Taylor, John B., and Volker Wieland, "Surprising Comparative Properties of Monetary Models: Results from a New Model Database," Review of Economics and Statistics, forthcoming.

o Schmidt, Sebastian and Volker Wieland, "The New Keynesian Approach to Dynamic General Equilibrium Modeling: Models, Methods and Macroeconomic Policy Evaluation," in preparation for P.B. Dixon and D.W. Jorgenson, Eds., Handbook of Computational General Equilibrium Modeling, Elsevier.

Summary of work

A new comparative approach to model-based research and policy analysis has been formulated that enables individual researchers to conduct model comparisons easily, frequently, at low cost and on a large scale. The approach consists of several systematic steps that make models with distinct structural assumptions, different variables and different notation comparable to each other. In particular, these steps involve augmenting the models with a set of common variables, parameters, shocks and equations. A detailed formal exposition of the approach is presented in Wieland, Volker, Tobias Cwik, Gernot Mueller, Sebastian Schmidt, and Maik Wolters, "A New Comparative Approach to Macroeconomic Modeling and Policy Analysis," Working Paper, Goethe University Frankfurt, 2011 (deliverable 5.1.1).

The approach has been used to build a model archive based on a common computational platform using the DYNARE software package. The database by now includes 50 macroeconomic models, ranging from small-, medium- and large-scale DSGE models to earlier-generation New Keynesian models with rational expectations and more traditional Keynesian-style models with adaptive expectations. It includes models of the United States, the Euro Area, Canada and several small open emerging economies. Some of the models explicitly incorporate financial frictions. All models have been augmented following the approach described above in order to facilitate the systematic comparison of their empirical implications. Current objects for model comparison include impulse response functions, autocorrelation functions and unconditional variances, and users can decide how many models to include in their experiments. The selected models are solved under a common monetary policy rule chosen by the user, which allows policies to be evaluated across models.
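To convey the flavour of such an exercise, the following toy sketch (hypothetical Python, not the Modelbase code; both model parameterizations and the rule coefficients are made up) closes two small backward-looking models with one common Taylor-type rule and compares their impulse responses to the same monetary shock:

```python
# Toy illustration of model comparison under a common policy rule.
# Each "model" is a backward-looking demand/Phillips-curve pair with its
# own (invented) parameters; both are closed with the same Taylor rule.

def impulse_response(a_pi, kappa, a_y, sigma, phi_pi=1.5, phi_y=0.5, T=12):
    """Path of (output gap, inflation) after a unit interest-rate shock at t=0."""
    y_lag = pi_lag = i_lag = 0.0
    path = []
    for t in range(T):
        shock = 1.0 if t == 0 else 0.0
        y = a_y * y_lag - sigma * (i_lag - pi_lag)   # demand equation
        pi = a_pi * pi_lag + kappa * y               # Phillips curve
        i = phi_pi * pi + phi_y * y + shock          # common Taylor-type rule
        path.append((y, pi))
        y_lag, pi_lag, i_lag = y, pi, i
    return path

models = {"model A": (0.7, 0.10, 0.8, 0.5),          # hypothetical parameter sets
          "model B": (0.5, 0.30, 0.6, 1.0)}
for name, params in models.items():
    irf = impulse_response(*params)
    print(name, ["%.3f" % y for y, _ in irf[:4]])
```

The point of the sketch is only the mechanics: the same rule coefficients (phi_pi, phi_y) are imposed on every model, so differences in the printed responses reflect differences in model structure alone.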

We have recently released the second update of the Modelbase software, which can be downloaded together with a comprehensive list of all models from our website www.macromodelbase.com. A user manual and model documentation (deliverable 5.1.2), as well as a list of papers that build on our comparative approach and/or employ models from the database, are also available from the webpage.

Implemented models have been validated through replications of results presented in the original references. In the course of the recent Modelbase update, replication files have been made available for download.

Deliverable 5.1.3 is spread over three papers. In Wieland, Volker, and Maik H. Wolters, "The Diversity of Forecasts from Macroeconomic Models of the U.S. Economy," Economic Theory 47: 247-292, 2011, we conduct a detailed model comparison. Specifically, the forecast performance and heterogeneity of five structural macroeconomic models during the current and the four preceding NBER-dated US recessions are evaluated and compared to less structural forecasting methods as well as professional forecasts from the Federal Reserve's Greenbook and the Survey of Professional Forecasters (SPF). We focus on two key macroeconomic variables, output growth and inflation. The model parameters and model forecasts are derived from historical data vintages so as to ensure comparability with historical forecasts by professionals. It turns out that no structural model consistently outperforms the others. During a particular recession, the best forecasts at different horizons usually come from different models. However, some systematic differences in performance can be identified. The CEE-SW model [Christiano et al. (2005) as estimated in Smets and Wouters (2007)] and the FRB-EDO model [Edge et al. (2008)] deliver fairly good forecasts in four out of five recessions. Several times they yield the most accurate forecasts, and in those cases where they are less precise than other models, the difference from the most accurate forecast is small. The two models have in common a rich economic structure and a parameterization tight enough to yield accurate forecasts.
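The kind of accuracy comparison underlying these statements can be illustrated with a minimal sketch (hypothetical Python with made-up forecast paths and outcomes; the `rmse` helper and all numbers are ours, not the paper's):

```python
import math

# Illustrative forecast-evaluation criterion: root-mean-squared error of
# each model's forecast path against realized outcomes for one episode.

def rmse(forecasts, outcomes):
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, outcomes))
                     / len(forecasts))

outcomes = [-0.5, -1.2, 0.3, 1.0]                  # made-up output growth path
model_forecasts = {
    "CEE-SW":   [-0.4, -0.9, 0.5, 1.1],            # made-up forecast paths
    "FRB-EDO":  [-0.7, -1.0, 0.1, 0.8],
    "mean SPF": [-0.2, -0.6, 0.6, 1.2],
}
for name, f in model_forecasts.items():
    print(f"{name:9s} RMSE = {rmse(f, outcomes):.3f}")
```

In the actual exercise, such statistics would be computed per model, per horizon and per recession, using the real-time data vintages mentioned above.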

The mean model forecast comes surprisingly close to the mean SPF and Greenbook forecasts in terms of accuracy even though the models only make use of a small number of data series. Model forecasts compare particularly well to professional forecasts at a horizon of three to four quarters and during recoveries. The extent of forecast heterogeneity is similar for model and professional forecasts but varies substantially over time. Thus, forecast heterogeneity constitutes a potentially important source of economic fluctuations.

In addition, model comparison exercises have been conducted in Taylor, John B. and Volker Wieland, "Surprising Comparative Properties of Monetary Models: Results from a New Model Database," forthcoming in Review of Economics and Statistics and in Schmidt, Sebastian and Volker Wieland, "The New Keynesian Approach to Dynamic General Equilibrium Modeling: Models, Methods and Macroeconomic Policy Evaluation," in preparation for P.B. Dixon and D.W. Jorgenson, Eds., Handbook of Computational General Equilibrium Modeling, Elsevier. Taylor and Wieland look at three monetary models of the U.S. economy contained in the database in order to compare the transmission mechanism of these models and to evaluate the robustness of optimized simple rules. They find that rules which respond to the growth rate of output and smooth the interest rate are not robust. In contrast, policy rules with no interest rate smoothing and no response to the growth rate but to the level of output are more robust. Schmidt and Wieland extend the analysis of Taylor and Wieland, incorporating models with financial frictions.

Finally, a policy brief on forecasting recessions with structural models has been delivered, summarizing some of the findings in Wieland and Wolters (2011) and laying out the key messages for the current policy debate.

Final summary:

All objectives have been achieved.

Workpackage 5.2: Model validation

The following deliverables were part of this workpackage:

* 5.2.1 Optimal exchange rate report. Assess optimal exchange rate policies across a large class of models. Deliverable submitted as Cwik, Tobias and Volker Wieland, Report: Multi-country model validation and policy evaluation, 2011.

* 5.2.2 Working paper summarizing the above results. Due to current economic events, the working paper focused more on the role of the exchange rate in fiscal stimulus spillovers in the euro area. A second paper presents a model-based analysis of the impact of fiscal stimulus in the US.

o Cwik, Tobias, and Volker Wieland, "Keynesian government spending multipliers and spillovers in the euro area," Economic Policy, July 2011, 26(67): 493-549.

o Cogan, John F., Tobias Cwik, John B. Taylor, and Volker Wieland, "New Keynesian versus Old Keynesian Government Spending Multipliers," Journal of Economic Dynamics and Control, March 2010, 34: 281-295.

Summary of work:

Using a subset of six open-economy models from the database, the report by Cwik and Wieland (deliverable 5.2.1) analyzes the desirability of monetary policy responses to exchange rate movements. They determine optimal simple rules for alternative policy objective functions and monetary policy rule specifications. The key finding is that in the majority of models, monetary policy should respond only modestly to the exchange rate. This holds true whether one considers the level of the real exchange rate, the change in the real exchange rate or the change in the nominal exchange rate. However, if one considers parsimonious policy rules that can respond only to the rate of inflation, the level of output and an exchange rate measure, then systematic responses of monetary policy to the exchange rate can lead to more significant stabilization improvements in some of the models.
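Schematically, determining an optimized simple rule of this kind amounts to searching over rule coefficients for the value that minimizes a quadratic loss in simulated data. The following is a hedged toy sketch (hypothetical Python; the model equations, parameters and loss weights are invented for illustration and imply nothing about the report's actual findings):

```python
import random

# Toy exercise in the spirit of the report: choose the exchange-rate
# coefficient of a simple rule by minimizing Var(pi) + lam * Var(y) in a
# small backward-looking open-economy model with an AR(1) exchange-rate shock.

def loss(phi_e, phi_pi=1.5, phi_y=0.5, lam=0.5, T=20000, seed=0):
    rng = random.Random(seed)                      # fixed seed: deterministic loss
    y = pi = e = i = 0.0
    ys, pis = [], []
    for _ in range(T):
        e = 0.8 * e + rng.gauss(0, 1)              # exchange-rate shock
        i = phi_pi * pi + phi_y * y + phi_e * e    # simple rule with FX term
        y = 0.8 * y - 0.3 * i + 0.2 * e + rng.gauss(0, 0.5)
        pi = 0.7 * pi + 0.2 * y
        ys.append(y); pis.append(pi)
    var = lambda x: sum(v * v for v in x) / len(x)
    return var(pis) + lam * var(ys)

grid = [x / 20 for x in range(-10, 11)]            # candidate phi_e in [-0.5, 0.5]
best = min(grid, key=loss)
print("loss-minimizing exchange-rate response on the grid:", best)
```

Which coefficient wins on the grid depends entirely on the made-up parameters; in the report the analogous search is run model by model, and the exchange-rate response that survives is typically modest.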

Deliverable 5.2.2 is spread over two papers. In the course of the current financial crisis and the accompanying debates in policy circles as well as academia about the usefulness of fiscal stimulus programs, the working paper that followed the report shifted its focus towards fiscal spillovers in the euro area and the role of the exchange rate. The paper has been published as Cwik, Tobias and Volker Wieland, "Keynesian government spending multipliers and spillovers in the euro area," Economic Policy, July 2011. We employ five structural euro area models with nominal rigidities to learn about the government spending multiplier and the robustness of its estimated size across models. Simulating the euro area fiscal stimulus measures in these models, we find that the impact on GDP is fairly small. Three of the models assume forward-looking rational expectations by individuals and firms, monopolistic competition and nominal rigidities in goods and labor markets. These models fully incorporate microeconomic foundations consistent with the optimizing decision-making of representative households and firms. They have in common a number of additional frictions such as price and wage indexation, habit persistence in consumption, investment adjustment costs, serially correlated shocks and costs related to variable capital utilization.

The models are used to simulate the European stimulus packages of 2009 and 2010 and to estimate the impact of the announced government spending on GDP. Euro area GDP increases as a result of additional government purchases, but the increase in GDP is less than one-for-one. In addition, we observe that when government spending returns to baseline by the end of 2010, GDP falls below baseline in two of the three models. Fiscal multipliers below one imply that increased government spending leads to a crowding-out of private spending. Indeed, private consumption and investment decline immediately and stay below baseline until well after the end of the fiscal stimulus. The key driving force behind these dynamics lies in the forward-looking perspective of the economic agents in these models. Households anticipate that debt-financed transitory increases in government expenditures will lead to higher taxes in the future, which causes a negative wealth effect. It is important to note that the results are robust to the incorporation of a reasonable share of so-called rule-of-thumb households, i.e. households that consume all of their current income.

We also examine the robustness of our results by employing two other Keynesian-style models. The first one is an earlier-generation New Keynesian multi-country model of the G7 economies that features forward-looking rational expectations with nominal rigidities due to overlapping wage contracts. Unlike in the three state-of-the-art New Keynesian models, many of its equations are not derived from explicit optimization problems of households and firms. Nevertheless, the results are relatively similar to those presented before. Interestingly, some of the stimulus leaks abroad through greater demand for imports from countries outside the euro area.

Finally, we employ a more traditional backward-looking model that has been used for many years by ECB staff as an element in the construction of their euro area forecasts. The results based on this model differ from those presented above. The government-spending stimulus leads to a crowding-in of consumption and investment. In the second year of the stimulus program, the fiscal multiplier becomes nearly 2. The crucial difference between this model and the other four models considered in the experiment lies in the specification of agents' expectation formation. In the more traditional model, expectations are represented by lagged variables. Hence, households take into account neither the transitory nature of the stimulus nor the future increase in taxes necessary to finance the increased government spending. Importantly, once the stimulus program has been phased out, the economy experiences a significant slump in subsequent years. Such an oscillatory response is common to dynamic models with backward-looking dynamics.

In the previous simulations, monetary policy is characterised by a nominal interest rate rule common to all five models. In the face of higher inflation and output due to a rise in government spending, monetary policy raises interest rates, which dampens aggregate demand. Instead, if nominal interest rates are held constant, for instance because the economy has hit the zero nominal interest rate bound, the effect of euro area government spending on GDP is a bit stronger than in the baseline scenario. On the other hand, if one realistically allows for an implementation lag of fiscal policy, fiscal multipliers are dampened.
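The mechanism can be illustrated with a deliberately simple sketch (hypothetical Python; a backward-looking toy model with invented parameters, not one of the five models used in the paper): the same transitory spending path generates a larger output response when the interest rate is held fixed than when policy follows an active rule.

```python
# Toy comparison: GDP response to a transitory spending stimulus under an
# active interest-rate rule vs. a fixed (e.g. zero-lower-bound) rate.

def gdp_path(active_rule, T=12, g_quarters=4):
    y = pi = i = 0.0
    out = []
    for t in range(T):
        g = 1.0 if t < g_quarters else 0.0         # transitory stimulus
        y = 0.6 * y - 0.4 * i + g                  # demand, dampened by rates
        pi = 0.7 * pi + 0.2 * y
        i = (1.5 * pi + 0.5 * y) if active_rule else 0.0
        out.append(y)
    return out

print("second-quarter GDP response:",
      "active rule %.2f," % gdp_path(True)[1],
      "fixed rate %.2f" % gdp_path(False)[1])
```

With the active rule, rising inflation and output trigger rate hikes that crowd out demand; with the rate pegged, that channel is switched off and the response is mechanically larger, mirroring the qualitative point in the text.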

The paper also analyzes the magnitude of potential spillover effects from the large stimulus package in Germany to other euro area countries. The German government's announced spending program has been by far the largest plan, accounting for 51% of the total euro area stimulus. Thus, we ask whether German government spending increases pull along other euro area countries. Specifically, we employ the multi-country model of the G7 economies, which comprises the German, French and Italian economies separately. The model-based analysis suggests that the spillover effects are rather small. This finding is obtained even though the estimated export demand equations for Italy and France indicate an economically significant direct foreign demand effect. The demand effect, however, is outweighed by a real appreciation of the euro. The fiscal expansion in Germany puts upward pressure on the euro relative to the currencies of countries outside the monetary union. As a result, the currency union member countries lose competitiveness and exports to countries outside the euro area decline. The findings of Cwik and Wieland (2011) have also been summarized in a related policy brief.

In a second paper, we employ a widely-used medium-scale New Keynesian model of the US economy to evaluate the robustness of fiscal multiplier estimates based on Old Keynesian models under a competing model paradigm. The analysis shows that Old Keynesian fiscal multipliers are not robust: the multipliers in the New Keynesian model turn out to be of much more modest size, comparable to the results found in Cwik and Wieland (2011). The paper has been published as Cogan, John F., Tobias Cwik, John B. Taylor, and Volker Wieland, "New Keynesian versus Old Keynesian Government Spending Multipliers," Journal of Economic Dynamics and Control, March 2010, 34: 281-295.

Final summary:

All objectives have been achieved.

References

Adjemian, S., H. Bastani, M. Juillard, F. Mihoubi, G. Perendia, M. Ratto, and S. Villemot (2011, October). Dynare: Reference Manual, Version 4. DYNARE Working Papers Series 1, CEPREMAP.

An, S. and F. Schorfheide (2007). Bayesian analysis of DSGE models. Econometric Reviews 26 (2-4), 113-172.

Andrle, M. (2010). A note on identification patterns in DSGE models (August 11, 2010). ECB Working Paper 1235. Available at SSRN: http://ssrn.com/abstract=1656963.

Gilli, M. and Pauletto, G. (1998): Sparse Direct Methods for Model Simulation, Journal of Economic Dynamics and Control, 21, 1093-1111.

Iskrev, N. (2010a). Evaluating the strength of identification in DSGE models, an a priori approach. Unpublished manuscript.

Iskrev, N. (2010b). Local identification in DSGE models. Journal of Monetary Economics 57, 189-202.

JRC (2010). Development of the combined approach for assessing identification in the prior space of model parameters. Software prototype. MONFISPOL grant agreement SSH-CT-2009-225149, Deliverable 3.1.1, Joint Research Centre.

Kim, J. (2003). Functional equivalence between intertemporal and multisectoral investment adjustment costs. Journal of Economic Dynamics and Control 27 (4), 533-549.

Levine, P. and Pearlman, J. (2011). Linear-Quadratic Optimal Policy Problems in Perfect and Imperfect Information Settings. Mimeo.

Levine, P., Pearlman, J., Perendia, G., and Yang, B. (2010). Endogenous Persistence in an Estimated DSGE Model under Imperfect Information, Presented at the 2010 Royal Economic Society Conference, March 29 - 31, University of Surrey, and Department of Economics Discussion Papers 0310, Department of Economics, University of Surrey.

Levine, P., Pearlman, J., and Pierse, R. (2008a). Linear-Quadratic Approximation, Efficiency and Target-Implementability. Journal of Economic Dynamics and Control, 32, 3315-3349.

Levine, P., McAdam, P., and Pearlman, J. (2008b). Quantifying and Sustaining Welfare Gains from Monetary Commitment. Journal of Monetary Economics, 55(7), 1253-1276.

Levine, P., Pearlman, J., and Yang, B. (2011a). Imperfect Information, Optimal Policy and the Informational Consistency Principle. Presented at the MONFISPOL final Conference at Goethe University, September 19 - 20, 2011.

Levine, P., Pearlman, J., and Yang, B. (2011). Explaining Business Cycles: News Versus Data Revisions. Presented at the MONFISPOL final Conference at Goethe University, September 19 - 20, 2011.

Levine, P., McAdam, P., and Pearlman, J. (2011b). Probability Models and Robust Policy Rules. European Economic Review. Forthcoming.

Mihoubi F. (2011): Solving and estimating stochastic models with block decomposition, mimeo.

Ratto, M. (2008). Analysing DSGE models with global sensitivity analysis. Computational Economics 31, 115-139.

Ratto, M. (2010). Report on alternative algorithms efficiency on different hardware specifications and - version of parallel routines. MONFISPOL grant agreement SSH-CT-2009-225149 - Deliverable 2.2.1 Joint Research Centre.

Ratto, M., I. Azzini, H. Bastani, and S. Villemot (2011). Beta-version of parallel routines: user manual. MONFISPOL grant agreement SSH-CT- 2009-225149 - Deliverable 2.2.2 Joint Research Centre.

Ratto, M. and N. Iskrev (2010). Computational advances in analyzing identification of DSGE models. 6th DYNARE Conference, June 3-4, 2010, Gustavelund, Tuusula, Finland. Bank of Finland, DSGE-net and Dynare Project at CEPREMAP.

Ratto, M. and N. Iskrev (2011a). Algorithms for identification analysis under the dynare environment: final version of software. MONFISPOL grant agreement SSH-CT-2009-225149 - Deliverable 3.1.2 Joint Research Centre.

Ratto, M. and N. Iskrev (2011b). Identification analysis of DSGE models with DYNARE. MONFISPOL final conference, Frankfurt, 19-20 September 2011. Center for financial studies at the Goethe University of Frankfurt and the European Research project (FP7-SSH) MONFISPOL.

Strid, I. and Walentin, K. (2009): Block Kalman Filtering for Large-Scale DSGE Models, Computational Economics, 33, 277-304.

van 't Veer, O. (2006): Solving large scale normalised rational expectation models, CPB Discussion Paper, 54.

Potential Impact:

The developments of the MONFISPOL project were expected to impact mainly institutional policy makers, academics and, to some extent, the general public. One of the goals of the project was to achieve the widest possible dissemination. To this end, several dissemination mechanisms were planned from the start. Below is a detailed account of the impact and dissemination activities, along with the ways the results of the project were exploited.

Public conferences and workshops

One of the main dissemination mechanisms was the organization of public conferences and workshops. During the life span of the project, two MONFISPOL conferences and two workshops were organized with a wide academic and institutional audience in mind. The workshops and conferences gave consortium members a venue to present their ongoing work and consolidated results, and served to invite other academics and institutional policy makers interested in the development of the project. Attendance at these events demonstrated the importance of the issues within the scope of the project, and feedback from these public events was very positive. The events are detailed below.

The first MONFISPOL workshop

The first MONFISPOL workshop was held in Stresa, Italy, from November 4 to November 5, 2010. It was essentially an event to present ongoing research by consortium members. The event was publicised and succeeded in attracting external attendees. Seven presentations by consortium members took place, each followed by a thorough discussion:

- R. Winkler "Fiscal Calculus in a New Keynesian Model with Matching Frictions", with E. Faia. Discussant: P. Levine.

- S. Schmidt "A New Comparative Approach to Macroeconomic Modeling and Policy Analysis: Current State", with V. Wieland, T. Cwik, G. Mueller and M. Wolters. Discussant: W. Roeger.

- M. Wolters "The Diversity of Forecasts from Macroeconomic Models of the U.S. Economy", with V. Wieland. Discussant: P. Paruolo.

- M. Ratto "Analysing Identification Issues in DSGE models", with N. Iskrev. Discussant: T. Cwik.

- J. Pearlman "Endogenous Persistence in an Estimated DSGE model under Imperfect Information", with P. Levine. Discussant: E. Iliopulos.

- A. Marcet "In Search of a Theory of Debt Management", with E. Faraglia and A. Scott. Discussant: R. Winkler (prepared with E. Faia).

- M. Juillard "Computing Optimal Policy in Dynare". Discussant: B. Yang.

A meeting of the MONFISPOL consortium was also held at the workshop.

The first MONFISPOL Conference

The first MONFISPOL Conference was held at London Metropolitan University, London, U.K. between November 4th and November 5th, 2010. Twelve presentations took place. Nine of them presented work achieved by consortium members:

- Cristiano Cantore "CES Technology and business cycle fluctuations" with P. Levine and B. Yang. Discussant: Peter McAdam,

- Marco Ratto "Parallel and identification toolbox for Dynare." Discussant: Vasco Gabriel,

- Sumudu Kankanamge "Solutions for discretionary equilibrium." Discussant: Andrew Blake,

- Ferhat Mihoubi "Solving stochastic models with block decomposition." Discussant: Sean Holly,

- Joe Pearlman "On the implementation and identification of optimal timeless policy" with P. Levine. Discussant: Kevin Sheedy,

- Ester Faia "Exit strategies" with I. Angeloni and R. Winkler. Discussant: Jagjit Chadha,

- Michel Juillard "Optimal policy and welfare approximation." Discussant: Matthias Paustian,

- Volker Wieland "A new comparative approach to model building and policy analysis: An update." Discussant: Helen Solomon,

- Marek Jarocinski and Albert Marcet "Autoregressions in small samples, priors about observables and initial conditions." Discussant: Stephen Wright.

Two presentations were given by members of the scientific committee of the project:

- Jesper Linde "Asymmetric shocks in a currency union with fiscal and monetary handcuffs" with Christopher Erceg.

- Harald Uhlig "Fiscal stimulus and distortionary taxation" with Thorsten Drautzburg.

Dominik Sobczak, the liaison officer with the European Union, gave the final presentation.

A meeting of the MONFISPOL scientific committee and a meeting of the MONFISPOL consortium were held at the conference.

The second MONFISPOL workshop

The second MONFISPOL workshop took place in Paris, France, from June 20 to June 24, 2011, and was coupled with the DYNARE Summer School. It was a public event to disseminate results and tools directly to the intended users of the project's developments. Several tutorial classes and hands-on sessions were held throughout the week, and the results of the MONFISPOL project were presented.

The final MONFISPOL Conference

The final MONFISPOL conference was held at the Goethe University of Frankfurt, Frankfurt, Germany, between September 19th and September 20th, 2011. Twelve presentations were given.

Eight of them presented work achieved by consortium members:

- The Informational Effects of Real-Time Data in DSGE Models, Joe Pearlman (London Metropolitan University) (joint with Cristiano Cantore, Paul Levine and Bo Yang). Discussant: Frank Smets (European Central Bank),

- Imperfect Information, Optimal Monetary Policy and the Informational Consistency Principle, Paul Levine (University of Surrey). Discussant: Martin Ellison (Oxford University),

- Forecasting and Policy Making, Volker Wieland (Goethe University and IMFS) (joint with Maik Wolters). Discussant: Kai Christoffel (European Central Bank),

- Identification Analysis of DSGE Models with DYNARE, Marco Ratto (Joint-Research-Centre Ispra). Discussant: Stephan Fahr (European Central Bank),

- Solving and Estimating Medium and Large Scale Stochastic Model with Block Decomposition, Ferhat Mihoubi (CEPREMAP). Discussant: Karl Walentin (Sveriges Riksbank),

- Monetary Policy and Risk Taking, Ester Faia (Goethe University of Frankfurt) (joint with Ignazio Angeloni, Marco Lo Duca). Discussant: Matthieu Darracq Paries (ECB),

- Optimal Fiscal Policy with Long Bonds, Albert Marcet (London School of Economics). Discussant: Gernot Müller (University of Bonn),

- Issues in Optimal Monetary Policy, Michel Juillard (CEPREMAP). Discussant: Roland Winkler (Goethe University Frankfurt).

Three presentations were given by members of the scientific committee of the project:

- The Financial Crisis and Macroeconomic Policy: Four Years On, John B. Taylor, Stanford University,

- Fiscal Policy in a Financial Crisis: Standard Policy vs. Bank Rescue Measures, Werner Roeger (European Commission). Discussant: Richard Werner (University of Southampton) (tbc),

- Fiscal Consolidations in Currency Unions: Spending Cuts versus Tax Hikes, Jesper Linde (Federal Reserve Board). Discussant: Peter McAdam (European Central Bank) (tbc).

The last presentation was a policy keynote speech: Monetary and Fiscal Policies in Times of Crisis, José Manuel González-Páramo (Member of the Board, European Central Bank).

A meeting of the MONFISPOL consortium committee was held at the conference.

Impact and dissemination through scientific papers, open-source software and a lively community.

The achievement of the first objective of the project, the development of solid computational tools, was paired with the development of routines for the software platform DYNARE. The steadily growing user community of the software, both academic and institutional, and its adoption in many major universities, central banks and other policy-making institutions helped disseminate the achievements throughout the three years of the project. It is also important to emphasize that DYNARE is open-source, free software, which ensures two things: (i) widespread dissemination and (ii) reduced complications with patents, copyrights and other related issues. The diffusion of DYNARE and of the routines and tools developed in the project can be gauged by the activity of the online forums dedicated to its use and by the number of downloads of the software itself and of its User Guide. During the time frame of the project, three minor versions of DYNARE (4.0.2, 4.0.3 and 4.0.4), followed by a major release (4.1.0), three further minor versions (4.1.1, 4.1.2, 4.1.3), another major version (4.2.0) and finally two minor versions (4.2.1 and 4.2.2), were released, partly to implement and disseminate the tools developed for the MONFISPOL project. The download record of the website at each release shows the interest of users in the developments being implemented. Anecdotal evidence of the software being used in many central banks, international organizations and some government agencies is further proof of this diffusion.

The achievement of the second objective of the project, the conception of innovative macroeconomic models, resulted in the production of working papers of high scientific value that are beginning to be published in top academic journals. First, the standing of the consortium members and their publication track record have ensured widespread interest in, and visibility of, the papers written by consortium members. Second, the publication of the papers in major academic journals ensures lasting impact and dissemination well beyond the time frame of the project itself. Moreover, part of the model-conception work of the project was concerned with the development of the Macroeconomic Model Database for model comparison and evaluation. The latest update of this database, available at www.macromodelbase.com, includes by now 50 macroeconomic models, ranging from small-, medium- and large-scale DSGE models to earlier-generation New Keynesian models with rational expectations and more traditional Keynesian-style models with adaptive expectations. It includes models of the United States, the Euro Area, Canada and several small open emerging economies. This database is an invaluable tool for macroeconomists and policy makers to compare the behaviour of a given model with that of others. Through the widespread diffusion of the database and its website, the Macroeconomic Model Database itself has had a large impact on the dissemination of the achievements of the MONFISPOL project.

Impact and dissemination through internal dissemination.

The high visibility of the researchers involved in the MONFISPOL consortium helped disseminate the developments of the project over the last three years. The consortium is made up of some of Europe's top academics, with close ties to the public sector. The website of the project, available at http://www.monfispol.eu, is partly public and has served, and will continue to serve, the dissemination of the achievements of the project.

Impact and dissemination through the scientific committee.

The scientific committee of the MONFISPOL project was made up of some of the best researchers in macroeconomics, with outstanding track records of scientific publications and achievements (one of the members is now a Nobel Prize winner in economics), occupying positions in top universities and major institutions throughout the world. The members of the scientific committee were the following:

- Lawrence Christiano, Professor of Economics, Northwestern University, USA;

- Wouter Den Haan, Professor of Economics, London School of Economics and Political Science, UK;

- Kenneth Judd, Professor of Economics, Stanford University, USA;

- Thomas Sargent, Professor of Economics, winner of the 2011 Nobel Prize in Economics, New York University, USA;

- Harald Uhlig, Professor of Economics, University of Chicago, USA;

- Jesper Lindé, Economist, Board of Governors of the Federal Reserve System, USA;

- Frank Smets, Director General of the Directorate General Research, European Central Bank.

The involvement of the scientific committee members was substantial, with most of them presenting at and attending the events organized during the project and providing feedback. First, the reputation and visibility of the academic members of the committee helped spread the developments of the project and encouraged other researchers to adopt its findings. Second, the reputation of the committee members attracted researchers, policy makers and other interested audiences to the public conferences. Third, the institutional members of the committee ensured, and will keep ensuring, that the developments of the consortium promptly influence actual policy-making.

Impact and dissemination through DSGE.net.

DSGE-net is an international research network for DSGE modelling and monetary and fiscal policy. Currently, the Bank of Finland, the Bank of France, The Capital Group Companies, the European Central Bank, the Federal Reserve Bank of Atlanta, Norges Bank, Sveriges Riksbank and the Swiss National Bank have joined efforts with CEPREMAP, which actively manages the network, and have become institutional members of DSGE-net. DSGE-net organizes yearly meetings for its members and invites them to participate actively in conferences throughout the year on topics related to DSGE modelling. At these events, members of central banks meet to compare efforts and progress in DSGE modelling. Such venues were ideal for sharing the findings, new tools and models developed by the consortium members within this project. This ensured an efficient transfer of knowledge as well as quick and widespread application of current developments in the policy world.

List of Websites:

www.monfispol.eu