
Model hub

Welcome to the Sydney open-economy model hub
This hub provides open-source code for open-economy macroeconomic models.

About the hub

Policy-focused macroeconomic models are typically designed to answer policy questions relevant to the United States or the European Union. The policy challenges and options available to policymakers in open economies such as Australia’s are very different.

Central banks and fiscal authorities in open economies must take full account of how their policies interact with exchange rates, international capital flows, and trade flows.

Our goal is to provide resources for students, policymakers and professional economists who are interested in open economy questions.

DSGE models

This model is based on the one used by the Reserve Bank of Australia's Economic Research Department for scenario forecasting. The file available here includes Dynare code to solve and estimate the model, as well as code showing how to conduct scenario analysis.

Christopher Gibbs, Jonathan Hambur, and Gabriela Nodari added a housing sector to the multisector model of the Australian economy developed by Dan Rees, Penelope Smith, and Jamie Hall in 2016 (see citation below).

The model captures four sectors in total for the Australian economy: non-traded ex-housing goods and services; housing and housing services; traded goods and services; and resource extraction. The full model is described in Gibbs, Hambur, and Nodari (2018).

The model’s primary purpose is to conduct scenario analysis. Scenarios are constructed by assuming a path for a variable of interest and then seeing how the model responds. For example, Gibbs, Hambur, and Nodari (2018) conduct a counterfactual scenario to see how much housing investment added to GDP and inflation from 2012 to 2016.

The scenario assumed that housing investment for these five years grew at the same rate as it had prior to 2012. The path of investment was constructed by assuming the housing sector experienced a series of shocks to investment. These shocks have implications for the evolution of the remaining sectors of the model, which provides counterfactual predictions for GDP and inflation. By comparing the counterfactual predictions to the actual data, we can assess how much housing investment has added to economic growth.
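To make the mechanics concrete, here is a minimal sketch of the general scenario logic for a stylised solved linear model, x_t = A x_{t-1} + B e_t: the shocks needed to deliver an assumed path for one variable are backed out period by period and then propagated to the other variables. The matrices, variable labels and target path below are purely illustrative assumptions, not the RBA model.

```python
# A minimal sketch of the scenario logic (hypothetical A, B and target path; not the RBA model).
import numpy as np

# Stylised solved linear model: x_t = A x_{t-1} + B e_t
# x = [housing investment growth, GDP growth, inflation] (labels are illustrative only)
A = np.array([[0.7, 0.0, 0.0],
              [0.2, 0.5, 0.0],
              [0.1, 0.3, 0.6]])
B = np.eye(3)

T = 8
target_path = np.full(T, 1.0)   # assumed path for variable 0 (e.g. housing investment growth)
i, j = 0, 0                     # variable whose path is imposed, shock used to deliver it

x = np.zeros(3)
scenario = np.zeros((T, 3))
for t in range(T):
    # choose shock j so that variable i hits its assumed value; other shocks set to zero
    e = np.zeros(3)
    e[j] = (target_path[t] - A[i] @ x) / B[i, j]
    x = A @ x + B @ e
    scenario[t] = x

baseline = np.zeros((T, 3))     # with no shocks, the baseline here is simply zero deviations
print("Scenario minus baseline (deviations attributable to the imposed path):")
print(scenario - baseline)
```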

This approach to scenario forecasting is frequently used by policymakers around the world to assess threats to forecasts. Just as the model can provide a counterfactual to actual data, it can provide a counterfactual to forecasts, including forecasts that the model did not produce. By treating the forecasts as data, one can construct counterfactual scenarios just as described above to see how a forecast would change in response to specific concerns.

The files to replicate the scenarios discussed in Gibbs, Hambur, and Nodari (2018) and to create your own are available for MATLAB.

When using the model for your research, please cite Gibbs, Hambur, and Nodari (2018).

For any questions and feedback, contact Christopher Gibbs.

Other references:

  • Rees, Daniel M., Penelope Smith, and Jamie Hall. "A Multi‐sector Model of the Australian Economy." Economic Record 92, no. 298 (2016): 374-408.

Methods for solving and estimating DSGE models

Professor Mariano Kulish and Emeritus Professor Adrian Pagan developed a method to solve linearised models with forward-looking expectations and structural changes under a variety of assumptions regarding agents’ beliefs about those structural changes. A ‘backward-forward’ algorithm is also used to construct a likelihood function for each associated solution.
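To illustrate the flavour of the solution step, the sketch below computes time-varying solution coefficients by backward recursion from the terminal (post-change) rational-expectations solution for a simple scalar model, x_t = a_t E_t[x_{t+1}] + b_t x_{t-1} + e_t, with a structural change at a known and fully believed date. This is a stylised illustration of the general idea under those assumptions, not the Kulish-Pagan code, which handles full multivariate models and alternative belief structures.

```python
# Minimal sketch: time-varying solution coefficients for x_t = a_t E_t[x_{t+1}] + b_t x_{t-1} + e_t
# when (a_t, b_t) switch at a known, fully believed date T_star (illustrative parameters only).
import numpy as np

def fixed_point_Q(a, b):
    """Stable root of Q = b / (1 - a*Q), i.e. a*Q^2 - Q + b = 0."""
    roots = np.roots([a, -1.0, b])
    stable = roots[np.abs(roots) < 1.0]
    return float(stable[0])

a0, b0 = 0.5, 0.3     # pre-change structure
a1, b1 = 0.5, 0.45    # post-change structure
T, T_star = 20, 10    # structural change occurs at date T_star

# Terminal condition: from T_star onwards the time-invariant post-change solution applies.
Q_final = fixed_point_Q(a1, b1)

# Backward recursion for the transition: with x_t = Q_t x_{t-1} + G_t e_t,
# E_t[x_{t+1}] = Q_{t+1} x_t implies Q_t = b_t / (1 - a_t Q_{t+1}) and G_t = 1 / (1 - a_t Q_{t+1}).
Q = np.empty(T)
G = np.empty(T)
Q[T_star:] = Q_final
G[T_star:] = 1.0 / (1.0 - a1 * Q_final)
Q_next = Q_final
for t in range(T_star - 1, -1, -1):
    denom = 1.0 - a0 * Q_next
    Q[t] = b0 / denom
    G[t] = 1.0 / denom
    Q_next = Q[t]

print(np.round(Q, 3))  # coefficients drift from the pre-change toward the post-change solution
```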

They describe their method in Estimation and Solution of Models with Expectations and Structural Changes, published in the Journal of Applied Econometrics in 2017. This paper was also the winner of the 14th Richard Stone Prize in Applied Econometrics.

The technique is illustrated with two examples and compared with an alternative approach in which structural change is captured via Markov regime switching. Kulish and Pagan demonstrate that their method can produce accurate results much faster than the Markov-switching approach and can be adapted to handle beliefs that depart from reality.

When using this model for your research, please cite: Kulish, Mariano and Adrian Pagan, “Estimation and Solution of Models with Expectations and Structural Changes”, Journal of Applied Econometrics, 2017.

If you have questions or feedback, please contact Professor Mariano Kulish.

Adam Cagliarini and Mariano Kulish developed a method to solve linear stochastic rational expectations models in the face of a finite sequence of anticipated structural changes, including both anticipated changes to structural parameters and additive shocks.

Their method is described in “Solving Linear Rational Expectations Models with Predictable Structural Changes”, published in the Review of Economics and Statistics in 2013.

The authors apply their method to some numerical examples relevant to monetary policy using a version of the New Keynesian model presented in Ireland (2007).

When using this model for your research, please cite: Cagliarini, Adam and Mariano Kulish, “Solving Linear Rational Expectations Models with Predictable Structural Changes”, Review of Economics and Statistics, 2013.

If you have questions or feedback, please contact Professor Mariano Kulish.

Callum Jones and Professor Mariano Kulish structure a New Keynesian model as aggregate demand and aggregate supply curves relating inflation to output growth. This structure allows for a representation of structural shocks that induce simultaneous movements in both demand and supply.

Their work is illustrated in A Graphical Representation of an Estimated DSGE Model, published in Applied Economics in 2015.

Jones and Kulish also estimate the curves on US data from 1948 to 2010 and provide a case study for two recessions in 2001 and 2008-09. In particular, the Great Recession is explained in this framework as a collapse of aggregate demand driven by adverse preference and permanent technology shocks, compounded with expectations of low inflation.

When using this model for your research, please cite: Jones, Callum and Mariano Kulish, “A Graphical Representation of an Estimated DSGE Model”, Applied Economics, 2015.

If you have questions or feedback, please contact Professor Mariano Kulish.

DSGE models with non-rational beliefs

Christopher Gibbs and Mariano Kulish study disinflations under imperfect credibility of the central bank. They develop a framework to model imperfectly credible announcements and use it to study the distribution of the output costs of a given disinflation. Imperfect credibility is modelled as the extent to which agents rely on adaptive learning to form expectations. Lower credibility increases the mean, variance and skewness of the distribution of the sacrifice ratio. When credibility is low, disinflations become very costly for adverse realisations of the shocks.
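As a rough illustration of the expectations channel, the sketch below models imperfect credibility as agents updating their perceived long-run inflation toward observed inflation with a constant gain, so that an announced lower target is only gradually believed and inflation falls slowly. The parameters and the simple expectations-weighted inflation equation are illustrative assumptions, not the Gibbs-Kulish model.

```python
# Minimal sketch of constant-gain learning about the inflation target during a disinflation
# (illustrative parameters; not the Gibbs-Kulish model).
import numpy as np

T = 40
gain = 0.1                 # a lower gain keeps expectations anchored to old beliefs for longer
pi_target_new = 2.0        # announced new target
belief = 10.0              # agents' perceived long-run inflation at the time of the announcement

beliefs = np.empty(T)
inflation = np.empty(T)
for t in range(T):
    # actual inflation is a weighted average of the announced target and agents' beliefs,
    # a stand-in for the expectations channel in a Phillips curve
    inflation[t] = 0.5 * pi_target_new + 0.5 * belief
    # agents revise their perceived long-run inflation toward what they observe
    belief = belief + gain * (inflation[t] - belief)
    beliefs[t] = belief

print(np.round(inflation[:10], 2))  # inflation falls only gradually when credibility is imperfect
```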

This model is described in Disinflations in a model of imperfectly anchored expectations, published in the European Economic Review in 2017.

Gibbs and Kulish use simulated data to reinterpret the reduced-form evidence in sacrifice ratio regressions and note that coefficient estimates from these regressions can be misleading for policymakers considering the cost of disinflation.

When using this model for your research, please cite: Gibbs, Christopher and Mariano Kulish, “Disinflations in a Model of Imperfectly Anchored Expectations”, European Economic Review, 2017.

If you have questions or feedback, please contact Christopher Gibbs.

DSGE model applications

Callum Jones and Mariano Kulish study two forms of unconventional monetary policy: the first in which announcements about the future path of the short-term rate serve as the policy instrument, and the second in which long-term nominal rates operate as instruments of monetary policy. To this end, they develop a model in which the risk premium on long-term debt is, in part, endogenously determined.

This model is described in Long-term Interest Rates, Risk Premia and Unconventional Monetary Policy, published in the Journal of Economic Dynamics and Control in 2013.

Jones and Kulish show that both policies are consistent with unique equilibria and that, at the zero lower bound, announcements about the future path of the short-term rate can lower long-term interest rates through their impact on expectations as well as on the risk premium. Further, long-term interest rate rules perform as well as, and at times better than, conventional Taylor rules. Simulations demonstrate that long-term interest rate rules generate sensible dynamics both when in operation and when expected to be applied.

When using this model for your research, please cite: Jones, Callum and Mariano Kulish, “Long-term Interest Rates, Risk Premia and Unconventional Monetary Policy”, Journal of Economic Dynamics and Control, 2013.

If you have questions or feedback, please contact Professor Mariano Kulish.

Mariano Kulish, James Morley and Tim Robinson develop a DSGE model in which the central bank fixes the policy rate for an extended period of time. The model can then be used to measure the severity of the zero lower bound constraint and the effects of unconventional policy.

This model is described in Estimating DSGE Models with Zero Interest Rate Policy, published in the Journal of Monetary Economics in 2017.

Kulish, Morley and Robinson apply their model to estimate the expected duration of the US Federal Reserve’s zero interest rate policy in the wake of the Great Recession in 2009. They find large increases in the expected duration of the policy in 2011 with the shift to calendar-based guidance, and a decrease in 2013 with the ‘taper tantrum’. These changes are identified by the influence of expected duration on output, inflation and interest rates at longer maturities.

When using this model for your research, please cite: Kulish, Mariano, James Morley and Tim Robinson, “Estimating DSGE Models with Zero Interest Rate Policy”, Journal of Monetary Economics, 2017.

If you have questions or feedback, please contact Professor Mariano Kulish.

Mariano Kulish and Daniel Rees investigate the correlation between long-term nominal interest rates in the United States and those in a number of inflation-targeting small open economies. This observation has led some to speculate that there is a potential loss of monetary policy autonomy in these small open economies, driven by the apparent decoupling of the long end of the domestic yield curve from the short end. Kulish and Rees demonstrate that differences in the persistence of domestic and foreign disturbances can explain the observed correlation in long-term interest rates, suggesting there is no loss of monetary policy autonomy.

This model is described in The Yield Curve in a Small Open Economy, published in the Journal of International Economics in 2011.

Kulish and Rees set up and estimate a two-country small open economy model in which the expectations hypothesis and uncovered interest rate parity hold in order to study the comovement of long-term nominal interest rates across currencies.
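For reference, standard textbook statements of those two conditions (abstracting from the term and risk premia that the model allows for) take the following form; the exact specification in the paper may differ.

```latex
% Expectations hypothesis: the n-period nominal rate is the average of expected future short rates
R^{(n)}_t = \frac{1}{n}\sum_{j=0}^{n-1} \mathbb{E}_t\, r_{t+j}

% Uncovered interest parity: the short-rate differential equals the expected depreciation
r_t - r^{*}_t = \mathbb{E}_t\, s_{t+1} - s_t
```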

When using this model for your research, please cite: Kulish, Mariano and Daniel Rees, “The Yield Curve in a Small Open Economy”, Journal of International Economics, 2011.

If you have questions or feedback, please contact Professor Mariano Kulish.

 

Time series models

Factor models provide a way to incorporate information from large datasets when examining the effects of monetary policy and other phenomena on the macroeconomy.

Luke Hartigan and James Morley have developed an approximate dynamic factor model, assuming two common factors for the Australian economy.

In “A Factor Model Analysis of the Australian Economy and Effects of Inflation Targeting” (The Economic Record, Special Issue Dedicated to the Memory of Mardi Dungey, September 2020, vol. 96, pp. 271-293), they use this model to empirically investigate the effects of inflation targeting in Australia, not only its stabilisation properties for inflation but also its effectiveness in reducing macroeconomic volatility more generally. Their dataset includes 104 macroeconomic variables for the Australian economy over the sample period 1976Q4 to 2017Q2. The two common factors used in the estimation appear to be sufficient to capture any persistent common variation in the dataset.

The factor model is “approximate” in the sense that elements of the vector of idiosyncratic components may be weakly dependent cross-sectionally and across time and “dynamic” in the sense that the common factors have an explicit VAR structure. Parameter and factor estimates obtained via the Kalman filter-smoother and the Expectation-Maximisation algorithm correspond to “quasi” maximum likelihood estimation (QMLE) given potential misspecifications in the estimated model. By applying a test developed by Han and Inoue (2015), Hartigan and Morley (2020) find a structural break in the factor structure soon after the introduction of inflation targeting in Australia that corresponds to a reduction in the volatility of key macroeconomic variables.
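As a rough indication of how such a model can be estimated in practice, the sketch below uses the DynamicFactor class in the Python package statsmodels, which estimates a dynamic factor model by (quasi) maximum likelihood with the Kalman filter. It is not the Hartigan-Morley code: simulated data stand in for their 104-variable panel, and the EM-based estimation details differ.

```python
# Minimal sketch: two-factor dynamic factor model estimated by (quasi) ML via the Kalman filter.
# Simulated data stand in for the 104-variable Australian panel used by Hartigan and Morley (2020).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, N, K = 160, 20, 2

# simulate two persistent common factors and noisy loadings
factors = np.zeros((T, K))
for t in range(1, T):
    factors[t] = 0.8 * factors[t - 1] + rng.normal(size=K)
loadings = rng.normal(size=(N, K))
X = factors @ loadings.T + rng.normal(scale=1.0, size=(T, N))

# standardise the panel, as is typical before factor estimation
data = pd.DataFrame(X)
data = (data - data.mean()) / data.std()

# approximate dynamic factor model: 2 common factors following a VAR(1)
model = sm.tsa.DynamicFactor(data, k_factors=2, factor_order=1)
result = model.fit(disp=False, maxiter=500)

estimated_factors = result.factors.filtered  # Kalman-filtered factor estimates
print(estimated_factors.shape)
```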

A block-exogenous factor-augmented vector autoregressive model (FAVARX) for a small open economy is also developed, in which the two unobserved common factors implied by the “approximate” factor model are estimated using a “named-factor” normalisation. Estimates from the FAVARX model suggest that monetary policy shocks are more persistent, and their effects amplified, following the introduction of inflation targeting. Furthermore, the Australian economy has become less sensitive to foreign shocks (for instance, a foreign demand or commodity price shock), with the RBA’s reaction being larger and more persistent following the introduction of inflation targeting.

The flexibility of these factor models opens up the possibility of many extensions and applications in open-economy settings. The code, in both MATLAB and R, to replicate the analysis in Hartigan and Morley (2020) and to conduct your own is available here.

The R code allows for tests of the number of factors, estimation of the “approximate” factor model, and estimation of the FAVARX model. The MATLAB code, from Han and Inoue (2015), allows for tests of a structural break in the factor structure.

Please cite the Hartigan and Morley (2020) article when building on its analysis in your own research.

For any questions and feedback, contact James Morley.

Other references:

- Han, X. and A. Inoue (2015). Tests for Parameter Instability in Dynamic Factor Models. Econometric Theory 31(5), pp. 1117–1152.

 

Structural vector autoregressions (SVARs) are widely used in empirical macroeconomics to gather information about dynamics of multivariate systems.

Sam Ouliaris and Adrian Pagan have developed a textbook that comprehensively reviews SVARs and proposes solutions to many problems that arise in their use. They focus on implementation in EViews, which offers a low-cost University Edition and a free Student Lite version for students and academics affiliated with universities.

Free textbook

Companion files with models and programs in EViews

Earlier version on EViews website

Find out more about EViews

If you have any questions or feedback, please contact Emeritus Professor Adrian Pagan.

The idea of inherently different dynamics in expansions and recessions has a long history in business cycle analysis. Recent advances in econometrics have allowed this idea to be formally modelled and tested. Hamilton (1989) models these asymmetries with a Markov-switching model of real output, capturing the short, violent nature of recessions relative to expansions. However, other studies emphasise another distinctive feature of US business cycles not captured by Hamilton’s model: output growth tends to be relatively strong immediately following recessions.

In “Nonlinearity and the Permanent Effects of Recessions” (Journal of Applied Econometrics, Special Issue on "Recent Developments in Business Cycle Analysis" 2005, vol. 20, pp. 291-309), Chang-Jin Kim, James Morley, and Jeremy Piger present a new nonlinear time series model that captures a post-recession ‘bounce-back’ in the level of aggregate output. The ‘bounce-back’ effect is related to an endogenously estimated latent Markov-switching state variable.

Hamilton’s (1989) model can be extended to allow for a post-recession ‘bounce-back’ in the level of output, while maintaining endogenously estimated business cycle regimes. The ‘bounce-back’ effect implies that growth will be above average for a certain number of quarters at the start of an ‘expansionary’ regime. Kim, Morley, and Piger (2005) show that the Markov-switching form of nonlinearity is statistically significant and that the ‘bounce-back’ effect is large. An attractive feature of the model is that it provides a straightforward estimate of the permanent effects of recessions on the level of output. The estimated permanent effects of recessions in the United States are substantially smaller than suggested by Hamilton (1989) and most linear models; instead, there is evidence of a large ‘bounce-back’ effect during the recovery phase of the business cycle. Finally, when the model is applied to international data, the ‘bounce-back’ effect is smaller, corresponding to larger permanent effects of recessions for other countries.
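The bounce-back extensions are implemented in the authors’ replication code; as a starting point, the sketch below estimates the Hamilton (1989)-style Markov-switching autoregression that those models build on, using the MarkovAutoregression class in the Python package statsmodels and simulated data in place of real GDP growth.

```python
# Minimal sketch: Hamilton (1989)-style Markov-switching AR(4) for output growth, the baseline
# that the bounce-back models of Kim, Morley, and Piger (2005) extend. Uses simulated data;
# replace `growth` with quarterly real GDP growth (e.g. 100 * diff(log GDP)).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# simulate growth with occasional low-growth regimes (illustrative only)
T = 300
regime = np.zeros(T, dtype=int)
for t in range(1, T):
    p_stay = 0.9 if regime[t - 1] == 1 else 0.95
    regime[t] = regime[t - 1] if rng.random() < p_stay else 1 - regime[t - 1]
mu = np.where(regime == 1, -0.5, 1.0)
growth = pd.Series(mu + rng.normal(scale=0.8, size=T))

# two regimes with switching mean, common AR(4) dynamics and variance, as in Hamilton (1989)
model = sm.tsa.MarkovAutoregression(growth, k_regimes=2, order=4,
                                    switching_ar=False, switching_variance=False)
result = model.fit()

print(result.summary())
# smoothed probability of regime 0 at each date (check the estimates to see which regime
# has the lower mean and so plays the role of the 'recession' regime)
regime0_prob = result.smoothed_marginal_probabilities[0]
```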

In “Why Has the U.S. Economy Stagnated Since the Great Recession?” (Review of Economics and Statistics, 2022), Yunjong Eo and James Morley build on Hamilton (1989) and Kim, Morley, and Piger (2005) to develop a univariate Markov-switching time series model of real GDP growth that accommodates two different types of recessions: those that permanently alter the level of real GDP (L-shaped recessions) and those with only temporary effects (U-shaped recessions). Allowing for two different types of recessions by modelling regimes as stochastic is a natural assumption given that the exact timing and nature of recessions is not predetermined in practice.

Applying the model to U.S. data shows that the Great Recession generated a large persistent negative output gap rather than any substantial hysteresis effects. The economy eventually recovers to a lower trend path which appears to be due to a reduction in productivity growth that began prior to the onset of the Great Recession. This finding is highly robust to different assumptions regarding the nature of structural change in trend growth. 

The code for replicating the results in Kim, Morley, and Piger (2005) using GAUSS can be found here. The code for replicating the results in Eo and Morley (2022) using GAUSS can be found here.

Please cite the respective articles, Kim, Morley, and Piger (2005) and Eo and Morley (2022), when using these models or code in your research.

For any questions and feedback, contact James Morley.

Other references:

- Hamilton, J.D. (1989). A new approach to the economic analysis of nonstationary time series and the business cycle. Econometrica 57, pp. 357–384.

The dramatic economic events of recent years have stimulated new debates about the relevance of aggregate demand and government spending as possible engines of economic activity.

In “State-Dependent Effects of Fiscal Policy” (Studies in Nonlinear Dynamics and Econometrics, June 2015, vol. 19, pp. 285-315), Steven Fazzari, James Morley, and Irina Panovska investigate the effects of government spending on US output with a threshold structural vector autoregressive model. In particular, they investigate the possibility of state-dependent effects of fiscal policy by estimating a nonlinear structural vector autoregressive model that allows parameters to switch when a specified variable crosses an estimated threshold. This nonlinear VAR extends the threshold autoregressive model of Tong (1978, 1983) to a multivariate setting, splitting a time series process endogenously into different regimes. As candidate threshold variables, they consider several alternative measures of economic slack, as well as the debt-to-GDP ratio and a measure of the real interest rate.

The empirical results provide strong evidence in favour of state-dependent nonlinearity; specifically, government spending shocks have larger effects on output when they occur at times of relatively low resource utilisation than when they occur at times of high resource use. Furthermore, they find no evidence that higher government spending crowds out consumption. Indeed, consumption rises after positive government spending shocks in both the high- and low-utilisation regimes, but the increase is almost twice as large during low-utilisation periods.
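To show the threshold mechanism in its simplest form, the sketch below estimates a univariate two-regime threshold autoregression in the spirit of Tong, with the threshold chosen by a least-squares grid search. It is only an illustration of the regime-splitting idea; the papers use a multivariate threshold SVAR with an estimated threshold variable and structural identification.

```python
# Minimal sketch of the threshold mechanism: a two-regime threshold autoregression (Tong-style),
# with the threshold chosen by least-squares grid search. The papers use a multivariate threshold
# SVAR identified with sign restrictions; this univariate TAR only illustrates the regime split.
import numpy as np

rng = np.random.default_rng(2)

# simulate data whose dynamics differ depending on the lagged level of the series
T = 400
y = np.zeros(T)
for t in range(1, T):
    phi = 0.3 if y[t - 1] <= 0.0 else 0.8
    y[t] = phi * y[t - 1] + rng.normal(scale=0.5)

def ssr_given_threshold(y, gamma):
    """Sum of squared residuals from regime-specific AR(1) fits, splitting on lagged y <= gamma."""
    y_lag, y_cur = y[:-1], y[1:]
    ssr = 0.0
    for mask in (y_lag <= gamma, y_lag > gamma):
        if mask.sum() < 10:          # require enough observations in each regime
            return np.inf
        X = np.column_stack([np.ones(mask.sum()), y_lag[mask]])
        beta, res, *_ = np.linalg.lstsq(X, y_cur[mask], rcond=None)
        ssr += np.sum((y_cur[mask] - X @ beta) ** 2)
    return ssr

# grid search over the middle 70% of observed values of the threshold variable
candidates = np.quantile(y[:-1], np.linspace(0.15, 0.85, 71))
ssrs = np.array([ssr_given_threshold(y, g) for g in candidates])
gamma_hat = candidates[np.argmin(ssrs)]
print(f"estimated threshold: {gamma_hat:.2f} (true value 0.0)")
```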

In “When Is Discretionary Fiscal Policy Effective?” (Studies in Nonlinear Dynamics and Econometrics, September 2021, vol. 25, pp. 229-254), Steven Fazzari, James Morley, and Irina Panovska contribute further to understanding fiscal policy by investigating the effects of discretionary changes in government spending and taxes using a medium-scale nonlinear vector autoregressive model with policy shocks identified via sign restrictions. 

First, they examine the exact nature and robustness of state dependence in the effectiveness of fiscal policy by considering an informationally-sufficient medium-scale threshold vector autoregressive (TVAR) model and by comparing and testing many different possible threshold variables. Second, using evolving-regime generalized impulse response analysis, they demonstrate that tax cuts and government spending increases have similarly large expansionary effects during deep recessions and sluggish recoveries, but they are much less effective, especially in the case of government spending increases, when the economy is in a robust expansion. Third, they investigate and determine the roles of consumption and investment in driving the effects of both government spending and taxes on aggregate output.

Results provide strong and robust empirical evidence in favour of nonlinearity and state dependence in the relationship between both types of fiscal policy and aggregate output. In particular, estimates from a threshold structural vector autoregressive model imply different responses of the economy both to government spending and to taxes during periods of excess slack compared to when the economy is closer to potential. 

The code for replicating the results in Fazzari, Morley, and Panovska (2015) using GAUSS can be found here. The code for replicating the results in Fazzari, Morley and Panovska (2021) using GAUSS can be found here.

Please cite the respective articles, Fazzari, Morley, and Panovska (2015) and Fazzari, Morley and Panovska (2021), when using these models or code in your research.

For any questions and feedback, contact James Morley.

Other references:

- Tong, H. (1978). On a Threshold Model. In: Chen, C. (ed.), Pattern Recognition and Signal Processing. NATO ASI Series E: Applied Sciences (29). Sijthoff & Noordhoff, Netherlands, pp. 575-586.

- Tong, H. (1983). Threshold models in non-linear time series analysis. Lecture notes in statistics, No.21. Springer-Verlag, New York, USA.

Time series methods

The exact timing of structural breaks in parameters from time series models is generally unknown a priori. Much of the literature on structural breaks has focused on accounting for uncertainty about this timing when testing for the existence of structural breaks. However, there has also been considerable interest in how to make inference about the timing itself.

In “Likelihood-Ratio-Based Confidence Sets for the Timing of Structural Breaks” (Quantitative Economics, July 2015, vol. 6, pp. 463-497), Yunjong Eo and James Morley propose the use of likelihood-ratio-based confidence sets for the timing of structural breaks in parameters from time series regression models. The confidence sets are valid for the broad setting of a system of multivariate linear regression equations under fairly general assumptions about the errors and regressors, allowing for multiple breaks in mean and variance parameters.

Building on the literature on structural breaks, this setting allows for heteroskedasticity and autocorrelation in the errors as well as multiple breaks in mean and variance parameters, and it potentially produces more precise inferences as additional equations are added to the system. The asymptotic analysis provides critical values for a likelihood ratio test of a break date and an analytical expression for the expected length of a confidence set based on inverting the likelihood ratio test.

The asymptotic validity of this approach is established for a broad setting of a system of multivariate linear regression equations under the assumption of a slowly shrinking magnitude of a break, with the asymptotic expected length of the 95% inverted-likelihood-ratio (ILR) confidence sets being about half that of standard methods employed in the literature. A Monte Carlo analysis supports the asymptotic results in the sense that the ILR confidence sets have the shortest average length even in large samples, while demonstrating accurate, if somewhat conservative, coverage in small samples.

To demonstrate the empirical relevance of the shorter expected length of the ILR confidence sets, the authors apply various methods to make inference about the timing of structural breaks in post-war U.S. real gross domestic product (GDP) and consumption.
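The inversion idea can be illustrated in a very simple special case: a single break in the mean of an independent Normal series, where a likelihood ratio is computed for every candidate break date and the confidence set collects the dates that are not rejected. The critical value in the sketch is a placeholder; in practice the appropriate critical values come from the asymptotic analysis in Eo and Morley (2015).

```python
# Minimal sketch of inverting a likelihood-ratio test to get a confidence set for a break date:
# single break in the mean of a univariate Normal series. The critical value below is a
# PLACEHOLDER for illustration; the appropriate values come from the asymptotic analysis
# in Eo and Morley (2015).
import numpy as np

rng = np.random.default_rng(3)

T, true_break = 200, 120
y = np.concatenate([rng.normal(0.0, 1.0, true_break),
                    rng.normal(0.8, 1.0, T - true_break)])

def loglik_with_break(y, k):
    """Gaussian log likelihood with separate means before/after date k and a common variance."""
    resid = np.concatenate([y[:k] - y[:k].mean(), y[k:] - y[k:].mean()])
    sigma2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1.0)

trim = 15
dates = np.arange(trim, T - trim)
loglik = np.array([loglik_with_break(y, k) for k in dates])
k_hat = dates[np.argmax(loglik)]

CRITICAL_VALUE = 7.0   # placeholder only; use the critical values from the paper in practice
lr = 2.0 * (loglik.max() - loglik)
confidence_set = dates[lr <= CRITICAL_VALUE]
print(f"ML break date estimate: {k_hat}; confidence set spans {confidence_set.min()}-{confidence_set.max()}")
```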

Code for the paper, available in GAUSS and R, can be found here.

Please cite Eo and Morley (2015) when using these methods or code in your research.

For any questions and feedback, contact James Morley.

Threshold regression models specify that the regression function can be divided into several regimes based on whether an observed variable, called a threshold variable, crosses one or more threshold parameters. Threshold regression models and their various extensions have become standard for the specification of nonlinear relationships between economic variables.

In “Improving Likelihood-Ratio-Based Confidence Intervals for Threshold Parameters in Finite Samples” (Studies in Nonlinear Dynamics and Econometrics, February 2018, vol. 22, pp. 1-11, awarded Best Paper in 2018 for Studies in Nonlinear Dynamics and Econometrics), Luiggi Donayre, Yunjong Eo, and James Morley show that asymptotically valid likelihood-ratio-based confidence intervals for threshold parameters perform poorly in finite samples when the threshold effect is large. The coverage rates of the benchmark confidence interval derived in Hansen (2000) are substantially below nominal levels.

They propose a conservative modification to the standard likelihood-ratio-based confidence interval that has coverage rates at least as high as the nominal level, while still being informative in the sense of including relatively few observations of the threshold variable. An application to thresholds for U.S. industrial production growth at a disaggregated level shows the empirical relevance of applying the conservative approach in practice by including zero in more cases than for the benchmark approach. 

The code for this paper is available in GAUSS and can be found here.

Please cite the Donayre, Eo, and Morley (2018) article when using these methods or code in your research.

For any questions and feedback, contact James Morley.

Trend-cycle decomposition methods and applications

The Beveridge-Nelson filter provides a flexible and convenient way to calculate an intuitive and reliable measure of the output gap and conduct trend-cycle decomposition for time series.

Gunes Kamber, James Morley, and Benjamin Wong developed a new method for trend-cycle decomposition through a model-based approach with a smoothing assumption for the trend.

They describe their method in Intuitive and Reliable Estimates of the Output Gap from a Beveridge-Nelson Filter (Review of Economics and Statistics, July 2018, vol. 100, Issue 3, pp. 550-566), along with an online supplemental appendix.

Kamber, Morley, and Wong apply their method to estimate the output gap for a number of countries and show that their estimates are as intuitive as, but more reliable in real-time settings than, those from the commonly used Hodrick-Prescott filter.

This approach has been used in a number of other studies and was written up on CentralBanking.com.
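To show the core calculation behind a BN decomposition, the sketch below applies the textbook Beveridge-Nelson decomposition to a fitted AR(p) model of output growth using the companion-form formula, with simulated data. It is not the Kamber-Morley-Wong BN filter itself, which additionally imposes a low signal-to-noise ratio on a higher-order AR model; the MATLAB and R code below implements their filter in full.

```python
# Minimal sketch of a textbook Beveridge-Nelson decomposition from an AR(p) model of output growth.
# The Kamber-Morley-Wong BN filter additionally imposes a low signal-to-noise ratio on a
# higher-order AR model; this sketch only shows the core BN step, on simulated data.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(4)

# simulated log output: random-walk-with-drift trend plus a persistent transitory cycle
T = 300
trend = np.cumsum(0.5 + rng.normal(scale=0.4, size=T))
cycle = np.zeros(T)
for t in range(2, T):
    cycle[t] = 1.3 * cycle[t - 1] - 0.5 * cycle[t - 2] + rng.normal(scale=0.3)
log_y = trend + cycle
growth = np.diff(log_y)

# fit an AR(p) to demeaned growth
p = 4
g_dm = growth - growth.mean()
ar_res = AutoReg(g_dm, lags=p, trend='n').fit()
phi = ar_res.params

# companion form: z_t = F z_{t-1} + e_t, with z_t = (g_t, ..., g_{t-p+1})' in deviations from mean
F = np.zeros((p, p))
F[0, :] = phi
F[1:, :-1] = np.eye(p - 1)
s = np.zeros(p)
s[0] = 1.0
weights = s @ F @ np.linalg.inv(np.eye(p) - F)   # the row vector s'F(I-F)^{-1}

# BN cycle of the level: c_t = -s'F(I-F)^{-1} z_t ; the BN trend is the level minus the cycle
bn_cycle = np.array([-(weights @ g_dm[t - p + 1:t + 1][::-1]) for t in range(p - 1, len(g_dm))])
bn_trend = log_y[p:][:len(bn_cycle)] - bn_cycle
print(np.round(bn_cycle[-5:], 3))
```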

There is MATLAB and R code available to apply the method:

Davaajargal Luvsannyam has developed: 

When applying the BN filter to data that has apparent changes in long-run mean growth, we recommend applying the dynamic demeaning procedure or, if structural breaks in long-run growth are estimated or known, accommodating the structural breaks. See the paper and code for more details on these options.

Please cite the Kamber, Morley, and Wong (2018) article when using this method for trend-cycle decomposition in your research.

For any questions and feedback, contact James Morley.

Beveridge and Nelson (1981) define the trend of a time series as its long-horizon conditional expectation minus any future deterministic drift. The intuition behind the BN decomposition is that the long-horizon conditional expectation of the cyclical component of a time series process is zero, meaning that the long-horizon conditional expectation of the time series will only reflect its trend.
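In symbols, for a series y_t with long-run drift μ, a standard statement of this definition (consistent with the description above) is:

```latex
% BN trend: long-horizon conditional expectation of the series net of future deterministic drift
\tau_t \;=\; \lim_{h \to \infty} \left( \mathbb{E}_t\!\left[ y_{t+h} \right] - h\mu \right),
\qquad c_t \;=\; y_t - \tau_t .
```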

James Morley and Benjamin Wong present a multivariate Beveridge-Nelson decomposition based on a vector autoregression estimated with Bayesian shrinkage to calculate the trend and cycle of a time series.

The method is described in Estimating and Accounting for the Output Gap with Large Bayesian Vector Autoregressions (Journal of Applied Econometrics, January/February 2020, vol. 35, pp. 1-18, lead article).

Morley and Wong (2020) show how to determine which conditioning variables span the relevant information by directly accounting for the BN trend and cycle in terms of contributions from different forecast errors in the VAR. This accounting can be used to define a relevant information set and provides interpretability in terms of which sources of information are most important for estimating trend and cycle for a target variable. Furthermore, given an identification scheme that maps forecast errors to structural shocks, the approach can also be used for a structural decomposition of movements in trend and cycle. Bayesian shrinkage is introduced when estimating the VAR to avoid overfitting in finite samples given models that are large enough to include many possible sources of information.

In their paper, Morley and Wong (2020) present estimates of the U.S. output gap based on information sets containing as many as 138 variables. They find that the unemployment rate and inflation contain a large share of the relevant information for estimating the U.S. output gap. The findings are robust to consideration of structural change and using a real GDI measure of aggregate output.

The code, using MATLAB, to apply this method can be found here.

In addition, the online appendix contains further information and demonstrates how to do structural analysis with the framework. 

An extension by Berger, Morley and Wong (Journal of Econometrics, January 2023, vol. 232, pp. 18-34) considers monthly data to “nowcast” the output gap using this multivariate approach. Code can be found here and the most recent nowcasts are available here.

Please cite the Morley and Wong (2020) article when using this method for trend-cycle decomposition in your research.

For any questions and feedback, contact James Morley.

Other references:

- Beveridge, S. and C.R. Nelson (1981). A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the ‘Business Cycle’. Journal of Monetary Economics 7, pp. 151–174

The original univariate Beveridge-Nelson (BN) decomposition, introduced by Beveridge and Nelson (1981), implies that a stochastic trend accounts for most of the variation in output while the cycle component is small and noisy; whereas a univariate unobserved-components (UC) decomposition, introduced by Harvey (1985) and Clark (1987), implies a smooth trend and a cycle that is large in amplitude and highly persistent.

In “Why are the Beveridge-Nelson and Unobserved-Components Decompositions of GDP So Different?” (Review of Economics and Statistics, May 2003, vol. 85, Issue 2, pp. 235-243), James Morley, Charles Nelson, and Eric Zivot explain the differences between the reduced-form ARIMA process implied by the Clark-Harvey UC model and the unrestricted ARIMA model used in the BN approach. Morley et al. (2003) use the maximum likelihood method of Harvey (1981) based on the prediction error decomposition. Given estimated parameters, the Kalman filter generates the expectation of the trend component conditional on the data.

The maximum likelihood estimates show that trend-cycle decompositions based on a UC model cast in state-space form and on the long-run forecast from the corresponding ARIMA model are at odds not because the methods differ but because the models are not actually equivalent. In particular, the differences arise from the restriction imposed in the UC approach that trend and cycle innovations are uncorrelated. The paper shows that when this restriction is relaxed for the UC model, the UC and BN approaches lead to identical trend-cycle decompositions and identical ARIMA processes.
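For context, the sketch below estimates the restricted, zero-correlation UC specification (random-walk-with-drift trend plus an AR(2) cycle) using the UnobservedComponents class in the Python package statsmodels and simulated data. The correlated UC model that Morley, Nelson, and Zivot estimate relaxes exactly this zero-correlation restriction and requires a custom state-space model, as in the replication code below.

```python
# Minimal sketch: a zero-correlation UC model (random-walk-with-drift trend + AR(2) cycle), the
# restricted specification whose trend-cycle correlation Morley, Nelson, and Zivot (2003) relax.
# The correlated UC model itself requires a custom state-space model, as in the replication code.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)

# simulated log output (replace with 100 * log US real GDP in practice)
T = 250
trend = np.cumsum(0.8 + rng.normal(scale=0.6, size=T))
cycle = np.zeros(T)
for t in range(2, T):
    cycle[t] = 1.5 * cycle[t - 1] - 0.6 * cycle[t - 2] + rng.normal(scale=0.5)
y = pd.Series(trend + cycle)

uc = sm.tsa.UnobservedComponents(y, level='rwdrift', autoregressive=2)
uc_res = uc.fit(disp=False)

print(uc_res.summary())
uc_cycle = uc_res.autoregressive.smoothed   # smoothed AR cycle component
uc_trend = uc_res.level.smoothed            # smoothed trend component
```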

The code, using GAUSS, MATLAB or R, to replicate the estimation results, including the data file, can be found here.

Please cite the Morley, Nelson, and Zivot (2003) article when using this method for trend-cycle decomposition in your research.

For any questions and feedback, contact James Morley.

Other references:

 - Beveridge, S. and C.R. Nelson (1981). A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the ‘Business Cycle’. Journal of Monetary Economics 7, pp. 151–174.

- Clark, P.K. (1987).  The cyclical component of U.S. economic activity. The Quarterly Journal of Economics 102, pp. 797–814.

- Harvey, A.C. (1981). Time Series Models. Oxford: Philip Allan.

- Harvey, A.C. (1985). Trends and Cycles in Macroeconomic Time Series. Journal of Business and Economic Statistics 3, pp. 216–227. 

Separating trend and cycle movements in macroeconomic variables is important for policy analysis, forecasting, and testing between competing theories. An important first step then in conducting empirical analysis of macroeconomic data is to test for the existence of stochastic trends with a stationarity test.

In “Testing Stationarity with Unobserved-Components Models” (Macroeconomic Dynamics, Volume 21, Issue 1, January 2017 , pp. 160-182), James Morley, Irina Panovska, and Tara Sinclair propose the use of a likelihood ratio test of stationarity based directly on the unobserved-components models used in estimation of stochastic trends. The paper demonstrates that a bootstrap version of this test has far better small-sample properties for empirically relevant data-generating processes than bootstrap versions of the standard Lagrange multiplier tests. The small-sample problems of the standard Lagrange multiplier tests given persistent time series processes are illustrated using a Monte Carlo simulation based on estimated time series models for post-war quarterly U.S. real GDP.

An application to post-war U.S. real GDP produces stronger support for the presence of large permanent shocks using the likelihood ratio test than using the standard tests. In particular, it supports the existence of a stochastic trend that is responsible for a large portion of the overall fluctuations in real economic activity, even when excluding the recent Great Recession or allowing for structural breaks in the mean and variance of the growth rate.

The code for this method and to replicate the results using GAUSS can be found here.

Please cite the Morley, Panovska, and Sinclair (2017) article when using this method in your research.

For any questions and feedback, contact James Morley.


 

The business cycle is a broad term that connotes the inherent fluctuations in economic activity. It is a fundamental, yet elusive concept and its measurement has a long tradition in macroeconomics.

In “The Asymmetric Business Cycle” (Review of Economics and Statistics, February 2012, vol. 94, Issue 1, pp. 208-221), James Morley and Jeremy Piger revisit the problem of measuring the business cycle, arguing for the output-gap notion of the business cycle as transitory deviations in economic activity away from trend, associated with the work on the U.S. business cycle by Beveridge and Nelson (1981). The paper investigates to what extent a general model-based approach to estimating trend and cycle leads to measures of the business cycle that reflect the models versus the data. The empirical results for the U.S. support a nonlinear regime-switching model that captures high-growth recoveries following deep recessions and produces a highly asymmetric business cycle with relatively small amplitude during expansions but large negative movements during recessions. In particular, the asymmetry implies that NBER-dated recessions are periods of significant transitory variation in output, while output in expansions is dominated by movements in trend.

Model comparison reveals several close competitors that produce business cycle measures of widely differing shapes and magnitudes. To address this model uncertainty, Morley and Piger (2012) construct a model-averaged measure of the business cycle motivated by the principle of forecast combination: the idea that a combined forecast can be superior to all of the individual forecasts that go into its construction. The weights used to combine the model-based business cycle measures are approximate Bayesian posterior model probabilities based on the Schwarz information criterion. The resulting model-averaged business cycle measure also displays an asymmetric shape and is closely related to other measures of economic slack such as the unemployment rate and capacity utilisation. They find that the model-averaged business cycle measure captures a meaningful macroeconomic phenomenon and sheds light on the nature of fluctuations in aggregate economic activity.
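The weighting step amounts to converting each model’s Schwarz information criterion into an approximate posterior model probability under equal prior model probabilities. A minimal sketch of that calculation, with hypothetical SIC values, is:

```python
# Minimal sketch: approximate posterior model probabilities from SIC values (equal model priors),
# used as weights for combining model-based output gap estimates. SIC values are hypothetical.
import numpy as np

# SIC defined here as -2*loglik + k*log(T), so smaller is better (illustrative numbers only)
sic = np.array([812.4, 815.1, 810.9, 818.0])

delta = sic - sic.min()                 # subtract the minimum for numerical stability
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                # approximate posterior model probabilities

# combined measure = weighted average of the model-specific output gap estimates
# combined_gap = weights @ np.vstack(gap_estimates)   # gap_estimates: list of equal-length arrays
print(np.round(weights, 3))
```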

In “Is Business Cycle Asymmetry Intrinsic in Industrialized Economies?” (Macroeconomic Dynamics, September 2020, vol. 24, Issue 6, pp. 1403-1436), Irina Panovska and James Morley further develop the model-averaged estimate of the output gap used by Morley and Piger (2012) into a simpler version that is more broadly applicable to data for other countries and appears to work better than estimating model weights in many cases, especially for countries with more limited data availability and shorter samples. While considering the same broad set of linear and nonlinear models as Morley and Piger (2012), equal weights are placed on all models considered and prior beliefs from previous analysis are incorporated when conducting Bayesian estimation of model parameters. This modified version is then used to measure economic slack in 10 industrialized economies while taking structural breaks in the long run into account.

The results in Morley and Panovska (2020) show that, even though tests for nonlinearity give mixed statistical evidence, there is clear empirical support for the idea that output gaps are subject to much larger negative movements during recessions than positive movements in expansions for all countries considered in the analysis. This suggests that this form of business cycle asymmetry is not just a characteristic of the US economy but is intrinsic to industrialized economies more generally and should be addressed in theoretical models of the economy.

The code for replicating the results in Morley and Piger (2012) using GAUSS can be found here. The code for replicating the results in Morley and Panovska (2020) using GAUSS can be found here.

Please cite the respective articles, Morley and Piger (2012) and Morley and Panovska (2020), when using the method in your research.

For any questions and feedback, contact James Morley.

Other references:

- Beveridge, S. and C.R. Nelson (1981). A New Approach to Decomposition of Economic Time Series into Permanent and Transitory Components with Particular Attention to Measurement of the ‘Business Cycle’. Journal of Monetary Economics 7, pp. 151–174.

Following Hall (1978), the permanent income hypothesis (PIH) implies that aggregate consumption should follow a random walk. The smoothness of aggregate consumption could only be explained by the PIH if aggregate income is largely predictable.

In “The Slow Adjustment of Aggregate Consumption to Permanent Income” (Journal of Money, Credit and Banking, March-April 2007, vol. 39, No. 2–3), James Morley develops a correlated unobserved components model with cointegration to investigate the relationship between aggregate consumption and permanent income. His approach involves the estimation of a cointegrated system that builds on Stock and Watson’s (1988) common stochastic trends representation and includes an empirical model for the logarithms of aggregate income and consumption. A crucial aspect of the modelling approach is that the permanent and transitory movements in aggregate income and consumption are estimated directly using the Kalman filter and are allowed to be correlated. Furthermore, the transitory components of income and consumption follow unobservable finite-order autoregressive processes.

This approach avoids any implicit restriction that permanent income is as smooth as consumption. Instead, for the U.S., permanent income appears to be relatively volatile, with consumption adjusting toward it only slowly over time. The results suggest that the PIH does not explain the smoothness of consumption over time. Instead, the slow adjustment of consumption toward permanent income is suggestive of habit formation in consumer preferences or the presence of a precautionary savings motive.

The code, including the data, is available for GAUSS, MATLAB, and R, and can be found here.

Please cite the Morley (2007) article when using this model in your research.

For any questions and feedback, contact James Morley.

Other references:

- Hall, R.E. (1978). Stochastic Implications of the Life Cycle-Permanent Income Hypothesis: Theory and Evidence. Journal of Political Economy, 86, pp. 971–88.

- Stock, J.H. and Mark W. Watson. (1988). Testing for Common Trends. Journal of the American Statistical Association, 83, pp. 1097–107. 

In “Estimating Household Consumption Insurance” (Journal of Applied Econometrics, August 2021, vol. 36, pp. 628-635), Arpita Chatterjee, James Morley, and Aarti Singh revisit the estimation in Blundell, Pistaferri, and Preston (BPP) (American Economic Review, 2008, 98(5), 1887-1921) to consider robustness issues, including using quasi maximum likelihood estimation (QMLE) instead of GMM.

QMLE involves casting the model into state-space form and assuming the shocks are normally distributed in order to use the Kalman filter to calculate the likelihood based on the prediction error decomposition. Because the actual shocks are likely to be non-Normal at the household level, this approach should be thought of as QMLE rather than regular MLE. A Monte Carlo analysis then considers how well QMLE performs relative to GMM for this model despite the non-Normal shocks. Two methods for weighting the moment conditions are employed, corresponding to the diagonally weighted minimum distance (DWMD) estimator used in BPP and the optimal minimum distance (OMD) estimator. DWMD generalises an equally weighted minimum distance approach by allowing for heteroskedasticity, while OMD also allows for covariance between moment conditions in the weighting matrix.
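The difference between the two weighting schemes is easy to state: both minimise a quadratic form in the gap between model-implied and empirical moments, with DWMD using only the inverse variances of the moments and OMD using the full inverse covariance matrix. The sketch below illustrates this with hypothetical moments and covariance; it is not the BPP or Chatterjee-Morley-Singh code.

```python
# Minimal sketch of the two minimum-distance weighting schemes discussed above:
# DWMD weights moments by the inverse of the diagonal of their covariance matrix, while
# OMD uses the full inverse covariance. Moments and covariance here are hypothetical.
import numpy as np

def md_objective(theta_moments, data_moments, V, scheme="DWMD"):
    """Minimum distance criterion (m(theta) - m_hat)' W (m(theta) - m_hat)."""
    diff = theta_moments - data_moments
    if scheme == "DWMD":
        W = np.diag(1.0 / np.diag(V))   # diagonally weighted: ignores covariances across moments
    elif scheme == "OMD":
        W = np.linalg.inv(V)            # optimal: full inverse covariance of the moments
    else:
        raise ValueError("scheme must be 'DWMD' or 'OMD'")
    return float(diff @ W @ diff)

# hypothetical moments implied by a candidate parameter value vs. their data counterparts
m_theta = np.array([0.50, 0.20, 0.10])
m_hat = np.array([0.55, 0.18, 0.14])
V = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.008, 0.001],
              [0.001, 0.001, 0.012]])

print(md_objective(m_theta, m_hat, V, "DWMD"), md_objective(m_theta, m_hat, V, "OMD"))
```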

The results show that for the same sample size as the full BPP dataset, QMLE is the most accurate in terms of root mean squared error and its reported precision. GMM using diagonal weights is far less accurate, although GMM using optimal weights is closer in accuracy to QMLE. However, for a smaller effective sample size corresponding to the older household subgroup, QMLE performs much better than GMM regardless of the weighting scheme.

The paper thus makes two main contributions to the literature on household consumption insurance. First, it provides evidence that consumption insurance is significantly higher than previously reported for BPP’s dataset. Second, it demonstrates the feasibility and apparent accuracy of QMLE in a panel setting with highly non-Normal shocks and a relatively small sample size.

Code (available for GAUSS, MATLAB, and R) to apply this method can be found here.

Please cite the Chatterjee, Morley, and Singh (2021) article when using this code in your research.

For any questions and feedback, contact James Morley.

Other references:

- Blundell, R., L. Pistaferri, and I. Preston (2008). Consumption Inequality and Partial Insurance. American Economic Review 98, pp. 1887-1921. 

Data

It is widely recognised that empirical policy analysis can sometimes be misleading if it is conducted using the most recent vintage of available data as opposed to the data that was available at the time decisions were actually made. Revisions in data mean that the measurement of historical outcomes published today may differ substantially from the data on which plans were made and so ‘real-time’ datasets, containing all the vintages of data that were available in the past, are required to fully understand the plans.

Given the usefulness of having a real-time fiscal dataset readily available, Kevin Lee, James Morley, Kalvinder Shields, and Madeleine Sui-Lay Tan construct such a database for Australia. In “The Australian Real-Time Fiscal Database: An Overview and an Illustration of its Use in Analysing Planned and Realised Fiscal Policies” (The Economic Record, March 2020, vol. 96, pp. 87-106), they describe a fiscal database for Australia including measures of government spending, revenue, deficits, debt and various sub-aggregates as initially published and subsequently revised.

The data vintages are collated from various sources and accommodate multiple definitional changes, providing a comprehensive description of the fiscal environment as experienced by Australian policymakers at the time decisions were made. The Australian Real-Time Fiscal Database includes a total of twelve variables relating to budget outcomes over time plus nine variables describing the evolving state of the government’s debt/wealth.

The empirical analysis in the paper illustrates the usefulness of the database, highlighting the predictability of the gaps between announced plans and realised outcomes and showing the importance of distinguishing between policy responses and policy initiatives when estimating the fiscal multiplier.

The database is available through the University of Melbourne along with a Data Manual describing the sources and definitions of the series in more detail. The Australian Real-Time Fiscal Database can be found here.

Please reference the Lee, Morley, Shields, and Tan (2020) article when you use the database in your own research.

For any questions and feedback, contact James Morley.