Joshua Chan joined UTS as a professor in 2017. Before joining UTS, he held academic positions at the Australian National University, Purdue University and the University of Queensland. He received his PhD from the University of Queensland in 2010.
His research focuses on inflation modeling, output gap estimation, model comparison and nonlinear state space models.
Can supervise: YES
His current research is supported by the Australian Research Council through two research grants: an ARC Discovery Early Career Researcher Award and an ARC Discovery Project.
The first project develops new nonlinear time-varying macroeconometric models with an emphasis on understanding the impact of uncertainty on business cycles. The second project uses these new time-varying models to construct model-based measures of inflation expectations and inflation expectations uncertainty.
This textbook on statistical modeling and statistical inference will assist advanced undergraduate and graduate students. Statistical Modeling and Computation provides a unique introduction to modern statistics from both classical and Bayesian perspectives. It also offers an integrated treatment of mathematical statistics and modern statistical computation, emphasizing statistical modeling, computational techniques, and applications. Each of the three parts covers topics essential to university courses. Part I covers the fundamentals of probability theory. In Part II, the authors introduce a wide variety of classical models, including linear regression and ANOVA models. In Part III, the authors address the statistical analysis and computation of various advanced models, such as generalized linear, state-space and Gaussian models. Particular attention is paid to fast Monte Carlo techniques for Bayesian inference on these models. Throughout the book the authors include a large number of illustrative examples and solved problems. The book also features a section with solutions, an appendix that serves as a MATLAB primer, and a mathematical supplement.
We study identifying restrictions that allow news and noise shocks to be recovered empirically within a Bayesian structural VARMA framework. In population, the identification scheme we consider exactly recovers news and noise shocks. Monte Carlo evidence further demonstrates its excellent performance, as it recovers the key features of the postulated data-generation process—the real-business cycle model of Barsky and Sims (2011) augmented with noise shocks about future total factor productivity (TFP)—with great precision. In an empirical application, evidence suggests that TFP noise shocks play a minor role in macroeconomic fluctuations.
We introduce a class of large Bayesian vector autoregressions (BVARs) that allows for non-Gaussian, heteroscedastic, and serially dependent innovations. To make estimation computationally tractable, we exploit a certain Kronecker structure of the likelihood implied by this class of models. We propose a unified approach for estimating these models using Markov chain Monte Carlo (MCMC) methods. In an application that involves 20 macroeconomic variables, we find that these BVARs with more flexible covariance structures outperform the standard variant with independent, homoscedastic Gaussian innovations in both in-sample model-fit and out-of-sample forecast performance.
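The Kronecker structure mentioned in the abstract can be illustrated with a standard matrix identity. The Python sketch below (an illustration of the general idea, not the paper's estimation code) shows how (A ⊗ B) vec(X) can be computed as vec(B X A'), so the large Kronecker product never needs to be formed:

```python
import numpy as np

# Illustrative sketch: the identity (A kron B) vec(X) = vec(B X A')
# replaces one (nm x nm) matrix-vector product with two small matrix
# products -- the kind of structure exploited to keep large-BVAR
# estimation tractable.
rng = np.random.default_rng(0)
n, m = 4, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, m))
X = rng.standard_normal((m, n))   # vec(X) stacks the columns of X

naive = np.kron(A, B) @ X.flatten(order="F")     # forms the nm x nm matrix
structured = (B @ X @ A.T).flatten(order="F")    # two small products only

print(np.max(np.abs(naive - structured)))        # agreement up to rounding
```

The structured version costs O(nm(n + m)) operations instead of O(n²m²), which is what makes systems with many variables feasible.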
Chan, J, Jacobi, L & Zhu, D 2019, 'How Sensitive are VAR Forecasts to Prior Hyperparameters? An Automated Sensitivity Analysis', Advances in Econometrics, vol. 40A, pp. 229-248.
Vector autoregressions (VARs) combined with Minnesota-type priors are widely used for macroeconomic forecasting. The fact that strong but sensible priors can substantially improve forecast performance implies that VAR forecasts are sensitive to prior hyperparameters, but the nature of this sensitivity is seldom investigated. We develop a general method based on automatic differentiation to systematically compute the sensitivities of both point and interval forecasts with respect to any prior hyperparameter. In a forecasting exercise using US data, we find that forecasts are relatively sensitive to the strength of shrinkage for the VAR coefficients, but they are not much affected by the prior mean of the error covariance matrix or the strength of shrinkage for the intercepts.
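Forward-mode automatic differentiation of the kind used for such sensitivity analysis can be sketched with dual numbers. In the toy Python example below (a hypothetical shrinkage forecast, not the paper's VAR setting), each quantity carries a value together with its derivative with respect to a hyperparameter, so the forecast's sensitivity falls out of the same computation:

```python
# Toy forward-mode automatic differentiation with dual numbers.  Each
# Dual carries a value and its derivative with respect to whichever
# hyperparameter is seeded with dot = 1.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        o = self._coerce(other)
        return Dual(self.val + o.val, self.dot + o.dot)

    __radd__ = __add__

    def __mul__(self, other):
        o = self._coerce(other)  # product rule for the derivative part
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

    __rmul__ = __mul__

# Sensitivity of a hypothetical shrinkage forecast
# f(lam) = prior_mean + lam * (y_last - prior_mean) with respect to lam.
lam = Dual(0.3, 1.0)          # seed: d(lam)/d(lam) = 1
y_last, prior_mean = 2.0, 0.5
fc = prior_mean + lam * (y_last - prior_mean)
print(fc.val, fc.dot)         # 0.95 and 1.5 = y_last - prior_mean
```

The derivative is exact (no finite-difference step size), which is what makes AD-based sensitivities systematic rather than ad hoc.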
A flexible multivariate model of a time-varying joint distribution of asset returns is developed which allows for regime switching and a joint skew-normal distribution. A suite of tests for linear and nonlinear financial market contagion is developed within the framework. The model is illustrated through an application to contagion between US and European equity markets during the Global Financial Crisis. The results show that correlation contagion dominates coskewness contagion, but that coskewness contagion is significant for Greece. A flight to safety to the US is also evident in the significance of breaks in the skewness parameter in the crisis regime. Comparison to the Asian crisis shows that similar patterns emerge, with a flight to safety to Japan, and Malaysia affected by coskewness contagion with Hong Kong.
Tobias, JL & Chan, CC 2019, 'An Alternate Parameterization for Bayesian Non-parametric / Semiparametric Regression', Advances in Econometrics, vol. 40B, pp. 47-64.
We present a new procedure for nonparametric Bayesian estimation of regression functions. Specifically, our method makes use of an idea described in Frühwirth-Schnatter and Wagner (2010) to impose linearity exactly (conditional upon an unobserved binary indicator), yet also permits departures from linearity while imposing smoothness of the regression curves. An advantage of this approach is that the posterior probability of linearity is essentially produced as a by-product of the procedure. We apply our methods in both generated data experiments as well as in an illustrative application involving the impact of body mass index (BMI) on labor market earnings.
Chan, J, Leon-Gonzalez, R & Strachan, RW 2018, 'Invariant Inference and Efficient Computation in the Static Factor Model', Journal of the American Statistical Association, vol. 113, no. 522, pp. 819-828.
Factor models are used in a wide range of areas. Two issues with Bayesian versions of these models are a lack of invariance to the ordering and scaling of the variables, and computational inefficiency. This article develops invariant and efficient Bayesian methods for estimating static factor models. This approach leads to inference that does not depend upon the ordering or scaling of the variables, and we provide arguments to explain this invariance. Beginning from identified parameters which are subject to orthogonality restrictions, we use parameter expansions to obtain a specification with computationally convenient conditional posteriors. We show significant gains in computational efficiency. Identifying restrictions that are commonly employed result in interpretable factors or loadings and, using our approach, these can be imposed ex post. This allows us to investigate several alternative identifying (noninvariant) schemes without the need to respecify and resample the model. We illustrate the methods with two macroeconomic datasets.
We propose a simple technique to test for time-variation in coefficients and volatilities. Specifically, by using a noncentered parameterization for state space models, we develop a method to directly calculate the relevant Bayes factor using the Savage–Dickey density ratio, thus avoiding the computation of the marginal likelihood altogether. The proposed methodology is illustrated via two empirical applications. In the first application, we test for time-variation in the volatility of inflation in the G7 countries. The second application investigates if there is substantial time-variation in the nonaccelerating inflation rate of unemployment (NAIRU) in the United States.
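The Savage–Dickey density ratio itself is easiest to see in a conjugate toy model. The Python sketch below (a hypothetical normal-mean example, not the paper's state space setting) computes the Bayes factor for H0: theta = 0 as the ratio of the posterior to the prior density at zero, with no marginal likelihood required:

```python
import numpy as np

def normal_pdf(x, mean, var):
    """Density of N(mean, var) evaluated at x."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Toy Savage-Dickey illustration: y_i ~ N(theta, 1) with prior
# theta ~ N(0, tau2).  BF_01 = p(theta = 0 | y) / p(theta = 0), i.e. the
# posterior-to-prior density ratio at the restricted value.
rng = np.random.default_rng(1)
tau2 = 10.0                        # prior variance (hypothetical value)
n = 50
y = rng.standard_normal(n) + 0.8   # data generated with theta = 0.8

# Conjugate posterior: theta | y ~ N(mu_n, s2_n)
s2_n = 1.0 / (1.0 / tau2 + n)
mu_n = s2_n * y.sum()

bf_01 = normal_pdf(0.0, mu_n, s2_n) / normal_pdf(0.0, 0.0, tau2)
print(f"BF_01 = {bf_01:.4g}")      # small value: evidence against theta = 0
```

Because the data were generated with theta well away from zero, the posterior density at zero is tiny and the Bayes factor correctly favours the unrestricted model.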
Chan, JCC & Eisenstat, E 2018, 'Bayesian model comparison for time-varying parameter VARs with stochastic volatility', Journal of Applied Econometrics, vol. 33, no. 4, pp. 509-532.
We develop importance sampling methods for computing two popular Bayesian model comparison criteria, namely, the marginal likelihood and the deviance information criterion (DIC), for time-varying parameter vector autoregressions (TVP-VARs), where both the regression coefficients and volatilities are drifting over time. The proposed estimators are based on the integrated likelihood and are substantially more reliable than alternatives. Using US data, we find overwhelming support for the TVP-VAR with stochastic volatility compared to a conventional constant coefficients VAR with homoskedastic innovations. Most of the gains, however, appear to have come from allowing for stochastic volatility rather than time variation in the VAR coefficients or contemporaneous relationships. Indeed, according to both criteria, a constant coefficients VAR with stochastic volatility outperforms the more general model with time-varying parameters.
Empirical questions such as whether the Phillips curve or Okun's law is stable can often be framed as a model comparison, e.g., comparing a vector autoregression (VAR) in which the coefficients in one equation are constant versus one that has time-varying parameters. We develop Bayesian model comparison methods to compare a class of time-varying parameter VARs we call hybrid TVP-VARs: VARs with time-varying parameters in some equations but constant coefficients in others. Using US data, we find evidence that the VAR coefficients in some, but not all, equations are time varying. Our finding highlights the empirical relevance of these hybrid TVP-VARs.
Chan, JCC & Song, Y 2018, 'Measuring Inflation Expectations Uncertainty Using High-Frequency Data', Journal of Money, Credit and Banking, vol. 50, no. 6, pp. 1139-1166.
Inflation expectations play a key role in determining future economic outcomes. The associated uncertainty provides a direct gauge of how well-anchored the inflation expectations are. We construct a model-based measure of inflation expectations uncertainty by augmenting a standard unobserved components model of inflation with information from noisy and possibly biased measures of inflation expectations obtained from financial markets. This new model-based measure of inflation expectations uncertainty is more accurately estimated and can provide valuable information for policymakers. Using U.S. data, we find significant changes in inflation expectations uncertainty during the Great Recession.
Chan, JCC, Clark, TE & Koop, G 2018, 'A New Model of Inflation, Trend Inflation, and Long-Run Inflation Expectations', Journal of Money, Credit and Banking, vol. 50, no. 1, pp. 5-53.
This paper develops a bivariate model of inflation and a survey-based long-run forecast of inflation that allows for the estimation of the link between trend inflation and the long-run forecast. Thus, our model allows for the possibilities that long-run forecasts taken from surveys can be equated with trend inflation, that the two are completely unrelated, or anything in between. Using a variety of inflation measures and survey-based forecasts for several countries, we find that long-run forecasts can provide substantial help in refining estimates and fitting and forecasting inflation. It is less helpful to simply equate trend inflation with the long-run forecasts.
Chan, JCC 2017, 'The Stochastic Volatility in Mean Model With Time-Varying Parameters: An Application to Inflation Modeling', Journal of Business and Economic Statistics, vol. 35, no. 1, pp. 17-28.
This article generalizes the popular stochastic volatility in mean model to allow for time-varying parameters in the conditional mean. The estimation of this extension is nontrivial since the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms instead of the Kalman filter to estimate this more general variant. The methodology is illustrated with an application involving U.S., U.K., and German inflation. The estimation results show substantial time-variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks.
Chan, JCC & Eisenstat, E 2017, 'Efficient estimation of Bayesian VARMAs with time-varying coefficients', Journal of Applied Econometrics, vol. 32, no. 7, pp. 1277-1297.
Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time-varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.
Chan, JCC, Henderson, DJ, Parmeter, CF & Tobias, JL 2017, 'Nonparametric estimation in economics: Bayesian and frequentist approaches', Wiley Interdisciplinary Reviews: Computational Statistics, vol. 9, no. 6.
We review Bayesian and classical approaches to nonparametric density and regression estimation and illustrate how these techniques can be used in economic applications. On the Bayesian side, density estimation is illustrated via finite Gaussian mixtures and a Dirichlet process mixture model, while nonparametric regression is handled using priors that impose smoothness. From the frequentist perspective, kernel-based nonparametric regression techniques are presented for both density and regression problems. Both approaches are illustrated using a wage dataset from the Current Population Survey.
Grant, AL & Chan, JCC 2017, 'A Bayesian Model Comparison for Trend-Cycle Decompositions of Output', Journal of Money, Credit and Banking, vol. 49, no. 2-3, pp. 525-552.
We compare a number of widely used trend-cycle decompositions of output in a formal Bayesian model comparison exercise. This is motivated by the often markedly different results from these decompositions—different decompositions have broad implications for the relative importance of real versus nominal shocks in explaining variations in output. Using U.S. quarterly real GDP, we find that the overall best model is an unobserved components model with two features: (i) a nonzero correlation between trend and cycle innovations and (ii) a break in trend output growth in 2007. The annualized trend output growth decreases from about 3.4% to 1.2%–1.5% after the break. The results also indicate that real shocks are more important than nominal shocks. The slowdown in trend output growth is robust when we expand the set of models to include bivariate unobserved components models.
Grant, AL & Chan, JCC 2017, 'Reconciling output gaps: Unobserved components model and Hodrick-Prescott filter', Journal of Economic Dynamics & Control, vol. 75, pp. 114-121.
Chan, JCC & Grant, AL 2016, 'Fast computation of the deviance information criterion for latent variable models', Computational Statistics and Data Analysis, vol. 100, pp. 847-859.
The deviance information criterion (DIC) has been widely used for Bayesian model comparison. However, recent studies have cautioned against the use of certain variants of the DIC for comparing latent variable models. For example, it has been argued that the conditional DIC–based on the conditional likelihood obtained by conditioning on the latent variables–is sensitive to transformations of latent variables and distributions. Further, in a Monte Carlo study that compares various Poisson models, the conditional DIC almost always prefers an incorrect model. In contrast, the observed-data DIC–calculated using the observed-data likelihood obtained by integrating out the latent variables–seems to perform well. It is also the case that the conditional DIC based on the maximum a posteriori (MAP) estimate might not even exist, whereas the observed-data DIC does not suffer from this problem. In view of these considerations, fast algorithms for computing the observed-data DIC for a variety of high-dimensional latent variable models are developed. Through three empirical applications it is demonstrated that the observed-data DICs have much smaller numerical standard errors compared to the conditional DICs. The corresponding MATLAB code is available upon request.
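The observed-data DIC calculation can be sketched in a toy model where the observed-data likelihood is available in closed form. The Python example below (a hypothetical normal-mean model; the paper's algorithms target latent variable models where this likelihood must itself be computed) forms DIC = Dbar + pD from posterior draws:

```python
import numpy as np

# Toy DIC illustration for y_i ~ N(theta, 1) with known unit variance.
# D(theta) = -2 log p(y | theta); Dbar is its posterior mean, and
# p_D = Dbar - D(posterior mean of theta) is the effective number of
# parameters.
rng = np.random.default_rng(2)
n = 100
y = rng.standard_normal(n) + 1.0

def deviance(theta):
    # -2 log likelihood of y under N(theta, 1)
    return np.sum((y - theta) ** 2) + n * np.log(2 * np.pi)

# Under a flat prior, theta | y ~ N(ybar, 1/n); draw from the posterior
draws = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=5000)

d_bar = np.mean([deviance(t) for t in draws])  # posterior mean deviance
p_d = d_bar - deviance(draws.mean())           # effective no. of parameters
dic = d_bar + p_d
print(f"DIC = {dic:.2f}, p_D = {p_d:.2f}")     # p_D near 1 for one parameter
```

In latent variable models the conditional version replaces the observed-data likelihood with p(y | theta, latents), which is where the pathologies described above arise.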
Chan, JCC & Grant, AL 2016, 'On the Observed-Data Deviance Information Criterion for Volatility Modeling', Journal of Financial Econometrics, vol. 14, no. 4, pp. 772-802.
Chan, JCC, Koop, G & Potter, SM 2016, 'A Bounded Model of Time Variation in Trend Inflation, NAIRU and the Phillips Curve', Journal of Applied Econometrics, vol. 31, no. 3, pp. 551-565.
In this paper, we develop a bivariate unobserved components model for inflation and unemployment. The unobserved components are trend inflation and the non-accelerating inflation rate of unemployment (NAIRU). Our model also incorporates a time-varying Phillips curve and time-varying inflation persistence. What sets this paper apart from the existing literature is that we do not use unbounded random walks for the unobserved components, but rather bounded random walks. For instance, NAIRU is assumed to evolve within bounds. Our empirical work shows the importance of bounding. We find that our bounded bivariate model forecasts better than many alternatives, including a version of our model with unbounded unobserved components. Our model also yields sensible estimates of trend inflation, NAIRU, inflation persistence and the slope of the Phillips curve.
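The bounding device can be illustrated with a short simulation. The Python sketch below (hypothetical bounds and innovation variance, not the paper's estimates) generates a bounded random walk whose increments are drawn from a normal distribution truncated so the state never leaves [a, b]:

```python
import numpy as np

# Illustrative bounded random walk: each innovation is a truncated-normal
# step that keeps the state inside [a, b] -- the device used to keep,
# e.g., the NAIRU within a plausible range.  Values are hypothetical.
rng = np.random.default_rng(6)
a, b, sigma, T = 4.0, 8.0, 0.2, 1000

s = np.empty(T)
s[0] = 6.0
for t in range(1, T):
    while True:                           # rejection sampler for the
        step = rng.normal(0.0, sigma)     # truncated-normal increment
        if a <= s[t - 1] + step <= b:
            s[t] = s[t - 1] + step
            break

print(s.min(), s.max())                   # stays within [4, 8] by construction
```

Unlike an unbounded random walk, the long-run forecast from this process cannot drift to implausible values, which is the source of the forecasting gains reported above.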
Eisenstat, E, Chan, JCC & Strachan, RW 2016, 'Stochastic Model Specification Search for Time-Varying Parameter VARs', Econometric Reviews, vol. 35, no. 8-10, pp. 1638-1665.
Chan, JCC & Tobias, JL 2015, 'Priors and Posterior Computation in Linear Endogenous Variable Models with Imperfect Instruments', Journal of Applied Econometrics, vol. 30, no. 4, pp. 650-674.
Chan, JCC & Koop, G 2014, 'Modelling breaks and clusters in the steady states of macroeconomic variables', Computational Statistics and Data Analysis, vol. 76, pp. 186-193.
Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long run steady states which are subject to breaks. Furthermore, the nature of the break process is often unknown. Methods are drawn from the Bayesian clustering literature to develop an econometric methodology which (i) finds groups of variables which have the same number of breaks and (ii) determines the nature of the break process within each group. An application involving a five-variate steady-state VAR is presented. The results indicate that the new methodology works well and breaks are occurring in the steady states of only two variables.
Chan, JCC 2013, 'Moving average stochastic volatility models with application to inflation forecast', Journal of Econometrics, vol. 176, no. 2, pp. 162-172.
Chan, JCC, Koop, G, Leon-Gonzalez, R & Strachan, RW 2012, 'Time Varying Dimension Models', Journal of Business & Economic Statistics, vol. 30, no. 3, pp. 358-367.
Chan, JCC & Kroese, DP 2011, 'Rare-event probability estimation with conditional Monte Carlo', Annals of Operations Research, vol. 189, no. 1, pp. 43-61.
Chan, JCC, Glynn, PW & Kroese, DP 2011, 'A comparison of cross-entropy and variance minimization strategies', Journal of Applied Probability, vol. 48, no. A, pp. 183-194.
The variance minimization (VM) and cross-entropy (CE) methods are two versatile adaptive importance sampling procedures that have been successfully applied to a wide variety of difficult rare-event estimation problems. We compare these two methods via various examples where the optimal VM and CE importance densities can be obtained analytically. We find that in the cases studied both VM and CE methods prescribe the same importance sampling parameters, suggesting that the criterion of minimizing the CE distance is very close, if not asymptotically identical, to minimizing the variance of the associated importance sampling estimator.
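The CE method compared here can be sketched for a textbook rare event. In the Python example below (an illustrative setup, not one of the paper's analytical examples), the CE update for a N(mu, 1) sampling family reduces to a likelihood-ratio-weighted mean of the elite samples, and the fitted density is then used for importance sampling:

```python
import math
import numpy as np

# Toy cross-entropy (CE) illustration: estimate P(X > gamma) for
# X ~ N(0, 1).  The sampling family is N(mu, 1); the CE update for mu is
# the weighted mean of the elite samples, with weights f(x)/g(x).
rng = np.random.default_rng(3)
gamma, N, rho = 4.0, 10_000, 0.1

mu = 0.0                                      # start at the nominal density
for _ in range(5):                            # multilevel CE iterations
    x = rng.normal(mu, 1.0, N)
    level = min(gamma, np.quantile(x, 1 - rho))
    elite = x[x >= level]
    w = np.exp(-mu * elite + 0.5 * mu**2)     # f(x)/g(x) on the elites
    mu = np.sum(w * elite) / np.sum(w)

# Final importance sampling estimate under the fitted N(mu, 1)
x = rng.normal(mu, 1.0, N)
w = np.exp(-mu * x + 0.5 * mu**2)
est = np.mean(w * (x > gamma))

exact = 0.5 * math.erfc(gamma / math.sqrt(2.0))
print(f"mu = {mu:.3f}, estimate = {est:.3e}, exact = {exact:.3e}")
```

With the sampling mean shifted near gamma, a probability of order 1e-5 is estimated accurately with only ten thousand draws, whereas crude Monte Carlo would need millions.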
Chan, JCC & Kroese, DP 2010, 'Efficient estimation of large portfolio loss probabilities in t-copula models', European Journal of Operational Research, vol. 205, no. 2, pp. 361-367.
Chan, JCC & Jeliazkov, I 2009, 'Efficient simulation and integrated likelihood estimation in state space models', International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 1-2, pp. 101-120.
We consider the problem of implementing simple and efficient Markov chain Monte Carlo (MCMC) estimation algorithms for state space models. A conceptually transparent derivation of the posterior distribution of the states is discussed, which also leads to an efficient simulation algorithm that is modular, scalable and widely applicable. We also discuss a simple approach for evaluating the integrated likelihood, defined as the density of the data given the parameters but marginal of the state vector. We show that this high-dimensional integral can be easily evaluated with minimal computational and conceptual difficulty. Two empirical applications in macroeconomics demonstrate that the methods are versatile and computationally undemanding. In one application, involving a time-varying parameter model, we show that the methods allow for efficient handling of large state vectors. In our second application, involving a dynamic factor model, we introduce a new blocking strategy which results in improved MCMC mixing at little cost. The results demonstrate that the framework is simple, flexible and efficient.
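This precision-based approach to state simulation can be illustrated for a local level model, where the conditional posterior precision of the states is tridiagonal. The Python sketch below (hypothetical parameter values; dense linear algebra stands in for the banded routines that deliver O(T) cost) computes the posterior mean and one joint draw of all T states:

```python
import numpy as np

# Local level model: y_t = s_t + e_t,  s_t = s_{t-1} + u_t, with
# hypothetical variances.  The states' posterior precision
# K = H'H / sig2_u + I / sig2_e is tridiagonal (H = first differences),
# so a single banded Cholesky solve draws all states jointly.
rng = np.random.default_rng(4)
T, sig2_e, sig2_u = 400, 1.0, 0.1

s_true = np.cumsum(rng.normal(0, np.sqrt(sig2_u), T))
y = s_true + rng.normal(0, np.sqrt(sig2_e), T)

# Build the tridiagonal posterior precision K explicitly
i = np.arange(T)
K = np.zeros((T, T))
K[i, i] = 2.0 / sig2_u + 1.0 / sig2_e
K[T - 1, T - 1] = 1.0 / sig2_u + 1.0 / sig2_e    # last state: one difference
K[i[:-1], i[:-1] + 1] = -1.0 / sig2_u
K[i[:-1] + 1, i[:-1]] = -1.0 / sig2_u

b = y / sig2_e                        # K @ posterior_mean = b
C = np.linalg.cholesky(K)             # banded Cholesky in production code
s_hat = np.linalg.solve(K, b)         # posterior mean E[s | y]

# Joint draw of all T states: s_hat + C'^{-1} z has covariance K^{-1}
z = rng.standard_normal(T)
draw = s_hat + np.linalg.solve(C.T, z)

print(f"RMSE of posterior mean: {np.sqrt(np.mean((s_hat - s_true) ** 2)):.3f}")
```

The same two solves also give the integrated likelihood cheaply, since the Gaussian integral over the states only involves the determinant of K and a quadratic form.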
Chan, JC-C & Jeliazkov, I 2009, 'MCMC Estimation of Restricted Covariance Matrices', Journal of Computational and Graphical Statistics, vol. 18, no. 2, pp. 457-480.
Bayesian vector autoregressions are widely used for macroeconomic forecasting and structural analysis. Until recently, however, most empirical work had considered only small systems with a few variables due to parameter proliferation concerns and computational limitations. We first review a variety of shrinkage priors that are useful for tackling the parameter proliferation problem in large Bayesian VARs. This is followed by a detailed discussion of efficient sampling methods for overcoming the computational problem. We then give an overview of some recent models that incorporate various important model features into conventional large Bayesian VARs, including stochastic volatility, non-Gaussian, and serially correlated errors. Efficient estimation methods for fitting these more flexible models are then discussed. These models and methods are illustrated using a forecasting exercise that involves a real-time macroeconomic dataset. The corresponding MATLAB code is available at http://joshuachan.org/.
Chan, JCC & Hsiao, CYL 2014, 'Estimation of Stochastic Volatility Models with Heavy Tails and Serial Dependence' in Jeliazkov, I & Yang, X (eds), Bayesian Inference in the Social Sciences, John Wiley & Sons, USA, pp. 155-176.
Financial time series often exhibit properties that depart from the usual assumptions of serial independence and normality. These include volatility clustering, heavy-tailedness and serial dependence. A voluminous literature on different approaches for modeling these empirical regularities has emerged in the last decade. In this chapter we review the estimation of a variety of highly flexible stochastic volatility models, and introduce some efficient algorithms based on recent advances in state space simulation techniques. These estimation methods are illustrated via empirical examples involving precious metal and foreign exchange returns. The corresponding MATLAB code is also provided.
The remainder of the chapter is structured as follows. Section 6.2 first discusses the basic stochastic volatility model and its estimation. In particular, we provide details of the auxiliary mixture sampler and the precision sampler for linear Gaussian state space models. In Section 6.3 we extend the basic stochastic volatility model to allow for moving average errors. We then discuss an efficient estimation method based on fast band matrix routines. Lastly, Section 6.4 considers another extension: instead of the conventional assumption of a Gaussian error distribution, we discuss some heavy-tailed distributions that can be written as scale mixtures of Gaussian distributions. We demonstrate the relevance of these heavy-tailed stochastic volatility models through an empirical example.
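The scale-mixture representation used in Section 6.4 can be checked numerically. The Python sketch below (an illustration only; the chapter provides MATLAB code) draws errors from the inverse-gamma mixture and compares them with direct Student-t draws:

```python
import numpy as np

# Scale-mixture representation of heavy tails: if lam ~ IG(nu/2, nu/2)
# and e | lam ~ N(0, lam), then marginally e ~ Student-t with nu degrees
# of freedom.  We verify by matching the variance nu / (nu - 2).
rng = np.random.default_rng(5)
nu, n = 5.0, 200_000

# inverse-gamma(nu/2, nu/2) draws: reciprocal of gamma(nu/2, rate nu/2)
lam = 1.0 / rng.gamma(nu / 2.0, 2.0 / nu, n)
e_mix = np.sqrt(lam) * rng.standard_normal(n)  # scale-mixture draws
e_t = rng.standard_t(nu, n)                    # direct Student-t draws

print(f"mixture var = {e_mix.var():.3f}, "
      f"direct t var = {e_t.var():.3f}, "
      f"theory = {nu / (nu - 2):.3f}")
```

Conditional on the latent scales lam, the model is Gaussian, which is exactly what lets the precision sampler of Section 6.2 be reused for the heavy-tailed extensions.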
Brereton, TJ, Chan, JCC & Kroese, DP 2011, 'Fitting mixture importance sampling distributions via improved cross-entropy', Proceedings of the 2011 Winter Simulation Conference (WSC), IEEE, Phoenix, AZ, USA, pp. 422-428.
In some rare-event settings, exponentially twisted distributions perform very badly. One solution to this problem is to use mixture distributions. However, it is difficult to select a good mixture distribution for importance sampling. Here we introduce a simple adaptive method for choosing good mixture importance sampling distributions.