Joshua Chan joined UTS as a professor in 2017. Before joining UTS, he held academic positions at the Australian National University, Purdue University and the University of Queensland. He received his PhD from the University of Queensland in 2010.
His research focuses on inflation modeling, output gap estimation, model comparison and nonlinear state space models.
His current research is supported by the Australian Research Council through two research grants: an ARC Discovery Early Career Researcher Award and an ARC Discovery Project.
The first project develops new nonlinear time-varying macroeconometric models with an emphasis on understanding the impact of uncertainty on business cycles. The second project uses these new time-varying models to construct model-based measures of inflation expectations and inflation expectations uncertainty.
© The Author(s) 2014. This textbook on statistical modeling and statistical inference is aimed at advanced undergraduate and graduate students. Statistical Modeling and Computation provides a unique introduction to modern statistics from both classical and Bayesian perspectives. It also offers an integrated treatment of mathematical statistics and modern statistical computation, emphasizing statistical modeling, computational techniques and applications. Each of the three parts covers topics essential to university courses. Part I covers the fundamentals of probability theory. In Part II, the authors introduce a wide variety of classical models, including linear regression and ANOVA models. In Part III, the authors address the statistical analysis and computation of various advanced models, such as generalized linear, state space and Gaussian models. Particular attention is paid to fast Monte Carlo techniques for Bayesian inference on these models. Throughout the book the authors include a large number of illustrative examples and solved problems. The book also features a section with solutions, an appendix that serves as a MATLAB primer, and a mathematical supplement.
Chan, J., Leon-Gonzalez, R. & Strachan, R.W. 2017, 'Invariant Inference and Efficient Computation in the Static Factor Model', Journal of the American Statistical Association, pp. 0-0.
Copyright © 2016 Taylor & Francis Group, LLC. We propose an easy technique to test for time-variation in coefficients and volatilities. Specifically, by using a noncentered parameterization for state space models, we develop a method to directly calculate the relevant Bayes factor using the Savage–Dickey density ratio—thus avoiding the computation of the marginal likelihood altogether. The proposed methodology is illustrated via two empirical applications. In the first application, we test for time-variation in the volatility of inflation in the G7 countries. The second application investigates if there is substantial time-variation in the nonaccelerating inflation rate of unemployment (NAIRU) in the United States.
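As an illustration of the Savage–Dickey device described above, the following Python sketch (a toy conjugate-normal example, not the paper's code) computes a Bayes factor for a point restriction as the ratio of the posterior to the prior density at the restricted value, and checks it against a brute-force marginal likelihood calculation:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(0)
n, tau2 = 20, 4.0
y = rng.normal(0.5, 1.0, n)          # data with a small true effect

# Conjugate model: y_i ~ N(theta, 1), prior theta ~ N(0, tau2)
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * y.sum()

# Savage-Dickey: BF_{01} = p(theta = 0 | y) / p(theta = 0)
bf01_sd = norm.pdf(0.0, post_mean, np.sqrt(post_var)) / norm.pdf(0.0, 0.0, np.sqrt(tau2))

# Brute-force check via the two marginal likelihoods
m0 = norm.logpdf(y, 0.0, 1.0).sum()                      # theta fixed at 0
m1 = multivariate_normal.logpdf(y, mean=np.zeros(n),
                                cov=np.eye(n) + tau2)    # theta integrated out
bf01_direct = np.exp(m0 - m1)
```

In this simple setting the two calculations agree up to floating point, which is exactly the appeal of the device: no marginal likelihood computation is needed once the posterior density at the restriction is available.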
Chan, J.C.C. 2017, 'The Stochastic Volatility in Mean Model With Time-Varying Parameters: An Application to Inflation Modeling', Journal of Business and Economic Statistics, vol. 35, no. 1, pp. 17-28.
© 2017 American Statistical Association. This article generalizes the popular stochastic volatility in mean model to allow for time-varying parameters in the conditional mean. The estimation of this extension is nontrivial since the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms instead of the Kalman filter to estimate this more general variant. The methodology is illustrated with an application involving U.S., U.K., and German inflation. The estimation results show substantial time-variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks.
Chan, J.C.C. & Eisenstat, E. 2017, 'Efficient estimation of Bayesian VARMAs with time-varying coefficients', Journal of Applied Econometrics, vol. 32, no. 7, pp. 1277-1297.
Copyright © 2017 John Wiley & Sons, Ltd. Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time-varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.
Chan, J.C.C., Henderson, D.J., Parmeter, C.F. & Tobias, J.L. 2017, 'Nonparametric estimation in economics: Bayesian and frequentist approaches', Wiley Interdisciplinary Reviews: Computational Statistics, vol. 9, no. 6.
© 2017 Wiley Periodicals, Inc. We review Bayesian and classical approaches to nonparametric density and regression estimation and illustrate how these techniques can be used in economic applications. On the Bayesian side, density estimation is illustrated via finite Gaussian mixtures and a Dirichlet Process Mixture Model, while nonparametric regression is handled using priors that impose smoothness. From the frequentist perspective, kernel-based nonparametric regression techniques are presented for both density and regression problems. Both approaches are illustrated using a wage dataset from the Current Population Survey. WIREs Comput Stat 2017, 9:e1406. doi: 10.1002/wics.1406. For further resources related to this article, please visit the WIREs website.
Grant, A.L. & Chan, J.C.C. 2017, 'A Bayesian Model Comparison for Trend-Cycle Decompositions of Output', Journal of Money, Credit and Banking, vol. 49, no. 2-3, pp. 525-552.
© 2017 The Ohio State University We compare a number of widely used trend-cycle decompositions of output in a formal Bayesian model comparison exercise. This is motivated by the often markedly different results from these decompositions—different decompositions have broad implications for the relative importance of real versus nominal shocks in explaining variations in output. Using U.S. quarterly real GDP, we find that the overall best model is an unobserved components model with two features: (i) a nonzero correlation between trend and cycle innovations and (ii) a break in trend output growth in 2007. The annualized trend output growth decreases from about 3.4% to 1.2%–1.5% after the break. The results also indicate that real shocks are more important than nominal shocks. The slowdown in trend output growth is robust when we expand the set of models to include bivariate unobserved components models.
Grant, A.L. & Chan, J.C.C. 2017, 'Reconciling output gaps: Unobserved components model and Hodrick–Prescott filter', Journal of Economic Dynamics and Control, vol. 75, pp. 114-121.
© 2017 Elsevier B.V. This paper reconciles two widely used trend–cycle decompositions of GDP that give markedly different estimates: the correlated unobserved components model yields output gaps that are small in amplitude, whereas the Hodrick–Prescott (HP) filter generates large and persistent cycles. By embedding the HP filter in an unobserved components model, we show that this difference arises due to differences in the way the stochastic trend is modeled. Moreover, the HP filter implies that the cyclical components are serially independent—an assumption that is decidedly rejected by the data. By relaxing this restrictive assumption, the augmented HP filter provides comparable model fit relative to the standard correlated unobserved components model.
Chan, J.C.C. & Grant, A.L. 2016, 'Fast computation of the deviance information criterion for latent variable models', Computational Statistics and Data Analysis, vol. 100, pp. 847-859.
© 2014 Elsevier B.V. The deviance information criterion (DIC) has been widely used for Bayesian model comparison. However, recent studies have cautioned against the use of certain variants of the DIC for comparing latent variable models. For example, it has been argued that the conditional DIC–based on the conditional likelihood obtained by conditioning on the latent variables–is sensitive to transformations of latent variables and distributions. Further, in a Monte Carlo study that compares various Poisson models, the conditional DIC almost always prefers an incorrect model. In contrast, the observed-data DIC–calculated using the observed-data likelihood obtained by integrating out the latent variables–seems to perform well. It is also the case that the conditional DIC based on the maximum a posteriori (MAP) estimate might not even exist, whereas the observed-data DIC does not suffer from this problem. In view of these considerations, fast algorithms for computing the observed-data DIC for a variety of high-dimensional latent variable models are developed. Through three empirical applications it is demonstrated that the observed-data DICs have much smaller numerical standard errors compared to the conditional DICs. The corresponding MATLAB code is available upon request.
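To fix ideas, here is a minimal Python sketch of the generic DIC calculation from posterior draws (a toy one-parameter normal-mean model, not the paper's latent variable algorithms); in this model the effective number of parameters pD should come out close to 1:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def dic(loglik, draws):
    """DIC = Dbar + pD, where D(theta) = -2 log p(y | theta),
    Dbar is the posterior mean deviance and
    pD = Dbar - D(posterior mean of theta)."""
    d = -2.0 * np.array([loglik(th) for th in draws])
    d_bar = d.mean()
    p_d = d_bar - (-2.0 * loglik(np.mean(draws, axis=0)))
    return d_bar + p_d, p_d

# toy model: y_i ~ N(mu, 1) with a diffuse N(0, 100) prior on mu
n, tau2 = 50, 100.0
y = rng.normal(1.0, 1.0, n)
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * y.sum()
mu_draws = rng.normal(post_mean, np.sqrt(post_var), 20_000)

def loglik(mu):
    return norm.logpdf(y, mu, 1.0).sum()

dic_value, p_d = dic(loglik, mu_draws)
```

The point of the paper is that for latent variable models the `loglik` above should be the observed-data likelihood (latent variables integrated out), which is exactly the expensive ingredient its fast algorithms supply.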
© 2015 Elsevier B.V. We compare a number of GARCH and stochastic volatility (SV) models using nine series of oil, petroleum product and natural gas prices in a formal Bayesian model comparison exercise. The competing models include the standard models of GARCH(1,1) and SV with an AR(1) log-volatility process, as well as more flexible models with jumps, volatility in mean, leverage effects, and t distributed and moving average innovations. We find that: (1) SV models generally compare favorably to their GARCH counterparts; (2) the jump component and t distributed innovations substantially improve the performance of the standard GARCH, but are unimportant for the SV model; (3) the volatility feedback channel seems to be superfluous; (4) the moving average component markedly improves the fit of both GARCH and SV models; and (5) the leverage effect is important for modeling crude oil prices-West Texas Intermediate and Brent-but not for other energy prices. Overall, the SV model with moving average innovations is the best model for all nine series.
Chan, J.C.C. & Grant, A.L. 2016, 'On the observed-data deviance information criterion for volatility modeling', Journal of Financial Econometrics, vol. 14, no. 4, pp. 772-802.
© The Author, 2016. Published by Oxford University Press. All rights reserved. We propose importance sampling algorithms based on fast band matrix routines for estimating the observed-data likelihoods for a variety of stochastic volatility models. This is motivated by the problem of computing the deviance information criterion (DIC), a popular Bayesian model comparison criterion that comes in a few variants. Although the DIC based on the conditional likelihood, obtained by conditioning on the latent variables, is widely used for comparing stochastic volatility models, recent studies have argued against its use on both theoretical and practical grounds. Indeed, we show via a Monte Carlo study that the conditional DIC tends to favor overfitted models, whereas the DIC based on the observed-data likelihood, calculated using the proposed importance sampling algorithms, seems to perform well. We demonstrate the methodology with an application involving daily returns on the Standard & Poor's 500 index.
© 2016 Elsevier B.V. All rights reserved. Vector Autoregressive Moving Average (VARMA) models have many theoretical properties which should make them popular among empirical macroeconomists. However, they are rarely used in practice due to over-parameterization concerns, difficulties in ensuring identification and computational challenges. With the growing interest in multivariate time series models of high dimension, these problems with VARMAs become even more acute, accounting for the dominance of VARs in this field. In this paper, we develop a Bayesian approach for inference in VARMAs which surmounts these problems. It jointly ensures identification and parsimony in the context of an efficient Markov chain Monte Carlo (MCMC) algorithm. We use this approach in a macroeconomic application involving up to twelve dependent variables. We find our algorithm to work successfully and provide insights beyond those provided by VARs.
Chan, J.C.C., Koop, G. & Potter, S.M. 2016, 'A Bounded Model of Time Variation in Trend Inflation, Nairu and the Phillips Curve', Journal of Applied Econometrics, vol. 31, no. 3, pp. 551-565.
In this paper, we develop a bivariate unobserved components model for inflation and unemployment. The unobserved components are trend inflation and the non-accelerating inflation rate of unemployment (NAIRU). Our model also incorporates a time-varying Phillips curve and time-varying inflation persistence. What sets this paper apart from the existing literature is that we do not use unbounded random walks for the unobserved components, but rather bounded random walks. For instance, NAIRU is assumed to evolve within bounds. Our empirical work shows the importance of bounding. We find that our bounded bivariate model forecasts better than many alternatives, including a version of our model with unbounded unobserved components. Our model also yields sensible estimates of trend inflation, NAIRU, inflation persistence and the slope of the Phillips curve.
Eisenstat, E., Chan, J.C.C. & Strachan, R.W. 2016, 'Stochastic Model Specification Search for Time-Varying Parameter VARs', Econometric Reviews, vol. 35, no. 8-10, pp. 1638-1665.
© 2016, Taylor & Francis Group, LLC. This article develops a new econometric methodology for performing stochastic model specification search (SMSS) in the vast model space of time-varying parameter vector autoregressions (VARs) with stochastic volatility and correlated state transitions. This is motivated by the concern of overfitting and the typically imprecise inference in these highly parameterized models. For each VAR coefficient, this new method automatically decides whether it is constant or time-varying. Moreover, it can be used to shrink an otherwise unrestricted time-varying parameter VAR to a stationary VAR, thus providing an easy way to (probabilistically) impose stationarity in time-varying parameter models. We demonstrate the effectiveness of the approach with a topical application, where we investigate the dynamic effects of structural shocks in government spending on U.S. taxes and gross domestic product (GDP) during a period of very low interest rates.
© 2015, Copyright Taylor & Francis Group, LLC. We consider an adaptive importance sampling approach to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. This approach is motivated by the difficulty of obtaining an accurate estimate through existing algorithms that use Markov chain Monte Carlo (MCMC) draws, where the draws are typically costly to obtain and highly correlated in high-dimensional settings. In contrast, we use the cross-entropy (CE) method, a versatile adaptive Monte Carlo algorithm originally developed for rare-event simulation. The main advantage of the importance sampling approach is that random samples can be obtained from some convenient density at little additional cost. As we are generating independent draws instead of correlated MCMC draws, the increase in simulation effort is much smaller should one wish to reduce the numerical standard error of the estimator. Moreover, the importance density derived via the CE method is grounded in information theory and is therefore optimal in a well-defined sense. We demonstrate the utility of the proposed approach via two empirical applications involving women's labor market participation and U.S. macroeconomic time series. In both applications, the proposed CE method compares favorably to existing estimators.
Chan, J.C.C. & Grant, A.L. 2015, 'Pitfalls of estimating the marginal likelihood using the modified harmonic mean', Economics Letters, vol. 131, pp. 29-33.
© 2015 Elsevier B.V. The modified harmonic mean is widely used for estimating the marginal likelihood. We investigate the empirical performance of two versions of this estimator: one based on the observed-data likelihood and the other on the complete-data likelihood. Through an empirical example using US and UK inflation, we show that the version based on the complete-data likelihood has a substantial bias and tends to select the wrong model, whereas the version based on the observed-data likelihood works well.
Chan, J.C.C. & Tobias, J.L. 2015, 'Priors and Posterior Computation in Linear Endogenous Variable Models with Imperfect Instruments', Journal of Applied Econometrics, vol. 30, no. 4, pp. 650-674.
© 2014 John Wiley & Sons, Ltd. In this paper we, like several studies in the recent literature, employ a Bayesian approach to estimation and inference in models with endogeneity concerns by imposing weaker prior assumptions than complete excludability. When allowing for instrument imperfection of this type, the model is only partially identified, and as a consequence standard estimates obtained from the Gibbs simulations can be unacceptably imprecise. We thus describe a substantially improved 'semi-analytic' method for calculating parameter marginal posteriors of interest that only require use of the well-mixing simulations associated with the identifiable model parameters and the form of the conditional prior. Our methods are also applied in an illustrative application involving the impact of body mass index on earnings.
Chan, J.C.C. & Koop, G. 2014, 'Modelling breaks and clusters in the steady states of macroeconomic variables', Computational Statistics and Data Analysis, vol. 76, pp. 186-193.
Macroeconomists working with multivariate models typically face uncertainty over which (if any) of their variables have long-run steady states that are subject to breaks. Furthermore, the nature of the break process is often unknown. Methods are drawn from the Bayesian clustering literature to develop an econometric methodology which (i) finds groups of variables which have the same number of breaks and (ii) determines the nature of the break process within each group. An application involving a five-variate steady-state VAR is presented. The results indicate that the new methodology works well and that breaks occur in the steady states of only two variables. © 2013 Elsevier B.V. All rights reserved.
Chan, J.C.C. 2013, 'Moving average stochastic volatility models with application to inflation forecast', Journal of Econometrics, vol. 176, no. 2, pp. 162-172.
We introduce a new class of models that has both stochastic volatility and moving average errors, where the conditional mean has a state space representation. Having a moving average component, however, means that the errors in the measurement equation are no longer serially independent, and estimation becomes more difficult. We develop a posterior simulator that builds upon recent advances in precision-based algorithms for estimating these new models. In an empirical application involving US inflation we find that these moving average stochastic volatility models provide better in-sample fit and out-of-sample forecast performance than the standard variants with only stochastic volatility. © 2013 Elsevier B.V. All rights reserved.
This article introduces a new model of trend inflation. In contrast to many earlier approaches, which allow for trend inflation to evolve according to a random walk, ours is a bounded model which ensures that trend inflation is constrained to lie in an interval. The bounds of this interval can either be fixed or estimated from the data. Our model also allows for a time-varying degree of persistence in the transitory component of inflation. In an empirical exercise with CPI inflation, we find the model to work well, yielding more sensible measures of trend inflation and forecasting better than popular alternatives such as the unobserved components stochastic volatility model. This article has supplementary materials online. © 2013 American Statistical Association Journal of Business & Economic Statistics.
The cross-entropy (CE) method is an adaptive importance sampling procedure that has been successfully applied to a diverse range of complicated simulation problems. However, recent research has shown that in some high-dimensional settings, the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. We consider a variation of the CE method whose performance does not deteriorate as the dimension of the problem increases. We then illustrate the algorithm via a high-dimensional estimation problem in risk management. © 2011 Springer Science+Business Media, LLC.
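The basic multilevel CE iteration can be sketched as follows; this toy Python example (an illustration under a one-parameter exponential family, not the paper's improved variant) estimates P(X > γ) for X ~ Exp(1) and compares the resulting importance sampling estimate to the known answer e^(−γ):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, rho, N = 20.0, 0.1, 10_000

def lr(x, v):
    # likelihood ratio of Exp(mean 1) relative to Exp(mean v)
    return v * np.exp(-x * (1.0 - 1.0 / v))

# multilevel CE updates of the importance sampling mean v
v, level = 1.0, 0.0
while level < gamma:
    x = rng.exponential(v, N)
    level = min(gamma, np.quantile(x, 1.0 - rho))   # raise the level gradually
    elite = x[x >= level]
    w = lr(elite, v)
    v = np.sum(w * elite) / np.sum(w)               # closed-form CE update

# final importance sampling estimate of P(X > gamma), X ~ Exp(1)
x = rng.exponential(v, 100_000)
est = np.mean((x >= gamma) * lr(x, v))
truth = np.exp(-gamma)
```

For this family the CE iteration drives v toward roughly γ + 1, and the resulting estimator has small relative error even though the crude probability is of order 10⁻⁹; the degeneracy problem the abstract refers to arises when the likelihood ratio involves many dimensions at once rather than a single scalar as here.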
Chan, J.C.C., Koop, G., Leon-Gonzalez, R. & Strachan, R.W. 2012, 'Time varying dimension models', Journal of Business and Economic Statistics, vol. 30, no. 3, pp. 358-367.
Time varying parameter (TVP) models have enjoyed an increasing popularity in empirical macroeconomics. However, TVP models are parameter-rich and risk over-fitting unless the dimension of the model is small. Motivated by this worry, this article proposes several Time Varying Dimension (TVD) models where the dimension of the model can change over time, allowing for the model to automatically choose a more parsimonious TVP representation, or to switch between different parsimonious representations. Our TVD models all fall in the category of dynamic mixture models. We discuss the properties of these models and present methods for Bayesian inference. An application involving U.S. inflation forecasting illustrates and compares the different TVD models. We find our TVD approaches exhibit better forecasting performance than many standard benchmarks and shrink toward parsimonious specifications. This article has online supplementary materials. © 2012 American Statistical Association.
Chan, J.C.C. & Kroese, D.P. 2011, 'Rare-event probability estimation with conditional Monte Carlo', Annals of Operations Research, vol. 189, no. 1, pp. 43-61.
Estimation of rare-event probabilities in high-dimensional settings via importance sampling is a difficult problem due to the degeneracy of the likelihood ratio. In fact, it is generally recommended that Monte Carlo estimators involving likelihood ratios should not be used in such settings. In view of this, we develop efficient algorithms based on conditional Monte Carlo to estimate rare-event probabilities in situations where the degeneracy problem is expected to be severe. By utilizing an asymptotic description of how the rare event occurs, we derive algorithms that involve generating random variables only from the nominal distributions, thus avoiding any likelihood ratio. We consider two settings that occur frequently in applied probability: systems involving bottleneck elements and models involving heavy-tailed random variables. We first consider the problem of estimating P(X1 + ... + Xn > γ), where X1, ..., Xn are independent but not identically distributed (inid) heavy-tailed random variables. Guided by insights obtained from this model, we then study a variety of more general settings. Specifically, we consider a complex bridge network and a generalization of the widely popular normal copula model used in managing portfolio credit risk, both of which involve hundreds of random variables. We show that the same conditioning idea, guided by an asymptotic description of the way in which the rare event happens, can be used to derive estimators that outperform existing ones. © 2009 Springer Science+Business Media, LLC.
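A minimal sketch of this conditioning idea, in the spirit of the Asmussen–Kroese conditional Monte Carlo estimator for heavy-tailed sums (shown here for the simpler iid Pareto case rather than the inid and network settings the paper treats):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, gamma = 5, 2.0, 30.0

def pareto_sf(x):
    # survival function of a Pareto(alpha) variable with scale 1
    x = np.maximum(x, 1.0)
    return x ** -alpha

def sample(size):
    return rng.pareto(alpha, size) + 1.0   # numpy's pareto is the shifted (Lomax) form

# Conditional Monte Carlo: generate n-1 terms, integrate out the largest one.
# By exchangeability, P(S_n > gamma) = n * P(S_n > gamma, X_n is the maximum),
# and the latter equals E[ sf(max(M_{n-1}, gamma - S_{n-1})) ].
X = sample((100_000, n - 1))
S, M = X.sum(axis=1), X.max(axis=1)
est_cmc = np.mean(n * pareto_sf(np.maximum(M, gamma - S)))

# crude Monte Carlo for comparison (needs far more samples for the same accuracy)
Y = sample((1_000_000, n))
est_crude = np.mean(Y.sum(axis=1) > gamma)
```

Note that no likelihood ratio appears anywhere: every variable is drawn from its nominal distribution, and the variance reduction comes purely from replacing the indicator of the rare event by its conditional expectation.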
Chan, J.C.C., Glynn, P.W. & Kroese, D.P. 2011, 'A comparison of cross-entropy and variance minimization strategies', Journal of Applied Probability, vol. 48, no. A, pp. 183-194.
The variance minimization (VM) and cross-entropy (CE) methods are two versatile adaptive importance sampling procedures that have been successfully applied to a wide variety of difficult rare-event estimation problems. We compare these two methods via various examples where the optimal VM and CE importance densities can be obtained analytically. We find that in the cases studied both VM and CE methods prescribe the same importance sampling parameters, suggesting that the criterion of minimizing the CE distance is very close, if not asymptotically identical, to minimizing the variance of the associated importance sampling estimator. © Applied Probability Trust 2011.
Chan, J.C.C. & Kroese, D.P. 2010, 'Efficient estimation of large portfolio loss probabilities in t-copula models', European Journal of Operational Research, vol. 205, no. 2, pp. 361-367.
We consider the problem of accurately measuring the credit risk of a portfolio consisting of loans, bonds and other financial assets. One particular performance measure of interest is the probability of large portfolio losses over a fixed time horizon. We revisit the so-called t-copula that generalizes the popular normal copula to allow for extremal dependence among defaults. By utilizing the asymptotic description of how the rare event occurs, we derive two simple simulation algorithms based on conditional Monte Carlo to estimate the probability that the portfolio incurs large losses under the t-copula. We further show that the less efficient estimator exhibits bounded relative error. An extensive simulation study demonstrates that both estimators outperform existing algorithms. We then discuss a generalization of the t-copula model that allows the multivariate defaults to have an asymmetric distribution. Lastly, we show how the estimators proposed for the t-copula can be modified to estimate the portfolio risk under the skew t-copula model. © 2010 Elsevier B.V. All rights reserved.
Chan, J.C.C. & Jeliazkov, I. 2009, 'Efficient simulation and integrated likelihood estimation in state space models', International Journal of Mathematical Modelling and Numerical Optimisation, vol. 1, no. 1-2, pp. 101-120.
We consider the problem of implementing simple and efficient Markov chain Monte Carlo (MCMC) estimation algorithms for state space models. A conceptually transparent derivation of the posterior distribution of the states is discussed, which also leads to an efficient simulation algorithm that is modular, scalable and widely applicable. We also discuss a simple approach for evaluating the integrated likelihood, defined as the density of the data given the parameters but marginal of the state vector. We show that this high-dimensional integral can be easily evaluated with minimal computational and conceptual difficulty. Two empirical applications in macroeconomics demonstrate that the methods are versatile and computationally undemanding. In one application, involving a time-varying parameter model, we show that the methods allow for efficient handling of large state vectors. In our second application, involving a dynamic factor model, we introduce a new blocking strategy which results in improved MCMC mixing at little cost. The results demonstrate that the framework is simple, flexible and efficient. Copyright © 2009 Inderscience Enterprises Ltd.
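The precision-based approach to simulating the states can be sketched in a local level model; the following Python example (an illustration with made-up variances, not the paper's code) exploits the banded structure of the posterior precision of the states to compute the posterior mean and draw one sample in O(T) time:

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded, solve_banded

rng = np.random.default_rng(2)
T, sig2, om2 = 400, 0.5, 0.1     # illustrative variances (assumed known here)

# simulate local level data: theta_t = theta_{t-1} + eta_t, y_t = theta_t + eps_t
theta_true = np.cumsum(rng.normal(0.0, np.sqrt(om2), T))
y = theta_true + rng.normal(0.0, np.sqrt(sig2), T)

# banded posterior precision of the states: P = H'H/om2 + I/sig2,
# where H is the first-difference matrix (tridiagonal P)
main = np.full(T, 2.0 / om2 + 1.0 / sig2)
main[-1] = 1.0 / om2 + 1.0 / sig2
off = np.full(T - 1, -1.0 / om2)
ab = np.zeros((2, T))
ab[0, 1:] = off          # superdiagonal (upper banded storage)
ab[1, :] = main          # diagonal

U = cholesky_banded(ab)                              # P = U'U, U upper bidiagonal
mean = cho_solve_banded((U, False), y / sig2)        # posterior mean: solve P m = y/sig2
draw = mean + solve_banded((0, 1), U, rng.standard_normal(T))   # N(mean, P^{-1}) draw
```

Because the Cholesky factor of a banded precision matrix is itself banded, both the smoothing step and the simulation step cost O(T) operations, which is what makes this approach competitive with (and often simpler than) Kalman filter based simulation smoothers.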
Chan, J.C.C. & Jeliazkov, I. 2009, 'MCMC estimation of restricted covariance matrices', Journal of Computational and Graphical Statistics, vol. 18, no. 2, pp. 457-480.
This article is motivated by the difficulty of applying standard simulation techniques when identification constraints or theoretical considerations induce covariance restrictions in multivariate models. To deal with this difficulty, we build upon a decomposition of positive definite matrices and show that it leads to straightforward Markov chain Monte Carlo samplers for restricted covariance matrices. We introduce the approach by reviewing results for multivariate Gaussian models without restrictions, where standard conjugate priors on the elements of the decomposition induce the usual Wishart distribution on the precision matrix and vice versa. The unrestricted case provides guidance for constructing efficient Metropolis-Hastings and accept-reject Metropolis-Hastings samplers in more complex settings, and we describe in detail how simulation can be performed under several important constraints. The proposed approach is illustrated in a simulation study and two applications in economics. Supplemental materials for this article (appendixes, data, and computer code) are available online. © 2009 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
Chan, J.C.C. 2005, 'Replication of the results in "Learning about heterogeneity in returns to schooling"', Journal of Applied Econometrics, vol. 20, no. 3, pp. 439-443.
A recent article (Koop and Tobias, 2004) proposes a direct way to characterize the extent of heterogeneity in returns to education. They investigate the adequacy of several competing models and conclude that returns to schooling are heterogeneous and are best modelled as a bivariate normal distribution. The results of this replication paper largely agree with the original findings. Copyright © 2005 John Wiley & Sons, Ltd.
Chan, J.C.C. & Hsiao, C.Y.L. 2014, 'Estimation of Stochastic Volatility Models with Heavy Tails and Serial Dependence' in Bayesian Inference in the Social Sciences, John Wiley & Sons, USA, pp. 155-176.
Financial time series often exhibit properties that depart from the usual assumptions of serial independence and normality. These include volatility clustering, heavy-tailedness and serial dependence. A voluminous literature on different approaches for modeling these empirical regularities has emerged in the last decade. In this chapter we review the estimation of a variety of highly flexible stochastic volatility models, and introduce some efficient algorithms based on recent advances in state space simulation techniques. These estimation methods are illustrated via empirical examples involving precious metal and foreign exchange returns. The corresponding MATLAB code is also provided.
The remainder of the chapter is structured as follows. Section 6.2 first discusses the basic stochastic volatility model and its estimation. In particular, we provide details of the auxiliary mixture sampler and the precision sampler for linear Gaussian state space models. In Section 6.3 we extend the basic stochastic volatility model to allow for moving average errors. We then discuss an efficient estimation method based on fast band matrix routines. Lastly, Section 6.4 considers another extension: instead of the conventional assumption of a Gaussian error distribution, we discuss some heavy-tailed distributions that can be written as scale mixtures of Gaussian distributions. We demonstrate the relevance of these heavy-tailed stochastic volatility models through an empirical example.
Brereton, T.J., Chan, J.C.C. & Kroese, D.P. 2011, 'Fitting mixture importance sampling distributions via improved cross-entropy', Proceedings of the 2011 Winter Simulation Conference (WSC), IEEE, Phoenix, AZ, USA, pp. 422-428.
In some rare-event settings, exponentially twisted distributions perform very badly. One solution to this problem is to use mixture distributions. However, it is difficult to select a good mixture distribution for importance sampling. We here introduce a simple adaptive method for choosing good mixture importance sampling distributions. © 2011 IEEE.
Chan, J.C.C. & Kroese, D.P. 2008, 'Randomized methods for solving the winner determination problem in combinatorial auctions', Proceedings - Winter Simulation Conference, pp. 1344-1349.
Combinatorial auctions, where buyers can bid on bundles of items rather than bidding on them sequentially, often lead to more economically efficient allocations of financial resources. However, the problem of determining the winners once the bids are submitted, the so-called Winner Determination Problem (WDP), is known to be NP-hard. We present two randomized algorithms to solve this combinatorial optimization problem. The first is based on the Cross-Entropy (CE) method, a versatile adaptive algorithm that has been successfully applied to solve various well-known difficult combinatorial optimization problems. The other is a new adaptive simulation approach by Botev and Kroese, which evolved from the CE method and combines the adaptiveness and level-crossing ideas of CE with Markov chain Monte Carlo techniques. The performance of the proposed algorithms is illustrated by various examples. © 2008 IEEE.