
Professor John Geweke


Internationally renowned econometrician John Geweke came to UTS as Distinguished Research Professor in the School of Business in 2009. Professor Geweke is distinguished for his contributions to econometric theory in time series analysis and Bayesian modelling, and for applications in macroeconomics, finance, and microeconomics. He is a Fellow of the Econometric Society and the American Statistical Association. He has been co-editor of the Journal of Econometrics and the Journal of Applied Econometrics, and editor of the Journal of Business and Economic Statistics. His most recent book is Complete and Incomplete Econometric Models, published by Princeton University Press in January 2010. He currently directs the six-investigator ARC-sponsored project, “Massively Parallel Algorithms for Bayesian Inference and Decision Making.”

Awards and Recognition

Fellow of the Econometric Society, since 1982
Fellow of the American Statistical Association, since 1990
Alfred P. Sloan Research Fellow, 1982-1984
H.I. Romnes Faculty Fellow, University of Wisconsin, 1982-1983
Dayton-Hudson Fellowship, 1970-1974
National Merit Scholar, 1966-1970
Member, Phi Beta Kappa and Phi Kappa Phi
Listed in Marquis' Who's Who in America and similar publications

Previous Academic Positions

Harlan McGregor Chair in Economic Theory and Professor of Economics and Statistics, University of Iowa, 1999-2009
Professor of Economics, University of Minnesota, 1990-2001
Director, Institute of Statistics and Decision Sciences, Duke University, 1987-1990
Professor of Statistics and Decision Sciences, Duke University, 1987-1990
William R. Kenan, Jr., Professor of Economics, Duke University, 1986-1990
Professor of Economics, Duke University, 1983-1986
Visiting Professor of Economics, Carnegie-Mellon University, 1982-1983
Visiting Professor of Statistics, Carnegie-Mellon University, 1982-1983
Professor of Economics, University of Wisconsin-Madison, 1982-1983
Associate Professor of Economics, University of Wisconsin-Madison, 1979-1982
Visiting Fellow, Warwick University, 1979
Assistant Professor of Economics, University of Wisconsin-Madison, 1975-1979

Distinguished Visiting Professor, Economics Discipline Group
Associate Member, AAI - Advanced Analytics Institute
Doctor of Philosophy
+61 2 9514 9797


Books

Geweke, J. 2010, Complete and Incomplete Econometric Models, 1, Princeton University Press, Princeton, USA.
View/Download from: UTS OPUS
Econometric models are widely used in the creation and evaluation of economic policy in the public and private sectors. But these models are useful only if they adequately account for the phenomena in question, and they can be quite misleading if they do not. In response, econometricians have developed tests and other checks for model adequacy. All of these methods, however, take as given the specification of the model to be tested. In this book, John Geweke addresses the critical earlier stage of model development, the point at which potential models are inherently incomplete. Summarizing and extending recent advances in Bayesian econometrics, Geweke shows how simple modern simulation methods can complement the creative process of model formulation. These methods, which are accessible to economics PhD students as well as to practicing applied econometricians, streamline the processes of model development and specification checking. Complete with illustrations from a wide variety of applications, this is an important contribution to econometrics that will interest economists and PhD students alike.


Book chapters

Geweke, J., Durham, G. & Xu, H. 2015, 'Bayesian Inference for Logistic Regression Models Using Sequential Posterior Simulation' in Upadhyay, S., Singh, U., Dey, D. & Loganathan, A. (eds), Current Trends in Bayesian Methodology with Applications, CRC Press, USA, pp. 290-310.
View/Download from: UTS OPUS
Durham, G. & Geweke, J. 2014, 'Adaptive Sequential Posterior Simulators for Massively Parallel Computing Environments' in Jeliazkov, I. & Poirier, D. (eds), Bayesian Model Comparison (Advances in Econometrics), Emerald Group Publishing Limited, USA, pp. 1-44.
View/Download from: UTS OPUS or Publisher's site
Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
Geweke, J., Koop, G. & van Dijk, H. 2011, 'Introduction' in Geweke, J., Koop, G. & van Dijk, H. (eds), The Oxford Handbook of Bayesian Econometrics, Oxford University Press, Oxford, pp. 1-8.
View/Download from: UTS OPUS
Bayesian econometric methods have enjoyed an increase in popularity in recent years. Econometricians, empirical economists, and policymakers are increasingly making use of Bayesian methods. This handbook is a single source for researchers and policymakers wanting to learn about Bayesian methods in specialized fields, and for graduate students seeking to make the final step from textbook learning to the research frontier. It contains contributions by leading Bayesians on the latest developments in their specific fields of expertise. The volume provides broad coverage of the application of Bayesian econometrics in the major fields of economics and related disciplines, including macroeconomics, microeconomics, finance, and marketing. It reviews the state of the art in Bayesian econometric methodology, with chapters on posterior simulation and Markov chain Monte Carlo methods, Bayesian nonparametric techniques, and the specialized tools used by Bayesian time series econometricians such as state space models and particle filtering. It also includes chapters on Bayesian principles and methodology.
Geweke, J. 2009, 'The SETAR Model of Tong and Lim and Advances in Computation' in Chan, K.S. (ed), Exploration of a Nonlinear World: An Appreciation of Howell Tong's Contributions to Statistics., World Scientific, Singapore, pp. 85-94.
View/Download from: UTS OPUS
This discussion revisits Tong and Lim's seminal 1980 paper on the SETAR model in the context of advances in computation since that time. Using the Canadian lynx data set from that paper, it compares exact maximum likelihood estimates with those in the original paper. It illustrates the application of Bayesian MCMC methods, developed in the intervening years, to this model and data set. It shows that SETAR is a limiting case of mixture of experts models and studies the application of one variant of those models to the lynx data set. The application is successful, despite the small size of the data set and the complexity of the model. Predictive likelihood ratios favor Tong and Lim's original model.
Geweke, J., Horowitz, J.L. & Pesaran, H. 2008, 'Econometrics' in Durlauf, S.N. & Blume, L.E. (eds), The New Palgrave Dictionary of Economics online, Palgrave Macmillan, Online, pp. 1-32.
View/Download from: UTS OPUS or Publisher's site
As a unified discipline, econometrics is still relatively young and has been transforming and expanding very rapidly. Major advances have taken place in the analysis of cross-sectional data by means of semiparametric and nonparametric techniques. Heterogeneity of economic relations across individuals, firms and industries is increasingly acknowledged and attempts have been made to take it into account either by integrating out its effects or by modelling the sources of heterogeneity when suitable panel data exist. The counterfactual considerations that underlie policy analysis and treatment valuation have been given a more satisfactory foundation. New time-series econometric techniques have been developed and employed extensively in the areas of macroeconometrics and finance. Nonlinear econometric techniques are used increasingly in the analysis of cross-section and time-series observations. Applications of Bayesian techniques to econometric problems have been promoted largely by advances in computer power and computational techniques. The use of Bayesian techniques has in turn provided the investigators with a unifying framework where the tasks of forecasting, decision making, model evaluation and learning can be considered as parts of the same interactive and iterative process, thus providing a basis for 'real time econometrics'.
Keane, M. & Geweke, J. 2006, 'Bayesian Cross-Sectional Analysis of the Conditional Distribution of Earnings of Men in the USA (1967-1996)' in Upadhyay, S.K., Singh, U. & Dey, D.K. (eds), Bayesian Statistics and Its Applications, Anshan Ltd, New Delhi, pp. 160-197.
View/Download from: UTS OPUS
Geweke, J. & Whiteman, C. 2006, 'Bayesian forecasting' in Elliot, G., Granger, C.W.J. & Timmerman, A. (eds), Handbook of Economic Forecasting, Elsevier, The Netherlands, pp. 3-80.
View/Download from: UTS OPUS or Publisher's site
Bayesian forecasting is a natural product of a Bayesian approach to inference. The Bayesian approach in general requires explicit formulation of a model, and conditioning on known quantities, in order to draw inferences about unknown ones. In Bayesian forecasting, one simply takes a subset of the unknown quantities to be future values of some variables of interest. This chapter presents the principles of Bayesian forecasting, and describes recent advances in computational capabilities for applying them that have dramatically expanded the scope of applicability of the Bayesian approach. It describes historical developments and the analytic compromises that were necessary prior to recent developments, the application of the new procedures in a variety of examples, and reports on two long-term Bayesian forecasting exercises.

Journal articles

Bateman, H., Eckert, C., Geweke, J., Louviere, J., Satchell, S. & Thorp, S. 2016, 'Risk Presentation and Portfolio Choice', Review of Finance, vol. 20, no. 1, pp. 201-229.
View/Download from: UTS OPUS or Publisher's site
Efficient investment of personal savings depends on clear risk disclosures. We study the propensity of individuals to violate some implications of expected utility under alternative 'mass-market' descriptions of investment risk, using a discrete choice experiment. We found violations in around 25% of choices, and substantial variation in rates of violation, depending on the mode of risk disclosure and participants' characteristics. When risk is described as the frequency of returns below or above a threshold we observe more violations than for range and probability-based descriptions. Innumerate individuals are more likely to violate expected utility than those with high numeracy. Apart from the very elderly, older individuals are less likely to violate the restrictions. The results highlight the challenges of disclosure regulation.
Bateman, H., Eckert, C., Geweke, J., Louviere, J.J., Thorp, S.J. & Satchell, S. 2012, 'Financial competence and expectations formation: Evidence from Australia', The Economic Record, vol. 88, no. 280, pp. 39-63.
View/Download from: UTS OPUS or Publisher's site
We study the financial competence of Australian retirement savers using self-assessed and quantified measures. Responses to financial literacy questions show large variation and compare poorly with some international surveys. Basic and sophisticated financial literacy vary significantly with most demographics, self-assessed financial competence, income, superannuation accumulation and net worth. General numeracy scores are largely constant across gender, age, higher education and income. Financial competence also significantly affects expectations of stock market performance. Using a discrete choice model, we show that individuals with a higher understanding of risk, diversification and financial assets are more likely to assign a probability to future financial crises rather than expressing uncertainty.
Geweke, J., Koop, G. & Paap, R. 2012, 'Introduction for the annals issue of the Journal of Econometrics on "Bayesian Models, Methods and Applications"', Journal of Econometrics, vol. 171, no. 2, pp. 99-100.
View/Download from: UTS OPUS or Publisher's site
This Annals issue of the Journal of Econometrics grew out of the European Seminar on Bayesian Econometrics (ESOBE), which was held at Erasmus University, Rotterdam on November 5-6, 2010. This conference was important for two reasons. First, it inaugurated ESOBE, which has become a successful annual conference bringing European and international Bayesians together. Second, it celebrated the retirement of Herman van Dijk after a long and successful career in Bayesian econometrics.
Geweke, J. & Amisano, G. 2012, 'Prediction With Misspecified Models', American Economic Review, vol. 102, no. 3, pp. 482-486.
View/Download from: UTS OPUS or Publisher's site
Many decision-makers in the public and private sectors routinely consult the implications of formal economic and statistical models in their work. Especially in large organizations and for important decisions, there are often competing models. Of course no model under consideration is a literal representation of reality for the purposes at hand (more succinctly, no model is 'true') and different models focus on different aspects of the relevant environment. This fact can often be supported by formal econometric tests concluding that the models at hand are, indeed, misspecified in various dimensions.
Geweke, J. 2012, 'Nonparametric Bayesian modelling of monotone preferences for discrete choice experiments', Journal of Econometrics, vol. 171, no. 2, pp. 185-204.
View/Download from: UTS OPUS or Publisher's site
Discrete choice experiments are widely used to learn about the distribution of individual preferences for product attributes. Such experiments are often designed and conducted deliberately for the purpose of designing new products. There is a long-standing literature on nonparametric and Bayesian modelling of preferences for the study of consumer choice when there is a market for each product, but this work does not apply when such markets fail to exist as is the case with most product attributes. This paper takes up the common case in which attributes can be quantified and preferences over these attributes are monotone. It shows that monotonicity is the only shape constraint appropriate for a utility function in these circumstances. The paper models components of utility using a Dirichlet prior distribution and demonstrates that all monotone nondecreasing utility functions are supported by the prior. It develops a Markov chain Monte Carlo algorithm for posterior simulation that is reliable and practical given the number of attributes, choices and sample sizes characteristic of discrete choice experiments. The paper uses the algorithm to demonstrate the flexibility of the model in capturing heterogeneous preferences and applies it to a discrete choice experiment that elicits preferences for different auto insurance policies.
Geweke, J. & Amisano, G. 2011, 'Hierarchical Markov normal mixture models with applications to financial asset returns', Journal of Applied Econometrics, vol. 26, no. 1, pp. 1-29.
View/Download from: UTS OPUS or Publisher's site
Motivated by the common problem of constructing predictive distributions for daily asset returns over horizons of one to several trading days, this article introduces a new model for time series. This model is a generalization of the Markov normal mixture model in which the mixture components are themselves normal mixtures, and it is a specific case of an artificial neural network model with two hidden layers. The article characterizes the implications of the model for time series in two ways. First, it derives the restrictions placed on the autocovariance function and linear representation of integer powers of the time series in terms of the number of components in the mixture and the roots of the Markov process. Second, it uses the prior predictive distribution of the model to study the implications of the model for some interesting functions of asset returns. The article uses the model to construct predictive distributions of daily S&P 500 returns 1971-2005, US dollar-UK pound returns 1972-1998, and one- and ten-year maturity bonds 1987-2006. It compares the performance of the model for these returns with ARCH and stochastic volatility models using the predictive likelihood function. The model's performance is about the same as its competitors for the bond returns, better than its competitors for the S&P 500 returns, and much better than its competitors for the dollar-pound returns. In- and out-of-sample validation exercises with predictive distributions identify some remaining deficiencies in the model and suggest potential improvements. The article concludes by using the model to form predictive distributions of one- to ten-day returns during volatile episodes for the S&P 500, dollar-pound and bond return series.
Geweke, J. & Jiang, Y. 2011, 'Inference and prediction in a multiple-structural-break model', Journal of Econometrics, vol. 163, no. 2, pp. 172-185.
View/Download from: UTS OPUS or Publisher's site
This paper develops a new Bayesian approach to structural break modeling, focusing on the modeling of in-sample structural breaks and on forecasting time series while allowing for out-of-sample breaks. The model has several desirable features.
Geweke, J. & Amisano, G. 2011, 'Optimal prediction pools', Journal of Econometrics, vol. 164, no. 1, pp. 130-141.
View/Download from: UTS OPUS or Publisher's site
We consider the properties of weighted linear combinations of prediction models, or linear pools, evaluated using the log predictive scoring rule. Although exactly one model has limiting posterior probability, an optimal linear combination typically includes several models with positive weights. We derive several interesting results: for example, a model with positive weight in a pool may have zero weight if some other models are deleted from that pool. The results are illustrated using S&P 500 returns with six prediction models. In this example models that are clearly inferior by the usual scoring criteria have positive weights in optimal linear pools.
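The optimization the abstract describes can be sketched numerically. Below is a minimal illustration, not the paper's S&P 500 application: two deliberately misspecified normal prediction models are pooled for heavy-tailed simulated data, and the pool weight is chosen to maximize the log predictive score. All model and data choices here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, sd):
    # density of N(0, sd^2)
    return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Heavy-tailed "data"; both candidate models below are deliberately misspecified
y = rng.standard_t(df=5, size=2000)

p1 = norm_pdf(y, 1.0)   # model 1 predictive density: N(0, 1)
p2 = norm_pdf(y, 2.0)   # model 2 predictive density: N(0, 4)

# Log predictive score of the pool w*p1 + (1-w)*p2, maximized over w in [0, 1]
w_grid = np.linspace(0.0, 1.0, 1001)
scores = [np.log(w * p1 + (1.0 - w) * p2).sum() for w in w_grid]
w_star = w_grid[int(np.argmax(scores))]
print(f"optimal weight on model 1: {w_star:.3f}")
```

In examples of this kind the optimum is typically interior: both misspecified models retain positive weight, even though posterior probability would concentrate on a single model in the limit.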
Geweke, J. & Amisano, G. 2010, 'Comparing and Evaluating Bayesian Predictive Distributions of Asset Returns', International Journal of Forecasting, vol. 26, no. 2, pp. 216-230.
View/Download from: UTS OPUS or Publisher's site
Bayesian inference in a time series model provides exact out-of-sample predictive distributions that fully and coherently incorporate parameter uncertainty. This study compares and evaluates Bayesian predictive distributions from alternative models.
Geweke, J. 2010, 'Comment', International Journal of Forecasting, vol. 26, no. 2, pp. 435-438.
View/Download from: UTS OPUS or Publisher's site
The article by Zellner and Ando proposes methods for coping with the excess kurtosis that is often observed in disturbances in applications of the seemingly unrelated regressions (SUR) model. This is an important topic.
Ackerberg, D., Geweke, J. & Hahn, J. 2009, 'Comments on 'Convergence Properties of the Likelihood of Computed Dynamic Models'', Econometrica, vol. 77, no. 6, pp. 2009-2017.
View/Download from: UTS OPUS or Publisher's site
We show by counterexample that Proposition 2 in Fernández-Villaverde, Rubio-Ramírez, and Santos (Econometrica (2006), 74, 93-119) is false. We also show that even if their Proposition 2 were corrected, it would be irrelevant for parameter estimates. As a more constructive contribution, we consider the effects of approximation error on parameter estimation, and conclude that second order approximation errors in the policy function have at most second order effects on parameter estimates.
Geweke, J. & Keane, M. 2007, 'Smoothly mixing regressions', Journal of Econometrics, vol. 138, no. 1, pp. 252-290.
View/Download from: UTS OPUS or Publisher's site
This paper extends the conventional Bayesian mixture of normals model by permitting state probabilities to depend on observed covariates. The dependence is captured by a simple multinomial probit model. A conventional and rapidly mixing MCMC algorithm provides access to the posterior distribution at modest computational cost. This model is competitive with existing econometric models, as documented in the paper's illustrations. The first illustration studies quantiles of the distribution of earnings of men conditional on age and education, and shows that smoothly mixing regressions are an attractive alternative to non-Bayesian quantile regression. The second illustration models serial dependence in the S&P 500 return, and shows that the model compares favorably with ARCH models using out of sample likelihood criteria.
Geweke, J. 2007, 'Interpretation and Inference in Mixture Models: Simple MCMC Works', Computational Statistics and Data Analysis, vol. 51, no. 7, pp. 3529-3550.
View/Download from: UTS OPUS or Publisher's site
The mixture model likelihood function is invariant with respect to permutation of the components of the mixture. If functions of interest are permutation sensitive, as in classification applications, then interpretation of the likelihood function requires valid inequality constraints and a very large sample may be required to resolve ambiguities. If functions of interest are permutation invariant, as in prediction applications, then there are no such problems of interpretation. Contrary to assessments in some recent publications, simple and widely used Markov chain Monte Carlo (MCMC) algorithms with data augmentation reliably recover the entire posterior distribution.
Geweke, J. 2007, 'Bayesian model comparison and validation', American Economic Review, vol. 97, no. 2, pp. 60-64.
View/Download from: UTS OPUS or Publisher's site
Bayesian econometrics provides a tidy theory and practical methods of comparing and combining several alternative, completely specified models for a common data set. It is always possible that none of the specified models describe important aspects of the data well. The investigation of this possibility, a process known as model validation or model specification checking, is an important part of applied econometric work. Bayesian theory and practice for model validation are less well developed. A well-established Bayesian literature argues that non-Bayesian methods are essential in model validation. This line of thought persists in Bayesian econometrics as well; the paper reviews these methods. The paper proposes an alternative, fully Bayesian method of model validation based on the concept of incomplete models, and argues that this method is also strategically advantageous in applied Bayesian econometrics.
Geweke, J., Groenen, P.J.E., Paap, R. & van Dijk, H.K. 2007, 'Computational techniques for applied econometric analysis of macroeconomic and financial processes', Computational Statistics & Data Analysis, vol. 51, no. 7, pp. 3506-3508.
View/Download from: Publisher's site
Abrantes-Metz, R., Froeb, L., Geweke, J. & Taylor, C. 2006, 'A Variance Screen for Collusion', International Journal of Industrial Organization, vol. 24, no. 3, pp. 467-486.
View/Download from: UTS OPUS or Publisher's site
In this paper, we examine price movements over time around the collapse of a bid-rigging conspiracy. While the mean decreased by sixteen percent, the standard deviations increased by over two hundred percent. We hypothesize that conspiracies in other industries would exhibit similar characteristics and search for "pockets" of low price variation as indicators of collusion in the retail gasoline industry in Louisville. We observe no such areas around Louisville in 1996-2002.
Geweke, J. 2004, 'Getting it Right: Joint Distribution Tests of Posterior Simulators', Journal of the American Statistical Association, vol. 99, no. 467, pp. 799-804.
View/Download from: UTS OPUS or Publisher's site
Analytical or coding errors in posterior simulators can produce reasonable but incorrect approximations of posterior moments. This article develops simple tests of posterior simulators that detect both kinds of errors, and uses them to detect and correct errors in two previously published papers. The tests exploit the fact that a Bayesian model specifies the joint distribution of observables (data) and unobservables (parameters). There are two joint distribution simulators. The marginal conditional simulator draws unobservables from the prior and then observables conditional on unobservables. The successive-conditional simulator alternates between the posterior simulator and an observables simulator. Formal comparison of moment approximations of the two simulators reveals existing analytical or coding errors in the posterior simulator.
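The two joint distribution simulators the abstract describes can be sketched for a toy conjugate model. The single normal location parameter used here is an illustrative assumption, not one of the paper's examples; because the posterior step is exact, moment approximations from the two simulators agree, which is exactly what the test checks.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 200_000

# Toy model: mu ~ N(0,1), y | mu ~ N(mu,1); posterior mu | y ~ N(y/2, 1/2)

# Marginal-conditional simulator: unobservables from the prior, then data
mu_mc = rng.normal(0.0, 1.0, M)
y_mc = rng.normal(mu_mc, 1.0)

# Successive-conditional simulator: alternate the posterior simulator
# (exact here; in practice an MCMC step) with the observables simulator
mu_sc = np.empty(M)
y_sc = np.empty(M)
y = rng.normal(0.0, 1.0)
for i in range(M):
    mu = rng.normal(y / 2.0, np.sqrt(0.5))  # posterior step
    y = rng.normal(mu, 1.0)                 # observables step
    mu_sc[i], y_sc[i] = mu, y

# With a correct posterior step, moments of the two joint simulators agree;
# a bug in the posterior step shows up as a systematic discrepancy.
print(f"E[mu*y]: {(mu_mc * y_mc).mean():.3f} vs {(mu_sc * y_sc).mean():.3f}")
print(f"E[mu^2]: {(mu_mc ** 2).mean():.3f} vs {(mu_sc ** 2).mean():.3f}")
```

Deliberately corrupting the posterior step (say, drawing from N(y/2, 1) instead) makes the discrepancy in these moments obvious, which is the diagnostic idea.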
Geweke, J. & Tanizaki, H. 2003, 'Note on the Sampling Distribution for the Metropolis-Hastings Algorithm', Communications in Statistics - Theory and Methods, vol. 32, pp. 775-789.
View/Download from: UTS OPUS or Publisher's site
The Metropolis-Hastings algorithm has been important in the recent development of Bayes methods. This algorithm generates random draws from a target distribution utilizing a sampling (or proposal) distribution. This article compares the properties of three sampling distributions - the independence chain, the random walk chain, and the Taylored chain suggested by Geweke and Tanizaki (Geweke, J., Tanizaki, H. (1999). On Markov Chain Monte-Carlo methods for nonlinear and non-Gaussian state-space models. Communications in Statistics, Simulation and Computation 28(4):867-894; Geweke, J., Tanizaki, H. (2001). Bayesian estimation of state-space model using the Metropolis-Hastings algorithm within Gibbs sampling. Computational Statistics and Data Analysis 37(2):151-170).
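Two of the sampling distributions compared in the article, the independence chain and the random walk chain, can be illustrated on a standard normal target. The target and the proposal scales below are arbitrary choices for illustration, not the article's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    return -0.5 * x * x  # standard normal target, up to an additive constant

def mh_chain(n, proposal):
    x, draws, accepted = 0.0, np.empty(n), 0
    for i in range(n):
        x_new, log_q_ratio = proposal(x)
        if np.log(rng.uniform()) < log_target(x_new) - log_target(x) + log_q_ratio:
            x, accepted = x_new, accepted + 1
        draws[i] = x
    return draws, accepted / n

# Independence chain: proposal N(0, 2^2), ignoring the current state; the
# acceptance ratio needs the correction log q(x) - log q(x')
def independence(x):
    x_new = rng.normal(0.0, 2.0)
    return x_new, (x_new**2 - x**2) / (2.0 * 2.0**2)

# Random walk chain: symmetric proposal, so no correction term
def random_walk(x):
    return x + rng.normal(0.0, 1.0), 0.0

for name, prop in (("independence", independence), ("random walk", random_walk)):
    draws, rate = mh_chain(50_000, prop)
    print(f"{name:12s} mean={draws.mean():+.3f} sd={draws.std():.3f} accept={rate:.2f}")
```

Both chains recover the target's mean and standard deviation; they differ in acceptance rates and autocorrelation, which is the dimension along which the article compares proposal choices.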
Geweke, J., Gowrisankaran, G. & Town, R. 2003, 'Bayesian Inference for Hospital Quality in a Selection Model', Econometrica, vol. 71, pp. 1215-1238.
View/Download from: UTS OPUS or Publisher's site
This paper develops new econometric methods to infer hospital quality in a model with discrete dependent variables and nonrandom selection. Mortality rates in patient discharge records are widely used to infer hospital quality. However, hospital admission is not random and some hospitals may attract patients with greater unobserved severity of illness than others. In this situation the assumption of random admission leads to spurious inference about hospital quality. This study controls for hospital selection using a model in which distance between the patient's residence and alternative hospitals are key exogenous variables. Bayesian inference in this model is feasible using a Markov chain Monte Carlo posterior simulator, and attaches posterior probabilities to quality comparisons between individual hospitals and groups of hospitals. The study uses data on 74,848 Medicare patients admitted to 114 hospitals in Los Angeles County from 1989 through 1992 with a diagnosis of pneumonia. It finds the smallest and largest hospitals to be of the highest quality. There is strong evidence of dependence between the unobserved severity of illness and the assignment of patients to hospitals, whereby patients with a high unobserved severity of illness are disproportionately admitted to high quality hospitals. Consequently a conventional probit model leads to inferences about quality that are markedly different from those in this study's selection model.
Geweke, J. 2002, 'Commentary: Econometric issues in using the AHEAD Panel', Journal of Econometrics, vol. 112, no. 1, pp. 115-120.
View/Download from: UTS OPUS or Publisher's site
This study provides an illuminating perspective on the relation between health and socio-economic status. It is notable in meeting, head on, various technical but critical issues that arise in using the AHEAD panel to address issues of causation between health and socio-economic status (SES). This panel provides multiple measures of both health and SES, and there is no prior consensus reduction of these many dimensions. Household wealth is the candidate summary measure of economic status, but as users of self-reported wealth know and the authors lucidly demonstrate, severe measurement errors raise a host of methodological problems of their own. These comments focus on the way the authors have addressed these and some of the other technical issues that have to be confronted in one way or another in order to address the central issues.
Geweke, J. 2001, 'Bayesian Econometrics and Forecasting', Journal of Econometrics, vol. 100, no. 1, pp. 11-15.
View/Download from: UTS OPUS or Publisher's site
Contemporary Bayesian forecasting methods draw on foundations in subjective probability and preferences laid down in the mid-twentieth century, and utilize numerical methods developed since that time in their implementation. These methods unify the tasks of forecasting and model evaluation. They also provide tractable solutions for problems that prove difficult when approached using non-Bayesian methods. These advantages arise from the fact that the conditioning in Bayesian probability forecasting is the same as the conditioning in the underlying decision problems.
Geweke, J. 2001, 'Bayesian inference and posterior simulators', Canadian Journal of Agricultural Economics, vol. 49, no. 3, pp. 313-325.
View/Download from: UTS OPUS or Publisher's site
Recent advances in simulation methods have made possible the systematic application of Bayesian methods to support decision making with econometric models. This paper outlines the key elements of Bayesian investigation, and the simulation methods applied to bring them to bear in application.
Geweke, J. & Tanizaki, H. 2001, 'Bayesian Estimation of State-Space Models Using the Metropolis-Hastings Algorithm within Gibbs Sampling', Computational Statistics and Data Analysis, vol. 37, no. 2, pp. 151-170.
View/Download from: UTS OPUS
In this paper, an attempt is made to show a general solution to nonlinear and/or non-Gaussian state-space modeling in a Bayesian framework, which corresponds to an extension of Carlin et al. (J. Amer. Statist. Assoc. 87(418) (1992) 493-500) and Carter and Kohn (Biometrika 81(3) (1994) 541-553; Biometrika 83(3) (1996) 589-601). Using the Gibbs sampler and the Metropolis-Hastings algorithm, an asymptotically exact estimate of the smoothing distribution is obtained.
Geweke, J. 2001, 'A Note on Some Limitations of CRRA Utility', Economics Letters, vol. 71, no. 3, pp. 341-345.
View/Download from: UTS OPUS
Abstract: In a standard environment for choice under uncertainty with constant relative risk aversion (CRRA), the existence of expected utility is fragile with respect to changes in the distributions of random variables, changes in prior information, or the assumption of rational expectations.
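The fragility the note describes is easy to reproduce numerically. In this sketch (illustrative parameter values, not taken from the paper), the CRRA moment E[c^(1-gamma)] is finite when log-consumption is normal, but switching the tail to a Student-t distribution makes the moment infinite, so the Monte Carlo average is dominated by a few extreme draws:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 4.0            # CRRA coefficient (illustrative value)
power = 1.0 - gamma    # utility is c**power / power

# Case 1: log-consumption normal -> E[c**(1-gamma)] is the finite
# lognormal moment exp(power*mu + 0.5*(power*sigma)**2).
mu, sigma = 0.0, 0.5
z = rng.normal(mu, sigma, 100_000)
mc = np.exp(power * z).mean()
analytic = np.exp(power * mu + 0.5 * (power * sigma) ** 2)

# Case 2: log-consumption Student-t(3) -- a seemingly small change in
# the tails.  The moment generating function of the t distribution does
# not exist, so E[c**(1-gamma)] is infinite: a handful of draws dwarf
# the rest and the sample average never settles down.
t = rng.standard_t(3, 100_000)
heavy = np.exp(power * t)
```

The normal case matches its analytic value closely, while in the t case the largest draw exceeds the typical draw by many orders of magnitude, which is the fragility the note points to.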
Geweke, J. 1999, 'Using Simulation Methods for Bayesian Econometric Models: Inference, Development and Communication', Econometric Reviews, vol. 18, no. 1, pp. 1-73.
View/Download from: UTS OPUS
Abstract: This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models.
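As a minimal illustration of the posterior simulation the survey describes (the paper's own examples use its accompanying software; the model, priors, and values below are illustrative choices, not the paper's), here is a two-block Gibbs sampler for the mean and precision of normal data:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(3.0, 2.0, 200)          # synthetic data
n, ybar = y.size, y.mean()

# Illustrative priors: mu ~ N(m0, v0), precision h ~ Gamma(a0, rate b0)
m0, v0 = 0.0, 100.0
a0, b0 = 2.0, 2.0

mu, h = 0.0, 1.0
keep_mu, keep_h = [], []
for it in range(5000):
    # mu | h, y is normal (conjugate update)
    v1 = 1.0 / (1.0 / v0 + n * h)
    m1 = v1 * (m0 / v0 + h * n * ybar)
    mu = rng.normal(m1, np.sqrt(v1))
    # h | mu, y is gamma (conjugate update)
    a1 = a0 + 0.5 * n
    b1 = b0 + 0.5 * np.sum((y - mu) ** 2)
    h = rng.gamma(a1, 1.0 / b1)
    if it >= 1000:                      # discard burn-in
        keep_mu.append(mu)
        keep_h.append(h)

post_mu = np.mean(keep_mu)             # posterior mean of mu
post_sd = 1.0 / np.sqrt(np.mean(keep_h))  # rough posterior scale
```

The retained draws are a sample from the joint posterior; any function of interest (quantiles, predictive probabilities) is estimated by averaging over them, which is the communication device the paper emphasizes.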
Geweke, J. & Zhou, G. 1996, 'Measuring the Pricing Error of the Arbitrage Pricing Theory', Review of Financial Studies, vol. 9, no. 2, pp. 557-587.
View/Download from: UTS OPUS
Abstract: This article provides an exact Bayesian framework for analyzing the arbitrage pricing theory (APT). Based on the Gibbs sampler, we show how to obtain the exact posterior distributions for functions of interest in the factor model. In particular, we propose a measure of the APT pricing deviations and obtain its exact posterior distribution. Using monthly portfolio returns grouped by industry and market capitalization, we find that there is little improvement in reducing the pricing errors by including more factors beyond the first one.
Geweke, J. & Runkle, D. 1995, 'A Fine Time for Monetary Policy?', Federal Reserve Bank of Minneapolis Quarterly Review, vol. 19, no. 1, pp. 18-31.
View/Download from: UTS OPUS
Almost everyone would agree--even we in the Federal Reserve System--that monetary policy can be improved. But improving it requires accurate empirical descriptions of the current policy and the relationship between that policy and the economic variables policymakers care about. With those descriptions, we could, conceivably, predict how economic outcomes would change under alternative policies and hence find policies that lead to better economic outcomes. The first requirement of this policymaking problem is policy identification, and it is the focus of this study. Policy identification entails a specification of the instrument the Federal Reserve controls and a description of how that instrument is set based on information available when a policy decision is made. Because policy identification is a crucial step in the search for improved monetary policy, it has received much attention in the literature.
Geweke, J., Keane, M. & Runkle, D. 1994, 'Alternative Computational Approaches to Statistical Inference In The Multinomial Probit Model', Review Of Economics And Statistics, vol. 76, no. 4, pp. 609-632.
View/Download from: UTS OPUS or Publisher's site
This research compares several approaches to inference in the multinomial probit model, based on two Monte Carlo experiments for a seven choice model. The methods compared are the simulated maximum likelihood estimator using the GHK recursive probability simulator.
Matchar, D., Simel, D., Geweke, J. & Feussner, J. 1990, 'A Bayesian Method for Evaluating Medical Test Operating Characteristics When Some Patients' Conditions Fail to be Diagnosed by the Reference Standard', Medical Decision Making, vol. 10, no. 2, pp. 102-111.
View/Download from: Publisher's site
Abstract: The evaluation of a diagnostic test when the reference standard fails to establish a diagnosis in some patients is a common and difficult analytical problem. Conventional operating characteristics, derived from a 2 x 2 matrix, require that tests have only positive or negative results, and that disease status be designated definitively as present or absent. Results can be displayed in a 2 x 3 matrix, with an additional column for undiagnosed patients, when it is not possible always to ascertain the disease status definitively. The authors approach this problem using a Bayesian method for evaluating the 2 x 3 matrix in which test operating characteristics are described by a joint probability density function. They show that one can derive this joint probability density function of sensitivity and specificity empirically by applying a sampling algorithm. The three-dimensional histogram resulting from this sampling procedure approximates the true joint probability density function for sensitivity and specificity. Using a clinical example, the authors illustrate the method and demonstrate that the joint probability density function for sensitivity and specificity can be influenced by assumptions used to interpret test results in undiagnosed patients. This Bayesian method represents a flexible and practical solution to the problem of evaluating test sensitivity and specificity when the study group includes patients whose disease could not be diagnosed by the reference standard. Keywords: Bayesian analysis; test operating characteristics; probability density functions. (Med Decis Making 1990;10:102-111)
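The sampling idea can be sketched briefly (hypothetical counts and a deliberately simple uniform-prior allocation of the undiagnosed column, not the authors' exact algorithm): each pass allocates the undiagnosed patients to a disease status and then draws sensitivity and specificity from the resulting Beta posteriors, so the scatter of draws approximates their joint density.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 2 x 3 counts (hypothetical, not from the paper):
# rows: test positive / negative; columns: diseased / not diseased,
# plus a column of patients the reference standard left undiagnosed.
tp, fp, pos_und = 45, 10, 8
fn, tn, neg_und = 5, 40, 12

draws = 20_000
sens = np.empty(draws)
spec = np.empty(draws)
for i in range(draws):
    # Assumption: each undiagnosed patient is diseased with a probability
    # drawn from a uniform prior -- one simple way to propagate the
    # uncertainty the reference standard leaves unresolved.
    p = rng.uniform()
    d_pos = rng.binomial(pos_und, p)   # diseased among undiagnosed positives
    d_neg = rng.binomial(neg_und, p)   # diseased among undiagnosed negatives
    # Beta draws = posterior of sensitivity/specificity under flat priors
    sens[i] = rng.beta(tp + d_pos + 1, fn + d_neg + 1)
    spec[i] = rng.beta(tn + (neg_und - d_neg) + 1, fp + (pos_und - d_pos) + 1)
# A 2-D histogram of (sens, spec) approximates their joint posterior.
```

Changing the allocation assumption for the undiagnosed column shifts the joint density, which is exactly the sensitivity to interpretive assumptions the abstract reports.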
Geweke, J. 1988, 'Antithetic Acceleration of Monte Carlo Integration in Bayesian Inference', Journal of Econometrics, vol. 38, no. 1-2, pp. 73-90.
View/Download from: UTS OPUS or Publisher's site
It is proposed to sample antithetically rather than randomly from the posterior density in Bayesian inference using Monte Carlo integration. Conditions are established under which the number of replications required with antithetic sampling relative to the number required with random sampling is inversely proportional to sample size, as sample size increases. The result is illustrated in an experiment using a bivariate vector autoregression.
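The principle is easy to demonstrate outside the posterior-sampling setting of the paper: for a monotone integrand, pairing each draw with its reflection produces negatively correlated replications and hence a smaller variance per replication. A generic sketch (illustrative target, not the paper's bivariate autoregression experiment):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pairs = 100_000

# Target: E[exp(Z)], Z ~ N(0, 1); the true value is exp(0.5).
z1 = rng.normal(size=n_pairs)
z2 = rng.normal(size=n_pairs)

# Plain Monte Carlo: average two independent draws per replication.
plain = 0.5 * (np.exp(z1) + np.exp(z2))
# Antithetic sampling: pair each draw z with its reflection -z.
anti = 0.5 * (np.exp(z1) + np.exp(-z1))

est_plain, est_anti = plain.mean(), anti.mean()
var_plain, var_anti = plain.var(), anti.var()
# Because exp is monotone, exp(z) and exp(-z) are negatively
# correlated, so the antithetic replications have smaller variance.
```

Both estimators are unbiased for exp(0.5), but the antithetic one needs fewer replications for the same accuracy; the paper's stronger result is that in Bayesian posterior sampling the relative saving grows with sample size.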
Geweke, J. & Weisbrod, B. 1982, 'Clinical Evaluation vs. Economic Evaluation: The Case of a New Drug', Medical Care, vol. 20, pp. 821-830.
View/Download from: Publisher's site
Abstract: To economically evaluate a new drug or other medical innovation, one must assess changes in both costs and benefits. Safety and efficacy matter, but so do resource costs and social benefits. This paper evaluates the effects on expenditures of the recent introduction of cimetidine, a drug used in the prevention and treatment of duodenal ulcers. This evaluation is of interest in its own right and also as a "guide" for studying similar effects of other innovations. State Medicaid records are used to test the effects on hospitalization and aggregate medical care expenditures of this new medical innovation. After controlling to the extent possible for potential selection bias, we find that: 1) usage of cimetidine is associated with a lower level of medical care expenditures and fewer days of hospitalization per patient for those duodenal ulcer patients who had zero health care expenditures and zero days of hospitalization during the presample period; an annual cost saving of some $320.00 (20 per cent) per patient is indicated. Further analysis disclosed, however, that this saving was lower for patients with somewhat higher levels of health care expenditures and hospitalization in the presample period, and to some extent was reversed for the patients whose prior year's medical care expenditures and hospitalization were highest.
Geweke, J. & Singleton, K. 1981, 'Latent Variable Models for Time Series: A Frequency Domain Approach with an Application to the Permanent Income Hypothesis', Journal of Econometrics, vol. 17, no. 3, pp. 287-304.
View/Download from: UTS OPUS or Publisher's site
Abstract: The theory of estimation and inference in a very general class of latent variable models for time series is developed by showing that the distribution theory for the finite Fourier transform of the observable variables in latent variable models for time series is isomorphic to that for the observable variables themselves in classical latent variable models. This implies that analytic work on classical latent variable models can be adapted to latent variable models for time series, an implication which is illustrated here in the context of a general canonical form. To provide an empirical example a latent variable model for permanent income is developed, its parameters are shown to be identified, and a variety of restrictions on these parameters implied by the permanent income hypothesis are tested.
Geweke, J. 1981, 'The Approximate Slopes of Econometric Tests', Econometrica, vol. 49, no. 6, pp. 1427-1442.
View/Download from: UTS OPUS or Publisher's site
Abstract: In this paper the concept of approximate slope, introduced by R. R. Bahadur, is used to make asymptotic global power comparisons of econometric tests. The approximate slope of a test is the rate at which the logarithm of the asymptotic marginal significance level of the test decreases as sample size increases, under a given alternative. A test with greater approximate slope may therefore be expected to reject the null hypothesis more frequently under that alternative than one with smaller approximate slope. Two theorems, which facilitate the computation and interpretation of the approximate slopes of most econometric tests, are established. These results are used to undertake some illustrative comparisons. Sampling experiments and an empirical illustration suggest that the comparison of approximate slopes may provide an adequate basis for evaluating the actual performance of alternative tests of the same hypothesis.
Geweke, J. & Meese, R. 1981, 'Estimating Regression Models of Finite but Unknown Order', International Economic Review, vol. 22, no. 1, pp. 54-70.
View/Download from: UTS OPUS
Examines problems associated with the estimation of the normal linear regression model of finite but unknown order using a sequence of nested alternatives. Estimation criteria for model selection; Derivation of numerical bounds on the finite sample distribution; Relation of the proposed estimation criterion functions to other estimation criterion functions.
Geweke, J. & Singleton, K. 1981, 'Maximum Likelihood 'Confirmatory' Factor Analysis of Economic Time Series', International Economic Review, vol. 22, no. 1, pp. 37-54.
View/Download from: UTS OPUS or Publisher's site
Explains the theory of identification, estimation and inference in the dynamic confirmatory factor model for economic time series. Derivation of the frequency domain representation of the model; Illustration of the nature of the identification problem for the dynamic confirmatory model; Dynamic confirmatory model of the business cycle motivated by Lucas' theory of aggregate activity.
Geweke, J. 1981, 'A Comparison of Tests of the Independence of Two Covariance Stationary Time Series', Journal of the American Statistical Association, vol. 76, no. 374, pp. 363-373.
View/Download from: UTS OPUS or Publisher's site
Abstract: The approximate slopes of several tests of the independence of two covariance stationary time series are derived and compared. It is shown that the approximate slopes of regression tests are at least as great as those based on the residuals of univariate ARIMA models, and that there are cases in which the former are arbitrarily great while the latter are arbitrarily small. These analytical findings are supported by a Monte Carlo study that shows that in samples of size 100 and 250 the asymptotic distribution theory under the null hypothesis is adequate for all tests, but under alternatives to the null hypothesis the rate of Type II error for the test based on ARIMA model residuals is often more than double that of the regression tests.
Geweke, J. & Singleton, K. 1980, 'Interpreting the Likelihood Ratio Statistic in Factor Models When Sample Size is Small', Journal of the American Statistical Association, vol. 75, no. 369, pp. 133-137.
View/Download from: UTS OPUS or Publisher's site
Abstract: The use of the likelihood ratio statistic in testing the goodness of fit of the exploratory factor model has no formal justification when, as is often the case in practice, the usual regularity conditions are not met. In a Monte Carlo experiment it is found that the asymptotic theory seems to be appropriate when the regularity conditions obtain and sample size is at least 30. When the regularity conditions are not satisfied, the asymptotic theory seems to be misleading in all sample sizes considered.
Geweke, J. 1978, 'Testing the Exogeneity Specification in the Complete Dynamic Simultaneous Equation Model', Journal of Econometrics, vol. 7, no. 2, pp. 163-185.
View/Download from: UTS OPUS
Abstract: It is shown that in the complete dynamic simultaneous equation model exogenous variables cause endogenous variables in the sense of Granger (1969) and satisfy the criterion of econometric exogeneity discussed by Sims (1977a), but that the stationarity assumptions invoked by Granger and Sims are not necessary for this implication. Inference procedures for testing each implication are presented and a new joint test of both implications is derived. Detailed attention is given to estimation and testing when the error vector of the final form of the complete dynamic simultaneous equation model is both singular and serially correlated. The theoretical points of the paper are illustrated by testing the exogeneity specification in a small macroeconometric model.
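A Granger-causality regression of the kind the paper's tests build on can be sketched as follows (simulated bivariate data with illustrative coefficients, not the paper's macroeconometric model): the F statistic compares a regression of y on its own lag with one that also includes lagged x.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated system in which x Granger-causes y.
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

# Restricted model: y on its own lag.  Unrestricted: add lagged x.
Y = y[1:]
X_r = np.column_stack([np.ones(T - 1), y[:-1]])
X_u = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])

def ssr(X, Y):
    # OLS sum of squared residuals
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    e = Y - X @ beta
    return e @ e

ssr_r, ssr_u = ssr(X_r, Y), ssr(X_u, Y)
q = 1                                   # number of restrictions
F = ((ssr_r - ssr_u) / q) / (ssr_u / (len(Y) - X_u.shape[1]))
# A large F rejects the null that x does not Granger-cause y.
```

With more lags of each variable, the same restricted-versus-unrestricted comparison gives the multi-lag causality tests used in practice.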
Geweke, J. 1978, 'Temporal Aggregation in the Multiple Regression Model', Econometrica, vol. 46, no. 3, pp. 643-661.
View/Download from: UTS OPUS or Publisher's site
Abstract: The regression relation between regularly sampled Y(t) and X"1(t),..., X"N(t) implied by an underlying model in which time enters more generally is studied. The underlying model includes continuous distributed lags, discrete models, and stochastic differential equations as special cases. The relation between parameters identified by regular samplings of Y and X"j and those of the underlying model is characterized. Sufficient conditions for identification of the underlying model in the limit as disaggregation over time proceeds are set forth. Empirical evidence presented suggests that important gains can be realized from temporal disaggregation in the range of conventional measurement frequencies for macroeconomic data.


Berg, J.E., Geweke, J. & Rietz, T.A. 2010, 'Memoirs of an Indifferent Trader: Estimating Forecast Distributions from Prediction Markets'.
View/Download from: UTS OPUS
Prediction markets for future events are increasingly common and they often trade several contracts for the same event. This paper considers the forecast distribution of a normative risk-neutral trader who, given any portfolio of contracts traded on the event, would choose not to reallocate that portfolio of contracts even if transactions costs were zero. Because common parametric distributions can conflict with observed prediction market prices, the distribution is given a nonparametric representation together with a prior distribution favoring smooth and concentrated distributions. Posterior modal distributions are found for popular vote shares of the U.S. presidential candidates in the 100 days leading up to the elections of 1992, 1996, 2000, and 2004, using bid and ask prices on multiple contracts from the Iowa Electronic Markets. On some days, the distributions are multimodal or substantially asymmetric. The derived distributions are more concentrated than the historical distribution of popular vote shares in presidential elections, but do not tend to become more concentrated as time to elections diminishes.