
2014 Seminars

21st Feb 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Professor Bo Chen; University of Warwick, Coventry

Title: Incentive Schemes to Resolve Parkinson's Law in Project Management

In project management, the widely observed behavioural phenomenon known as Parkinson's Law means that the potential benefit to project completion time from early completion of tasks is wasted. In many projects, this leads to poor project performance. We describe an incentive compatible mechanism to resolve Parkinson's Law for projects planned under the critical path method (CPM). This scheme can be applied to any project where the tasks allocated to a single task owner are independent, i.e., none is a predecessor of another. Our scheme is also applicable to resolving the Student Syndrome. We further describe an incentive compatible mechanism to resolve Parkinson's Law for projects planned under critical chain project management (CCPM). The incentive payments received by all task owners under CCPM weakly dominate those under CPM; moreover, the minimum guaranteed payment to the project manager remains unchanged. Finally, we develop an incentive compatible mechanism for repeated projects, where commitments to early completion carry over to subsequent projects. Our work provides an alternative to CPM planning, which is vulnerable to Parkinson's Law, and to CCPM planning, which lacks formal control of project progress.
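Since the talk builds on critical path method planning, a minimal sketch of a CPM longest-path computation may help fix ideas. The task names, durations and precedence relations below are invented for illustration, not taken from the talk.

```python
# Minimal critical path method (CPM) sketch: the project duration is the
# longest path through the task DAG. Tasks and durations are hypothetical.

def critical_path_length(durations, predecessors):
    """Longest (critical) path length in a project DAG.

    durations: {task: duration}; predecessors: {task: [tasks finishing first]}.
    """
    finish = {}

    def earliest_finish(task):
        if task not in finish:
            start = max((earliest_finish(p) for p in predecessors.get(task, [])),
                        default=0)
            finish[task] = start + durations[task]
        return finish[task]

    return max(earliest_finish(t) for t in durations)

durations = {"design": 3, "build": 5, "test": 2, "docs": 1}
predecessors = {"build": ["design"], "test": ["build"], "docs": ["design"]}
# Critical path: design -> build -> test = 3 + 5 + 2 = 10
print(critical_path_length(durations, predecessors))  # 10
```

Parkinson's Law, in this setting, is the tendency for any slack between a task's earliest finish and its deadline to be absorbed rather than passed on.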

25th Feb 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Professor Masayuki Hirukawa; Setsunan University

Title: Consistent Estimation of Linear Regression Models Using Matched Data*

Regression estimation using matched samples is not uncommon in applied economics.  This paper demonstrates that ordinary least squares estimation of linear regression models using matched samples is inconsistent and that the convergence rate to its probability limit depends on the number of matching variables.  In line with these findings, bias-corrected estimators are proposed, and their asymptotic properties are explored.  The estimators can be interpreted as a version of indirect inference estimators.  Monte Carlo simulations confirm that the bias correction works well in finite samples.


28th Feb 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Associate Professor Danny Oron, The University of Sydney

Title: Scheduling controllable processing time jobs with time dependent effects and batching considerations



In classical scheduling models jobs are assumed to have fixed processing times. However, in many real life applications job processing times are controllable through the allocation of a limited resource. The most common and realistic model assumes that there exists a non-linear and convex relationship between the amount of resource allocated to a job and its processing time. The scheduler's task when dealing with controllable processing times is twofold: in addition to solving the underlying sequencing problem, one must establish an optimal resource allocation policy. We combine the convex resource allocation model with two widespread scheduling models in a single-machine setting. We begin by studying a batching model, whereby jobs undergo a batching, or burn-in, process in which different tasks are grouped into batches and processed simultaneously. The processing time of each batch is equal to the longest processing time among all jobs contained in the batch. The second model focuses on linear deterioration, where job processing times are a function of the waiting time prior to their execution. In the most general setting, each job comprises a basic processing time, which is independent of its start time, and a start-time-dependent deterioration function. Some common examples of deteriorating systems include firefighting, pollution containment and medical treatments. We provide polynomial time algorithms for the makespan and total completion time criteria.
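The batching rule described above — each batch takes as long as its longest job — can be sketched in a few lines. The job times and the grouping below are invented, and the grouping is not claimed to be an optimal policy.

```python
# Sketch of the batch processing rule: jobs in a batch run simultaneously,
# so a batch's processing time is its longest job; the makespan is the sum
# of the batch maxima. Jobs and grouping are illustrative only.

def batch_makespan(batches):
    """Makespan when each batch's length equals its longest contained job."""
    return sum(max(batch) for batch in batches)

jobs = [4, 2, 7, 3, 5, 1]
# one possible grouping of the six jobs into two batches
batches = [[4, 2, 7], [3, 5, 1]]
print(batch_makespan(batches))  # max(4,2,7) + max(3,5,1) = 7 + 5 = 12
```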

14th Mar 2014 - 11:00 am

Venue: Room 498 Merewether Bldg H04

Speaker: Professor Michael Smith, Melbourne Business School; University of Melbourne

Title: Copula Modelling of Dependence in Multivariate Time Series

Almost all existing nonlinear multivariate time series models remain linear, conditional on a point in time or latent regime. Here, an alternative is proposed, where nonlinear serial and cross-sectional dependence is captured by a copula model. The copula defines a multivariate time series on the unit cube. A drawable vine copula is employed, along with a factorization which allows the marginal and transitional densities of the time series to be expressed analytically. The factorization also provides for simple conditions under which the series is stationary and/or Markov, as well as being parsimonious. A parallel algorithm for computing the likelihood is proposed, along with a Bayesian approach for computing inference based on model averages over parsimonious representations of the vine copula. The model average estimates are shown to be more accurate in a simulation study. Two five-dimensional time series from the Australian electricity market are examined. In both examples, the fitted copula captures substantial asymmetric tail dependence, both over time and between elements in the series.

Keywords: Copula Model, Nonlinear Multivariate Time Series, Bayesian Model Averaging, Multivariate Stationarity.
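The first step in copula modelling of a time series — mapping each margin onto the unit cube — can be sketched with the empirical probability integral transform. This is a generic illustration of that step only, not the drawable vine construction of the paper.

```python
import numpy as np

# Empirical probability integral transform: map a series onto (0, 1) by ranks,
# the standard first step before fitting a copula to the transformed data.

def to_unit_cube(x):
    """Rank-based transform of observations to the open unit interval."""
    ranks = np.argsort(np.argsort(x))      # 0-based rank of each observation
    return (ranks + 1) / (len(x) + 1)

rng = np.random.default_rng(0)
x = rng.standard_normal(500).cumsum()      # an illustrative sample path
u = to_unit_cube(x)
print(u.min() > 0 and u.max() < 1)         # all values strictly inside (0, 1)
```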

21st Mar 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Suk-Joong Kim, Discipline of Finance; The University of Sydney

Title: Modelling the Crash Risk of the Australian Dollar Carry Trade

This paper investigates the nature and the determinants of Australian dollar (AUD) carry trades using a Markov regime shifting model over the period 2 Jan 1999 to 31 Dec 2012. We find that the AUD has been used as an investment currency in a carry trade regime, except for a number of short periods, notably surrounding the outbreak of the GFC. We also investigate the determinants of the AUD carry trade regime probabilities. At the daily horizon, prior to September 2008, carry trade regime probabilities are significantly lower in response to higher realized volatility of the USD/AUD exchange rate, the number of trades, and unexpected inflation and unemployment announcements. They are significantly higher when order flows are positive (more buyer-initiated than seller-initiated trades of AUD) and when the RBA policy interest rate unexpectedly increases. At the weekly horizon, realized skewness and net long futures positions on the AUD contribute to the carry trade regime probabilities. On the other hand, the post-September 2008 period shows a breakdown in the relationship between carry trade regime probabilities and these determinants.

JEL: E44; F31; G15
Keywords: AUD carry trade, Regime shifting, News, Order flows, Speculative positions


28th Mar 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor David Allen, Centre for Applied Financial Studies; University of South Australia

Title: Modelling and Forecasting Intraday Market Risk with Application to Stock Indices

On the afternoon of May 6, 2010 the Dow Jones Industrial Average (DJIA) plunged about 1000 points (about 9%) in a matter of minutes before rebounding almost as quickly. This was the biggest one day point decline on an intraday basis in the DJIA's history. A similarly dramatic change in intraday volatility was observed on April 4, 2000, when the DJIA dropped by 4.8%. These historical events present a compelling argument for the need for robust econometric models which can forecast intraday asset volatility. There are numerous models available in the finance literature for modelling financial asset volatility. Various Autoregressive Conditional Heteroskedastic (ARCH) time series models are widely used for modelling daily (end of day) volatility of financial assets. The family of basic GARCH models works well for modelling daily volatility but has proven less efficient for intraday volatility. The last two decades have seen research augmenting the GARCH family of models to forecast intraday volatility, the Multiplicative Component GARCH (MCGARCH) model of Engle & Sokalska (2012) being the most recent. MCGARCH models the conditional variance as the multiplicative product of daily, diurnal, and stochastic intraday volatility of the financial asset. In this paper we use the MCGARCH model to forecast the intraday volatility of Australia's S&P/ASX-50 stock market index and the US Dow Jones Industrial Average (DJIA) stock market index. We also use the model to forecast their intraday Value at Risk (VaR) and Expected Shortfall (ES). As the model requires a daily volatility component, we test a GARCH based estimate of the daily volatility component against the daily realized volatility (RV) estimates obtained from the Heterogeneous Autoregressive model for Realized Volatility (HARRV).
The results in the paper show that 1 minute VaR forecasts obtained from the MCGARCH model using the HARRV based daily volatility component outperform the ones obtained using the GARCH based daily volatility component.
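The multiplicative decomposition at the heart of MCGARCH can be illustrated with simulated components. All parameter values below are invented, and this sketch is not Engle & Sokalska's estimator — it only shows how the three components combine.

```python
import numpy as np

# Sketch of the MCGARCH variance decomposition: intraday conditional variance
# = daily component * diurnal pattern * stochastic intraday component.
# All values are simulated for illustration.

rng = np.random.default_rng(0)
days, bins = 5, 78                      # e.g. 5-minute bins over a 6.5h session
daily_var = rng.uniform(0.5, 2.0, size=days)               # h_t: level per day
diurnal = 1 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, bins))  # s_i: U-shape
q = rng.uniform(0.8, 1.2, size=(days, bins))               # q_{t,i}: intraday part

sigma2 = daily_var[:, None] * diurnal[None, :] * q         # conditional variance
returns = rng.standard_normal((days, bins)) * np.sqrt(sigma2)
print(sigma2.shape)  # (5, 78)
```

In the paper's setting the daily component would come from a GARCH or HARRV estimate rather than being drawn at random.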

Keywords: Intraday returns, VaR, Expected Shortfall, GARCH, realized variance.

Note: The terms volatility and variance are used interchangeably throughout the text. Unless otherwise stated both refer to the variance of the return series.


* Joint work with Abhay K. Singh and Robert J. Powell, School of Business, Edith Cowan University, Perth, WA


4th Apr 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Isabel Casas, Department of Business and Economics; University of Southern Denmark

Title: Time Varying Impulse Response

The vector autoregressive model (VAR) is a useful alternative to structural econometric models when the aim is to study macroeconomic behaviour, and the impulse response function (IRF) measures the effect of exogenous shocks on other economic variables.  It exploits the fact that macroeconomic variables are interrelated and depend on historical data.  Classical VAR and IRF analyses are too inflexible because they do not capture changes of the parameters over time.  We assume that the process of interest is locally stationary and propose a nonparametric local linear estimator of a time-varying VAR and its covariance matrix.  We apply this model to the monetary policy problem relating unemployment, the interest rate and inflation.
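The local estimation idea — kernel-weighting observations near a target time point — can be illustrated in the simplest univariate case: a time-varying AR(1) coefficient. This is a local constant sketch on simulated data, not the paper's full local linear VAR estimator.

```python
import numpy as np

# Kernel-weighted estimate of a time-varying AR(1) coefficient: observations
# near rescaled time t0 get high weight, so the estimate tracks local dynamics.
# Data and bandwidth are illustrative.

def tv_ar1_coef(y, t0, bandwidth):
    """Kernel-weighted AR(1) slope estimate at rescaled time t0 in [0, 1]."""
    n = len(y)
    times = np.arange(1, n) / n
    w = np.exp(-0.5 * ((times - t0) / bandwidth) ** 2)   # Gaussian kernel
    x, z = y[:-1], y[1:]
    return np.sum(w * x * z) / np.sum(w * x * x)

rng = np.random.default_rng(1)
n = 2000
phi = np.linspace(0.2, 0.8, n)          # true coefficient drifts upward
y = np.zeros(n)
for i in range(1, n):
    y[i] = phi[i] * y[i - 1] + rng.standard_normal()

# estimates near the start and end of the sample track the drifting coefficient
print(tv_ar1_coef(y, 0.1, 0.1), tv_ar1_coef(y, 0.9, 0.1))
```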


7th Apr 2014 - 01:00 pm

Venue: Merewether Rm 498

Speaker: Prof. Timo Terasvirta, Department of Economics and Business; Aarhus University

Title: Specification and Testing of Multiplicative Time-Varying GARCH Models with Applications

In this paper we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by applying a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier type misspecification tests. The tests are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by Monte Carlo simulations. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns.

* joint work with Cristina Amado


11th Apr 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Rodney Wolff, The University of Queensland

Title: Computing Portfolio Risk with Optimised Polynomial Expansions

The application of orthogonal polynomial expansions to estimation of probability density functions has considerable attraction in financial portfolio theory, particularly for accessing features of a portfolio's profit/loss distribution.  This is because such expansions are given by the sum of known orthogonal polynomials multiplied by an associated weight function.  When the weight function is the Normal density, in classical models for financial profit/loss, the Hermite system constitutes the orthogonal polynomials.  Now in the case of estimators, orthogonal polynomials are simply linear combinations of moments.  For low orders, such moments have substantive interpretation as concepts in finance, namely for tail shape.  Hence, orthogonal polynomial expansion methods provide a transparent indication of how empirical moments can affect the distribution of portfolio profit/loss, and hence associated risk measures which are based on tail probability calculations.  However, naive applications of expansion methods are flawed.  The shape of the estimator's tail can undulate, under the influence of the constituent polynomials in the expansion, and can even exhibit regions of negative density.  This paper presents techniques to redeem these flaws and to improve quality of risk estimation.  We show that by targeting a smooth density which is sufficiently close to the target density, we can obtain expansion-based estimators which do not have the shortcomings of equivalent naive estimators.  In particular, we apply optimisation and smoothing techniques which place greater weight on the tails than the body of the distribution.  Numerical examples using both real and simulated data illustrate our approach.  We further outline how our techniques can apply to a wide class of expansion methods, and indicate opportunities to extend to the multivariate case, where distributions of individual component risk factors in a portfolio can be accessed for the purpose of risk management. 
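The negative-density flaw of naive expansions that the talk addresses is easy to reproduce with a Gram-Charlier (Hermite) expansion around the Normal density. The skewness value below is invented purely to trigger the pathology.

```python
import numpy as np

# Naive Gram-Charlier expansion around the standard Normal: phi(x) multiplied
# by a polynomial correction in the probabilists' Hermite polynomials He3, He4.
# With strong skewness the "density" goes negative in the tail, which is the
# flaw the talk's optimised expansions are designed to fix.

def gram_charlier_pdf(x, skew, exkurt):
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    he3 = x**3 - 3 * x
    he4 = x**4 - 6 * x**2 + 3
    return phi * (1 + skew / 6 * he3 + exkurt / 24 * he4)

x = np.linspace(-5, 5, 1001)
f = gram_charlier_pdf(x, skew=1.5, exkurt=0.0)
print(f.min() < 0)   # True: the naive expansion produces negative "density"
```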

* joint work with Kohei Marumo (Saitama University, Japan)


17th Apr 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Dr Ping Yu, Department of Economics; University of Auckland

Title: Marginal Quantile Treatment Effect

This paper studies estimation and inference based on the marginal quantile treatment effect. First, we illustrate the importance of the rank preservation assumption in the evaluation of quantile treatment effects, show the identifiability of the marginal quantile treatment effect, and clarify the relationship between the marginal quantile treatment effect and other quantile treatment parameters. Second, we develop sharp bounds for the quantile treatment effect with and without the monotonicity assumption, and also sufficient and necessary conditions for point identification. Third, we estimate the marginal quantile treatment effect and the associated quantile treatment effect and integrated quantile treatment effect based on distribution regression, derive the corresponding weak limits and show the validity of the bootstrap inferences. The inference procedure can be used to construct uniform confidence bands for quantile treatment parameters and to test unconfoundedness and stochastic dominance. We also develop goodness of fit tests to choose regressors in the distribution regression. Fourth, we conduct two counterfactual analyses: deriving the transition matrix and developing the relative marginal policy relevant quantile treatment effect parameter under policy invariance. Fifth, we compare the identification schemes in some important literature with that of the marginal quantile treatment effect, and point out the advantages and weaknesses of each scheme; e.g., Chernozhukov and Hansen (2005) concentrate mainly on the quantile treatment effect with selection but without essential heterogeneity, while Abadie, Angrist and Imbens (2002), Aakvik, Heckman and Vytlacil (2005) and Chernozhukov and Hansen (2006) suffer from some obvious misspecification problems. Meanwhile, an alternative estimator of the local quantile treatment effect is developed and its weak limit is derived.
Finally, we apply the estimation methods to the famous return to schooling dataset of Angrist and Krueger (1991) to illustrate the usefulness of the techniques developed in this paper to practitioners.

7th May 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Pavel Shevchenko, Senior Principal Research Scientist; CSIRO - Computational Informatics

Title: Loss Distribution Approach for Operational Risk Capital Modelling: Challenges and Pitfalls

The management of operational risk in the banking industry has undergone explosive changes over the last decade due to substantial changes in the operational environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, Basel Committee for Banking Supervision has developed a regulatory Basel II framework that introduced operational risk category and corresponding capital requirements. Over the past five years, many major banks have received accreditation under the Basel II Advanced Measurement Approach by adopting the Loss Distribution Approach (LDA) despite there being a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under hot debate. This talk is devoted to quantitative issues in operational risk LDA such as modelling large losses, methods to combine different data sources (internal data, external data and scenario analysis) and modelling dependence which are still unresolved issues in operational risk capital modelling. Presented results are based on our work with the banking industry, discussions with regulators and academic research.
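The core LDA computation — compounding a frequency distribution with a severity distribution and reading capital off a high quantile — can be sketched by Monte Carlo. The Poisson and lognormal parameters below are invented for illustration.

```python
import numpy as np

# Monte Carlo sketch of the Loss Distribution Approach: annual loss is the sum
# of a Poisson number of lognormal severities; capital is read off a high
# quantile of the simulated annual-loss distribution. Parameters are invented.

rng = np.random.default_rng(42)
n_sims = 20_000
lam, mu, sigma = 10.0, 1.0, 1.5          # Poisson rate; lognormal parameters

counts = rng.poisson(lam, size=n_sims)
annual_loss = np.array(
    [rng.lognormal(mu, sigma, size=k).sum() for k in counts]
)

var_999 = np.quantile(annual_loss, 0.999)   # 99.9% quantile, as in Basel II
print(var_999 > np.median(annual_loss))
```

The methodological challenges the talk discusses — heavy-tailed severities, combining data sources, and dependence between risk cells — all enter through how `lam`, `mu`, `sigma` and the aggregation step are chosen.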


16th May 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Dr Demetris Christodolou, Discipline of Accounting; The University of Sydney

Title: Identification and Interpretation of Estimates from the Rank Deficient Accounting Data Matrix

Regression models that rely on inputs from published financial statements ignore the fact that the observed accounting data matrix is rank deficient by design. The standard practice is to impose zero-parameter restrictions on the accounting matrix in order to restore full rank and enable estimation, but this approach means the recovered estimates can only be interpreted as composite deviations from the identity parameters that have been omitted. An alternative approach is to identify suitable restrictions on linear combinations, but again the interpretation of estimates is conditional on the validity of the restriction. This is a standard result in the analysis of intercepts, but there is a lack of insight on how to deal with rank deficient systems of slope coefficients, and this has proven to be an acute problem for empirical accounting research that fails to acknowledge the relevant effects. We discuss the problem of identification and interpretation of estimates from the rank deficient accounting data matrix, particularly within the context of equity pricing models.
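The rank deficiency in question is mechanical: when regressors satisfy an accounting identity, the design matrix loses a rank. A toy illustration, with invented data and the identity assets = liabilities + equity:

```python
import numpy as np

# If regressors obey an exact accounting identity (assets = liabilities +
# equity), the design matrix is rank deficient and the three slope parameters
# are not separately identified. Data are invented for illustration.

rng = np.random.default_rng(2)
n = 200
liab = rng.uniform(10, 50, n)
equity = rng.uniform(5, 30, n)
assets = liab + equity                  # identity makes the columns dependent
X = np.column_stack([assets, liab, equity])

print(np.linalg.matrix_rank(X))         # 2, not 3: one rank is lost by design
```

Any estimation then requires a restriction (e.g. dropping a column), and the recovered coefficients are interpretable only relative to that restriction — the point the abstract makes.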

* joint work with Professor Richard Gerlach 


23rd May 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Jeffrey Racine, Department of Econometrics; McMaster University

Title: Infinite Order Cross-Validated Local Polynomial Regression

Many practical problems require nonparametric estimates of regression functions, and local polynomial regression has emerged as a leading approach. In applied settings practitioners often adopt either the local constant or local linear variants, or choose the order of the local polynomial to be slightly greater than the order of the maximum derivative estimate required. But such ad hoc determination of the polynomial order may not be optimal in general, while the joint determination of the polynomial order and bandwidth presents some interesting theoretical and practical challenges. In this paper we propose a data-driven approach towards the joint determination of the polynomial order and bandwidth, provide theoretical underpinnings, and demonstrate that improvements in both finite-sample efficiency and rates of convergence can thereby be obtained. In the case where the true data generating process (DGP) is in fact a polynomial whose order does not depend on the sample size, our method is capable of attaining the √n rate often associated with correctly specified parametric models, while the estimator is shown to be uniformly consistent for a much larger class of DGPs. Theoretical underpinnings are provided and finite-sample properties are examined.
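The joint data-driven choice of polynomial order and bandwidth can be sketched as a leave-one-out cross-validation grid search. This toy version (invented data, small grid) illustrates the selection mechanism only, not Hall and Racine's estimator or its theory.

```python
import numpy as np

# Joint selection of local polynomial order p and bandwidth h by leave-one-out
# cross-validation. Grid, data, and kernel are illustrative.

def local_poly_fit(x, y, x0, p, h):
    """Local polynomial fit of order p at point x0 with Gaussian kernel."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))   # sqrt of kernel weights
    X = np.vander(x - x0, p + 1, increasing=True)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0]                                     # intercept = fit at x0

def loo_cv_score(x, y, p, h):
    n = len(x)
    errs = [y[i] - local_poly_fit(np.delete(x, i), np.delete(y, i), x[i], p, h)
            for i in range(n)]
    return float(np.mean(np.square(errs)))

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-2, 2, 80))
y = np.sin(2 * x) + 0.3 * rng.standard_normal(80)

scores = {(p, h): loo_cv_score(x, y, p, h)
          for p in (0, 1, 2, 3) for h in (0.1, 0.3, 0.9)}
best = min(scores, key=scores.get)
print(best)        # the (order, bandwidth) pair with smallest CV error
```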

Keywords:  Model Selection, Efficiency, Rates of Convergence

* joint work with Peter G. Hall 


30th May 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Dr Julian Mestre; ARC Discovery Early Career Research Fellow, School of Information Technologies; The University of Sydney

Title: Universal Scheduling on a Single Machine

Consider scheduling jobs to minimize weighted average completion times on an unreliable machine that can experience unexpected changes in processing speed or even full breakdowns. We aim for a universal non-adaptive solution that performs on any possible machine behavior.

Even though it is not obvious that such a schedule should exist, we show that there is a deterministic algorithm that finds a universal scheduling sequence with a solution value within 4 times the value of an optimal solution tailored to that particular machine behavior. A randomized version of this algorithm attains an approximation ratio of e. Furthermore, we show that both results are best possible among universal solutions.

Finally, we study the problem of finding the best possible universal schedule. Even though the problem is NP-hard, we show that it admits a polynomial time approximation scheme.
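The benchmark that a universal schedule is measured against is the optimum for a known, reliable machine, where Smith's rule (weighted shortest processing time first) minimizes the weighted sum of completion times. A sketch of that baseline, with invented jobs — the universal algorithms of the paper are not reproduced here:

```python
# Smith's rule (WSPT): on a reliable single machine, sequencing jobs by
# non-decreasing processing-time/weight ratio minimizes the weighted sum of
# completion times. Jobs (weight, processing_time) are invented.

def wspt_order(jobs):
    """jobs: list of (weight, processing_time); returns the WSPT sequence."""
    return sorted(jobs, key=lambda j: j[1] / j[0])

def weighted_completion(seq):
    """Weighted sum of completion times for a given sequence."""
    total, t = 0, 0
    for w, p in seq:
        t += p
        total += w * t
    return total

jobs = [(3, 2), (1, 4), (2, 1)]
seq = wspt_order(jobs)
print(weighted_completion(seq))  # 18
```

The universal schedules in the paper guarantee a value within a constant factor of this optimum for every possible machine behaviour, without knowing that behaviour in advance.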

[1] "Universal sequencing on a single machine" by Epstein, Levin, Marchetti-Spaccamela, Megow, Mestre, Skutella, and Stougie. SIAM Journal on Computing, 2013.
[2] "Instance-sensitive robustness guarantees for sequencing with unknown packing and covering constraints" by Megow and Mestre. Proc. of the 4th Conference on Innovations in Theoretical Computer Science, 2013.


13th Jun 2014 - 11:00 am

Venue: Rm. 498 Merewether Bldg H04

Speaker: Professor Peter Schmidt, Department of Economics; Michigan State University

Title: A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency

In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N⁺(µ, σ²). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We distinguish the pre-truncation mean (µ) and variance (σ²) from the post-truncation mean µ* = E(u) and variance σ*² = var(u). Existing models parameterize the pre-truncation mean and/or variance in terms of the environmental variables and some parameters. Changes in the environmental variables cause changes in the pre-truncation mean and/or variance, and imply changes in both the post-truncation mean and variance. The expressions for the changes in the post-truncation mean and variance can be quite complicated. In this paper, we suggest parameterizing the post-truncation mean and variance instead. This leads to simple expressions for the effects of changes in the environmental variables on the mean and variance of u, and it allows the environmental variables to affect the mean of u only, the variance of u only, or both.
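The mapping from pre- to post-truncation moments that the paper inverts follows from the standard truncated normal formulas. A numerical sketch (illustrative parameter values), cross-checked against scipy's truncated normal:

```python
import numpy as np
from scipy.stats import norm, truncnorm

# Post-truncation mean and variance of u ~ N+(mu, sigma^2), i.e. a normal
# truncated below at zero. With a = mu/sigma and lambda = phi(a)/Phi(a):
#   E(u)   = mu + sigma * lambda
#   var(u) = sigma^2 * (1 - lambda * (lambda + a))
# Parameter values are illustrative.

def post_truncation_moments(mu, sigma):
    a = mu / sigma
    lam = norm.pdf(a) / norm.cdf(a)
    mean = mu + sigma * lam
    var = sigma**2 * (1 - lam * (lam + a))
    return mean, var

m, v = post_truncation_moments(mu=0.5, sigma=1.0)
# cross-check against scipy's truncated normal (bounds are standardized)
tn = truncnorm(a=-0.5, b=np.inf, loc=0.5, scale=1.0)
print(np.isclose(m, tn.mean()), np.isclose(v, tn.var()))
```

Note that a change in µ moves both µ* and σ*² through these formulas — the complication that motivates parameterizing the post-truncation moments directly.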

* joint work with Christine Amsler (Michigan State University) and Wen-Jen Tsay (Academia Sinica)

1st Aug 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Robin Sickles, Chair of Economics; Rice University

Title: Algorithmic Trading, Market Timing and Market Efficiency

In recent years large panel data models have been developed to make full use of the information content of such datasets. Despite the large number of contributions, an important issue that is rarely pursued in much of the existing literature concerns the risk of neglecting structural breaks in the data generating process.  While a substantial literature on structural break analysis exists for univariate time series, a relatively small number of techniques have been developed for panel data models.  This paper provides a new treatment of the problem of multiple structural breaks that occur at unknown dates in the panel model parameters.  Our method is related to the Haar wavelet technique, which we adjust according to the structure of the explanatory variables in order to detect the change points of the parameters consistently.  We apply the technique to high frequency securities data to examine the effects of algorithmic trading (AT) on standard measures of market quality that proxy for some dimension of liquidity.  Specifically, we examine whether AT has time varying effects on liquidity and discuss asset pricing implications.


8th Aug 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Dr Peter Exterkate, Department of Economics and Business; Aarhus University

Title: Distribution Forecasting in Non-Linear Models with Stochastic Volatility

Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors.  This makes it a powerful forecasting tool, which is applicable in many different contexts. However, it is usually applied only to independent and identically distributed observations.  This paper introduces a variant of kernel ridge regression for time series with stochastic volatility.  The conditional mean and volatility are both modelled as nonlinear functions of observed variables. We set up the estimation problem in a Bayesian manner and derive a Gibbs sampler to obtain draws from the predictive distribution.  A simulation study and an application to forecasting the distribution of returns on the S&P 500 index are presented, and we find that our method outperforms most popular GARCH variants in terms of one-day-ahead predictive ability.  Notably, most of this improvement comes from a more adequate approximation to the tails of the distribution.
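The building block of the talk — kernel ridge regression with an RBF kernel — is simple to sketch in its basic i.i.d. form. The data and hyperparameters below are invented, and the stochastic-volatility extension of the paper is not reproduced.

```python
import numpy as np

# Basic kernel ridge regression with an RBF (Gaussian) kernel: solve
# (K + alpha I) c = y, then predict with k(x_new, X) @ c. Illustrative only.

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X, y, X_new, alpha=1.0, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    return rbf_kernel(X_new, X, gamma) @ coef

rng = np.random.default_rng(7)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

pred = krr_fit_predict(X, y, X, alpha=0.1, gamma=1.0)
print(np.mean((pred - y) ** 2) < np.var(y))   # fits better than the sample mean
```

The ridge penalty `alpha` and kernel width `gamma` play the role that priors play in the paper's Bayesian treatment.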

15th Aug 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Associate Professor Jamie Alcock, Discipline of Finance; The University of Sydney

Title: Characterising the Asymmetric Dependence Premium

We examine the relative importance of asymmetric dependence (AD) and systematic risk in the cross-section of US equities. Using a β-invariant AD metric, we demonstrate a lower-tail dependence premium equivalent to 35% of the market risk premium, compared with an upper-tail dependence discount that is 41% of the market risk premium. Lower-tail dependence displays a constant price between 1989 and 2009, while the discount associated with upper-tail dependence appears to be increasing in recent years. Subsequently, we find that return changes in US equities between 2007 and 2009 reflected changes in systematic risk and upper-tail dependence. This suggests that both systematic risk and AD should be managed in order to reduce the return impact of market downturns. Our findings have substantial implications for the cost of capital, investor expectations, portfolio management and performance assessment.
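The basic objects here — lower- and upper-tail dependence — can be estimated empirically by counting joint tail exceedances. A sketch on simulated data; this is the generic estimator, not the β-invariant metric of the paper.

```python
import numpy as np

# Empirical tail dependence sketch: the probability that one series is in its
# extreme q-tail given the other is, estimated by counting joint exceedances.
# Data are simulated with a common factor; purely illustrative.

def tail_dependence(x, y, q=0.05):
    lower_joint = ((x < np.quantile(x, q)) & (y < np.quantile(y, q))).mean()
    upper_joint = ((x > np.quantile(x, 1 - q)) & (y > np.quantile(y, 1 - q))).mean()
    return lower_joint / q, upper_joint / q   # conditional tail probabilities

rng = np.random.default_rng(11)
z = rng.standard_normal(50_000)               # common factor
x = z + 0.3 * rng.standard_normal(50_000)
y = z + 0.3 * rng.standard_normal(50_000)

lo, up = tail_dependence(x, y)
print(lo, up)   # both well above q = 0.05, reflecting the strong dependence
```

Asymmetric dependence of the kind priced in the paper shows up as `lo` and `up` differing materially, which this symmetric Gaussian example does not produce.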

*joint work with Anthony Hatherley

20th Aug 2014 - 03:00 pm

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor William Greene, Stern School of Business; New York University

Title: True Random Effects in Stochastic Frontier Models

This study is a mixture of an empirical investigation and econometric method.

In his analysis of the famous (and notorious) year 2000 World Health Organization health efficiency study, Greene (2004) raised the possibility that what the WHO had measured as inefficiency was probably cross country heterogeneity in a panel data set.  He developed the 'true random effects (TRE) stochastic frontier model' as part of that exploration, as a way to distinguish between heterogeneity and inefficiency.   The WHO study has been the subject of a huge amount of public comment for the past 14 years (almost none of it well informed).  The predictions of Greene's TRE model are very different from the WHO results.  

Numerous specifications have since been developed to accommodate 'panel data' effects in models of efficiency.  The most recent developments solve a longstanding modeling problem, in theory, but are impractical in practice.  This paper examines the path of development of random effects models for stochastic frontiers and presents a practical implementation of this current leading development of this modeling approach.  The estimator is based on the method of maximum simulated likelihood.  As part of the development, we reconsider some aspects of this method of maximum likelihood estimation.


22nd Aug 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Sally Wood, Discipline of Business Analytics; The University of Sydney

Title: AdaptSPEC: Adaptive Spectral Estimation for Non Stationary Time Series

Many time series are non-stationary, and the ease and rapidity of data capture mean that researchers can now model the non-stationarity in a flexible manner.


The talk outlines an approach for analyzing possibly non-stationary time series. The data are assumed to be generated from an unknown but finite number of locally stationary processes, and these locally stationary processes are combined in a flexible manner to produce a non-stationary time series.  The method presented is flexible in the sense that a parametric data generating process is not assumed for the locally stationary series. The model is formulated in a Bayesian framework, and the estimation relies on reversible jump Markov chain Monte Carlo (RJMCMC) methods. The frequentist properties of the method are investigated by simulation, and applications to intracranial electroencephalogram (IEEG) data and the El Niño Southern Oscillation phenomenon are described in detail.


29th Aug 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Dr Mohamad Khaled, School of Economics; University of Queensland

Title: Modelling Multivariate Discrete Time Series with Quadratic Exponential Families

A parsimonious model for multiple discrete time series is introduced using an exponential family framework. The talk will focus on the model's probabilistic properties and on statistical inference using maximum likelihood estimation.  In discrete exponential families, one of the major challenges is the computational intractability induced by their inherent combinatorial complexity.  As well as solving that problem, we will show that the model gives rise to a non-homogeneous Markov chain whose asymptotic behavior will be studied.  An empirical application will be given as an illustration.


5th Sep 2014 - 11:00 am

Venue: Rm 498 Merewether Bldg H04

Speaker: Professor Renate Meyer, Department of Statistics; University of Auckland

Title: Bayesian Semi-Parametric Likelihood Approximations for Stationary Time Series

Time series abound in many fields such as econometrics, medicine, ecology and astrophysics. Parametric models like ARMA or the more recent GARCH models dominate standard time series modeling. In particular, Bayesian time series analysis (Steel, 2008) is inherently parametric in that a completely specified likelihood function is needed. Even though nonparametric Bayesian inference has been a rapidly growing topic over the last decade, as reviewed by Hjort (2010), only very few nonparametric Bayesian approaches to time series analysis have been developed. Most notably, Carter and Kohn (1997), Gangopadhyay (1998), Choudhuri et al. (2004), Hermansen (2008), and Röver et al. (2011) used Whittle's likelihood (Whittle, 1957) for Bayesian modeling of the spectral density as the main nonparametric characteristic of stationary time series. On the other hand, frequentist time series analyses are often based on nonparametric techniques encompassing a multitude of bootstrap methods, see e.g. Härdle et al. (2003), Kirch and Politis (2011), Kreiss and Lahiri (2011).

Whittle's likelihood is an approximation of the true likelihood. Even for non-Gaussian stationary time series, which are not completely specified by their first and second-order structure, the Whittle likelihood results in asymptotically correct statistical inference in many situations. But as shown in Contreras et al. (2006), the loss of efficiency of the nonparametric approach using Whittle's likelihood can be substantial. On the other hand, parametric methods are more powerful than nonparametric methods if the observed time series is close to the considered model class but fail if the model is misspecified. Therefore, we suggest a nonparametric correction of a parametric likelihood approximation that takes advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. We use a nonparametric Bernstein polynomial prior on the spectral density with weights induced by a Dirichlet process distribution. We show that Bayesian nonparametric posterior computations can be performed via an MH-within-Gibbs sampler by making use of the Sethuraman representation of the Dirichlet process.

* joint work with Claudia Kirch (Karlsruhe Institute of Technology, Germany) 
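For readers unfamiliar with the Whittle approximation, a minimal sketch with an AR(1) example (generic background, not the authors' implementation):

```python
import numpy as np

def whittle_loglik(x, spec_fn):
    """Whittle log-likelihood: sum over Fourier frequencies of
    -log f(lambda_j) - I(lambda_j) / f(lambda_j), with I the periodogram."""
    n = len(x)
    x = np.asarray(x, float) - np.mean(x)
    dft = np.fft.fft(x)
    j = np.arange(1, (n - 1) // 2 + 1)          # positive Fourier frequencies
    I = (np.abs(dft[j]) ** 2) / (2 * np.pi * n)  # periodogram
    lam = 2 * np.pi * j / n
    f = spec_fn(lam)
    return float(np.sum(-np.log(f) - I / f))

def ar1_spec(phi, sigma2):
    """AR(1) spectral density f(lam) = sigma2 / (2*pi*|1 - phi e^{-i lam}|^2)."""
    return lambda lam: sigma2 / (2 * np.pi * (1 - 2 * phi * np.cos(lam) + phi ** 2))

rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# The true spectral density scores higher than a badly misspecified one.
assert whittle_loglik(x, ar1_spec(0.6, 1.0)) > whittle_loglik(x, ar1_spec(-0.6, 1.0))
```

The nonparametric correction discussed in the talk replaces or amends `spec_fn` with a Bernstein-polynomial model of the spectral density.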


19th Sep 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Dr Filippo Massari, Australian School of Business; University of New South Wales

Title: What the Market Believes

This paper examines the implications of the market selection hypothesis for equilibrium prices. The main focus is on the effect of different risk attitudes on the accuracy of the probabilities implied by equilibrium prices and on the "learning" mechanism of markets. I find that, given traders' beliefs and an initial allocation of wealth, the probabilities implicit in equilibrium prices depend on risk attitudes. This dependency can be used to define a class of probabilities generated by traders' beliefs. This class of probabilities is rich as it includes Bayes' rule as well as known non-Bayesian models. In economies populated by traders with identical CRRA utilities that are more risk averse than log, these probabilities are, in terms of likelihood, asymptotically unambiguously superior to Bayes' rule. This result challenges on a theoretical level the optimality of the Bayesian procedure and the common idea that Bayesian learning is the only "rational" way to learn.


10th Oct 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Dr Chris Strickland, Australian School of Business; University of New South Wales

Title: Exploiting Structure for Efficient Bayesian Estimation of Complex Models Used in the Analysis of Large Space and Space-Time Data Sets

We develop scalable methods for the analysis of large space and space-time data sets. The algorithms we propose have a computational cost that scales either linearly or super-linearly with the number of observations. The proposed methods exploit structure enabling calculations to be performed on relatively low dimensional subspaces. This ensures that the proposed methodology remains computationally feasible, even for large space-time data sets. We use this methodology to analyse remotely sensed data.


17th Oct 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Dr Gareth Peters, University of London

Title: Optimal Insurance Purchase Strategies via Optimal Multiple Stopping Times

This talk will present recent results in which we study a class of insurance products where the policyholder has the option to insure k of its annual Operational Risk losses in a horizon of T years. This involves a choice of k out of T years in which to apply the insurance policy coverage by making claims against losses in the given year. The insurance product structure presented can accommodate any kind of annual mitigation, but we present three basic generic insurance policy structures that can be combined to create more complex types of coverage. Following the Loss Distributional Approach (LDA) with Poisson-distributed annual loss frequencies and Inverse-Gaussian loss severities, we derive closed-form analytical expressions for the multiple optimal decision strategy that minimizes the expected Operational Risk loss over the next T years. For the cases where the combination of insurance policies and LDA model does not lead to closed-form expressions for the multiple optimal decision rules, we also develop a principled class of closed-form approximations to the optimal decision rule. These approximations are based on a class of orthogonal Askey polynomial series basis expansion representations of the annual loss compound process distribution and functions of this annual loss.
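The multiple-stopping structure can be illustrated with backward induction under simplifying assumptions (lognormal rather than Inverse-Gaussian severities, a simple deductible-style policy, Monte Carlo expectations; all parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def annual_losses(n, lam=2.0, mu=0.0, sigma=1.0):
    """Compound-Poisson annual losses with lognormal severities
    (the severity family is a stand-in for the talk's Inverse-Gaussian)."""
    counts = rng.poisson(lam, size=n)
    return np.array([rng.lognormal(mu, sigma, k).sum() for k in counts])

def dp_value(T=5, K=2, deductible=1.0, n_sim=20000):
    """Backward induction for insuring K of T years: an insured year retains
    only min(L, deductible). V[t][k] is the expected remaining loss at the
    start of year t with k claim rights left; the claim decision is taken
    after observing the year's loss L."""
    L = annual_losses(n_sim)
    V = np.zeros((T + 1, K + 1))
    for t in range(T - 1, -1, -1):
        for k in range(K + 1):
            stay = L + V[t + 1][k]                # do not claim this year
            if k > 0:
                claim = np.minimum(L, deductible) + V[t + 1][k - 1]
                V[t][k] = np.mean(np.minimum(stay, claim))
            else:
                V[t][k] = np.mean(stay)
    return V

V = dp_value()
# More claim rights can never increase the expected loss.
assert V[0][2] <= V[0][1] <= V[0][0]
```

The closed-form results in the talk replace the Monte Carlo expectation in `dp_value` with exact expressions under the LDA model.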

24th Oct 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Professor William Griffiths, University of Melbourne

Title: Constructing a Panel of Country-Level Income Distributions

Grouped income distribution data from the World Bank or WIDER are used to construct income distributions for several country-year combinations, with a view to using these distributions to analyse pro-poor growth, and changes in inequality and poverty at national, regional and global levels. The construction of income distributions involves 3 steps.

  1. For country/years where data are available, we use GMM to estimate mixtures of lognormal distributions.
  2. For country/years with no data, but where the years lie between other years for which data are available, we interpolate income shares and then estimate lognormal mixtures using GMM.
  3. For country years where no data are available, and data are not available for interpolation, we extrapolate income shares using weighted averages of changes in shares from other countries with weights derived from Kullback-Leibler distance. GMM estimation is applied to these extrapolated shares.

The talk will focus on the methodology for these 3 steps and for computing inequality, poverty and pro-poor growth measures from mixtures of lognormal distributions.

* joint work with Duangkamon Chotikapanich, Monash University; Gholamreza Hajargasht, University of Melbourne; Charley Xia, University of Melbourne
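Quantities such as the headcount poverty ratio follow directly from the mixture-of-lognormals form; a minimal sketch with hypothetical parameters:

```python
import math

def mixture_cdf(z, weights, mus, sigmas):
    """CDF of a mixture of lognormals at income level z:
    F(z) = sum_i w_i * Phi((ln z - mu_i) / sigma_i)."""
    Phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return sum(w * Phi((math.log(z) - m) / s)
               for w, m, s in zip(weights, mus, sigmas))

def mixture_mean(weights, mus, sigmas):
    """Mean income of the mixture: sum_i w_i * exp(mu_i + sigma_i^2 / 2)."""
    return sum(w * math.exp(m + s * s / 2.0)
               for w, m, s in zip(weights, mus, sigmas))

# Two-component example: a low-income and a high-income group (illustrative).
w, mu, sg = [0.6, 0.4], [0.0, 1.5], [0.5, 0.8]
poverty_line = 1.0
headcount = mixture_cdf(poverty_line, w, mu, sg)  # headcount poverty ratio
assert 0.0 < headcount < 1.0
assert mixture_mean(w, mu, sg) > 0.0
```

Inequality and pro-poor growth measures in the talk are built from the same mixture CDF and moments, with component parameters estimated by GMM from grouped income-share data.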

31st Oct 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Associate Professor Scott A. Sisson, University of New South Wales

Title: Functional regression ABC for Gaussian process density estimation

We propose a novel Bayesian nonparametric method for modelling a set of related density functions, where grouped data in the form of samples from the density functions are available. Borrowing strength across the groups is a major challenge in this context. To address this problem, we introduce a hierarchically structured prior, defined over a set of univariate density functions, using convenient transformations of Gaussian processes. Inference is performed through a combination of Approximate Bayesian Computation (ABC) and a functional regression-adjustment. This work provides a flexible and computationally tractable way to perform hierarchical nonparametric density estimation, and is the first attempt to use ABC to estimate infinite-dimensional parameters. The proposed method is illustrated by a simulated example and a real analysis of rural high school exam performance in Brazil.

* joint work with G. S. Rodrigues and D. J. Nott
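A bare-bones rejection-ABC sketch conveys the general idea (the paper's functional regression-adjustment and Gaussian-process setting are considerably more involved; the toy model below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

def abc_rejection(observed, simulate, prior_draw, summary,
                  n_draws=5000, quantile=0.01):
    """Rejection ABC: draw theta from the prior, simulate data, and keep the
    draws whose summary statistic falls closest to the observed summary."""
    s_obs = summary(observed)
    thetas = np.array([prior_draw() for _ in range(n_draws)])
    dists = np.array([np.abs(summary(simulate(th)) - s_obs) for th in thetas])
    eps = np.quantile(dists, quantile)
    return thetas[dists <= eps]

# Toy example: infer the mean of a Gaussian from its sample mean.
true_mu = 2.0
obs = rng.normal(true_mu, 1.0, size=100)
post = abc_rejection(
    observed=obs,
    simulate=lambda th: rng.normal(th, 1.0, size=100),
    prior_draw=lambda: rng.uniform(-10, 10),
    summary=lambda x: np.mean(x),
)
assert abs(post.mean() - true_mu) < 0.5
```

The regression-adjustment step would then regress accepted `thetas` on their simulated summaries and shift them toward the observed summary; the paper extends this to functional (density-valued) parameters.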

21st Nov 2014 - 11:00 am

Venue: Room 498 Merewether Building (H04)

Speaker: Professor Michael McAleer, Department of Quantitative Finance; National Tsing Hua University

Title: On Univariate and Multivariate Models of Volatility

Part 1 of the presentation is on one of the two most widely estimated univariate asymmetric conditional volatility models. The exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process, for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH parameters.
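The standard EGARCH(1,1) recursion discussed in Part 1 can be sketched as follows (parameter values are hypothetical; gamma < 0 produces the leverage effect):

```python
import math

def egarch_logvar(returns, omega=-0.1, alpha=0.2, gamma=-0.1, beta=0.95):
    """EGARCH(1,1): log sigma_t^2 = omega + alpha*(|z_{t-1}| - E|z|)
    + gamma*z_{t-1} + beta*log sigma_{t-1}^2, where z is the standardized
    shock. Returns the path of log-variances."""
    e_abs_z = math.sqrt(2.0 / math.pi)   # E|z| for a standard normal z
    log_s2 = omega / (1.0 - beta)        # start at the unconditional level
    out = []
    for r in returns:
        out.append(log_s2)
        z = r / math.exp(0.5 * log_s2)
        log_s2 = omega + alpha * (abs(z) - e_abs_z) + gamma * z + beta * log_s2
    return out

# Leverage: a negative shock raises next-period log-variance more than a
# positive shock of equal magnitude.
path_neg = egarch_logvar([-1.0, 0.0])
path_pos = egarch_logvar([1.0, 0.0])
assert path_neg[1] > path_pos[1]
```

The invertibility question raised in the talk concerns whether `z` can be recovered from the observed returns through this recursion, which is what makes the QMLE theory delicate.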

Part 2 of the presentation is on one of the most widely-used multivariate conditional volatility models, namely the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made the derivation of asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions. The derivation of DCC from a vector random coefficient moving average process raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters.
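The standard DCC recursion on standardized shocks can likewise be sketched (an illustrative implementation of the usual specification, not the paper's derivation; parameters are hypothetical):

```python
import numpy as np

def dcc_correlations(std_shocks, a=0.05, b=0.90):
    """DCC recursion on standardized shocks eta_t:
    Q_t = (1-a-b)*S + a*eta_{t-1} eta_{t-1}' + b*Q_{t-1},
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}."""
    eta = np.asarray(std_shocks, float)
    S = np.cov(eta.T)          # unconditional target matrix
    Q = S.copy()
    Rs = []
    for t in range(len(eta)):
        d = 1.0 / np.sqrt(np.diag(Q))
        Rs.append(Q * np.outer(d, d))   # rescale Q into a correlation matrix
        Q = (1 - a - b) * S + a * np.outer(eta[t], eta[t]) + b * Q
    return Rs

rng = np.random.default_rng(3)
eta = rng.standard_normal((200, 2))
Rs = dcc_correlations(eta)
# Each R_t has a unit diagonal and off-diagonal entries bounded by one.
assert all(abs(R[0, 0] - 1.0) < 1e-9 and abs(R[0, 1]) <= 1.0 + 1e-9 for R in Rs)
```

The rescaling step from `Q` to `R` is exactly the standardization whose missing motivation issue (ii) above addresses.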

24th Nov 2014 - 11:00 am

Venue: Room 214/215 Economic and Business Building (H69)

Speaker: Associate Professor Tomohiro Ando, Graduate School of Business Administration; Keio University

Title: Asset pricing with high-dimensional multi-factor models

This presentation analyzes multifactor models in the presence of a large number of potential observable risk factors and unobservable common/group-specific pervasive factors. We show how relevant observable factors can be found from a large given set and how to determine the number of common/group-specific unobservable factors. The proposed method allows consistent estimation of the beta coefficients in the presence of correlations between the observable and unobservable factors. Even when the group membership of each asset and the number of groups are left unspecified, we show the consistency and asymptotic normality of the estimated beta coefficients.
The theory and method are applied to the study of asset returns for A-shares and B-shares traded on the Shanghai and Shenzhen stock exchanges. We also apply the method to the analysis of the U.S. mutual fund returns.


*joint work with Professor Jushan Bai (Columbia University)
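As generic background on determining the number of pervasive factors, an eigenvalue-ratio device in the spirit of this literature can be sketched (an illustrative heuristic, not the authors' estimator):

```python
import numpy as np

def num_factors_eigen_ratio(X, kmax=8):
    """Pick the number of factors k that maximizes the eigenvalue ratio
    lambda_k / lambda_{k+1} of the sample second-moment matrix."""
    X = X - X.mean(axis=0)
    ev = np.linalg.eigvalsh(X.T @ X / len(X))[::-1]   # descending eigenvalues
    ratios = ev[:kmax] / ev[1:kmax + 1]
    return int(np.argmax(ratios)) + 1

# Simulated panel: T = 300 periods, N = 40 assets, 3 pervasive factors.
rng = np.random.default_rng(4)
T, N, r = 300, 40, 3
F = rng.standard_normal((T, r))
B = rng.standard_normal((N, r))
X = F @ B.T + 0.5 * rng.standard_normal((T, N))
assert num_factors_eigen_ratio(X) == 3
```

The talk's setting is harder: observable and group-specific unobservable factors coexist and may be correlated, so the number of common and group-specific factors must be determined jointly with group membership.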

28th Nov 2014 - 11:00 am

Venue: Room 498, Merewether Building (H04)

Speaker: Dr Edward Cripps, School of Mathematics and Statistics; University of Western Australia

Title: Flexible clustering of longitudinal trajectories with applications in psychology

This talk presents a Bayesian approach to clustering longitudinal trajectories into latent classes.
In psychology, the implicit theory of abilities (ITA) proposes that individuals are classified as one of two groups: entity theorists who believe ability is innate and incremental theorists who believe ability is an acquired set of skills. Entity theorists are more likely to interpret failure as evidence of a lack of ability and doubt their future capacity to learn the task. Incremental theorists are more likely to interpret failures as part of a learning strategy, potentially leading to recovery over time. The hypothesis in psychology is that learning performances of entity theorists are more prone to downward "spirals" than incremental theorists. To assess this claim we formulate two models. The first model assumes individual performance trajectories are generated from a mixture of potentially two random effects models with nonparametric mean functions, generated from integrated Wiener process priors. The second assumes the regression coefficients of the random effects model are generated from a time-varying mixture of an unknown but finite number of processes. Both approaches model the clustering probability of an individual trajectory as a function of an individual's ITA classification. Such an approach is referred to as a mixtures-of-experts model. The statistical methods illustrate how the Bayesian paradigm, together with Markov chain Monte Carlo algorithms, affords the flexibility to tailor models to address specific research questions in applications and in turn motivate more general statistical procedures.
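In its simplest form, the mixtures-of-experts gating idea reduces to a logistic function of the ITA covariate (the weights below are hypothetical, for illustration only):

```python
import math

def gate_prob(ita, w0=-0.5, w1=1.2):
    """Mixtures-of-experts gate: probability that a trajectory belongs to the
    'downward spiral' latent class, as a logistic function of the ITA
    covariate (ita = 1 for entity theorists, 0 for incremental theorists)."""
    return 1.0 / (1.0 + math.exp(-(w0 + w1 * ita)))

# With w1 > 0, entity theorists get a higher prior probability of the
# spiral class, matching the psychological hypothesis under test.
assert gate_prob(1) > gate_prob(0)
```

In the talk's models the component ("expert") densities are far richer, built from random effects models with integrated Wiener process priors, but the gate plays the same role of linking class membership to ITA.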

10th Dec 2014 - 11:00 am

Venue: Room 498, Merewether (H04)

Speaker: Dr Zhenzhen Fan, University of Amsterdam

Title: Excitation Asymmetry and the Dominant Region Bias in International Portfolio Choice

The literature has found that the large investments in the US cannot be explained by standard portfolio allocation models and diversification motives. Indeed, according to mean-variance portfolio theory, the market portfolio implies that investors expect a return in the US market hundreds of basis points higher than its empirical counterpart. In this paper, we explain the overweighting of the US market (and the corresponding underweighting of some other economies) in the market portfolio by excitation asymmetry. In particular, we employ a mutually exciting jump diffusion model to generate jump excitations both over time and across different equity markets. We characterize the dominant role of the US over peripheral markets by imposing a larger cross-section excitor of US price jumps, so that crashes in the US are reflected quickly in smaller economies but not the other way round. We solve in closed form the portfolio optimization problem in this market in terms of optimal portfolio exposure to risk factors. We show that the optimal portfolio (1) is sufficiently diversified, in the sense that it consists of a large number of individual assets in order to diversify away idiosyncratic risks and that it exploits the diversification potential in the instantaneous covariance matrix of international asset returns; and (2) is biased, in the sense that dominant regions, such as the US, which transmit their risks more easily to other markets, have larger shares in the optimal portfolio than implied by classical portfolio choice models, giving rise to a phenomenon we name "the Dominant Region Bias". By calibrating the model to historical prices of MSCI US, Japan and Europe, we show that our model is able to reproduce the observed biases in the market portfolio.

*Authors: Zhenzhen Fan (University of Amsterdam), Roger Laeven (University of Amsterdam)
Rob van den Goorbergh (APG Asset Management)

*Key words: Portfolio choice; International diversification; Dominant region bias; Contagion asymmetry; Mutually exciting jumps.
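The mutually exciting jump mechanism can be illustrated with a bivariate Hawkes-type intensity, where an asymmetric excitation matrix lets the dominant market's jumps raise the peripheral market's intensity more than the reverse (all parameters below are hypothetical):

```python
import numpy as np

def hawkes_intensities(jump_times, jump_mkt, t_grid, mu, alpha, beta=2.0):
    """Intensities of a bivariate mutually exciting (Hawkes-type) jump process:
    lambda_i(t) = mu_i + sum_{t_k < t} alpha[i][m_k] * exp(-beta * (t - t_k)),
    where m_k is the market in which jump k occurred."""
    lam = np.tile(np.asarray(mu, float), (len(t_grid), 1))
    for tk, mk in zip(jump_times, jump_mkt):
        mask = t_grid > tk
        for i in range(2):
            lam[mask, i] += alpha[i][mk] * np.exp(-beta * (t_grid[mask] - tk))
    return lam

mu = [0.5, 0.5]
alpha = [[0.8, 0.1],   # market 0 barely excited by market 1's jumps...
         [0.8, 0.6]]   # ...but market 1 strongly excited by market 0's jumps
t_grid = np.linspace(0.0, 2.0, 201)
# A single jump in the dominant market (index 0) at t = 1.0.
lam = hawkes_intensities([1.0], [0], t_grid, mu, alpha)
# The dominant market's crash raises both markets' intensities afterwards.
assert lam[-1, 1] > mu[1] and lam[-1, 0] > mu[0]
```

The asymmetry `alpha[1][0] > alpha[0][1]` is the cross-section feature that the paper uses to generate the Dominant Region Bias in the optimal portfolio.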