21st Feb 2014  11:00 am  

Speaker: 
Professor Bo Chen, 
Affiliation: 
University of Warwick, Coventry 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Incentive Schemes to Resolve Parkinson's Law in Project Management 
Abstract: 
In project management, the widely observed behavioural phenomenon known as Parkinson's Law means that the potential benefit to project completion time from early completion of tasks is wasted. In many projects, this leads to poor project performance. We describe an incentive-compatible mechanism to resolve Parkinson's Law for projects planned under the critical path method (CPM). This scheme can be applied to any project where the tasks allocated to a single task owner are independent, i.e., none is a predecessor of another. Our scheme is also applicable to resolving the Student Syndrome. We further describe an incentive-compatible mechanism to resolve Parkinson's Law for projects planned under critical chain project management (CCPM). The incentive payments received by all task owners under CCPM weakly dominate those under CPM; moreover, the minimum guaranteed payment to the project manager remains unchanged. Finally, we develop an incentive-compatible mechanism for repeated projects, where commitments to early completion carry over to subsequent projects. Our work provides an alternative to CPM planning, which is vulnerable to Parkinson's Law, and to CCPM planning, which lacks formal control of project progress. 
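The CPM planning that the scheme builds on reduces to a longest-path computation over the task precedence graph. A minimal sketch, with task durations and precedences invented purely for illustration:

```python
# Sketch: earliest finish times and one critical path for a small CPM network.
# Task durations and precedence relations below are illustrative, not from the talk.
def critical_path(durations, preds):
    """durations: {task: time}; preds: {task: [predecessor tasks]}."""
    finish = {}

    def ef(t):  # earliest finish via recursion over predecessors
        if t not in finish:
            start = max((ef(p) for p in preds.get(t, [])), default=0)
            finish[t] = start + durations[t]
        return finish[t]

    for t in durations:
        ef(t)
    makespan = max(finish.values())
    # Walk back from a task that finishes last to recover one critical path.
    path = []
    t = max(finish, key=finish.get)
    while True:
        path.append(t)
        crit = [p for p in preds.get(t, []) if finish[p] + durations[t] == finish[t]]
        if not crit:
            break
        t = crit[0]
    return makespan, list(reversed(path))

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"C": ["A", "B"], "D": ["C"]}
print(critical_path(durations, preds))  # (9, ['A', 'C', 'D'])
```

Delaying a non-critical task (here B) within its slack leaves the makespan unchanged, which is exactly the slack that Parkinson's Law tends to absorb.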
25th Feb 2014  11:00 am  

Speaker: 
Professor Masayuki Hirukawa, 
Affiliation: 
Setsunan University 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Consistent Estimation of Linear Regression Models Using Matched Data* 
Abstract: 
Regression estimation using matched samples is not uncommon in applied economics. This paper demonstrates that ordinary least squares estimation of linear regression models using matched samples is inconsistent and that the convergence rate to its probability limit depends on the number of matching variables. In line with these findings, bias-corrected estimators are proposed, and their asymptotic properties are explored. The estimators can be interpreted as a version of indirect inference estimators. Monte Carlo simulations confirm that the bias correction works well in finite samples. 
28th Feb 2014  11:00 am  

Speaker: 
Associate Professor Danny Oron, 
Affiliation: 
The University of Sydney 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Scheduling controllable processing time jobs with time-dependent effects and batching considerations 
Abstract: 
In classical scheduling models jobs are assumed to have fixed processing times. However, in many real-life applications job processing times are controllable through the allocation of a limited resource. The most common and realistic model assumes a nonlinear, convex relationship between the amount of resource allocated to a job and its processing time. The scheduler's task when dealing with controllable processing times is twofold: in addition to solving the underlying sequencing problem, one must establish an optimal resource allocation policy. We combine the convex resource allocation model with two widespread scheduling models in a single machine setting. We begin by studying a batching model, whereby jobs undergo a batching, or burn-in, process in which different tasks are grouped into batches and processed simultaneously; the processing time of each batch is equal to the longest processing time among all jobs contained in the batch. The second model focuses on linear deterioration, where job processing times are a function of the waiting time prior to their execution. In the most general setting, each job comprises a basic processing time, which is independent of its start time, and a start-time-dependent deterioration function. Common examples of deteriorating systems include fire fighting, pollution containment and medical treatments. We provide polynomial-time algorithms for the makespan and total completion time criteria. 
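The batching rule described above (a batch takes as long as its longest job) makes the makespan of any batch sequence easy to compute. A minimal sketch with invented job data:

```python
# Sketch: in the batching model above, a batch's processing time equals the
# longest processing time of any job it contains, so the makespan of a batch
# sequence is the sum of per-batch maxima. Job data are illustrative.
def batch_makespan(batches):
    return sum(max(batch) for batch in batches)

jobs = [5, 3, 8, 2, 6]
# Grouping sorted jobs into consecutive batches keeps the per-batch maxima small.
sorted_jobs = sorted(jobs)                    # [2, 3, 5, 6, 8]
batches = [sorted_jobs[:3], sorted_jobs[3:]]  # [[2, 3, 5], [6, 8]]
print(batch_makespan(batches))                # 13  (= 5 + 8)
```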
14th Mar 2014  11:00 am  

Speaker: 
Professor Michael Smith, 
Affiliation: 
Melbourne Business School; University of Melbourne 
Venue: 
Room 498 Merewether Bldg H04 
Title: 
Copula Modelling of Dependence in Multivariate Time Series 
Abstract: 
Almost all existing nonlinear multivariate time series models remain linear, conditional on a point in time or latent regime. Here, an alternative is proposed, where nonlinear serial and cross-sectional dependence is captured by a copula model. The copula defines a multivariate time series on the unit cube. A drawable vine copula is employed, along with a factorization which allows the marginal and transitional densities of the time series to be expressed analytically. The factorization also provides for simple conditions under which the series is stationary and/or Markov, as well as being parsimonious. A parallel algorithm for computing the likelihood is proposed, along with a Bayesian approach for computing inference based on model averages over parsimonious representations of the vine copula. The model average estimates are shown to be more accurate in a simulation study. Two five-dimensional time series from the Australian electricity market are examined. In both examples, the fitted copula captures substantial asymmetric tail dependence, both over time and between elements in the series. Keywords: Copula Model, Nonlinear Multivariate Time Series, Bayesian Model Averaging, Multivariate Stationarity. 
21st Mar 2014  11:00 am  

Speaker: 
Professor Suk-Joong Kim, 
Affiliation: 
Discipline of Finance; The University of Sydney 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Modelling the Crash Risk of the Australian Dollar Carry Trade 
Abstract: 
This paper investigates the nature and the determinants of Australian dollar (AUD) carry trades using a Markov regime-switching model over the period 2 Jan 1999 to 31 Dec 2012. We find that the AUD has been used as an investment currency in a carry trade regime, except for a number of short periods, notably surrounding the outbreak of the GFC. We also investigate the determinants of the AUD carry trade regime probabilities. At the daily horizon, prior to September 2008, carry trade regime probabilities are significantly lower in response to higher realized volatility of the USD/AUD exchange rate, number of trades, unexpected inflation and unexpected unemployment announcements. They are significantly higher when order flows are positive (more buyer- than seller-initiated trades of AUD) and when the RBA policy interest rate unexpectedly increases. At the weekly horizon, realized skewness and net long futures position on
JEL: E44; F31; G15

28th Mar 2014  11:00 am  

Speaker: 
Professor David Allen, 
Affiliation: 
Centre for Applied Financial Studies ; University of South Australia 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Modelling and Forecasting Intraday Market Risk with Application to Stock Indices 
Abstract: 
On the afternoon of May 6, 2010 the Dow Jones Industrial Average (DJIA) plunged about 1000 points (about 9%) in a matter of minutes before rebounding almost as quickly. This was the biggest one-day point decline on an intraday basis in the DJIA's history. A similarly dramatic change in intraday volatility was observed on April 4, 2000 when the DJIA dropped by 4.8%. These historical events present a very compelling argument for the need for robust econometric models which can forecast intraday asset volatility. There are numerous models available in the finance literature to model financial asset volatility. Various Autoregressive Conditional Heteroskedastic (ARCH) time series models are widely used for modelling daily (end-of-day) volatility of financial assets. The family of basic GARCH models works well for modelling daily volatility but has proven less efficient for intraday volatility. The last two decades have seen research augmenting the GARCH family of models to forecast intraday volatility, the Multiplicative Component GARCH (MCGARCH) model of Engle & Sokalska (2012) being the most recent. MCGARCH models the conditional variance as the multiplicative product of daily, diurnal, and stochastic intraday volatility of the financial asset. In this paper we use the MCGARCH model to forecast the intraday volatility of Australia's S&P/ASX50 stock market index and the USA Dow Jones Industrial Average (DJIA) stock market index. We also use the model to forecast their intraday Value at Risk (VaR) and Expected Shortfall (ES). As the model requires a daily volatility component, we test a GARCH-based estimate of the daily volatility component against the daily realized volatility (RV) estimates obtained from the Heterogeneous Autoregressive model for Realized Volatility (HAR-RV). 
The results in the paper show that 1-minute VaR forecasts obtained from the MCGARCH model using the HAR-RV based daily volatility component outperform the ones obtained using the GARCH-based daily volatility component.
*Joint work with Abhay K Singh and Robert J Powell, School of Business, Edith Cowan University, Perth, WA
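The multiplicative decomposition at the heart of MCGARCH can be illustrated on simulated data: intraday variance is treated as the product of a daily component, a diurnal pattern, and a stochastic intraday component, and deflating returns by the first two should leave a roughly unit-variance residual for the GARCH stage. The components and the crude moment-based diurnal estimator below are invented for illustration, not the authors' estimation procedure:

```python
import numpy as np

# Sketch of the multiplicative decomposition in MCGARCH: intraday return
# variance = daily component * diurnal component * stochastic intraday
# component. All components and the simple diurnal estimator are illustrative.
rng = np.random.default_rng(0)
n_days, bins = 50, 10
diurnal = np.linspace(1.5, 0.5, bins)      # assumed intraday pattern
daily = rng.uniform(0.8, 1.2, n_days)      # daily variance component
r = rng.standard_normal((n_days, bins)) * np.sqrt(daily[:, None] * diurnal)

# Step 1: estimate the diurnal pattern from returns deflated by daily variance.
diurnal_hat = (r**2 / daily[:, None]).mean(axis=0)
# Step 2: doubly-deflated returns should have roughly unit variance, leaving
# only the stochastic intraday component for a GARCH(1,1) fit (not shown).
z = r / np.sqrt(daily[:, None] * diurnal_hat)
print(round(float(z.var()), 2))
```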

4th Apr 2014  11:00 am  

Speaker: 
Professor Isabel Casas, 
Affiliation: 
Department of Business and Economics; University of Southern Denmark 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Time-Varying Impulse Response 
Abstract: 
The vector autoregressive model (VAR) is a useful alternative to structural econometric models when the aim is to study macroeconomic behaviour, and the impulse response function (IRF) measures the effect of exogenous shocks on other economic variables. It exploits the fact that macroeconomic variables are interrelated and depend on historical data. Classical VAR and IRF are too inflexible because they do not capture changes in parameters over time. We assume that the process of interest is locally stationary and propose a time-varying nonparametric local linear estimator of a time-varying VAR and its covariance matrix. We apply this model to a monetary problem relating unemployment, the interest rate and inflation.
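The local estimation idea can be illustrated in the simplest univariate case: a kernel in rescaled time weights the observations used to estimate the autoregressive coefficient at each point. The sketch below uses a local constant (Nadaraya-Watson-style) variant on a simulated time-varying AR(1), not the talk's full local linear time-varying VAR:

```python
import numpy as np

# Sketch: kernel-weighted estimation of a time-varying AR(1) coefficient,
# a one-dimensional stand-in for the time-varying VAR in the talk.
# The smoothly drifting coefficient path and the data are simulated.
rng = np.random.default_rng(1)
T = 2000
u = np.arange(T) / T
a_true = 0.3 + 0.4 * u                  # coefficient drifts from 0.3 to 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + rng.standard_normal()

def tv_ar1(y, u0, h=0.1):
    """Estimate a(u0) by kernel-weighted least squares in rescaled time."""
    t = np.arange(1, len(y)) / len(y)
    w = np.exp(-0.5 * ((t - u0) / h) ** 2)   # Gaussian kernel weights
    x, yy = y[:-1], y[1:]
    return np.sum(w * x * yy) / np.sum(w * x * x)

print(round(tv_ar1(y, 0.25), 2), round(tv_ar1(y, 0.75), 2))
```

The two estimates should track the true coefficient values near 0.4 and 0.6 at those points in rescaled time.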

7th Apr 2014  01:00 pm  

Speaker: 
Prof. Timo Terasvirta, 
Affiliation: 
Department of Economics and Business; Aarhus University 
Venue: 
Merewether Rm 498 
Title: 
Specification and Testing of Multiplicative Time-Varying GARCH Models with Applications 
Abstract: 
In this paper we develop a specification technique for building multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by applying a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier-type misspecification tests. The tests are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by Monte Carlo simulations. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns. * joint work with Cristina Amado
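The unconditional variance component described above is a linear combination of logistic transition functions of rescaled time. A minimal sketch with illustrative parameter values (one transition, so the variance level shifts smoothly once):

```python
import numpy as np

# Sketch of the smoothly time-varying unconditional variance component:
# g(u) = delta0 + sum_k delta_k * G(u; gamma_k, c_k), where G is a logistic
# transition function of rescaled time u in [0, 1]. Parameters are illustrative.
def g(u, delta0=1.0, deltas=(1.5,), gammas=(20.0,), cs=(0.5,)):
    u = np.asarray(u, dtype=float)
    out = np.full_like(u, delta0)
    for d, gam, c in zip(deltas, gammas, cs):
        out += d / (1.0 + np.exp(-gam * (u - c)))   # logistic transition
    return out

u = np.linspace(0, 1, 5)
print(np.round(g(u), 3))   # variance level shifts smoothly from ~1.0 to ~2.5
```

In the multiplicative model, the conditional variance at time t is then the product of g(t/T) and a standard GARCH component.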

11th Apr 2014  11:00 am  

Speaker: 
Professor Rodney Wolff, 
Affiliation: 
The University of Queensland 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Computing Portfolio Risk with Optimised Polynomial Expansions 
Abstract: 
The application of orthogonal polynomial expansions to estimation of probability density functions has considerable attraction in financial portfolio theory, particularly for accessing features of a portfolio's profit/loss distribution. This is because such expansions are given by the sum of known orthogonal polynomials multiplied by an associated weight function. When the weight function is the Normal density, as in classical models for financial profit/loss, the Hermite system constitutes the orthogonal polynomials. In the case of estimators, orthogonal polynomials are simply linear combinations of moments. For low orders, such moments have substantive interpretation as concepts in finance, namely tail shape. Hence, orthogonal polynomial expansion methods provide a transparent indication of how empirical moments can affect the distribution of portfolio profit/loss, and hence associated risk measures which are based on tail probability calculations. However, naive applications of expansion methods are flawed. The shape of the estimator's tail can undulate, under the influence of the constituent polynomials in the expansion, and can even exhibit regions of negative density. This paper presents techniques to redeem these flaws and to improve quality of risk estimation. We show that by targeting a smooth density which is sufficiently close to the target density, we can obtain expansion-based estimators which do not have the shortcomings of equivalent naive estimators. In particular, we apply optimisation and smoothing techniques which place greater weight on the tails than the body of the distribution. Numerical examples using both real and simulated data illustrate our approach. We further outline how our techniques can apply to a wide class of expansion methods, and indicate opportunities to extend to the multivariate case, where distributions of individual component risk factors in a portfolio can be accessed for the purpose of risk management. 
* joint work with Kohei Marumo (Saitama University, Japan)

17th Apr 2014  11:00 am  

Speaker: 
Dr Ping Yu, 
Affiliation: 
Department of Economics; University of Auckland 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Marginal Quantile Treatment Effect 
Abstract: 
This paper studies estimation and inference based on the marginal quantile treatment effect. First, we illustrate the importance of the rank preservation assumption in quantile treatment effect evaluation, show the identifiability of the marginal quantile treatment effect, and clarify the relationship between the marginal quantile treatment effect and other quantile treatment parameters. Second, we develop sharp bounds for the quantile treatment effect with and without the monotonicity assumption, and also sufficient and necessary conditions for point identification. Third, we estimate the marginal quantile treatment effect and the associated quantile treatment effect and integrated quantile treatment effect based on distribution regression, derive the corresponding weak limits and show the validity of the bootstrap inferences. The inference procedure can be used to construct uniform confidence bands for quantile treatment parameters and test unconfoundedness and stochastic dominance. We also develop goodness-of-fit tests to choose regressors in the distribution regression. Fourth, we conduct two counterfactual analyses: deriving the transition matrix and developing the relative marginal policy relevant quantile treatment effect parameter under policy invariance. Fifth, we compare the identification schemes in some important literature with that by the marginal quantile treatment effect, and point out advantages and also weaknesses of each scheme, e.g., Chernozhukov and Hansen (2005) concentrate mainly on the quantile treatment effect with the selection effect but without the essential heterogeneity; Abadie, Angrist and Imbens (2002), Aakvik, Heckman and Vytlacil (2005) and Chernozhukov and Hansen (2006) suffer from some obvious misspecification problems. Meanwhile, an alternative estimator of the local quantile treatment effect is developed and its weak limit is derived. 
Finally, we apply the estimation methods to the famous return to schooling dataset of Angrist and Krueger (1991) to illustrate the usefulness of the techniques developed in this paper to practitioners. 
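The basic object behind all of these parameters is the difference between treated and control outcome quantiles. A minimal simulated sketch of a plain quantile treatment effect, which under rank preservation coincides with the quantile of the individual-level effect (the paper's marginal QTE machinery is not reproduced here):

```python
import numpy as np

# Sketch of the basic quantile treatment effect: the difference between the
# treated and control outcome quantiles at each tau. Under rank preservation
# (discussed above) this equals the quantile of the individual effect.
# Data are simulated; the effect is deliberately heterogeneous across ranks.
rng = np.random.default_rng(11)
y0 = rng.normal(0.0, 1.0, 5000)     # control outcomes
y1 = y0 + 1.0 + 0.5 * y0            # treatment effect 1 + 0.5*y0, rank-preserving
taus = np.array([0.25, 0.5, 0.75])
qte = np.quantile(y1, taus) - np.quantile(y0, taus)
print(np.round(qte, 2))             # effect grows with the outcome rank
```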
7th May 2014  11:00 am  

Speaker: 
Professor Pavel Shevchenko, 
Affiliation: 
Senior Principal Research Scientist; CSIRO Computational Informatics 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Loss Distribution Approach for Operational Risk Capital Modelling: Challenges and Pitfalls 
Abstract: 
The management of operational risk in the banking industry has undergone explosive changes over the last decade due to substantial changes in the operational environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed the regulatory Basel II framework, which introduced an operational risk category and corresponding capital requirements. Over the past five years, many major banks have received accreditation under the Basel II Advanced Measurement Approach by adopting the Loss Distribution Approach (LDA), despite a number of unresolved methodological challenges in its implementation. Different approaches and methods are still under hot debate. This talk is devoted to unresolved quantitative issues in operational risk LDA, such as modelling large losses, methods to combine different data sources (internal data, external data and scenario analysis), and modelling dependence. Presented results are based on our work with the banking industry, discussions with regulators and academic research.
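The LDA itself is simple to state: annual loss is a compound sum of a frequency distribution and a severity distribution, and the capital charge is a high quantile of that compound (Basel II uses the 99.9% VaR of the annual loss). A Monte Carlo sketch with illustrative Poisson-lognormal parameters; none of the talk's modelling refinements (large-loss modelling, data combination, dependence) are reproduced:

```python
import numpy as np

# Sketch of the Loss Distribution Approach: annual loss is a compound sum of
# a Poisson frequency and lognormal severities; capital is a high quantile
# of the simulated annual loss. All parameter values are illustrative.
rng = np.random.default_rng(42)
lam, mu, sigma = 10.0, 1.0, 1.2      # frequency and severity parameters
n_sims = 20_000
counts = rng.poisson(lam, n_sims)
annual_loss = np.array([rng.lognormal(mu, sigma, n).sum() for n in counts])
var_999 = np.quantile(annual_loss, 0.999)
print(round(float(annual_loss.mean()), 1), round(float(var_999), 1))
```

The gap between the mean loss and the 99.9% quantile is what makes tail (large-loss) modelling the dominant practical issue.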

16th May 2014  11:00 am  

Speaker: 
Dr Demetris Christodoulou, 
Affiliation: 
Discipline of Accounting; The University of Sydney 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Identification and Interpretation of Estimates from the Rank Deficient Accounting Data Matrix 
Abstract: 
Regression models that rely on inputs from published financial statements ignore the fact that the observed accounting data matrix is rank deficient by design. The standard practice is to impose zero-parameter restrictions on the accounting matrix in order to achieve full rank and enable estimation, but this renders recovered estimates interpretable only as composite deviations from the omitted identity parameters. The alternative approach is to identify suitable restrictions on linear combinations, but again the interpretation of estimates is conditional on the validity of the restriction. This is a standard result in the analysis of intercepts, but there is a lack of insight on how to deal with rank-deficient systems of slope coefficients, and this has proven to be an acute problem for empirical accounting research that fails to acknowledge the relevant effects. We discuss the problem of identification and interpretation of estimates from the rank deficient accounting data matrix, particularly within the context of equity pricing models. * joint work with Professor Richard Gerlach

23rd May 2014  11:00 am  

Speaker: 
Professor Jeffrey Racine, 
Affiliation: 
Department of Economics; McMaster University 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Infinite Order Cross-Validated Local Polynomial Regression 
Abstract: 
Many practical problems require nonparametric estimates of regression functions, and local polynomial regression has emerged as a leading approach. In applied settings practitioners often adopt either the local constant or local linear variants, or choose the order of the local polynomial to be slightly greater than the order of the maximum derivative estimate required. But such ad hoc determination of the polynomial order may not be optimal in general, while the joint determination of the polynomial order and bandwidth presents some interesting theoretical and practical challenges. In this paper we propose a data-driven approach towards the joint determination of the polynomial order and bandwidth, provide theoretical underpinnings, and demonstrate that improvements in both finite-sample efficiency and rates of convergence can thereby be obtained. In the case where the true data generating process (DGP) is in fact a polynomial whose order does not depend on the sample size, our method is capable of attaining the √n rate often associated with correctly specified parametric models, while the estimator is shown to be uniformly consistent for a much larger class of DGPs. Theoretical underpinnings are provided and finite-sample properties are examined. Keywords: Model Selection, Efficiency, Rates of Convergence
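The joint selection idea can be sketched directly: score each (order, bandwidth) pair by leave-one-out cross-validation and pick the minimiser. The DGP, grids and Gaussian kernel below are illustrative, not the paper's estimator:

```python
import numpy as np

# Sketch of jointly choosing polynomial order p and bandwidth h by
# leave-one-out cross-validation for local polynomial regression.
# The data-generating process and the (p, h) grids are illustrative.
rng = np.random.default_rng(3)
n = 200
x = np.sort(rng.uniform(-1, 1, n))
y = np.sin(2 * x) + 0.2 * rng.standard_normal(n)

def locpoly_at(x0, x, y, p, h):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)            # Gaussian kernel weights
    # Weighted polynomial fit centred at x0; the intercept is the estimate.
    coefs = np.polyfit(x - x0, y, deg=p, w=np.sqrt(w))
    return coefs[-1]

def loocv_score(p, h):
    errs = [y[i] - locpoly_at(x[i], np.delete(x, i), np.delete(y, i), p, h)
            for i in range(len(x))]
    return np.mean(np.square(errs))

grid = [(p, h) for p in (0, 1, 2, 3) for h in (0.1, 0.2, 0.4)]
best = min(grid, key=lambda ph: loocv_score(*ph))
print(best)
```

The paper's point is that letting the data choose p jointly with h, rather than fixing p by convention, can improve both finite-sample accuracy and the attainable rate.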

30th May 2014  11:00 am  

Speaker: 
Dr Julian Mestre; ARC Discovery Early Career Research Fellow, 
Affiliation: 
School of Information Technologies; The University of Sydney 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Universal Scheduling on a Single Machine 
Abstract: 
Consider scheduling jobs to minimize weighted average completion times on an unreliable machine that can experience unexpected changes in processing speed or even full breakdowns. We aim for a universal non-adaptive solution that performs well under any possible machine behavior. Even though it is not obvious that such a schedule should exist, we show that there is a deterministic algorithm that finds a universal scheduling sequence with a solution value within 4 times the value of an optimal solution tailored to the particular machine behavior. A randomized version of this algorithm attains an approximation ratio of e. Furthermore, we show that both results are best possible among universal solutions. Finally, we study the problem of finding the best possible universal schedule. Even though the problem is NP-hard, we show that it admits a polynomial time approximation scheme.
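For reference, on a reliable constant-speed machine the weighted-completion-time problem is solved exactly by Smith's rule: sequence jobs by decreasing weight-to-processing-time ratio. The universal schedules in the talk must remain competitive when that reliability assumption fails. A minimal sketch with invented jobs:

```python
# Sketch of the reliable-machine baseline: Smith's rule (sort by processing
# time over weight, ascending) minimizes the weighted sum of completion times
# on a single machine running at constant speed. Job data are illustrative.
def smith_rule(jobs):
    """jobs: list of (weight, processing_time); returns order and objective."""
    order = sorted(jobs, key=lambda wp: wp[1] / wp[0])
    t, total = 0, 0
    for w, p in order:
        t += p            # completion time of this job
        total += w * t    # weighted completion time contribution
    return order, total

jobs = [(1, 3), (4, 2), (2, 2)]
order, obj = smith_rule(jobs)
print(order, obj)  # [(4, 2), (2, 2), (1, 3)] 23
```

A universal schedule must achieve a bounded ratio against the tailored optimum for every realized speed profile, which is a much stronger requirement than optimality under constant speed.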

13th Jun 2014  11:00 am  

Speaker: 
Professor Peter Schmidt, 
Affiliation: 
Department of Economics; Michigan State University 
Venue: 
Rm. 498 Merewether Bldg H04 
Title: 
A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency 
Abstract: 
In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N^{+}(µ, σ^{2}). This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier. We distinguish the pre-truncation mean (µ) and variance (σ^{2}) from the post-truncation mean µ_{*} = E(u) and variance σ_{*}^{2} = var(u). Existing models parameterize the pre-truncation mean and/or variance in terms of the environmental variables and some parameters. Changes in the environmental variables cause changes in the pre-truncation mean and/or variance, and imply changes in both the post-truncation mean and variance. The expressions for the changes in the post-truncation mean and variance can be quite complicated. In this paper, we suggest parameterizing the post-truncation mean and variance instead. This leads to simple expressions for the effects of changes in the environmental variables on the mean and variance of u, and it allows the environmental variables to affect the mean of u only, the variance of u only, or both. * joint work with Christine Amsler (Michigan State University) and Wen-Jen Tsay (Academia Sinica) 
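The pre- vs post-truncation distinction is easy to make concrete: for given pre-truncation parameters, the post-truncation mean and variance of u follow from the truncated normal distribution. A sketch using scipy's truncnorm, with illustrative parameter values:

```python
from scipy.stats import truncnorm

# Sketch of the pre- vs post-truncation distinction for u ~ N+(mu, sigma^2):
# the post-truncation mean E(u) and variance var(u) are nonlinear functions
# of the pre-truncation (mu, sigma), which motivates parameterizing the
# post-truncation moments directly. Values below are illustrative.
def post_truncation_moments(mu, sigma):
    # N+(mu, sigma^2) is a normal truncated to [0, inf); scipy's truncnorm
    # takes standardized bounds a = (0 - mu) / sigma, b = inf.
    a = (0.0 - mu) / sigma
    dist = truncnorm(a, float("inf"), loc=mu, scale=sigma)
    return dist.mean(), dist.var()

m, v = post_truncation_moments(mu=0.0, sigma=1.0)
print(round(m, 4), round(v, 4))  # half-normal case: mean sqrt(2/pi) ~ 0.7979
```

Even in this simplest case the post-truncation variance (1 - 2/π ≈ 0.363) differs markedly from the pre-truncation σ² = 1, which is exactly the complication the paper's reparameterization sidesteps.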
1st Aug 2014  11:00 am  

Speaker: 
Professor Robin Sickles, 
Affiliation: 
Chair of Economics; Rice University 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Algorithmic Trading, Market Timing and Market Efficiency 
Abstract: 
In recent years large panel data models have been developed to make full use of the information content of such datasets. Despite the large number of contributions, an important issue that is rarely pursued in much of the existing literature concerns the risk of neglecting structural breaks in the data generating process. While a substantial literature on structural break analysis exists for univariate time series, relatively few techniques have been developed for panel data models. This paper provides a new treatment of the problem of multiple structural breaks that occur at unknown dates in the panel model parameters. Our method is related to the Haar wavelet technique, which we adjust according to the structure of the explanatory variables in order to detect the change points of the parameters consistently. We apply the technique to high frequency securities data to examine the effects of algorithmic trading (AT) on standard measures of market quality that proxy for some dimension of liquidity. Specifically, we examine whether AT has time-varying effects on liquidity and discuss asset pricing implications.

8th Aug 2014  11:00 am  

Speaker: 
Dr Peter Exterkate, 
Affiliation: 
Department of Economics and Business; Aarhus University 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Distribution Forecasting in Non-Linear Models with Stochastic Volatility 
Abstract: 
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This makes it a powerful forecasting tool, which is applicable in many different contexts. However, it is usually applied only to independent and identically distributed observations. This paper introduces a variant of kernel ridge regression for time series with stochastic volatility. The conditional mean and volatility are both modelled as nonlinear functions of observed variables. We set up the estimation problem in a Bayesian manner and derive a Gibbs sampler to obtain draws from the predictive distribution. A simulation study and an application to forecasting the distribution of returns on the S&P 500 index are presented, and we find that our method outperforms most popular GARCH variants in terms of one-day-ahead predictive ability. Notably, most of this improvement comes from a more adequate approximation to the tails of the distribution. 
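The kernel ridge regression core of the method can be sketched in a few lines: ridge regression solved in the dual, with an RBF kernel standing in for infinitely many nonlinear features. The data and hyperparameters below are invented, and the paper's stochastic-volatility and Bayesian machinery is not reproduced:

```python
import numpy as np

# Sketch of kernel ridge regression with an RBF kernel: ridge regression on
# implicitly infinite nonlinear features via the kernel trick. The dual
# solution is alpha = (K + lam*I)^{-1} y; predictions are k(x_new, X) @ alpha.
# Data-generating process and hyperparameters are illustrative.
rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, (100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 0.1
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

X_new = np.array([[0.5]])
pred = rbf(X_new, X) @ alpha
print(round(float(pred[0]), 2))   # close to sin(0.5)
```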
15th Aug 2014  11:00 am  

Speaker: 
Associate Professor Jamie Alcock, 
Affiliation: 
Discipline of Finance; The University of Sydney 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Characterising the Asymmetric Dependence Premium 
Abstract: 
We examine the relative importance of asymmetric dependence (AD) and systematic risk in the cross-section of US equities. Using a β-invariant AD metric, we demonstrate a lower-tail dependence premium equivalent to 35% of the market risk premium, compared with an upper-tail dependence discount that is 41% of the market risk premium. Lower-tail dependence displays a constant price between 1989 and 2009, while the discount associated with upper-tail dependence appears to be increasing in recent years. Subsequently, we find that return changes in US equities between 2007 and 2009 reflected changes in systematic risk and upper-tail dependence. This suggests that both systematic risk and AD should be managed in order to reduce the return impact of market downturns. Our findings have substantial implications for the cost of capital, investor expectations, portfolio management and performance assessment. *joint work with Anthony Hatherley 
20th Aug 2014  03:00 pm  

Speaker: 
Professor William Greene, 
Affiliation: 
Stern School of Business; New York University 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
True Random Effects in Stochastic Frontier Models 
Abstract: 
This study is a mixture of an empirical investigation and econometric method. In his analysis of the famous (and notorious) year 2000 World Health Organization health efficiency study, Greene (2004) raised the possibility that what the WHO had measured as inefficiency was probably cross country heterogeneity in a panel data set. He developed the 'true random effects (TRE) stochastic frontier model' as part of that exploration, as a way to distinguish between heterogeneity and inefficiency. The WHO study has been the subject of a huge amount of public comment for the past 14 years (almost none of it well informed). The predictions of Greene's TRE model are very different from the WHO results. Numerous specifications have since been developed to accommodate 'panel data' effects in models of efficiency. The most recent developments solve a longstanding modeling problem in theory, but are impractical in practice. This paper examines the path of development of random effects models for stochastic frontiers and presents a practical implementation of the current leading development of this modeling approach. The estimator is based on the method of maximum simulated likelihood. As part of the development, we reconsider some aspects of this method of maximum likelihood estimation.

22nd Aug 2014  11:00 am  

Speaker: 
Professor Sally Wood, 
Affiliation: 
Discipline of Business Analytics; The University of Sydney 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
AdaptSPEC: Adaptive Spectral Estimation for Nonstationary Time Series 
Abstract: 
Many time series are nonstationary, and the ease and rapidity of data capture mean that researchers can now model the nonstationarity in a flexible manner.
The talk outlines an approach for analyzing possibly nonstationary time series. The data are assumed to be generated from an unknown but finite number of locally stationary processes, and these locally stationary processes are combined in a flexible manner to produce a nonstationary time series. The method presented is flexible in the sense that a parametric data generating process is not assumed for the locally stationary series. The model is formulated in a Bayesian framework, and estimation relies on reversible jump Markov chain Monte Carlo (RJMCMC) methods. The frequentist properties of the method are investigated by simulation, and applications to intracranial electroencephalogram (iEEG) data and the El Niño-Southern Oscillation phenomenon are described in detail.

29th Aug 2014  11:00 am  

Speaker: 
Dr Mohamad Khaled, 
Affiliation: 
School of Economics; University of Queensland 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Modelling Multivariate Discrete Time Series with Quadratic Exponential Families 
Abstract: 
A parsimonious model for multiple discrete time series is introduced using an exponential family framework. The talk will focus on the model's probabilistic properties and on statistical inference using maximum likelihood estimation. In discrete exponential families, one of the major challenges is the computational intractability induced by their inherent combinatorial complexity. As well as solving that problem, we will show that the model gives rise to a non-homogeneous Markov chain whose asymptotic behavior will be studied. An empirical application will be given as an illustration.

5th Sep 2014  11:00 am  

Speaker: 
Professor Renate Meyer, 
Affiliation: 
Department of Statistics; University of Auckland 
Venue: 
Rm 498 Merewether Bldg H04 
Title: 
Bayesian SemiParametric Likelihood Approximations for Stationary Time Series 
Abstract: 
Time series abound in many fields such as econometrics, medicine, ecology and astrophysics. Parametric models like ARMA or the more recent GARCH models dominate standard time series modeling. In particular, Bayesian time series analysis (Steel, 2008) is inherently parametric in that a completely specified likelihood function is needed. Even though nonparametric Bayesian inference has been a rapidly growing topic over the last decade, as reviewed by Hjort (2010), only very few nonparametric Bayesian approaches to time series analysis have been developed. Most notably, Carter and Kohn (1997), Gangopadhyay (1998), Choudhuri et al. (2004), Hermansen (2008), and Rover et al. (2011) used Whittle's likelihood (Whittle, 1957) for Bayesian modeling of the spectral density as the main nonparametric characteristic of stationary time series. On the other hand, frequentist time series analyses are often based on nonparametric techniques encompassing a multitude of bootstrap methods, see e.g. Hardle et al. (2003), Kirch and Politis (2011), Kreiss and Lahiri (2011). Whittle's likelihood is an approximation of the true likelihood. Even for nonGaussian stationary time series, which are not completely specified by their first and secondorder structure, the Whittle likelihood results in asymptotically correct statistical inference in many situations. But as shown in Contreras et al. (2006), the loss of efficiency of the nonparametric approach using Whittle's likelihood can be substantial. On the other hand, parametric methods are more powerful than nonparametric methods if the observed time series is close to the considered model class but fail if the model is misspecified. Therefore, we suggest a nonparametric correction of a parametric likelihood approximation that takes advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. 
We use a nonparametric Bernstein polynomial prior on the spectral density with weights induced by a Dirichlet process distribution. We show that Bayesian nonparametric posterior computations can be performed via an MH-within-Gibbs sampler by making use of the Sethuraman representation of the Dirichlet process. * joint work with Claudia Kirch (Karlsruhe Institute of Technology, Germany)
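As a rough illustration of the prior's building blocks (not the authors' sampler): a truncated Sethuraman stick-breaking construction yields Dirichlet-process weights, which then mix Beta densities into a Bernstein polynomial density. The truncation level, concentration parameter, and function names below are illustrative, and the density is shown on (0, 1) rather than on a rescaled frequency axis.

```python
import math
import random

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated Sethuraman representation of a Dirichlet process:
    v_j ~ Beta(1, alpha), w_j = v_j * prod_{l<j} (1 - v_l)."""
    weights, remaining = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

def beta_pdf(x, a, b):
    """Density of Beta(a, b) at x in (0, 1)."""
    const = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return const * x ** (a - 1) * (1.0 - x) ** (b - 1)

def bernstein_density(x, weights):
    """Bernstein polynomial mixture: sum_j w_j * Beta(x; j, k - j + 1),
    a nonparametric density on (0, 1) with k = len(weights) components."""
    k = len(weights)
    return sum(w * beta_pdf(x, j, k - j + 1)
               for j, w in enumerate(weights, start=1))

rng = random.Random(0)
w = stick_breaking_weights(alpha=5.0, n_atoms=25, rng=rng)
f_half = bernstein_density(0.5, w)
```

In the talk's setting the same mixture parametrizes the spectral density, and the MH-within-Gibbs sampler updates the stick-breaking variables rather than fixing them as above.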

19th Sep 2014  11:00 am  

Speaker: 
Dr Filippo Massari, 
Affiliation: 
Australian School of Business; University of New South Wales 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
What the Market Believes 
Abstract: 
This paper examines the implications of the market selection hypothesis on equilibrium prices. The main focus is on the effect of different risk attitudes on the accuracy of the probabilities implied by equilibrium prices and on the "learning" mechanism of markets. I find that, given traders' beliefs and an initial allocation of wealth, the probabilities implicit in equilibrium prices depend on risk attitudes. This dependency can be used to define a class of probabilities generated by traders' beliefs. This class of probabilities is rich, as it includes Bayes' rule as well as known non-Bayesian models. In economies populated by traders with identical CRRA utilities who are more risk averse than log, these probabilities are, in terms of likelihood, asymptotically unambiguously superior to Bayes' rule. This result challenges, on a theoretical level, the optimality of the Bayesian procedure and the common idea that Bayesian learning is the only "rational" way to learn.
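A hedged sketch of the special log-utility case behind the "market learning" analogy (the talk's general CRRA results are not reproduced here): with identical log-utility traders in a complete market, the probability implicit in an Arrow-security price is the wealth-share-weighted average of traders' beliefs, and realized states reallocate wealth by the same arithmetic as Bayes' rule over traders. All names and numbers below are illustrative.

```python
def implied_probability(wealth_shares, beliefs):
    """Market-implied probability of each state: wealth-share-weighted
    average of traders' beliefs (standard log-utility, complete-markets
    result; does not hold for general CRRA preferences)."""
    n_states = len(beliefs[0])
    return [sum(w * b[s] for w, b in zip(wealth_shares, beliefs))
            for s in range(n_states)]

def update_wealth(wealth_shares, beliefs, realized_state):
    """After a state realizes, each trader's wealth share is scaled by the
    probability they assigned to it relative to the market-implied one --
    formally identical to a Bayesian update over 'hypotheses' (traders)."""
    market_prob = implied_probability(wealth_shares, beliefs)[realized_state]
    return [w * b[realized_state] / market_prob
            for w, b in zip(wealth_shares, beliefs)]

# Two traders, two states; trader 0 is more confident in state 0
shares = [0.5, 0.5]
beliefs = [[0.8, 0.2], [0.4, 0.6]]
prices = implied_probability(shares, beliefs)
shares_after = update_wealth(shares, beliefs, realized_state=0)
```

Under selection, traders with more accurate beliefs gain wealth share, so the implied probabilities drift toward theirs; the talk's point is that with other risk attitudes this drift defines learning rules other than Bayes'.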

10th Oct 2014  11:00 am  

Speaker: 
Dr Chris Strickland, 
Affiliation: 
Australian School of Business; University of New South Wales 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Exploiting Structure for Efficient Bayesian Estimation of Complex Models Used in the Analysis of Large Space and Space-Time Data Sets 
Abstract: 
We develop scalable methods for the analysis of large space and space-time data sets. The algorithms we propose have a computational cost that scales either linearly or superlinearly with the number of observations. The proposed methods exploit structure, enabling calculations to be performed on relatively low-dimensional subspaces. This ensures that the proposed methodology remains computationally feasible, even for large space-time data sets. We use this methodology to analyse remotely sensed data.

17th Oct 2014  11:00 am  

Speaker: 
Dr Gareth Peters, 
Affiliation: 
University of London 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Optimal Insurance Purchase Strategies via Optimal Multiple Stopping Times 
Abstract: 
This talk will present recent results in which we study a class of insurance products where the policy holder has the option to insure k of its annual Operational Risk losses in a horizon of T years. This involves a choice of k out of T years in which to apply the insurance policy coverage by making claims against losses in the given year. The insurance product structure presented can accommodate any kind of annual mitigation, but we present three basic generic insurance policy structures that can be combined to create more complex types of coverage. Following the Loss Distributional Approach (LDA) with Poisson distributed annual loss frequencies and Inverse-Gaussian loss severities, we are able to derive closed-form analytical expressions for the multiple optimal decision strategy that minimizes the expected Operational Risk loss over the next T years. For the cases where the combination of insurance policies and LDA model does not lead to closed-form expressions for the multiple optimal decision rules, we also develop a principled class of closed-form approximations to the optimal decision rule. These approximations are developed based on a class of orthogonal Askey polynomial series basis expansion representations of the annual loss compound process distribution and functions of this annual loss. 
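The paper's closed-form multiple-stopping results are not reproduced here; the following is only an illustrative Monte Carlo sketch of the underlying LDA model (compound Poisson annual losses with Inverse-Gaussian severities) plus a naive "insure the k worst of T years" benchmark computed with hindsight, which upper-bounds what any k-of-T stopping rule can save. All parameter values and function names are invented.

```python
import math
import random

def sample_inverse_gaussian(mu, lam, rng):
    """Inverse-Gaussian(mu, lam) draw via the Michael-Schucany-Haas method."""
    y = rng.gauss(0.0, 1.0) ** 2
    x = (mu + (mu * mu * y) / (2.0 * lam)
         - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y))
    return x if rng.random() <= mu / (mu + x) else mu * mu / x

def annual_loss(freq_mean, sev_mu, sev_lam, rng):
    """Compound Poisson annual loss: N ~ Poisson(freq_mean) events,
    each with an Inverse-Gaussian severity."""
    # Poisson draw by Knuth's multiplication method
    threshold, p, n = math.exp(-freq_mean), 1.0, 0
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    return sum(sample_inverse_gaussian(sev_mu, sev_lam, rng) for _ in range(n))

def hindsight_insured_total(losses, k):
    """Total retained loss if the k largest of the T annual losses were
    fully insured -- a with-hindsight benchmark, not an online rule."""
    return sum(sorted(losses)[:len(losses) - k])

rng = random.Random(1)
losses = [annual_loss(3.0, 2.0, 4.0, rng) for _ in range(10)]  # T = 10 years
retained = hindsight_insured_total(losses, k=3)
```

The optimal decision rules in the talk must instead decide each year online, without seeing future losses, which is what makes the multiple optimal stopping formulation necessary.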
24th Oct 2014  11:00 am  

Speaker: 
Professor William Griffiths, 
Affiliation: 
University of Melbourne 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Constructing a Panel of Country-Level Income Distributions 
Abstract: 
Grouped income distribution data from the World Bank or WIDER are used to construct income distributions for several country-year combinations, with a view to using these distributions to analyse pro-poor growth, and changes in inequality and poverty at national, regional and global levels. The construction of income distributions involves three steps.
The talk will focus on the methodology for these three steps and for computing inequality, poverty and pro-poor growth measures from mixtures of lognormal distributions. * joint work with Duangkamon Chotikapanich, Monash University; Gholamreza Hajargasht, University of Melbourne; Charley Xia, University of Melbourne 
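A minimal sketch of the kind of calculation a fitted lognormal mixture enables (the mixture weights, log-means and log-standard deviations below are invented, not estimates from the paper): the poverty headcount below a line z is a weighted sum of lognormal CDFs, and mean income follows from the lognormal moment formula.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def headcount_ratio(z, weights, mus, sigmas):
    """P(income < z) under a mixture of lognormals: each component
    contributes its weight times a lognormal CDF evaluated at z."""
    return sum(w * norm_cdf((math.log(z) - m) / s)
               for w, m, s in zip(weights, mus, sigmas))

def mixture_mean(weights, mus, sigmas):
    """Mean income: each lognormal component's mean is exp(mu + sigma^2/2)."""
    return sum(w * math.exp(m + s * s / 2.0)
               for w, m, s in zip(weights, mus, sigmas))

# Invented two-component fit for one country-year combination
w, mu, sg = [0.6, 0.4], [0.0, 1.0], [0.5, 0.8]
poverty = headcount_ratio(1.0, w, mu, sg)   # headcount below poverty line z = 1
mean_income = mixture_mean(w, mu, sg)
```

Inequality and pro-poor growth measures for the mixture follow the same pattern: closed-form lognormal component expressions combined with the mixture weights.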
31st Oct 2014  11:00 am  

Speaker: 
Associate Professor Scott A. Sisson, 
Affiliation: 
University of New South Wales 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Functional regression ABC for Gaussian process density estimation 
Abstract: 
We propose a novel Bayesian nonparametric method for modelling a set of related density functions, where grouped data in the form of samples from the density functions are available. Borrowing strength across the groups is a major challenge in this context. To address this problem, we introduce a hierarchically structured prior, defined over a set of univariate density functions, using convenient transformations of Gaussian processes. Inference is performed through a combination of Approximate Bayesian Computation (ABC) and a functional regression adjustment. This work provides a flexible and computationally tractable way to perform hierarchical nonparametric density estimation, and is the first attempt to use ABC to estimate infinite-dimensional parameters. The proposed method is illustrated by a simulated example and a real analysis of rural high school exam performance in Brazil. * joint work with G. S. Rodrigues and D. J. Nott 
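A toy sketch of the rejection-ABC-plus-regression-adjustment idea in one dimension (the talk works with functional summaries and Gaussian-process priors; nothing below is the authors' algorithm). The helper names, toy model, and tuning constants are all invented.

```python
import random

def abc_reg_adjust(observed_stat, prior_sampler, simulate, n_sims, keep_frac, rng):
    """Rejection ABC with a linear regression adjustment (Beaumont-style):
    keep the draws whose simulated statistic is closest to the observed one,
    then shift each kept theta by b * (s - observed_stat), where b is the
    OLS slope of theta on s among the kept draws."""
    draws = []
    for _ in range(n_sims):
        theta = prior_sampler(rng)
        s = simulate(theta, rng)
        draws.append((abs(s - observed_stat), theta, s))
    draws.sort(key=lambda d: d[0])
    kept = draws[:max(2, int(keep_frac * n_sims))]
    thetas = [t for _, t, _ in kept]
    stats = [s for _, _, s in kept]
    sbar, tbar = sum(stats) / len(stats), sum(thetas) / len(thetas)
    den = sum((s - sbar) ** 2 for s in stats) or 1.0
    b = sum((s - sbar) * (t - tbar) for s, t in zip(stats, thetas)) / den
    return [t - b * (s - observed_stat) for t, s in zip(thetas, stats)]

# Toy model: infer the mean of a normal from its sample mean
rng = random.Random(2)
post = abc_reg_adjust(
    observed_stat=1.0,
    prior_sampler=lambda r: r.uniform(-5.0, 5.0),
    simulate=lambda theta, r: sum(r.gauss(theta, 1.0) for _ in range(20)) / 20.0,
    n_sims=3000, keep_frac=0.05, rng=rng)
```

The functional version replaces the scalar slope b with a regression of whole density functions on the summaries, which is what lets ABC reach infinite-dimensional parameters.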
21st Nov 2014  11:00 am  

Speaker: 
Professor Michael McAleer, 
Affiliation: 
Department of Quantitative Finance; National Tsing Hua University 
Venue: 
Room 498 Merewether Building (H04) 
Title: 
On Univariate and Multivariate Models of Volatility 
Abstract: 
Part 1 of the presentation is on one of the two most widely estimated univariate asymmetric conditional volatility models. The exponential GARCH (or EGARCH) specification can capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-) maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process for which the invertibility conditions can be stated simply and explicitly. This will be useful in reinterpreting the existing properties of the QMLE of the EGARCH parameters.
Part 2 of the presentation is on one of the most widely used multivariate conditional volatility models, namely the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process from which DCC can be derived has not yet been established, which has made the derivation of asymptotic properties of the Quasi-Maximum Likelihood Estimator (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived only under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions. 
The derivation of DCC from a vector random coefficient moving average process raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters. 
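The invertibility results themselves cannot be shown in a few lines, but the EGARCH(1,1) recursion the first part starts from can be sketched as a simulator: a negative gamma makes negative shocks raise next-period volatility more than positive shocks of the same size (the leverage effect). Parameter values below are illustrative, not estimates from the paper.

```python
import math
import random

def simulate_egarch(n, omega, beta, alpha, gamma, rng):
    """EGARCH(1,1) with standard normal shocks z_t:
        log h_t = omega + beta * log h_{t-1}
                  + alpha * (|z_{t-1}| - sqrt(2/pi)) + gamma * z_{t-1},
        r_t = sqrt(h_t) * z_t.
    gamma < 0 induces leverage: negative shocks raise volatility more.
    Because the recursion is in log h_t, h_t is positive by construction."""
    e_abs_z = math.sqrt(2.0 / math.pi)   # E|z| for a standard normal
    log_h = omega / (1.0 - beta)         # start at the unconditional level
    returns, variances = [], []
    z_prev = 0.0
    for _ in range(n):
        log_h = (omega + beta * log_h
                 + alpha * (abs(z_prev) - e_abs_z) + gamma * z_prev)
        h = math.exp(log_h)
        z = rng.gauss(0.0, 1.0)
        returns.append(math.sqrt(h) * z)
        variances.append(h)
        z_prev = z
    return returns, variances

rng = random.Random(3)
r, h = simulate_egarch(500, omega=-0.1, beta=0.95, alpha=0.1, gamma=-0.08, rng=rng)
```

The invertibility question the talk addresses is the reverse direction: recovering the shocks z_t from the observed returns r_t, which is what QMLE asymptotics require.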
24th Nov 2014  11:00 am  

Speaker: 
Associate Professor Tomohiro Ando, 
Affiliation: 
Graduate School of Business Administration; Keio University 
Venue: 
Room 214/215 Economic and Business Building (H69) 
Title: 
Asset pricing with high-dimensional multifactor models 
Abstract: 
This presentation analyzes multifactor models in the presence of a large number of potential observable risk factors and unobservable common/group-specific pervasive factors. We show how relevant observable factors can be found from a large given set and how to determine the number of common/group-specific unobservable factors. The proposed method allows consistent estimation of the beta coefficients in the presence of correlations between the observable and unobservable factors. Even when the group membership of each asset and the number of groups are left unspecified, we show the consistency and asymptotic normality of the estimated beta coefficients.
*joint work with Professor Jushan Bai (Columbia University) 
28th Nov 2014  11:00 am  

Speaker: 
Dr Edward Cripps, 
Affiliation: 
School of Mathematics and Statistics; University of Western Australia 
Venue: 
Room 498, Merewether Building (H04) 
Title: 
Flexible clustering of longitudinal trajectories with applications in psychology 
Abstract: 
This talk presents a Bayesian approach to clustering longitudinal trajectories into latent classes. 
© 2002-2014 The University of Sydney. Last updated: 13 October, 2014