
2014 Seminars

21st Feb 2014 - 11:00 am

Speaker:

Professor Bo Chen,

Affiliation:

University of Warwick, Coventry

Venue:

Room 498, Merewether Building (H04)

Title:

Incentive Schemes to Resolve Parkinson's Law in Project Management

Abstract:

In project management, the widely observed behavioural phenomenon known as Parkinson's Law means that the potential benefit to project completion time from early completion of individual tasks is wasted. In many projects, this leads to poor project performance. We describe an incentive compatible mechanism to resolve Parkinson's Law for projects planned under the critical path method (CPM). This scheme can be applied to any project in which the tasks allocated to a single task owner are independent, i.e., none is a predecessor of another. Our scheme is also applicable to resolving the Student Syndrome. We further describe an incentive compatible mechanism to resolve Parkinson's Law for projects planned under critical chain project management (CCPM). The incentive payments received by all task owners under CCPM weakly dominate those under CPM; moreover, the minimum guaranteed payment to the project manager remains unchanged. Finally, we develop an incentive compatible mechanism for repeated projects, where commitments to early completion carry over to subsequent projects. Our work provides an alternative to CPM planning, which is vulnerable to Parkinson's Law, and to CCPM planning, which lacks formal control of project progress.
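
Since the mechanisms are built on CPM baselines, it may help to recall that a CPM schedule is simply a longest path through the task precedence graph. A minimal sketch in Python, with hypothetical task data (not from the talk):

    # Minimal CPM forward pass: the schedule baseline is a longest path
    # through the precedence DAG (hypothetical task data).
    durations = {"A": 3, "B": 2, "C": 4, "D": 1}
    predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

    earliest_finish = {}
    def finish(task):
        if task not in earliest_finish:
            start = max((finish(p) for p in predecessors[task]), default=0)
            earliest_finish[task] = start + durations[task]
        return earliest_finish[task]

    print("project duration:", max(finish(t) for t in durations))   # 8
    print(earliest_finish)   # {'A': 3, 'B': 5, 'C': 7, 'D': 8}

Under Parkinson's Law an early task finish does not propagate: successors still start at their planned times, which is exactly the wasted benefit the incentive schemes target.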

25th Feb 2014 - 11:00 am

Speaker:

Professor Masayuki Hirukawa,

Affiliation:

Setsunan University

Venue:

Room 498, Merewether Building (H04)

Title:

Consistent Estimation of Linear Regression Models Using Matched Data*

Abstract:

Regression estimation using matched samples is not uncommon in applied economics.  This paper demonstrates that ordinary least squares estimation of linear regression models using matched samples is inconsistent and that the convergence rate to its probability limit depends on the number of matching variables.  In line with these findings, bias-corrected estimators are proposed, and their asymptotic properties are explored.  The estimators can be interpreted as a version of indirect inference estimators.  Monte Carlo simulations confirm that the bias correction works well in finite samples.
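
The flavour of the inconsistency result can be seen in a small simulation. This is a stylized sketch (not the authors' design), where the regressor is imputed for each outcome record by nearest-neighbour matching on a single matching variable z:

    # Stylized illustration: OLS on a nearest-neighbour-matched regressor is
    # inconsistent relative to the infeasible OLS on the true regressor.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    # sample 1 observes (z, y); sample 2 observes (z2, x2); x is never
    # observed together with y, so it is imputed by matching on z.
    z = rng.normal(size=n)
    x = z + rng.normal(size=n)            # sample 1's true (unseen) regressor
    y = 1.0 + 2.0 * x + rng.normal(size=n)
    z2 = rng.normal(size=n)
    x2 = z2 + rng.normal(size=n)

    # nearest-neighbour match on the single matching variable z
    order = np.argsort(z2)
    z2s, x2s = z2[order], x2[order]
    pos = np.clip(np.searchsorted(z2s, z), 1, n - 1)
    nn = np.where(z - z2s[pos - 1] < z2s[pos] - z, pos - 1, pos)
    x_matched = x2s[nn]

    def ols_slope(xv, yv):
        xc = xv - xv.mean()
        return xc @ (yv - yv.mean()) / (xc @ xc)

    print("infeasible OLS slope:", ols_slope(x, y))          # close to 2.0
    print("matched-sample slope:", ols_slope(x_matched, y))  # attenuated, near 1.0

The matched slope converges to roughly half the true slope in this design because the matched regressor is correlated with the outcome only through z; the number of matching variables governs the convergence rate, as the paper shows.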


28th Feb 2014 - 11:00 am

Speaker:

Associate Professor Danny Oron,

Affiliation:

The University of Sydney

Venue:

Room 498, Merewether Building (H04)

Title:

Scheduling controllable processing time jobs with time dependent effects and batching considerations

Abstract:

In classical scheduling models jobs are assumed to have fixed processing times. However, in many real life applications job processing times are controllable through the allocation of a limited resource. The most common and realistic model assumes a non-linear, convex relationship between the amount of resource allocated to a job and its processing time. The scheduler's task when dealing with controllable processing times is twofold: in addition to solving the underlying sequencing problem, one must establish an optimal resource allocation policy. We combine the convex resource allocation model with two widespread scheduling models in a single-machine setting. We begin by studying a batching model, whereby jobs undergo a batching, or burn-in, process in which different tasks are grouped into batches and processed simultaneously. The processing time of each batch is equal to the longest processing time among all jobs contained in the batch. The second model focuses on linear deterioration, where job processing times are a function of the waiting time prior to their execution. In the most general setting, each job comprises a basic processing time, which is independent of its start time, and a start time dependent deterioration function. Common examples of deteriorating systems include firefighting, pollution containment and medical treatments. We provide polynomial time algorithms for the makespan and total completion time criteria.
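
To illustrate the linear-deterioration ingredient, here is a toy evaluation (hypothetical job data, not from the talk): job j started at time t occupies the machine for a_j + b_j * t, so the sequence determines both makespan and total completion time.

    # Toy single-machine schedule with linearly deteriorating jobs.
    basic = [2.0, 1.0, 3.0]    # a_j, start-time-independent processing times
    rate = [0.1, 0.3, 0.2]     # b_j, deterioration rates

    def completion_times(sequence):
        t, comps = 0.0, []
        for j in sequence:
            t += basic[j] + rate[j] * t     # job takes a_j + b_j * (start time)
            comps.append(t)
        return comps

    for seq in ((0, 1, 2), (1, 0, 2)):
        c = completion_times(seq)
        print(seq, "makespan:", round(c[-1], 3),
              "total completion:", round(sum(c), 3))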

14th Mar 2014 - 11:00 am

Speaker:

Professor Michael Smith,

Affiliation:

Melbourne Business School; University of Melbourne

Venue:

Room 498 Merewether Bldg H04

Title:

Copula Modelling of Dependence in Multivariate Time Series

Abstract:

Almost all existing nonlinear multivariate time series models remain linear, conditional on a point in time or a latent regime. Here, an alternative is proposed, where nonlinear serial and cross-sectional dependence is captured by a copula model. The copula defines a multivariate time series on the unit cube. A drawable vine copula is employed, along with a factorization which allows the marginal and transitional densities of the time series to be expressed analytically. The factorization also provides simple conditions under which the series is stationary and/or Markov, while remaining parsimonious. A parallel algorithm for computing the likelihood is proposed, along with a Bayesian approach to inference based on model averaging over parsimonious representations of the vine copula. The model average estimates are shown to be more accurate in a simulation study. Two five-dimensional time series from the Australian electricity market are examined. In both examples, the fitted copula captures substantial asymmetric tail dependence, both over time and between elements in the series.

Keywords: Copula Model, Nonlinear Multivariate Time Series, Bayesian Model Averaging, Multivariate Stationarity.
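
As a much-simplified illustration of the copula idea (the talk uses drawable vine copulas; this sketch substitutes a single Gaussian pair copula for the lag-one dependence, on simulated data):

    # Empirical marginals plus one Gaussian pair copula for (y_{t-1}, y_t).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    T = 1000
    x = np.zeros(T + 1)
    for t in range(1, T + 1):               # latent Gaussian AR(1)
        x[t] = 0.6 * x[t - 1] + rng.normal()
    y = np.exp(x[1:])                       # non-Gaussian margin, serially dependent

    # 1. probability integral transform via the empirical CDF (ranks)
    u = stats.rankdata(y) / (T + 1)
    # 2. normal scores; their lag-1 correlation estimates the Gaussian
    #    pair-copula parameter, invariant to the monotone margin
    zscore = stats.norm.ppf(u)
    rho = np.corrcoef(zscore[:-1], zscore[1:])[0, 1]
    print("estimated pair-copula parameter:", round(rho, 3))   # near 0.6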

21st Mar 2014 - 11:00 am

Speaker:

Professor Suk-Joong Kim,

Affiliation:

Discipline of Finance; The University of Sydney

Venue:

Rm 498 Merewether Bldg H04

Title:

Modelling the Crash Risk of the Australian Dollar Carry Trade

Abstract:

This paper investigates the nature and the determinants of Australian dollar (AUD) carry trades using a Markov regime switching model over the period 2 January 1999 to 31 December 2012. We find that, except for a number of short periods notably surrounding the outbreak of the GFC, the AUD has been used as an investment currency in a carry trade regime. We also investigate the determinants of the AUD carry trade regime probabilities. At the daily horizon, prior to September 2008, carry trade regime probabilities are significantly lower in response to higher realized volatility of the USD/AUD exchange rate, the number of trades, and unexpected inflation and unemployment announcements. They are significantly higher when order flows are positive (more buyer- than seller-initiated trades of the AUD) and when the RBA policy interest rate unexpectedly increases. At the weekly horizon, realized skewness and net long futures positions on the AUD contribute to the carry trade regime probabilities. The post-September 2008 period, on the other hand, shows a breakdown in the relationship between carry trade regime probabilities and these determinants.

JEL: E44; F31; G15
Keywords: AUD carry trade, Regime shifting, News, Order flows, Speculative positions
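
A minimal two-regime Markov switching fit of the kind underlying the paper can be run with statsmodels; this sketch uses simulated returns, not the paper's data or exact specification:

    # Minimal two-regime Markov switching sketch on simulated returns.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 600
    state = np.zeros(n, dtype=int)
    for t in range(1, n):                   # persistent two-state chain
        stay = 0.97 if state[t - 1] == 0 else 0.90
        state[t] = state[t - 1] if rng.random() < stay else 1 - state[t - 1]
    ret = np.where(state == 0,
                   0.05 + 0.5 * rng.normal(size=n),    # calm regime
                   -0.10 + 2.0 * rng.normal(size=n))   # turbulent regime

    mod = sm.tsa.MarkovRegression(ret, k_regimes=2, trend="c",
                                  switching_variance=True)
    res = mod.fit()
    # Smoothed probability of the second regime at each date (regime labels
    # are arbitrary; check res.summary() for which has the higher variance).
    print(res.smoothed_marginal_probabilities[:5, 1])

The analogous smoothed regime probabilities are what the paper then models as functions of news, order flow and speculative positioning.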

 

28th Mar 2014 - 11:00 am

Speaker:

Professor David Allen,

Affiliation:

Centre for Applied Financial Studies; University of South Australia

Venue:

Rm 498 Merewether Bldg H04

Title:

Modelling and Forecasting Intraday Market Risk with Application to Stock Indices

Abstract:

On the afternoon of May 6, 2010 the Dow Jones Industrial Average (DJIA) plunged about 1000 points (about 9%) in a matter of minutes before rebounding almost as quickly. This was the biggest intraday point decline in the DJIA's history. A similarly dramatic change in intraday volatility was observed on April 4, 2000, when the DJIA dropped by 4.8%. These historical events present a very compelling argument for robust econometric models that can forecast intraday asset volatility. There are numerous models available in the finance literature for modelling financial asset volatility. Various Autoregressive Conditional Heteroskedastic (ARCH) time series models are widely used for modelling daily (end of day) volatility of financial assets. The family of basic GARCH models works well for daily volatility but has proven less effective for intraday volatility. The last two decades have seen research augmenting the GARCH family of models to forecast intraday volatility, the Multiplicative Component GARCH (MCGARCH) model of Engle & Sokalska (2012) being the most recent. MCGARCH models the conditional variance as the multiplicative product of daily, diurnal, and stochastic intraday volatility components. In this paper we use the MCGARCH model to forecast the intraday volatility of Australia's S&P/ASX 50 stock market index and the US Dow Jones Industrial Average (DJIA) stock market index, and to forecast their intraday Value at Risk (VaR) and Expected Shortfall (ES). As the model requires a daily volatility component, we test a GARCH-based estimate of the daily volatility component against daily realized volatility (RV) estimates obtained from the Heterogeneous Autoregressive model for Realized Volatility (HARRV). The results show that 1-minute VaR forecasts obtained from the MCGARCH model using the HARRV-based daily volatility component outperform those obtained using the GARCH-based daily volatility component.


Keywords: Intraday returns, VaR, Expected Shortfall, GARCH, realized variance.


Note: The terms volatility and variance are used interchangeably throughout the text. Unless otherwise stated both refer to the variance of the return series.

 

* Joint work with Abhay K. Singh and Robert J. Powell, School of Business, Edith Cowan University, Perth, WA
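
The multiplicative decomposition at the heart of MCGARCH can be sketched directly: the variance of the return in bin i of day t is modelled as h_t (daily) x s_i (diurnal) x q_{t,i} (stochastic intraday). A stylized sketch with simulated data (the paper uses HARRV or GARCH forecasts for h_t):

    # Sketch of the MCGARCH decomposition: var(r_{t,i}) = h_t * s_i * q_{t,i}.
    import numpy as np

    rng = np.random.default_rng(3)
    n_days, n_bins = 250, 78                # e.g. 5-minute bins, 6.5h session
    diurnal = 1.0 + 0.8 * np.cos(np.linspace(0, 2 * np.pi, n_bins)) ** 2
    daily_var = np.exp(rng.normal(scale=0.3, size=n_days))   # stand-in for h_t
    r = rng.normal(size=(n_days, n_bins)) * np.sqrt(daily_var[:, None] * diurnal)

    # step 1: deflate by the daily component (HARRV or GARCH forecast in the paper)
    r_defl = r / np.sqrt(daily_var[:, None])
    # step 2: diurnal pattern = mean squared deflated return per intraday bin
    s_hat = (r_defl ** 2).mean(axis=0)
    # step 3: residuals, to which a unit-variance GARCH(1,1) would be fitted
    eps = r_defl / np.sqrt(s_hat)
    print("diurnal estimate, first 5 bins:", np.round(s_hat[:5], 2))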

 

4th Apr 2014 - 11:00 am

Speaker:

Professor Isabel Casas,

Affiliation:

Department of Business and Economics; University of Southern Denmark

Venue:

Rm 498 Merewether Bldg H04

Title:

Time Varying Impulse Response

Abstract:

The vector autoregressive model (VAR) is a useful alternative to structural econometric models when the aim is to study macroeconomic behaviour, and the impulse response function (IRF) measures the effect of exogenous shocks on other economic variables.  It exploits the fact that macroeconomic variables are interrelated and depend on historical data.  Classical VAR and IRF estimators are too inflexible because they do not capture changes in parameters over time.  We assume that the process of interest is locally stationary and propose a nonparametric local linear estimator of a time-varying VAR and its covariance matrix.  We apply this model to the monetary policy problem relating unemployment, the interest rate and inflation.
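
The local linear estimator is easiest to see in the scalar analogue of the model, a time-varying AR(1); a sketch with simulated data (the talk's setting is the full VAR):

    # Local linear estimation of a time-varying AR(1) coefficient.
    import numpy as np

    rng = np.random.default_rng(4)
    T = 800
    phi = 0.2 + 0.5 * np.sin(np.pi * np.arange(T) / T)   # slowly varying coefficient
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = phi[t] * y[t - 1] + rng.normal()

    def local_phi(t0, h=0.1):
        """Kernel-weighted regression of y_t on y_{t-1}, local linear in
        rescaled time around t0/T, with a Gaussian kernel of bandwidth h."""
        tau = np.arange(1, T) / T
        w = np.exp(-0.5 * ((tau - t0 / T) / h) ** 2)
        X = np.column_stack([np.ones(T - 1), y[:-1], y[:-1] * (tau - t0 / T)])
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y[1:] * w))
        return beta[1]                                   # level of phi at t0

    for t0 in (100, 400, 700):
        print(t0, "estimate:", round(local_phi(t0), 3), "true:", round(phi[t0], 3))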


 

7th Apr 2014 - 01:00 pm

Speaker:

Professor Timo Teräsvirta,

Affiliation:

Department of Economics and Business; Aarhus University

Venue:

Merewether Rm 498

Title:

Specification and Testing of Multiplicative Time-Varying GARCH Models with Applications

Abstract:

In this paper we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by applying a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier type misspecification tests, which are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by Monte Carlo simulations. The modelling strategy is illustrated in practice with two empirical applications: one to daily exchange rate returns and another to daily coffee futures returns.

* joint work with Cristina Amado
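
The time-varying unconditional component described above can be written as g(t/T) = δ0 + Σ_l δ_l G(t/T; γ_l, c_l) with logistic G. A small sketch (illustrative parameter values only):

    # Smoothly time-varying unconditional variance component: a linear
    # combination of logistic transitions in rescaled time.
    import numpy as np

    def g(tau, delta0=1.0, deltas=(0.8, -0.4), gammas=(50.0, 30.0), cs=(0.3, 0.7)):
        """g(tau) = delta0 + sum_l delta_l / (1 + exp(-gamma_l * (tau - c_l)))."""
        out = np.full_like(np.asarray(tau, dtype=float), delta0)
        for d, gam, c in zip(deltas, gammas, cs):
            out += d / (1.0 + np.exp(-gam * (tau - c)))
        return out

    tau = np.linspace(0.0, 1.0, 5)
    print(np.round(g(tau), 3))
    # The conditional variance is then h_t * g(t/T); the talk's LM-type tests
    # decide how many transitions to include.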


 

11th Apr 2014 - 11:00 am

Speaker:

Professor Rodney Wolff,

Affiliation:

The University of Queensland

Venue:

Rm 498 Merewether Bldg H04

Title:

Computing Portfolio Risk with Optimised Polynomial Expansions

Abstract:

The application of orthogonal polynomial expansions to the estimation of probability density functions has considerable attraction in financial portfolio theory, particularly for accessing features of a portfolio's profit/loss distribution.  This is because such expansions are given by the sum of known orthogonal polynomials multiplied by an associated weight function.  When the weight function is the Normal density, as in classical models of financial profit/loss, the Hermite system constitutes the orthogonal polynomials.  In the case of estimators, the expansion coefficients are simply linear combinations of sample moments.  For low orders, these moments have substantive interpretations as concepts in finance, notably for tail shape.  Hence, orthogonal polynomial expansion methods provide a transparent indication of how empirical moments affect the distribution of portfolio profit/loss, and hence associated risk measures based on tail probability calculations.  However, naive applications of expansion methods are flawed.  The shape of the estimator's tail can undulate, under the influence of the constituent polynomials in the expansion, and can even exhibit regions of negative density.  This paper presents techniques to remedy these flaws and to improve the quality of risk estimation.  We show that by targeting a smooth density which is sufficiently close to the density of interest, we can obtain expansion-based estimators which do not have the shortcomings of equivalent naive estimators.  In particular, we apply optimisation and smoothing techniques which place greater weight on the tails than on the body of the distribution.  Numerical examples using both real and simulated data illustrate our approach.  We further outline how our techniques apply to a wide class of expansion methods, and indicate opportunities to extend to the multivariate case, where distributions of individual component risk factors in a portfolio can be accessed for the purpose of risk management.

* joint work with Kohei Marumo (Saitama University, Japan)
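
The naive estimator the talk improves upon is easy to reproduce: a fourth-order Gram-Charlier expansion whose coefficients are sample skewness and excess kurtosis. A sketch on simulated heavy-tailed data, which also exhibits the negative-density flaw:

    # Naive 4th-order Hermite (Gram-Charlier) density estimate.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    x = rng.standard_t(df=4.5, size=5000)       # heavy-tailed sample
    z = (x - x.mean()) / x.std()
    skew = (z ** 3).mean()                      # expansion coefficient c3
    exkurt = (z ** 4).mean() - 3.0              # expansion coefficient c4

    def gram_charlier_pdf(u):
        he3 = u ** 3 - 3 * u                    # probabilists' Hermite He3
        he4 = u ** 4 - 6 * u ** 2 + 3           # He4
        return stats.norm.pdf(u) * (1 + skew * he3 / 6 + exkurt * he4 / 24)

    grid = np.linspace(-3, 3, 13)
    print(np.round(gram_charlier_pdf(grid), 4))
    # With heavy tails the bracketed factor can dip below zero around
    # |u| ~ 1.7: the negative-density flaw the talk's optimisation repairs.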

 

17th Apr 2014 - 11:00 am

Speaker:

Dr Ping Yu,

Affiliation:

Department of Economics; University of Auckland

Venue:

Rm 498 Merewether Bldg H04

Title:

Marginal Quantile Treatment Effect

Abstract:

This paper studies estimation and inference based on the marginal quantile treatment effect. First, we illustrate the importance of the rank preservation assumption in the evaluation of quantile treatment effects, show the identifiability of the marginal quantile treatment effect, and clarify the relationship between the marginal quantile treatment effect and other quantile treatment parameters. Second, we develop sharp bounds for the quantile treatment effect with and without the monotonicity assumption, and also sufficient and necessary conditions for point identification. Third, we estimate the marginal quantile treatment effect and the associated quantile treatment effect and integrated quantile treatment effect based on distribution regression, derive the corresponding weak limits and show the validity of bootstrap inference. The inference procedure can be used to construct uniform confidence bands for quantile treatment parameters and to test unconfoundedness and stochastic dominance. We also develop goodness of fit tests to choose regressors in the distribution regression. Fourth, we conduct two counterfactual analyses: deriving the transition matrix and developing the relative marginal policy relevant quantile treatment effect parameter under policy invariance. Fifth, we compare the identification schemes in some important literature with that of the marginal quantile treatment effect, and point out advantages and also weaknesses of each scheme; e.g., Chernozhukov and Hansen (2005) concentrate mainly on the quantile treatment effect with the selection effect but without the essential heterogeneity, while Abadie, Angrist and Imbens (2002), Aakvik, Heckman and Vytlacil (2005) and Chernozhukov and Hansen (2006) suffer from some obvious misspecification problems. Meanwhile, an alternative estimator of the local quantile treatment effect is developed and its weak limit is derived. Finally, we apply the estimation methods to the well-known return to schooling dataset of Angrist and Krueger (1991) to illustrate the usefulness to practitioners of the techniques developed in this paper.
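
The distribution regression building block is a sequence of binary-response fits for 1{Y <= y} over a grid of thresholds; inverting the fitted conditional CDF yields conditional quantiles. A stylized sketch on simulated data (not the paper's estimator or inference):

    # Distribution regression: a logit for 1{Y <= y} at each threshold,
    # estimating F(y | x) on a grid.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 2000
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + (1 + 0.3 * np.abs(x)) * rng.normal(size=n)

    X = sm.add_constant(x)
    thresholds = np.quantile(y, [0.1, 0.25, 0.5, 0.75, 0.9])
    x_eval = np.array([[1.0, 1.0]])             # evaluate F(y | x = 1)
    F_hat = [sm.Logit((y <= c).astype(float), X).fit(disp=0).predict(x_eval)[0]
             for c in thresholds]
    print(np.round(F_hat, 3))
    # Separate logits need not be monotone across thresholds in finite
    # samples; rearrangement restores a proper CDF before inverting it
    # into conditional quantiles.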

7th May 2014 - 11:00 am

Speaker:

Professor Pavel Shevchenko,

Affiliation:

Senior Principal Research Scientist; CSIRO - Computational Informatics

Venue:

Rm 498 Merewether Bldg H04

Title:

Loss Distribution Approach for Operational Risk Capital Modelling: Challenges and Pitfalls

Abstract:

The management of operational risk in the banking industry has undergone explosive changes over the last decade due to substantial changes in the operational environment. Globalization, deregulation, the use of complex financial products and changes in information technology have resulted in exposure to new risks very different from market and credit risks. In response, the Basel Committee on Banking Supervision has developed the Basel II regulatory framework, which introduced an operational risk category and corresponding capital requirements. Over the past five years, many major banks have received accreditation under the Basel II Advanced Measurement Approach by adopting the Loss Distribution Approach (LDA), despite a number of unresolved methodological challenges in its implementation. Different approaches and methods are still hotly debated. This talk is devoted to quantitative issues in the operational risk LDA that remain unresolved: modelling large losses, methods for combining different data sources (internal data, external data and scenario analysis), and modelling dependence. Presented results are based on our work with the banking industry, discussions with regulators and academic research.
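
The core LDA computation is a compound distribution: the annual loss is a Poisson-distributed number of severities, and capital is a high quantile of that distribution. A minimal Monte Carlo sketch with illustrative parameters (not calibrated to any bank):

    # Compound Poisson-lognormal annual loss and its 99.9% quantile.
    import numpy as np

    rng = np.random.default_rng(7)
    lam, mu, sigma = 20.0, 10.0, 2.0            # frequency and severity parameters
    n_sims = 100_000
    counts = rng.poisson(lam, size=n_sims)
    annual_loss = np.array([rng.lognormal(mu, sigma, size=k).sum() for k in counts])

    var_999 = np.quantile(annual_loss, 0.999)   # Basel II 99.9% soundness level
    es_999 = annual_loss[annual_loss >= var_999].mean()
    print(f"99.9% VaR: {var_999:,.0f}   expected shortfall beyond: {es_999:,.0f}")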

 

16th May 2014 - 11:00 am

Speaker:

Dr Demetris Christodoulou,

Affiliation:

Discipline of Accounting; The University of Sydney

Venue:

Rm 498 Merewether Bldg H04

Title:

Identification and Interpretation of Estimates from the Rank Deficient Accounting Data Matrix

Abstract:

Regression models that rely on inputs from published financial statements ignore the fact that the observed accounting data matrix is rank deficient by design. The standard practice is to impose zero-parameter restrictions on the accounting matrix in order to achieve full rank and enable estimation, but this approach means the recovered estimates can only be interpreted as composite deviations from the omitted identity parameters. The alternative approach is to identify suitable restrictions on linear combinations, but again the interpretation of estimates is conditional on the validity of the restriction. This is a standard result in the analysis of intercepts, but there is a lack of insight on how to deal with rank deficient systems of slope coefficients, and this has proven to be an acute problem for empirical accounting research that fails to acknowledge the relevant effects. We discuss the problem of identification and interpretation of estimates from the rank deficient accounting data matrix, particularly within the context of equity pricing models.

* joint work with Professor Richard Gerlach 
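
The rank deficiency is mechanical and easy to demonstrate: because total assets equal liabilities plus equity, the three columns cannot have full rank. A small numpy sketch with simulated balance-sheet data:

    # The accounting identity A = L + E makes the data matrix rank deficient.
    import numpy as np

    rng = np.random.default_rng(8)
    n = 200
    liabilities = rng.lognormal(4.0, 0.5, n)
    equity = rng.lognormal(3.0, 0.5, n)
    assets = liabilities + equity

    X = np.column_stack([assets, liabilities, equity])
    print("columns:", X.shape[1], "rank:", np.linalg.matrix_rank(X))  # 3 vs 2

    # A zero-parameter restriction (dropping a column) restores full rank,
    # but the remaining coefficients are then composite deviations from the
    # omitted column's parameter, not free-standing effects.
    print("rank after dropping assets:", np.linalg.matrix_rank(X[:, 1:]))  # 2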

 

23rd May 2014 - 11:00 am

Speaker:

Professor Jeffrey Racine,

Affiliation:

Department of Economics; McMaster University

Venue:

Rm 498 Merewether Bldg H04

Title:

Infinite Order Cross-Validated Local Polynomial Regression

Abstract:

Many practical problems require nonparametric estimates of regression functions, and local polynomial regression has emerged as a leading approach. In applied settings practitioners often adopt either the local constant or local linear variants, or choose the order of the local polynomial to be slightly greater than the order of the maximum derivative estimate required. But such ad hoc determination of the polynomial order may not be optimal in general, while the joint determination of the polynomial order and bandwidth presents some interesting theoretical and practical challenges. In this paper we propose a data-driven approach towards the joint determination of the polynomial order and bandwidth, provide theoretical underpinnings, and demonstrate that improvements in both finite-sample efficiency and rates of convergence can thereby be obtained. In the case where the true data generating process (DGP) is in fact a polynomial whose order does not depend on the sample size, our method is capable of attaining the √n rate often associated with correctly specified parametric models, while the estimator is shown to be uniformly consistent for a much larger class of DGPs. Theoretical underpinnings are provided and finite-sample properties are examined.

Keywords:  Model Selection, Efficiency, Rates of Convergence


* joint work with Peter G. Hall 
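
The joint determination of order and bandwidth can be sketched as a brute-force cross-validation over a small grid (an illustration of the idea only; the paper's method and theory are more refined):

    # Joint leave-one-out CV over polynomial order p and bandwidth h.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 200
    x = rng.uniform(-2, 2, n)
    y = np.sin(2 * x) + rng.normal(scale=0.3, size=n)

    def loo_cv(p, h):
        """Leave-one-out CV error for local polynomial order p and Gaussian
        kernel bandwidth h; the fit at x_i is the local constant term."""
        err = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            w = np.exp(-0.5 * ((x[mask] - x[i]) / h) ** 2)
            # np.polyfit minimizes sum(w^2 * resid^2), so pass sqrt(kernel)
            coef = np.polyfit(x[mask] - x[i], y[mask], deg=p, w=np.sqrt(w))
            err += (y[i] - coef[-1]) ** 2
        return err / n

    grid = [(p, h) for p in (0, 1, 2, 3) for h in (0.1, 0.3, 0.6, 1.0)]
    best_p, best_h = min(grid, key=lambda ph: loo_cv(*ph))
    print("CV-selected order and bandwidth:", best_p, best_h)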

 

30th May 2014 - 11:00 am

Speaker:

Dr Julian Mestre; ARC Discovery Early Career Research Fellow,

Affiliation:

School of Information Technologies; The University of Sydney

Venue:

Rm 498 Merewether Bldg H04

Title:

Universal Scheduling on a Single Machine

Abstract:

Consider scheduling jobs to minimize weighted average completion time on an unreliable machine that can experience unexpected changes in processing speed or even full breakdowns. We aim for a universal, non-adaptive solution that performs well under any possible machine behavior.

Even though it is not obvious that such a schedule should exist, we show that there is a deterministic algorithm that finds a universal scheduling sequence whose value is within a factor of 4 of the value of an optimal solution tailored to the particular machine behavior. A randomized version of this algorithm attains an approximation ratio of e. Furthermore, we show that both results are best possible among universal solutions.

Finally, we study the problem of finding the best possible universal schedule. Even though the problem is NP-hard, we show that it admits a polynomial time approximation scheme.

References:
[1] "Universal sequencing on a single machine" by Epstein, Levin, Marchetti-Spaccamela, Megow, Mestre, Skutella, and Stougie. SIAM Journal on Computing, 2013.
[2] "Instance-sensitive robustness guarantees for sequencing with unknown packing and covering constraints" by Megow and Mestre. Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, 2013.

 

13th Jun 2014 - 11:00 am

Speaker:

Professor Peter Schmidt,

Affiliation:

Department of Economics; Michigan State University

Venue:

Rm. 498 Merewether Bldg H04

Title:

A Post-Truncation Parameterization of Truncated Normal Technical Inefficiency

Abstract:

In this paper we consider a stochastic frontier model in which the distribution of technical inefficiency is truncated normal. In standard notation, technical inefficiency u is distributed as N+(µ, σ²).  This distribution is affected by some environmental variables z that may or may not affect the level of the frontier but that do affect the shortfall of output from the frontier.  We will distinguish the pre-truncation mean (µ) and variance (σ²) from the post-truncation mean µ* = E(u) and variance σ*² = var(u).  Existing models parameterize the pre-truncation mean and/or variance in terms of the environmental variables and some parameters.  Changes in the environmental variables cause changes in the pre-truncation mean and/or variance, and imply changes in both the post-truncation mean and variance.  The expressions for the changes in the post-truncation mean and variance can be quite complicated.  In this paper, we suggest parameterizing the post-truncation mean and variance instead.  This leads to simple expressions for the effects of changes in the environmental variables on the mean and variance of u, and it allows the environmental variables to affect the mean of u only, or the variance of u only, or both.

* joint work with Christine Amsler (Michigan State University) and Wen-Jen Tsay (Academia Sinica)
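
The pre- versus post-truncation distinction is easy to compute: for u ~ N+(µ, σ²), the post-truncation mean is µ* = µ + σ φ(µ/σ)/Φ(µ/σ). A quick check with scipy:

    # Post-truncation moments of u ~ N+(mu, sigma^2): scipy agrees with the
    # closed form mu* = mu + sigma * phi(mu/sigma) / Phi(mu/sigma).
    import numpy as np
    from scipy import stats

    mu, sigma = 0.5, 1.0
    a = (0.0 - mu) / sigma                      # standardized truncation point
    mean_u, var_u = stats.truncnorm.stats(a, np.inf, loc=mu, scale=sigma,
                                          moments="mv")
    lam = stats.norm.pdf(mu / sigma) / stats.norm.cdf(mu / sigma)
    print("post-truncation mean:", float(mean_u), "closed form:", mu + sigma * lam)
    print("post-truncation var :", float(var_u), "pre-truncation var:", sigma ** 2)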

1st Aug 2014 - 11:00 am

Speaker:

Professor Robin Sickles,

Affiliation:

Chair of Economics; Rice University

Venue:

Rm 498 Merewether Bldg H04

Title:

Algorithmic Trading, Market Timing and Market Efficiency

Abstract:

In recent years large panel data models have been developed to make full use of the information content of such datasets. Despite the large number of contributions, an important issue that is rarely pursued in much of the existing literature concerns the risk of neglecting structural breaks in the data generating process.  While a substantial literature on structural break analysis exists for univariate time series, relatively few techniques have been developed for panel data models.  This paper provides a new treatment of the problem of multiple structural breaks that occur at unknown dates in the panel model parameters.  Our method is related to the Haar wavelet technique, which we adjust according to the structure of the explanatory variables in order to detect the change points of the parameters consistently.  We apply the technique to high frequency securities data to examine the effects of algorithmic trading (AT) on standard measures of market quality that proxy for some dimension of liquidity.  Specifically, we examine whether AT has time-varying effects on liquidity and discuss asset pricing implications.
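
A stylized univariate version of the wavelet idea: Haar detail coefficients of a coefficient path spike near a break point (toy data; the paper's procedure operates on panel model parameters):

    # Toy Haar-based break detection on a noisy coefficient path.
    import numpy as np

    rng = np.random.default_rng(10)
    T = 256
    beta_path = (np.where(np.arange(T) < 149, 1.0, 1.6)
                 + rng.normal(scale=0.1, size=T))

    def haar_details(x):
        """One level of the Haar transform: scaled differences of adjacent pairs."""
        pairs = x.reshape(-1, 2)
        return (pairs[:, 1] - pairs[:, 0]) / np.sqrt(2)

    level = beta_path.copy()
    for depth in range(1, 4):
        details = haar_details(level)
        level = level.reshape(-1, 2).mean(axis=1)        # coarser approximation
        k = int(np.argmax(np.abs(details)))
        print(f"level {depth}: largest detail near observation {k * 2 ** depth}")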

 

8th Aug 2014 - 11:00 am

Speaker:

Dr Peter Exterkate,

Affiliation:

Department of Economics and Business; Aarhus University

Venue:

Rm 498 Merewether Bldg H04

Title:

Distribution Forecasting in Non-Linear Models with Stochastic Volatility

Abstract:

Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors.  This makes it a powerful forecasting tool, which is applicable in many different contexts. However, it is usually applied only to independent and identically distributed observations.  This paper introduces a variant of kernel ridge regression for time series with stochastic volatility.  The conditional mean and volatility are both modelled as nonlinear functions of observed variables. We set up the estimation problem in a Bayesian manner and derive a Gibbs sampler to obtain draws from the predictive distribution.  A simulation study and an application to forecasting the distribution of returns on the S&P 500 index are presented, and we find that our method outperforms most popular GARCH variants in terms of one-day-ahead predictive ability.  Notably, most of this improvement comes from a more adequate approximation to the tails of the distribution.
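
The kernel ridge backbone is compact enough to state in full; this sketch shows plain kernel ridge regression with an RBF kernel (the paper's extension adds stochastic volatility and Gibbs sampling, which are not attempted here):

    # Plain kernel ridge regression with an RBF kernel.
    import numpy as np

    rng = np.random.default_rng(11)
    n = 300
    X = rng.normal(size=(n, 2))
    y = np.sin(X[:, 0]) * np.cosh(X[:, 1] / 2) + 0.1 * rng.normal(size=n)

    def rbf_kernel(A, B, length=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2 * length ** 2))

    lam = 0.1                                   # ridge penalty
    # dual coefficients: alpha = (K + lam I)^{-1} y
    alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(n), y)

    x_new = np.array([[0.5, -0.2]])
    print("prediction:", rbf_kernel(x_new, X) @ alpha)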

15th Aug 2014 - 11:00 am

Speaker:

Associate Professor Jamie Alcock,

Affiliation:

Discipline of Finance; The University of Sydney

Venue:

Rm 498 Merewether Bldg H04

Title:

Characterising the Asymmetric Dependence Premium

Abstract:

We examine the relative importance of asymmetric dependence (AD) and systematic risk in the cross-section of US equities. Using a β-invariant AD metric, we demonstrate a lower-tail dependence premium equivalent to 35% of the market risk premium, compared with an upper-tail dependence discount that is 41% of the market risk premium. Lower-tail dependence displays a constant price over 1989-2009, while the discount associated with upper-tail dependence appears to have been increasing in recent years. Subsequently, we find that return changes in US equities over 2007-2009 reflected changes in systematic risk and upper-tail dependence. This suggests that both systematic risk and AD should be managed in order to reduce the return impact of market downturns. Our findings have substantial implications for the cost of capital, investor expectations, portfolio management and performance assessment.

* joint work with Anthony Hatherley
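
The raw ingredient behind an asymmetric dependence metric is the difference between lower- and upper-tail dependence; an empirical sketch on simulated data (the paper's β-invariant construction is more involved than this):

    # Empirical lower- vs upper-tail dependence between an asset and the market.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(12)
    n = 5000
    m = rng.normal(size=n)                                   # market return
    # asset is more tightly linked to the market in down markets
    r = 0.8 * m + np.where(m < -1, 0.4, 1.0) * rng.normal(size=n)

    u = stats.rankdata(m) / (n + 1)
    v = stats.rankdata(r) / (n + 1)
    for q in (0.05, 0.10):
        lower = np.mean((u < q) & (v < q)) / q               # ~ P(V < q | U < q)
        upper = np.mean((u > 1 - q) & (v > 1 - q)) / q
        print(f"q={q:.2f}: lower-tail {lower:.3f}  upper-tail {upper:.3f}")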