
2011 Seminars

18th Mar 2011 - 11:00 am

Speaker:

Dr Jinwen Ou,

Affiliation:

Discipline of Operations Management and Econometrics, The University of Sydney

Venue:

Room 498, Merewether Building (H04)

Title:

An O(T^3 log T) Algorithm for Economic Lot Sizing with Constant Capacities and Concave Inventory Costs

Abstract:

This paper studies the classical single-item economic lot-sizing problem with constant capacities, linear production costs and concave inventory costs, where backlogging is allowed. We propose an O(T^3 log T) optimal algorithm for the problem, which improves upon the O(T^4) running time of the famous algorithm developed by Florian and Klein (1971). Instead of using the standard dynamic programming approach of predetermining the minimal costs for all possible subplans, we develop a different dynamic programming algorithm that admits a more efficient implementation.
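As a point of reference, the sketch below shows the simpler regeneration-interval dynamic programme for the uncapacitated lot-sizing problem (the classical Wagner-Whitin recursion). It is not the paper's O(T^3 log T) algorithm and the demands and costs are hypothetical; it only illustrates the kind of subplan-based recursion the abstract refers to.

```python
# Minimal Wagner-Whitin style recursion for uncapacitated lot sizing (for
# illustration only; not the algorithm of the talk). Demands/costs are made up.

def lot_sizing_uncapacitated(demand, setup_cost, hold_cost):
    """F[t] = min cost of meeting demand for periods 1..t, with F[0] = 0.
    F[t] = min over s <= t of F[s-1] + setup cost + holding cost of
    producing the demand for periods s..t in period s."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for s in range(1, t + 1):
            hold = sum(hold_cost * (j - s) * demand[j - 1] for j in range(s, t + 1))
            F[t] = min(F[t], F[s - 1] + setup_cost + hold)
    return F[T]

# Example: four periods of demand, unit holding cost, fixed setup cost.
print(lot_sizing_uncapacitated([20, 30, 10, 40], setup_cost=50.0, hold_cost=1.0))
```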

1st Apr 2011 - 11:00 am

Speaker:

Professor Stephen G Walker,

Affiliation:

University of Kent, UK

Venue:

Room 498, Merewether Building

Title:

Best Guess and Learning Models

Abstract:

It is to be argued that Bayesian methods are a combination of making best guesses and learning. For this idea to be formulated it is necessary to fully understand what a Bayesian believes she is learning about and what she is making guesses about. This talk will present a framework in which best guesses are revised via a learning process based on observations. Illustrations involving model selection will be presented.
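As a generic illustration of the "best guess plus learning" idea, the sketch below revises a Beta best guess about a success probability as Bernoulli observations arrive. This is a textbook conjugate-updating example with hypothetical data, not the framework of the talk.

```python
# Bayesian updating as "best guess plus learning": Beta prior, Bernoulli data.
a, b = 1.0, 1.0                      # Beta(1,1) prior: the initial best guess
observations = [1, 0, 1, 1, 0, 1]    # hypothetical 0/1 observations

for y in observations:               # learning: conjugate posterior updating
    a += y
    b += 1 - y

print(a / (a + b))                   # the revised best guess (posterior mean)
```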

7th Apr 2011 - 02:00 pm

Speaker:

Dr Maurice J.G. Bun,

Affiliation:

Tinbergen Institute and Amsterdam School of Economics, University of Amsterdam, The Netherlands

Venue:

Room 498, Merewether Building

Title:

Identification pathologies and their effects on GMM

Abstract:

We apply a range of GMM-based inference procedures to the first-order dynamic panel data model, using moment conditions from the first-differenced model, the levels model, or both. In addition to standard Wald and LM procedures we consider some recently developed weak-instrument-robust GMM statistics. Using Monte Carlo simulation, we assess the finite-sample size and power of hypothesis tests on the autoregressive coefficient. Both our theoretical and simulation results indicate that conventional tests are subject to considerable size distortions in a substantial part of the parameter space, and that power declines close to the unit root. Weak-instrument-robust statistics, however, have good size properties while, in the system model, maintaining sufficient power in some cases.
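For readers unfamiliar with the moment conditions referred to above, the sketch below builds the textbook Arellano-Bond instrument matrix for the first-differenced equations of a dynamic panel AR(1): the equation at time t may use y_1, ..., y_{t-2} as instruments. It is a stylised illustration with a hypothetical series, not the authors' code.

```python
# Textbook difference-GMM (Arellano-Bond) instrument matrix for one individual.
import numpy as np

def ab_instruments(y):
    """y: array of length T; returns the block-diagonal instrument matrix
    with one row per first-differenced equation (t = 3..T)."""
    T = len(y)
    rows = []
    for t in range(3, T + 1):                       # differenced equation at time t
        z = np.zeros((T - 1) * (T - 2) // 2)        # total columns across all blocks
        start = sum(range(1, t - 2))                # offset of this equation's block
        z[start:start + (t - 2)] = y[: t - 2]       # instruments y_1 .. y_{t-2}
        rows.append(z)
    return np.vstack(rows)

y_i = np.array([0.5, 0.7, 0.4, 0.9, 1.1])           # hypothetical series, T = 5
print(ab_instruments(y_i))
```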

15th Apr 2011 - 11:00 am

Speaker:

Dr Archie Chapman,

Affiliation:

The Discipline of Operations Management and Econometrics

Venue:

Room 498, Merewether Building

Title:

Distributed Optimisation by Learning in Games with Pure Nash Equilibria

Abstract:

An emerging framework for optimisation in large distributed problems is that of multi-agent systems, in which control of a system is partitioned among several autonomous decision makers. Within this context, potential games are used as a design template for constructing agents' utility functions, resulting in games with pure strategy Nash equilibria that can be solved for using iterative learning algorithms. This presentation investigates the convergence properties of one such iterative learning procedure, called fictitious play, in repeated normal form games.  Specifically, using methods from the theory of differential inclusions and stochastic approximations, we analyse the rest points of fictitious play.  We also discuss how to extend fictitious play's use to solve games with unknown rewards or perturbations to action observations, and also to multi-agent sequential decision-making problems (such as Decentralised-POMDPs).
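To make the learning procedure concrete, here is a minimal fictitious-play sketch for a two-player normal-form game: each player best-responds to the empirical frequencies of the opponent's past actions. The coordination-game payoffs are hypothetical and the sketch does not reproduce the talk's differential-inclusion analysis or its extensions.

```python
# Minimal fictitious play in a 2x2 coordination game (illustration only).
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # row player's payoffs
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # column player's payoffs

counts_row = np.ones(2)      # empirical counts of the row player's past actions
counts_col = np.ones(2)      # empirical counts of the column player's past actions

for _ in range(1000):
    belief_col = counts_col / counts_col.sum()   # row player's belief about column
    belief_row = counts_row / counts_row.sum()   # column player's belief about row
    a_row = int(np.argmax(A @ belief_col))       # best response of the row player
    a_col = int(np.argmax(belief_row @ B))       # best response of the column player
    counts_row[a_row] += 1
    counts_col[a_col] += 1

# In this game the empirical frequencies settle on a pure Nash equilibrium.
print(counts_row / counts_row.sum(), counts_col / counts_col.sum())
```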

19th Apr 2011 - 04:00 pm

Speaker:

Dr Enrico Gerding,

Affiliation:

School of Electronics and Computer Science University of Southampton, UK

Venue:

Room 498, Merewether Building

Title:

Online Mechanism Design for Electric Vehicle Charging

Abstract:

Plug-in hybrid electric vehicles are expected to place a considerable strain on local electricity distribution networks, requiring charging to be coordinated in order to accommodate capacity constraints. In this talk I will present a novel online auction protocol for this problem, wherein vehicle owners use computer agents to bid for power and also state time windows in which a vehicle is available for charging. This is a multi-dimensional mechanism design domain, with owners having non-increasing marginal valuations for each subsequent unit of electricity. In our design, we couple a greedy allocation algorithm with the occasional “burning” of allocated power, leaving it unallocated, in order to adjust an allocation and achieve monotonicity and thus truthfulness. We consider two variations: burning at each time step or on-departure. Both mechanisms are evaluated in depth, using data from a real-world trial of electric vehicles in the UK to simulate system dynamics and valuations. The mechanisms provide higher allocative efficiency than a fixed price system, are almost competitive with a standard scheduling heuristic which assumes non-strategic agents, and can sustain a substantially larger number of vehicles at the same per-owner fuel cost saving than a simple random scheme.
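The sketch below shows only the greedy per-time-step allocation skeleton that such a mechanism could build on: at each step, units of capacity go to the highest outstanding marginal bids of vehicles that are present. The "burning" adjustment and the payment rule of the actual mechanism are not shown, and the agents, windows and bids are hypothetical.

```python
# Greedy allocation of per-step charging capacity by marginal bid (sketch only).
import heapq

def greedy_allocate(agents, capacity_per_step, horizon):
    """agents: dict name -> (arrival, departure, list of decreasing marginal bids)."""
    allocation = {name: 0 for name in agents}
    for t in range(horizon):
        heap = []                                   # marginal bids of present agents
        for name, (arr, dep, bids) in agents.items():
            if arr <= t < dep and allocation[name] < len(bids):
                heapq.heappush(heap, (-bids[allocation[name]], name))
        for _ in range(capacity_per_step):
            if not heap:
                break
            _, name = heapq.heappop(heap)           # highest outstanding marginal bid
            allocation[name] += 1
            arr, dep, bids = agents[name]
            if allocation[name] < len(bids):
                heapq.heappush(heap, (-bids[allocation[name]], name))
    return allocation

# Two hypothetical vehicles with availability windows and decreasing marginal bids.
agents = {"car_a": (0, 3, [10.0, 6.0, 2.0]), "car_b": (1, 4, [8.0, 7.0, 1.0])}
print(greedy_allocate(agents, capacity_per_step=1, horizon=4))
```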

 

28th Apr 2011 -

Speaker:

Dr Vadim Timkovsky,

Affiliation:

The University of Sydney Business School

Venue:

Room 498, Merewether Building

Title:

Why April 2007 Triggered October 2008

Abstract:

Current margin calculation practice uses two approaches to margining investment portfolios: strategy-based and risk-based. Our observations of margin rule changes against margin debt behaviour in the U.S. from 2003 through 2008 support the thesis that the use of the risk-based approach to margining customer accounts with positions in stocks and stock options since April 2007 influenced and triggered the U.S. stock market crash in October 2008. This paper also presents mathematical models showing that the strategy-based approach is, at this point, the most appropriate one for margining security portfolios in customer margin accounts, while the risk-based approach can work efficiently only for margining index portfolios in customer margin accounts and inventory portfolios of brokers. We also show that applying the risk-based approach to security portfolios in customer margin accounts is very risky and can result in a pyramid of debt in a bullish market and a pyramid of loss in a bearish market. We also provide recommendations on ways to set appropriate margin requirements.

6th May 2011 - 11:00 am

Speaker:

Professor Jaya Krishnakumar,

Affiliation:

University of Geneva

Venue:

Room 498, Merewether Building

Title:

Measuring welfare: Latent variable models for happiness and capabilities in the presence of unobservable heterogeneity

Abstract:

The paper contributes to the operationalisation of the capabilities approach to welfare economics by developing and analysing data on the freedoms of adults in Argentina. Specifically, it reports on the development and delivery of a survey instrument for measuring capabilities, calculates for each respondent a Nehring–Puppe type index of capabilities and examines the distribution of index scores. The main analytic part of the paper then develops a generalized linear latent and mixed model (GLLAMM) for assessing the impact of capabilities on life satisfaction, in which allowance is made for (i) unobserved heterogeneity and (ii) possible endogeneity, by introducing latent individual effects and by instrumenting capability variables using income and other socio-economic variables. Our empirical results show that empathy, self-worth, goal autonomy, discrimination, safety and stress are statistically significant determinants of life satisfaction, in decreasing order of importance. The paper concludes by suggesting that, if replicated, the findings have profound implications for the conceptualisation and evaluation of economic progress.
This is a joint work with Paul Anand and Ngoc Bich Tran.

27th May 2011 - 11:00 am

Speaker:

Professor Min Ding,

Affiliation:

Smeal College of Business, The Pennsylvania State University

Venue:

MLR6, Merewether Building (H04)

Title:

Incentive Aligned Data Collection

Abstract:

Incentive alignment aims to motivate participants to reveal their preferences truthfully, and it has been used in a variety of contexts such as conjoint analysis (Ding, Grewal and Liechty 2005; Ding 2007). Incentive alignment also makes it more feasible for researchers and managers to design new data collection methods, one of which is discussed in this presentation. Extant preference measurement research, including conjoint analysis, is done in the isolation of one's own mind; that is, it remains completely silent on the explicit influence of others in the formation of consumer preferences. This paper proposes a holistic framework of preference, PIE, as well as a measurement method to remedy this problem. The new paradigm posits that consumers evaluate product attributes using (potentially) three perspectives, determined by some combination of the product's physical profile (P), the focal customer's idiosyncratic attributes (I), and an external target group's value system (E), the last factor allowing for influences from others. To provide an empirically feasible way to collect information consistent with this framework, we propose and test an incentive-aligned, group-sourced mechanism, which mimics a real-life consultation of a customer with her "friends" in purchase decision making. The results provide support for the PIE framework, including superior predictive power.

10th Jun 2011 - 11:00 am

Speaker:

Long Gao, Assistant Professor of Operations Management,

Affiliation:

AGSM, University of California

Venue:

Room 498, Merewether Building (H04)

Title:

Managing an Available-to-Promise Assembly System with Dynamic Short-Term Pseudo Order Forecast

Abstract:

We study an order promising problem in a multi-class, Available-to-Promise (ATP) assembly system in the presence of pseudo orders. A pseudo order refers to a tentative customer order whose attributes, such as the likelihood of an actual order, order quantity and confirmation timing, can change dynamically over time. Each product is assembled from two major components, with one component requiring one unit of production capacity and one unit of component inventory. An accepted order must be filled before a positive delivery lead time. The underlying order acceptance decisions involve tradeoffs between committing resources (production capacity and component inventory) to low-reward firm orders and reserving resources for high-reward orders. We develop a Markov chain model that captures the key characteristics of pseudo orders, including demand lumpiness, non-stationarity and volatility. We then formulate a stochastic dynamic program for the ATP assembly (ATP-A) system that embeds the Markov chain model as a short-term forecast for pseudo orders. We show that the optimal order acceptance policy is characterized by class prioritization, resource imbalance-based rationing and capacity-inventory-demand matching. In particular, the rationing level for each class is determined by a critical value that depends on the resource imbalance level, defined as the net difference between the production capacity and component inventory levels. Extensive numerical tests underscore the importance of the key properties of the optimal policy and provide operational and managerial insights on the value of the short-term demand forecast and the robustness of the optimal policy.

24th Jun 2011 - 11:00 am

Speaker:

Haoying Sun,

Affiliation:

Ph.D. Candidate, Department of Information, Risk, and Operations Management, McCombs School of Business, The University of Texas at Austin

Venue:

Room 498, Merewether Building (H04)

Title:

Assortment choices of competing retailers with uninformed consumers

Abstract:

For many products, some (uninformed) consumers may need to experience the touch and feel in order to determine their valuation. In addition, consumers differ in their costs of searching for the ideal product. Under such circumstances, we show that heterogeneous product assortment breadth among two competing retailers can emerge as an equilibrium. Specifically, we consider a market with two products and two retailers, and show the conditions under which there exists an equilibrium in which one retailer carries a full line and the other sells one product only, even though the demand structure for the two products is symmetric and the cost structures of the two retailers are the same. Under this equilibrium, the full-line retailer expands market demand by attracting the uninformed consumers with large search costs, and the single-product retailer passes the savings in carrying costs on to the informed consumers by setting a lower price. Therefore, the two retailers soften the competition between them and achieve higher profits.
 
This is a joint work with Steve Gilbert.
 

22nd Jul 2011 - 11:00 am

Speaker:

Professor Christopher S. Tang,

Affiliation:

Edward W. Carter Professor of Business Administration, UCLA Anderson School

Venue:

Room 498, Merewether Building (H04)

Title:

Managing Opportunistic Supplier Product Adulteration: Deferred Payments, Inspection, and Combined Mechanisms

Abstract:

Recent cases of product adulteration by foreign suppliers have compelled many manufacturers to re-think approaches to deterring suppliers from cutting corners, especially when manufacturers cannot fully monitor and control the suppliers' actions. Recognizing that process certification programs, such as ISO9000, do not guarantee unadulterated products and that product liability and product warranties with foreign suppliers are rarely enforceable, manufacturers turn to mechanisms that make payments to the suppliers contingent on no defects being discovered. In this paper we study: (a) the deferred payment mechanism --- the buyer pays the supplier after the deferred payment period only if no adulteration has been discovered by the customers; (b) the inspection mechanism --- the buyer pays the supplier immediately, contingent on the product passing inspection; and (c) the combined mechanism --- a combination of the deferred payment and inspection mechanisms. We find the optimal contracts for each mechanism, and describe the Nash equilibria of the inspection sub-games for the inspection and combined mechanisms. The inspection mechanism cannot completely deter the suppliers from product adulteration, while the deferred payment mechanism can. Surprisingly, the combined mechanism is redundant: either the inspection or the deferred payment mechanism alone performs just as well. Finally, the four factors that determine the dominance of the deferred payment mechanism over the inspection mechanism are: (a) the inspection cost relative to inspection accuracy, (b) the buyer's liability for adulterated products, (c) the difference in financing rates for the buyer and the supplier relative to the defects discovery rate by customers, and (d) the difference in production costs for adulterated and unadulterated products.

* Joint work with Volodymyr Babich, Georgetown University

29th Jul 2011 - 11:00 am

Speaker:

Dr Yoni Nazarathy,

Affiliation:

Swinburne University of Technology

Venue:

Room 498, Merewether Building

Title:

Finite Buffer Fluid Networks with Overflows

Abstract:

Consider an abstraction of a material processing network with N nodes. Each node is equipped with a finite buffer of capacity K_i and with a processor capable of working at rate mu_i. Material is modelled as a continuous (ideal fluid) flow and arrives at the nodes exogenously according to rates alpha_i. When material arrives at node i and finds less than K_i in the buffer, it either enters the buffer or is immediately processed if the buffer is empty. Material processed at node i can either leave the system or move to other nodes, according to proportions p_{i,j} (the proportion of material leaving i which goes to j), with sum_j p_{i,j} <= 1; when the inequality is strict, the remaining material leaves the system. When material arrives to find a full buffer it is diverted (overflows) according to proportions q_{i,j}, defined similarly to the p_{i,j}.
 
The case of random discrete memoryless (Poisson/exponential) flows and K_i = infinity is the well-known Jackson network, which can be represented as a Markov chain with a product-form solution in the stable case. In contrast, finite K_i typically renders the Markov chain intractable. In this case it is fruitful to first analyse the behaviour of the system with deterministic continuous flows. In this respect we formulate traffic equations and show that they can be represented as a linear complementarity problem (LCP). We further find a polynomial-time algorithm for the solution of the equations (note that the general LCP is NP-complete). Finally, the solution of the traffic equations can be used to approximate the sojourn time distribution of customers through the network, which can be represented as a discrete phase-type distribution.
Joint work with Erjen Lefeber from Eindhoven University of Technology.
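As a minimal numerical reference point: with infinite buffers (the Jackson-network setting mentioned above) the traffic equations reduce to lambda = alpha + P' lambda and are solved by a single linear solve, as sketched below with hypothetical two-node rates. The finite-buffer model of the talk replaces this with a linear complementarity problem, which is not shown here.

```python
# Infinite-buffer traffic equations solved by linear algebra (illustration only).
import numpy as np

alpha = np.array([1.0, 0.5])          # exogenous arrival rates alpha_i
P = np.array([[0.0, 0.6],             # routing proportions p_{i,j};
              [0.2, 0.0]])            # row sums <= 1, the rest leaves the system

lam = np.linalg.solve(np.eye(2) - P.T, alpha)   # total throughput at each node
print(lam)                                      # compare against service rates mu_i
```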

19th Aug 2011 - 11:00 am

Speaker:

Associate Professor Tommaso Proietti,

Affiliation:

Discipline of Operations Management and Econometrics

Venue:

Room 498, Merewether Building

Title:

The Variance Profile

Abstract:

The variance profile is defined as the power mean of the spectral density function of a stationary stochastic process. It is a continuous and non-decreasing function of the power parameter, p, which returns the minimum of the spectrum (p → −∞), the interpolation error variance (harmonic mean, p = −1), the prediction error variance (geometric mean, p = 0), the unconditional variance (arithmetic mean, p = 1) and the maximum of the spectrum (p → ∞). The variance profile provides a useful characterisation of a stochastic process; we focus in particular on the class of fractionally integrated processes. Moreover, it enables a direct and immediate derivation of the Szegő–Kolmogorov formula and the interpolation error variance formula. The paper proposes a non-parametric estimator of the variance profile based on the power mean of the smoothed sample spectrum, and proves its consistency and its asymptotic normality. From the empirical standpoint, we propose and illustrate the use of the variance profile for estimating the long memory parameter.
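To fix ideas, the sketch below evaluates power means of a crudely smoothed sample spectrum for a few values of p, in the spirit of the nonparametric estimator described above. The AR(1) data and the moving-average smoother are purely illustrative and do not reproduce the paper's estimator or its asymptotics.

```python
# Power means of a smoothed periodogram (illustrative sketch of a variance profile).
import numpy as np

rng = np.random.default_rng(0)
T = 2000
x = np.zeros(T)
for t in range(1, T):                        # simulate an AR(1) with phi = 0.6
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

fft = np.fft.rfft(x - x.mean())              # periodogram at positive Fourier frequencies
I = (np.abs(fft[1:]) ** 2) / (2 * np.pi * T)

kernel = np.ones(9) / 9.0                    # simple moving-average smoothing
f_hat = np.convolve(I, kernel, mode="same")

def power_mean(f, p):
    if p == 0:
        return np.exp(np.mean(np.log(f)))    # geometric mean (prediction error variance)
    return np.mean(f ** p) ** (1.0 / p)

for p in (-1.0, 0.0, 1.0):                   # harmonic, geometric, arithmetic means
    print(p, 2 * np.pi * power_mean(f_hat, p))
```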

23rd Sep 2011 - 11:00 am

Speaker:

Associate Professor Roselyne Joyeux,

Affiliation:

Macquarie University

Venue:

Room 498, Merewether Building

Title:

Macro Fundamentals as a Source of Stock Market Volatility

Abstract:

In order to shed new light on the influence of volume and economic fundamentals on the volatility of the Chinese stock market, we follow the methodology introduced by Engle, Ghysels and Sohn (2009) (EGS) and Engle and Rangel (2008). We show that the Chinese A-share market exhibited speculative characteristics before WTO entry in late 2001. After that date, however, macroeconomic fundamentals play an increasing role, especially CPI inflation, and the influence of volume on the A-share index vanishes. The B-share market has shown speculative characteristics since it was opened to domestic investors in 2001.
 
This is a joint work with Eric Girardin, AMSE-GREQAM-University Aix-Marseille

14th Oct 2011 - 11:00 am

Speaker:

Dr Davide Delle Monache,

Affiliation:

Università di Roma Tor Vergata, Rome

Venue:

Room 498, Merewether Building (H04)

Title:

The effect of misspecification in models for extracting trends and cycles

Abstract:

This article deals with the specification of trends and cycles in an unobserved components model. We establish a general framework for assessing the robustness of misspecified linear time series models based on an MSE criterion, and show how different criteria can be used for different purposes: forecasting, filtering and smoothing. We generalise the algorithms in Harvey and Delle Monache (2009, HDM hereafter) to allow for all possible misspecifications in linear SSF models, concentrating on models for extracting trends and cycles, and assess robustness to various sources of possible misspecification. We investigate the discrepancy between the estimated parameters (sample estimates) and the 'pseudo-true values'; this yields interesting insights regarding the unknown data generating process (DGP). For example, if the true DGP is a correlated components model, as advocated in the recent literature, then: (i) the calibrated HP filter is highly inefficient for filtering as well as for smoothing; (ii) an uncorrelated components model can still yield a filter with high efficiency, so the cycles extracted in real time are close to each other; and (iii) the sample estimates do not match the 'pseudo-true values'. Therefore, the differences between the cycles extracted by the alternative specifications are not due to the correlation misspecification, and the correlated components model does not match the true DGP.
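For reference, the calibrated HP filter mentioned above can be written as the penalised least-squares smoother trend = (I + lambda D'D)^{-1} y, with D the second-difference matrix. The sketch below implements that formula on a simulated series with the conventional quarterly value lambda = 1600; it is only a baseline, not the model-based decompositions studied in the talk.

```python
# HP filter as a penalised least-squares smoother (baseline illustration).
import numpy as np

def hp_filter(y, lam=1600.0):
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]          # second-difference rows
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend                        # trend and cycle

rng = np.random.default_rng(1)
y = np.cumsum(rng.standard_normal(200)) + 0.05 * np.arange(200)  # illustrative series
trend, cycle = hp_filter(y)
print(trend[:5], cycle[:5])
```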

21st Oct 2011 - 11:00 am

Speaker:

Dr Anastasios Panagiotelis,

Affiliation:

Monash University

Venue:

Room 498, Merewether Building (H04)

Title:

Pair Copula Constructions for Multivariate Discrete Data

Abstract:

Rich multivariate discrete datasets are increasingly found in diverse fields including econometrics, finance, biometrics and psychometrics. Many common models for multivariate discrete data are equivalent to popular copula models; for example, the multivariate probit can be expressed in terms of a Gaussian copula. Our contribution is to introduce a new class of models for multivariate discrete data based on Pair Copula Constructions (PCCs), which has two major advantages. First, PCCs capture more flexible dependence structures than more restrictive existing approaches, including the Gaussian copula. Second, the computational burden of evaluating the likelihood for an m-dimensional discrete PCC grows only quadratically with m. This compares favourably to existing models, for which computing the likelihood requires either the evaluation of 2^m terms or slow numerical integration methods. We demonstrate the high quality of maximum likelihood estimates both in a simulated setting and in two real data applications, including a longitudinal discrete dataset. We show that the use of asymmetric pair copulas in a PCC can improve both the in-sample fit of our models and the out-of-sample prediction of joint outcomes that lie in the tails of the multivariate data.
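The sketch below shows the basic building block behind copula models for discrete data: the joint pmf of two discrete margins is a finite difference of the copula over a rectangle, which is where the 2^m terms in a general m-dimensional model come from. A bivariate Gaussian copula and Poisson margins are used purely for illustration; this is not the PCC of the talk.

```python
# Bivariate discrete pmf via rectangle probabilities of a Gaussian copula (sketch).
import numpy as np
from scipy.stats import norm, multivariate_normal, poisson

rho = 0.5
cov = [[1.0, rho], [rho, 1.0]]

def gauss_copula(u, v):
    """C(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho), with clipping for stability."""
    if u <= 0.0 or v <= 0.0:
        return 0.0
    z = norm.ppf(np.clip([u, v], 1e-12, 1 - 1e-12))
    return float(multivariate_normal.cdf(z, mean=[0.0, 0.0], cov=cov))

def joint_pmf(y1, y2, F1, F2):
    """P(Y1 = y1, Y2 = y2) as a finite difference of the copula over a rectangle."""
    return (gauss_copula(F1(y1), F2(y2))
            - gauss_copula(F1(y1 - 1), F2(y2))
            - gauss_copula(F1(y1), F2(y2 - 1))
            + gauss_copula(F1(y1 - 1), F2(y2 - 1)))

F1 = lambda k: poisson.cdf(k, mu=2.0)    # hypothetical Poisson(2) margin
F2 = lambda k: poisson.cdf(k, mu=3.0)    # hypothetical Poisson(3) margin
print(joint_pmf(1, 2, F1, F2))
```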

4th Nov 2011 - 11:00 am

Speaker:

Dr Vasilis Sarafidis,

Affiliation:

The University of Sydney Business School

Venue:

Room 498, Merewether Building (H04)

Title:

Cross-sectional Dependence in Panel Data Analysis

Abstract:

This paper provides an overview of the existing literature on panel data models with error cross-sectional dependence. We distinguish between spatial dependence and factor structure dependence and we analyse the implications of weak and strong cross-sectional dependence on the properties of the estimators. We consider estimation under strong and weak exogeneity of the regressors for both T fixed and T large cases. Available tests for error cross-sectional dependence and methods for determining the number of factors are discussed in detail. The finite-sample properties of some estimators and statistics are investigated using Monte Carlo experiments.

The paper can be downloaded from the following web link:
http://mpra.ub.uni-muenchen.de/20815/1/MPRA_paper_20815.pdf

11th Nov 2011 - 11:00 am

Speaker:

Qian Chen,

Affiliation:

Discipline of Operations Management and Econometrics

Venue:

Room 498, Merewether Building (H04)

Title:

Bayesian VaR and ES forecasting via the two-sided Weibull distribution

Abstract:

We study the impact of asymmetry in the conditional distribution and in volatility on forecasting Value-at-Risk (VaR) and expected shortfall (ES) via parametric methods. A new distribution derived from the Weibull distribution is proposed to generate adequate Value-at-Risk and expected shortfall forecasts. A two-regime double-threshold GARCH model is used to capture asymmetric behaviour in the volatility process of a heteroskedastic financial return series. As financial data are usually observed at high frequency, a smooth change between regimes is considered more reasonable than a sharp transition, so a generalized two-regime smooth-transition GARCH model is adopted for comparison. To allow flexibility in the model, the threshold parameter, at which the change between regimes occurs, is estimated. It is well known that financial data are typically not normally distributed; therefore, a Student t and an asymmetric Laplace distribution are also used as potential distributions for the financial data. The latter has recently become popular as it captures the dynamics in skewness with a time-varying shape parameter. Hansen's generalized skewed t distribution is also frequently used to account for dynamics in skewness, and for comparison this distribution is also used in our models. The model parameters are estimated by a Bayesian Markov chain Monte Carlo sampling scheme, employing the Metropolis-Hastings (MH) algorithm with a mixture of Gaussian proposal distributions. We illustrate the model by applying it to return series from four international stock market indices, as well as two exchange rates, and generating one-step-ahead forecasts of VaR and ES. The models are compared via standard and non-standard tests.

Keywords: Two-sided Weibull, Value-at-Risk, Expected shortfall, Back-testing, pre-crisis and post-crisis, asymmetric higher moments, asymmetric volatility.
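For readers unfamiliar with the sampling scheme referred to in the abstract, here is a generic random-walk Metropolis-Hastings sketch on a toy posterior for a normal mean. The paper's sampler targets GARCH-type models with a mixture of Gaussian proposals, which is not reproduced here; the data and prior below are hypothetical.

```python
# Generic random-walk Metropolis-Hastings sampler on a toy posterior (sketch).
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=1.5, scale=1.0, size=100)   # hypothetical return-like data

def log_post(mu):
    # N(0, 10^2) prior on mu, N(mu, 1) likelihood
    return -0.5 * (mu / 10.0) ** 2 - 0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(5000):
    prop = mu + 0.2 * rng.standard_normal()       # symmetric Gaussian proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                  # accept the proposal
    chain.append(mu)

print(np.mean(chain[1000:]))                       # posterior mean after burn-in
```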

18th Nov 2011 - 11:00 am

Speaker:

Dr Natalia Ponomareva,

Affiliation:

Macquarie University

Venue:

Room 498, Merewether Building (H04)

Title:

Australian Labour Market Dynamics Across the Ages

Abstract:

Transition probabilities between four labour market states (full-time employment, part-time employment, unemployment and inactivity) for three age groups (young, middle-aged and old) are calculated using monthly gross flow data for Australia from October 1997 to September 2010. We determine the responses of the different groups to phases of the business cycle by estimating a small set of unobserved common dynamic factors that drive the transitions of the age groups. We also examine the impulse responses of gross flows to a positive business cycle shock.
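As a minimal illustration of the first step, transition probabilities are obtained by row-normalising a matrix of month-to-month gross flow counts between the four states. The counts below are made up for illustration only.

```python
# Transition probability matrix from gross flow counts (illustrative numbers).
import numpy as np

states = ["full-time", "part-time", "unemployed", "inactive"]
flows = np.array([[900.,  40.,  10.,   50.],   # from full-time to ...
                  [ 60., 700.,  20.,   80.],   # from part-time to ...
                  [ 30.,  40., 200.,   90.],   # from unemployed to ...
                  [ 40.,  70.,  50., 1200.]])  # from inactive to ...

P = flows / flows.sum(axis=1, keepdims=True)   # each row sums to one
print(np.round(P, 3))
```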

25th Nov 2011 - 11:00 am

Speaker:

Associate Professor Valentin Zelenyuk,

Affiliation:

The University of Queensland

Venue:

Room 498, Merewether Building (H04)

Title:

Local Maximum Likelihood Techniques with Categorical Data

Abstract:

In this paper we provide asymptotic theory for local maximum likelihood techniques for estimating a regression model where some regressors are discrete. Our methodology and theory are particularly useful for models that yield a likelihood in terms of unknown functions that can be used to identify and estimate the underlying model. This is the case when the conditional density of the variable of interest, given the explanatory variables, is known up to a set of unknown functions. Examples of such models include probit and logit models, truncated regression models, stochastic frontier models, etc. In developing the theory we use the Racine and Li (2004) kernels for discrete regressors. The asymptotic properties of the resulting estimator are derived and the method is illustrated in various simulated scenarios. The results indicate great flexibility of the approach and good performance in various complex scenarios, even with moderate sample sizes.
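The sketch below shows one common form of the Racine and Li (2004) kernel for an unordered discrete regressor: the kernel equals 1 when the categories match and a bandwidth lambda in [0, 1] otherwise, combined multiplicatively with a continuous kernel to produce local weights. The data and bandwidths are hypothetical; the local likelihood maximisation itself is not shown.

```python
# Product kernel weights with a discrete (Li-Racine style) kernel (sketch).
import numpy as np

def local_weights(x_cont, x_disc, x0_cont, x0_disc, h, lam):
    """Weights at the evaluation point (x0_cont, x0_disc)."""
    k_cont = np.exp(-0.5 * ((x_cont - x0_cont) / h) ** 2)   # Gaussian kernel
    k_disc = np.where(x_disc == x0_disc, 1.0, lam)          # discrete kernel
    return k_cont * k_disc

rng = np.random.default_rng(3)
x_cont = rng.normal(size=8)                   # a continuous regressor
x_disc = rng.integers(0, 2, size=8)           # a binary regressor
w = local_weights(x_cont, x_disc, x0_cont=0.0, x0_disc=1, h=0.5, lam=0.3)
print(w)   # these weights would enter a local (weighted) likelihood maximisation
```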

2nd Dec 2011 - 11:00 am

Speaker:

Julia Polak,

Affiliation:

Monash University

Venue:

Room 498, Merewether Building (H04)

Title:

Are we still using the best predicted model? Prediction Capability Procedure flavored by Principal Function Data Analysis

Abstract:

We present a method for analysing a model's prediction capabilities in a changing reality. The proposed prediction capability procedure combines the ideas of nonparametric density estimation and principal function data analysis in order to determine whether newly observed data come from the same data generation process as before. If there is not enough evidence that the data generation process has changed since the model was selected, there is no reason to believe that the model has lost its predictive capability in the new reality.
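The sketch below illustrates only the basic ingredients mentioned above: estimate the density of the data the model was selected on, then ask how plausible newly observed data are under that estimate. The Gaussian kernel density estimate and the crude comparison rule are simplifications for illustration, not the proposed procedure itself.

```python
# Checking new data against a density estimated from old data (rough sketch).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
old_data = rng.normal(0.0, 1.0, size=500)   # data the model was selected on
new_data = rng.normal(0.8, 1.0, size=50)    # newly observed data (shifted DGP)

kde = gaussian_kde(old_data)                # nonparametric density estimate
avg_loglik_old = np.mean(np.log(kde(old_data)))
avg_loglik_new = np.mean(np.log(kde(new_data)))

# A large drop in average log-density for the new data suggests the data
# generation process may have changed and the model should be re-examined.
print(avg_loglik_old, avg_loglik_new)
```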