Sydney eScholarship Community:
http://hdl.handle.net/2123/8118
2017-12-11T05:59:28Z
http://hdl.handle.net/2123/16763
Title: Endogenous Environmental Variables In Stochastic Frontier Models
Authors: Amsler, Christine; Prokhorov, Artem; Schmidt, Peter
Abstract: This paper considers a stochastic frontier model that contains environmental variables that affect the level of inefficiency but not the frontier. The model contains statistical noise, potentially endogenous regressors, and technical inefficiency that follows the scaling property, in the sense that it is the product of a basic (half-normal) inefficiency term and a parametric function of the environmental variables. The environmental variables may be endogenous because they are correlated with the statistical noise or with the basic inefficiency term.
Several previous papers have considered the case of inputs that are endogenous because they are correlated with statistical noise; any environmental variables in those models are treated as exogenous. One recent paper allows the environmental variables to be correlated with statistical noise. Our paper is the first to allow both the inputs and the environmental variables to be endogenous in the sense that they are correlated either with statistical noise or with the basic inefficiency term. Correlation of inputs or environmental variables with the basic inefficiency term raises non-trivial conceptual issues about the meaning of exogeneity, and technical issues in the estimation of the model.
Date: 2017-04-09
http://hdl.handle.net/2123/16205
Title: Speeding up MCMC by Efficient Data Subsampling
Authors: Quiroz, Matias; Villani, Mattias; Kohn, Robert; Tran, Minh-Ngoc
Abstract: We propose Subsampling MCMC, a Markov Chain Monte Carlo (MCMC) framework where the likelihood function for n observations is estimated from a random subset of m observations. We introduce a general and highly efficient unbiased estimator of the log-likelihood based on control variates obtained from clustering the data. The cost of computing the log-likelihood estimator is much smaller than that of the full log-likelihood used by standard MCMC. The likelihood estimate is bias-corrected and used in two correlated pseudo-marginal algorithms to sample from a perturbed posterior, for which we derive the asymptotic error with respect to n and m, respectively. A practical estimator of the error is proposed and we show that the error is negligible even for a very small m in our applications. We demonstrate that Subsampling MCMC is substantially more efficient than standard MCMC in terms of sampling efficiency for a given computational budget, and that it outperforms other subsampling methods for MCMC proposed in the literature.
Date: 2016-01-01
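A minimal numerical sketch of the control-variate subsampling estimator described above; the Gaussian model, the sorting-based clustering and all names are our own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch: cluster the data, build cheap control variates q_i from a
# Taylor expansion around each cluster centroid, then correct with a small
# random subsample to get an unbiased log-likelihood estimate.
rng = np.random.default_rng(1)

n, m, theta = 10_000, 200, 1.5
y = np.sort(rng.normal(theta, 1.0, size=n))   # sorting gives tight clusters

def ell(yi, theta):
    """Per-observation Gaussian log-likelihood contribution (constants dropped)."""
    return -0.5 * (yi - theta) ** 2

K = 100                                        # number of clusters
q = np.empty(n)
for idx in np.array_split(np.arange(n), K):
    c = y[idx].mean()                          # cluster centroid
    d = y[idx] - c
    # second-order expansion of ell in the data around the centroid
    q[idx] = ell(c, theta) - (c - theta) * d - 0.5 * d ** 2

full_q = q.sum()                               # computed once, O(n), cheap

sub = rng.choice(n, size=m, replace=True)      # random subsample of size m
loglik_hat = full_q + n * (ell(y[sub], theta) - q[sub]).mean()

exact = ell(y, theta).sum()
# Here the quadratic expansion is exact, so the m-observation estimator
# recovers the full log-likelihood; in general models it is merely unbiased,
# with small variance when the clusters are tight.
```

In this quadratic toy case the m=200 estimator matches the full n=10,000 sum exactly, which is why the cost savings come essentially for free when the control variates are good.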
http://hdl.handle.net/2123/15839
Title: Matrix Neural Networks
Authors: Gao, Junbin; Guo, Yi; Wang, Zhiyong
Abstract: Traditional neural networks assume vectorial inputs, as the network is arranged in layers of a single line of computing units called neurons. This special structure requires non-vectorial inputs such as matrices to be converted into vectors. This process can be problematic. Firstly, the spatial information among elements of the data may be lost during vectorisation. Secondly, the solution space becomes very large, which demands very special treatment of the network parameters and high computational cost. To address these issues, we propose matrix neural networks (MatNet), which take matrices directly as inputs. Each neuron senses summarised information through bilinear mapping from lower-layer units in exactly the same way as classic feed-forward neural networks. Under this structure, the back propagation and gradient descent combination can be utilised to obtain network parameters efficiently. Furthermore, it can be conveniently extended for multimodal inputs. We apply MatNet to MNIST handwritten digit classification and image super-resolution tasks to show its effectiveness. Without much tweaking, MatNet achieves performance comparable to state-of-the-art methods in both tasks with considerably reduced complexity.
Date: 2016-11-02
http://hdl.handle.net/2123/14745
Title: Estimation of Hierarchical Archimedean Copulas as a Shortest Path Problem
Authors: Matsypura, Dmytro; Neo, Emily; Prokhorov, Artem
Abstract: We formulate the problem of finding and estimating the optimal hierarchical Archimedean copula as an amended shortest path problem. The standard network flow problem is amended by certain constraints specific to copulas, which limit scalability of the problem. However, we show in dimensions as high as twenty that the new approach dominates the alternatives, which usually require recursive estimation or full enumeration.
Date: 2016-04-16
http://hdl.handle.net/2123/14641
Title: Efficient estimation of parameters in marginals in semiparametric multivariate models
Authors: Panchenko, Valentyn; Prokhorov, Artem
Abstract: We consider a general multivariate model where univariate marginal distributions are known up to a common parameter vector, and we are interested in estimating that vector without assuming anything about the joint distribution except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient estimate. If we assume a parametric copula (other than independence) we obtain a full MLE, which is efficient but only under correct copula specification, and badly biased if the copula is misspecified. Instead we propose a sieve MLE estimator which improves over the QMLE but does not suffer the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over the misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE provided there is enough dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of the estimator.
Date: 2016-03-01
http://hdl.handle.net/2123/14595
Title: Block-Wise Pseudo-Marginal Metropolis-Hastings
Authors: Tran, M.-N.; Kohn, R.; Quiroz, M.; Villani, M.
Abstract: The pseudo-marginal Metropolis-Hastings approach is increasingly used for Bayesian inference in statistical models where the likelihood is analytically intractable but can be estimated unbiasedly, such as random effects models and state-space models, or for data subsampling in big data settings. In a seminal paper, Deligiannidis et al. (2015) show how the pseudo-marginal Metropolis-Hastings (PMMH) approach can be made much more efficient by correlating the underlying random numbers used to form the estimate of the likelihood at the current and proposed values of the unknown parameters. Their proposed approach greatly speeds up the standard PMMH algorithm, as it requires a much smaller number of particles to form the optimal likelihood estimate. We present a closely related alternative PMMH approach that divides the underlying random numbers mentioned above into blocks, so that the likelihood estimates for the proposed and current values of the parameters differ only by the random numbers in one block. Our approach is less general than that of Deligiannidis et al. (2015), but has the following advantages. First, it provides a more direct way to control the correlation between the logarithms of the estimates of the likelihood at the current and proposed values of the parameters. Second, the mathematical properties of the method are simplified and made more transparent compared to the treatment in Deligiannidis et al. (2015). Third, blocking is shown to be a natural way to carry out PMMH in, for example, panel data models and subsampling problems. We obtain theory and guidelines for selecting the optimal number of particles, and document large speed-ups in a panel data example and a subsampling problem.
Date: 2016-03-30
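The blocking device at the heart of the abstract can be shown in isolation; block counts and names here are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Split the auxiliary random numbers behind the likelihood estimator into G
# blocks and refresh exactly one block per proposal, so consecutive likelihood
# estimates share the other G-1 blocks; this is what induces the correlation
# between the current and proposed log-likelihood estimates.
rng = np.random.default_rng(0)

G, block_size = 10, 100
u = rng.normal(size=(G, block_size))     # current auxiliary randoms

def propose(u, g, rng):
    """Return a copy of u with only block g refreshed."""
    u_new = u.copy()
    u_new[g] = rng.normal(size=u.shape[1])
    return u_new

u_prop = propose(u, g=3, rng=rng)
shared = (u_prop == u).mean()            # fraction of randoms left untouched
```

With G blocks, a fraction 1 - 1/G of the underlying randoms is held fixed at each step, which is the direct control over correlation the abstract contrasts with the approach of Deligiannidis et al. (2015).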
http://hdl.handle.net/2123/14594
Title: Fast Inference for Intractable Likelihood Problems using Variational Bayes
Authors: Gunawan, David; Tran, Minh-Ngoc; Kohn, Robert
Abstract: Variational Bayes (VB) is a popular statistical method for Bayesian inference. The existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes their use in many interesting models. Tran
et al. (2015) extend the scope of application of VB to cases where the likelihood is intractable but can be estimated unbiasedly, and name the method “Variational Bayes with Intractable Likelihood (VBIL)”. This paper presents a version of VBIL, named Variational Bayes with Intractable Log-Likelihood (VBILL), that is useful for cases, such as big data and big panel data models, where only unbiased estimators of the log-likelihood are available. In particular, we develop an estimation approach, based on subsampling and the MapReduce
programming technique, for analysing massive datasets which cannot fit into a single desktop’s memory. The proposed method is theoretically justified in the sense that, apart from an extra Monte Carlo error which can be controlled, it is able to produce estimators as if the true log-likelihood or full data were used. The proposed methodology is robust in the sense that it works well when only highly variable estimates of the log-likelihood are available. The method is illustrated empirically using several simulated datasets and a big real dataset
based on the arrival time status of U.S. airlines.
Keywords: Pseudo-Marginal Metropolis-Hastings, Debiasing Approach, Big Data, Panel Data, Difference Estimator.
Date: 2016-03-30
http://hdl.handle.net/2123/14490
Title: A New Measure of Vector Dependence, with an Application to Financial Contagion
Authors: Medovikov, Ivan; Prokhorov, Artem
Abstract: We propose a new nonparametric measure of association between an arbitrary number of random vectors. The measure is based on the empirical copula process for the multivariate marginals, corresponding to the vectors, and is insensitive to the within-vector dependence. It is bounded by the [0, 1] interval, covering the entire range of dependence from vector independence to a vector version of a monotone relationship. We study the properties of the new measure under several well-known copulas and provide a nonparametric estimator of the measure, along with its asymptotic theory, under fairly general assumptions. To illustrate the applicability of the new measure, we use it to assess the degree of interdependence between equity markets in North and South America, Europe and Asia, surrounding the financial crisis of 2008. We find strong evidence of previously unknown contagion patterns, with selected regions exhibiting little dependence before and after the crisis and a lot of dependence during the crisis period.
Date: 2016-03-11
http://hdl.handle.net/2123/13839
Title: Exact ABC using Importance Sampling
Authors: Tran, Minh-Ngoc; Kohn, Robert
Abstract: Approximate Bayesian Computation (ABC) is a powerful method for carrying out Bayesian inference when the likelihood is computationally intractable. However, a drawback of ABC is that it is an approximate method that induces a systematic error, because it is necessary to set a tolerance level to make the computation tractable. The issue of how to optimally set this tolerance level has been the subject of extensive research. This paper proposes an ABC algorithm based on importance sampling that estimates expectations with respect to the exact posterior distribution given the observed summary statistics. This overcomes the need to select the tolerance level. By exact we mean that there is no systematic error and the Monte Carlo error can be made arbitrarily small by increasing the number of importance samples. We provide a formal justification for the method and study its convergence properties. The method is illustrated in two applications, and the empirical results suggest that the proposed ABC-based estimators consistently converge to the true values as the number of importance samples increases. Our proposed approach can be applied more generally to any importance sampling problem where an unbiased estimate of the likelihood is required.
Date: 2015-09-23
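The importance-sampling skeleton behind the idea can be sketched on a conjugate toy problem where the answer is known; in the paper's setting the likelihood factor below would be replaced by an unbiased estimate of it, and everything here (model, proposal, names) is our own assumption.

```python
import numpy as np

# Self-normalised importance sampling: weight proposal draws by
# prior * likelihood / proposal and average to estimate a posterior
# expectation given the observed summary statistic.
rng = np.random.default_rng(0)

s_obs = 0.7                                   # observed summary statistic

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

N = 200_000
th = rng.normal(0.0, 2.0, size=N)             # proposal N(0, 4)
# prior N(0,1), summary model s | th ~ N(th, 1):
w = norm_pdf(th, 0.0, 1.0) * norm_pdf(s_obs, th, 1.0) / norm_pdf(th, 0.0, 2.0)

post_mean = np.sum(w * th) / np.sum(w)
# Conjugacy gives the exact posterior mean s_obs / 2 = 0.35 for comparison.
```

The Monte Carlo error shrinks as N grows, with no tolerance parameter anywhere, which is the sense of "exact" in the title.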
http://hdl.handle.net/2123/13800
Title: Bayesian Semi-parametric Realized-CARE Models for Tail Risk Forecasting Incorporating Range and Realized Measures
Authors: Gerlach, Richard; Wang, Chao
Abstract: A new framework named Realized Conditional Autoregressive Expectile (Realized-CARE) is proposed, incorporating a measurement equation into the conventional CARE model in a framework analogous to Realized-GARCH. The Range and realized measures (Realized Variance and Realized Range) are employed as the dependent variables of the measurement equation, since they have proven more efficient than returns for volatility estimation. The dependence between the Range and realized measures and the expectile can be modelled with this measurement equation, and introducing it can potentially improve the accuracy of the grid search for the expectile level. In addition, employing a quadratic fitting target search significantly improves the speed of the grid search. Bayesian adaptive Markov chain Monte Carlo is used for estimation, and demonstrates its superiority over maximum likelihood in a simulation study. Furthermore, we propose an innovative sub-sampled Realized Range and also adopt an existing scaling scheme, in order to deal with the micro-structure noise of high-frequency volatility measures. Compared to the CARE, parametric GARCH and Realized-GARCH models, Value-at-Risk and Expected Shortfall forecasting results for 6 index and 3 asset series favor the proposed Realized-CARE model, especially with the Realized Range and sub-sampled Realized Range.
Date: 2015-09-11
http://hdl.handle.net/2123/13799
Title: Fat tails and copulas: limits of diversification revisited
Authors: Ibragimov, Rustam; Prokhorov, Artem; Mo, Jingyuan
Abstract: We consider the problem of portfolio risk diversification in a Value-at-Risk framework with heavy-tailed risks and arbitrary dependence captured by a copula function. We use the power law for modelling the tails and investigate whether the benefits of diversification persist when the risks in consideration are allowed to have extremely heavy tails with tail indices less than one and when their copula describes wide classes of dependence structures. We show that for asymptotically large losses with the Eyraud-Farlie-Gumbel-Morgenstern copula, the threshold value of tail indices at which diversification stops being beneficial is the same as for independent losses. We further extend this result to a wider range of dependence structures which can be approximated using power-type copulas and their approximations. This range of dependence structures includes many well-known copula families, among which there are comprehensive, Archimedean, asymmetric and tail-dependent copulas. In other words, diversification increases Value-at-Risk for tail indices less than one regardless of the nature of dependence between portfolio components within these classes. A wide set of simulations supports these theoretical results.
Date: 2015-09-11
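The headline effect is easy to see in a simulation under independence (a toy check only; the paper's analysis covers EFGM and power-type copulas, which we do not reproduce here).

```python
import numpy as np

# With tail index below one, an equal-weight portfolio of two iid Pareto
# losses has a *larger* Value-at-Risk than a single loss; with tail index
# above one, diversification lowers VaR as usual.
rng = np.random.default_rng(0)

def var_q(losses, q=0.99):
    return np.quantile(losses, q)

N = 400_000
results = {}
for a in (0.7, 2.5):                          # tail indices below / above 1
    X = rng.pareto(a, size=(N, 2)) + 1.0      # classical Pareto(a), support >= 1
    results[a] = (var_q(X[:, 0]), var_q(X.mean(axis=1)))

single_heavy, div_heavy = results[0.7]        # diversification hurts here
single_light, div_light = results[2.5]        # diversification helps here
```

The reversal at tail index one is exactly the threshold phenomenon the abstract extends from independent losses to wider dependence classes.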
http://hdl.handle.net/2123/13798
Title: Generalized Information Matrix Tests for Copulas
Authors: Prokhorov, Artem; Schepsmeier, Ulf; Zhu, Yajing
Abstract: We propose a family of goodness-of-fit tests for copulas. The tests use generalizations of the information matrix (IM) equality of White (1982) and so relate to the copula test proposed by Huang and Prokhorov (2014). The idea is that eigenspectrum-based statements of the IM equality reduce the degrees of freedom of the test's asymptotic distribution and lead to better size-power properties, even in high dimensions. The gains are especially pronounced for vine copulas, where additional benefits come from simplifications of the score functions and the Hessian. We derive the asymptotic distribution of the generalized tests, accounting for the non-parametric estimation of the marginals, and apply a parametric bootstrap procedure, valid when asymptotic critical values are inaccurate. In Monte Carlo simulations, we study the behavior of the new tests, compare them with several Cramer-von Mises type tests and confirm the desired properties of the new tests in high dimensions.
Date: 2015-09-11
http://hdl.handle.net/2123/13797
Title: Supplemental Material for GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood Inference
Authors: Hill, Jonathan B.; Prokhorov, Artem
Abstract: The following supplemental material contains an omitted simulation experiment and omitted proofs of theorems and preliminary lemmata. Section S contains simulation results, and Section A contains an appendix with omitted proofs.
Date: 2015-09-11
http://hdl.handle.net/2123/13795
Title: GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood Inference
Authors: Hill, Jonathan B.; Prokhorov, Artem
Abstract: We construct a Generalized Empirical Likelihood estimator for a GARCH(1,1) model
with a possibly heavy tailed error. The estimator imbeds tail-trimmed estimating equations allowing for over-identifying conditions, asymptotic normality, efficiency and empirical likelihood based confidence regions for very heavy-tailed random volatility data. We show the implied probabilities from the tail-trimmed Continuously Updated Estimator elevate weight for usable large values, assign large but not maximum weight to extreme observations, and give the lowest weight to non-leverage points. We derive a higher order expansion for GEL with imbedded tail-trimming (GELITT), which reveals higher order bias and efficiency properties, available when the GARCH error has a finite second moment. Higher order
asymptotics for GEL without tail-trimming requires the error to have moments of substantially higher order. We use first order asymptotics and higher order bias to justify the choice of the number of trimmed observations in any given sample. We also present robust versions of Generalized Empirical Likelihood Ratio, Wald, and Lagrange Multiplier tests, and an efficient and heavy-tail-robust moment estimator with an application to expected shortfall estimation. Finally, we present a broad simulation study for GEL and GELITT, and demonstrate profile-weighted expected shortfall for the Russian Ruble - US Dollar exchange rate. We show that tail-trimmed CUE-GMM dominates other estimators in terms of bias, MSE and approximate normality.
Description: AMS classifications: 62M10, 62F35. JEL classifications: C13, C49.
Date: 2015-09-11
http://hdl.handle.net/2123/13263
Title: Generalized Variance: A Robust Estimator of Stock Price Volatility
Authors: Sutton, M; Vasnev, A; Gerlach, R
Abstract: This paper proposes an ex-post volatility estimator, called generalized variance, that uses high frequency data to provide measurements robust to the idiosyncratic noise of stock markets caused by market microstructures. The new volatility estimator is analyzed theoretically, examined in a simulation study and evaluated empirically against the two currently dominant measures of daily volatility: realized volatility and realized range. The main finding is that generalized variance is robust to the presence of microstructures while delivering accuracy superior to realized volatility and realized range in several circumstances. The empirical study features Australian stocks from the ASX 20.
Date: 2015-04-30
http://hdl.handle.net/2123/12755
Title: Endogeneity in Stochastic Frontier Models
Authors: Amsler, Christine; Prokhorov, Artem; Schmidt, Peter
Abstract: Stochastic frontier models are typically estimated by maximum likelihood (MLE) or corrected ordinary least squares. The consistency of either estimator depends on the exogeneity of the explanatory variables (inputs, in the production frontier setting). We will investigate the case in which one or more of the inputs is endogenous, in the simultaneous-equation sense of endogeneity. That is, we worry that there is correlation between the inputs and statistical noise or inefficiency.
In a standard regression setting, simultaneity is handled by a number of procedures that are numerically or asymptotically equivalent. These include 2SLS; using the residual from the reduced form equations for the endogenous variables as a control function; and MLE of the system that contains the equation of interest plus the unrestricted reduced form equations for the endogenous variables (LIML). We will consider modifications of these standard procedures for the stochastic frontier setting.
The paper is mostly a survey and combination of existing results from the stochastic frontier literature and the classic simultaneous equations literature, but it also contains some new results.
Date: 2015-02-17
http://hdl.handle.net/2123/12235
Title: Forecasting risk via realized GARCH, incorporating the realized range
Authors: Gerlach, Richard; Wang, Chao
Abstract: The realized GARCH framework is extended to incorporate the realized range, and the intra-day range, as potentially more efficient series of information than realized variance or daily returns, for the purpose of volatility and tail risk forecasting in a financial time series. A Bayesian adaptive Markov chain Monte Carlo method is employed for estimation and forecasting. Compared to a range of well known parametric GARCH models, predictive log-likelihood results across six market index return series favor the realized GARCH models incorporating the realized range. Further, these same models also compare favourably for tail risk forecasting, both during and after the global financial crisis.
Date: 2014-11-07
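The two intra-day measures the extended framework feeds into its measurement equation can be illustrated on a simulated Brownian log-price with known daily variance; interval counts and the 4*log(2) range scaling are standard choices, and the construction is ours, not the paper's code.

```python
import numpy as np

# Realized variance: sum of squared interval returns.
# Realized range: sum of squared high-low ranges, scaled by 4*log(2) so it is
# unbiased for Brownian motion. Both should recover the true daily variance.
rng = np.random.default_rng(0)

n_int, ticks = 78, 50                     # 78 five-minute intervals, 50 ticks each
true_daily_var = 1e-4                     # daily sigma of 1%
dp = rng.normal(0.0, np.sqrt(true_daily_var / (n_int * ticks)), size=n_int * ticks)
P = dp.cumsum().reshape(n_int, ticks)     # intraday log-price path by interval

closes = P[:, -1]
rets = np.diff(np.concatenate(([0.0], closes)))       # close-to-close returns
realized_var = np.sum(rets ** 2)

ranges = P.max(axis=1) - P.min(axis=1)                # high-low per interval
realized_range = np.sum(ranges ** 2) / (4.0 * np.log(2.0))
```

The range-based measure uses every tick inside each interval rather than only the endpoints, which is the efficiency argument for feeding it into the measurement equation.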
http://hdl.handle.net/2123/12060
Title: Bayesian Tail Risk Forecasting using Realised GARCH
Authors: Contino, Christian; Gerlach, Richard
Abstract: A Realised Volatility GARCH model is developed within a Bayesian framework for the purpose of forecasting Value at Risk and Conditional Value at Risk. Student-t and Skewed Student-t return distributions are combined with Gaussian and Student-t distributions in the measurement equation in a GARCH framework to forecast tail risk in eight international equity index markets over a four year period. Three Realised Volatility proxies are considered within this framework. Realised Volatility GARCH models show a marked improvement compared to ordinary GARCH for both Value at Risk and Conditional Value at Risk forecasting. This improvement is consistent across a variety of data, volatility model specifications and distributions, and demonstrates that Realised Volatility is superior when producing volatility forecasts. Realised Volatility models implementing a Skewed Student-t distribution for returns in the GARCH equation are favoured.
Date: 2014-10-10
http://hdl.handle.net/2123/11816
Title: Bayesian Assessment of Dynamic Quantile Forecasts
Authors: Gerlach, Richard; Chen, Cathy W.S.; Lin, Edward M.H.
Abstract: Methods for Bayesian testing and assessment of dynamic quantile forecasts are
proposed. Specifically, Bayes factor analogues of popular frequentist tests for
independence of violations from, and for correct coverage of a time series of, quantile
forecasts are developed. To evaluate the relevant marginal likelihoods involved,
analytic integration methods are utilised when possible, otherwise multivariate
adaptive quadrature methods are employed to estimate the required quantities. The
usual Bayesian interval estimate for a proportion is also examined in this context.
The size and power properties of the proposed methods are examined via a
simulation study, illustrating favourable comparisons both overall and with their
frequentist counterparts. An empirical study employs the proposed methods, in
comparison with standard tests, to assess the adequacy of a range of forecasting
models for Value at Risk (VaR) in several financial market data series.
Date: 2014-09-10
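A Bayes-factor check of correct unconditional coverage can be written in closed form, in the spirit of the abstract; this toy version is ours, and the paper's tests (independence, conditional coverage) are richer.

```python
import math

# H0 fixes the violation probability at the nominal VaR level alpha;
# H1 puts a uniform prior on it. Both marginal likelihoods of x violations
# in n forecasts are available analytically via Beta integrals.

def log_choose(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def bf_coverage(x, n, alpha):
    """Bayes factor of H0: p = alpha versus H1: p ~ Uniform(0, 1),
    given x violations observed in n quantile forecasts."""
    log_m0 = log_choose(n, x) + x * math.log(alpha) + (n - x) * math.log(1 - alpha)
    log_m1 = -math.log(n + 1)   # C(n,x) * Beta(x+1, n-x+1) = 1 / (n + 1)
    return math.exp(log_m0 - log_m1)

bf_ok = bf_coverage(x=3, n=250, alpha=0.01)     # near the nominal 2.5 violations
bf_bad = bf_coverage(x=20, n=250, alpha=0.01)   # far too many violations
```

A Bayes factor above one favours correct coverage, below one favours the unrestricted alternative; no asymptotic approximation is needed, which mirrors the abstract's use of exact marginal likelihoods where analytic integration is possible.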
http://hdl.handle.net/2123/11773
Title: Consistent Estimation of Linear Regression Models Using Matched Data
Authors: Prokhorov, Artem; Hirukawa, Masayuki
Abstract: Economists often use matched samples, especially when dealing with earnings data
where a number of missing observations need to be imputed. In this paper, we
demonstrate that the ordinary least squares estimator of the linear regression model
using matched samples is inconsistent and has a non-standard convergence rate to
its probability limit. If only a few variables are used to impute the missing data then it
is possible to correct for the bias. We propose two semi-parametric bias-corrected
estimators and explore their asymptotic properties. The estimators have an indirect-inference
interpretation and their convergence rates depend on the number of
variables used in matching. We can attain the parametric convergence rate if that
number is no greater than three. Monte Carlo simulations confirm that the bias
correction works very well in such cases.
Date: 2014-09-05
http://hdl.handle.net/2123/10457
Title: Semi-parametric Expected Shortfall Forecasting
Authors: Gerlach, Richard; Chen, Cathy W.S.
Abstract: Intra-day sources of data have proven effective for dynamic volatility and tail risk
estimation. Expected shortfall is a tail risk measure, that is now recommended by the
Basel Committee, involving a conditional expectation that can be semi-parametrically
estimated via an asymmetric sum of squares function. The conditional autoregressive
expectile class of model, used to indirectly model expected shortfall, is generalised to
incorporate information on the intra-day range. An asymmetric Gaussian density
model error formulation allows a likelihood to be developed that leads to semiparametric
estimation and forecasts of expectiles, and subsequently of expected
shortfall. Adaptive Markov chain Monte Carlo sampling schemes are employed for
estimation, while their performance is assessed via a simulation study. The proposed
models compare favourably with a large range of competitors in an empirical study
forecasting seven financial return series over a ten year period.
Date: 2014-04-01
http://hdl.handle.net/2123/9943
Title: Confidence Levels for CVaR Risk Measures and Minimax Limits
Authors: Anderson, Edward; Xu, Huifu; Zhang, Dali
Abstract: Conditional value at risk (CVaR) has been widely used as a risk measure in finance. When the confidence level of CVaR is set close to 1, the CVaR risk measure approximates the extreme (worst scenario) risk measure. In this paper, we present a quantitative analysis of the relationship between the two risk measures and its impact on optimal decision making when we wish to minimize the respective risk measures. We also investigate the difference between the optimal solutions to the two optimization problems with identical objective function but under constraints on the two risk measures. We discuss the benefits of a sample average approximation scheme for the CVaR constraints and investigate the convergence of the optimal solution obtained from this scheme as the sample size increases. We use some portfolio optimization problems to investigate the performance of the CVaR approximation approach. Our numerical results demonstrate how reducing the confidence level can lead to a better overall performance.
Date: 2014-01-01
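A generic sample-average approximation of CVaR, of the kind the abstract analyses, can be sketched via the standard Rockafellar-Uryasev representation; the normal-loss example and names are our own illustration.

```python
import numpy as np

# CVaR_a(L) = min_t { t + E[(L - t)^+] / (1 - a) }; for an empirical sample
# the minimiser is the sample a-quantile (the VaR estimate), so the SAA of
# CVaR is a plug-in of that quantile into the objective.
rng = np.random.default_rng(0)

alpha = 0.95
L = rng.normal(0.0, 1.0, size=100_000)         # simulated losses

t = np.quantile(L, alpha)                      # VaR_alpha estimate
cvar = t + np.mean(np.maximum(L - t, 0.0)) / (1 - alpha)

# Analytic benchmark for N(0,1) losses: phi(z_0.95) / 0.05, roughly 2.063.
```

As alpha approaches 1, the average inside the objective is taken over ever fewer tail samples, which is the source of the convergence issues the abstract studies as CVaR approaches the worst-case measure.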
http://hdl.handle.net/2123/9293
Title: Two-Sample Nonparametric Estimation of Intergenerational Income Mobility
Authors: Murtazashvili, Irina; Liu, Di; Prokhorov, Artem
Abstract: We estimate intergenerational income mobility in the USA and Sweden. To measure the degree to which income status is transmitted from one generation to another, we propose a nonparametric estimator which is particularly relevant for cross-country comparisons. Our approach allows intergenerational mobility to vary across observable family characteristics. Furthermore, it fits situations when data on fathers and sons come from different samples. Finally, our estimator is consistent in the presence of measurement error in fathers' long-run economic status. We find that family background captured by fathers' education matters for intergenerational income persistence in the USA more than in Sweden, suggesting that the character of inequality in the two countries is rather different.
Date: 2013-08-07
http://hdl.handle.net/2123/9071
Title: Competing for contracts with buyer uncertainty: Choosing price and quality variables
Authors: Anderson, Edward; Qian, Cheng
Abstract: We model a situation in which a single firm evaluates competing suppliers and selects just one. Suppliers submit bids involving both price and quality variables. The buyer makes a choice which, from the suppliers' perspective, appears to contain a stochastic element: for example, the buyer may have information, not shared with the suppliers, that gives one supplier an advantage in the final choice. We use a discrete choice model of buyer choice (e.g. multinomial logit). Our main result is that the suppliers' choice of the quality variables is not affected by the competitive environment. Thus the suppliers compete only on price. We compare this with a second model in which the buyer's weighting on different quality variables is uncertain at the time bids are made.
Date: 2013-05-09
http://hdl.handle.net/2123/8965
Title: Forecast combination for U.S. recessions with real-time data
Authors: Vasnev, Andrey; Pauwels, Laurent
Abstract: This paper proposes the use of forecast combination to improve predictive accuracy
in forecasting the U.S. business cycle index, as published by the Business Cycle
Dating Committee of the NBER. It focuses on one-step ahead out-of-sample monthly
forecast utilising the well-established coincident indicators and yield curve models,
allowing for dynamics and real-time data revisions. Forecast combinations use logscore
and quadratic-score based weights, which change over time. This paper finds
that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared to each model's
own forecasting performance.
2013-03-01T00:00:00Z

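The log-score and quadratic-score weighting described in this abstract can be sketched as follows. The recession-probability data are simulated, and the exact weighting formulas (exponentiated mean log score, inverse Brier score) are illustrative assumptions rather than the authors' precise scheme:

```python
import numpy as np

def log_score_weights(probs, outcomes):
    """Weights proportional to exp(mean log score) of each model.

    probs: (T, K) predicted recession probabilities from K models
    outcomes: (T,) realised 0/1 states
    """
    eps = 1e-12
    p = np.clip(probs, eps, 1 - eps)
    # mean log predictive density of the realised outcome, per model
    ls = np.mean(outcomes[:, None] * np.log(p)
                 + (1 - outcomes[:, None]) * np.log(1 - p), axis=0)
    w = np.exp(ls - ls.max())          # stabilise before normalising
    return w / w.sum()

def quadratic_score_weights(probs, outcomes):
    """Weights from the Brier score: smaller squared error -> larger weight."""
    qs = np.mean((probs - outcomes[:, None]) ** 2, axis=0)
    w = 1.0 / qs
    return w / w.sum()

# simulated example: one informative model, one uninformative one
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
good = np.clip(0.8 * y + 0.1 + 0.05 * rng.standard_normal(200), 0.01, 0.99)
bad = np.full(200, 0.5)
P = np.column_stack([good, bad])
w_ls = log_score_weights(P, y)
w_qs = quadratic_score_weights(P, y)
combined = P @ w_ls                    # combined probability forecast
```

In a real-time exercise the weights would be recomputed each month on the vintages available at the forecast date, so they change over time as the abstract describes.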
http://hdl.handle.net/2123/8964
Title: Practical use of sensitivity in econometrics with an illustration to forecast combinations
Authors: Vasnev, Andrey; Magnus, Jan R
Abstract: Sensitivity analysis is important for its own sake and also in combination with
diagnostic testing. We consider the question of how to use sensitivity statistics in
practice, in particular how to judge whether sensitivity is large or small. For this
purpose we distinguish between absolute and relative sensitivity and highlight the
context-dependent nature of any sensitivity analysis. Relative sensitivity is then
applied in the context of forecast combination and sensitivity-based weights are
introduced. All concepts are illustrated through the European yield curve. In this
context it is natural to look at sensitivity to autocorrelation and normality assumptions.
Different forecasting models are combined with equal, fit-based and sensitivity-based
weights, and compared with the multivariate and random walk benchmarks. We show
that the fit-based weights and the sensitivity-based weights are complementary. For
long-term maturities the sensitivity-based weights perform better than other weights.
2013-03-01T00:00:00Z

http://hdl.handle.net/2123/8963
Title: Multiple Event Incidence and Duration Analysis for Credit Data Incorporating Non-Stochastic Loan Maturity
Authors: Vasnev, Andrey; Gerlach, Richard; Watkins, John
Abstract: Applications of duration analysis in Economics and Finance exclusively employ
methods for events of stochastic duration. In application to credit data, previous
research incorrectly treats the times to pre-determined maturity events as censored
stochastic event times. The medical literature has binary parametric ‘cure rate’
models that deal with populations that never experience the modelled event. We
propose and develop a multinomial parametric incidence and duration model,
incorporating such populations. In the class of cure rate models, this is the first fully
parametric multinomial model and the first framework to accommodate an event
with pre-determined duration. The methodology is applied to unsecured personal
loan credit data provided by one of Australia’s largest financial services
organizations. A simulation and empirical study shows the framework to be more
flexible and predictive: estimated parameters exhibit a large reduction in bias;
duration is forecast more accurately; explanatory variables can act in different
directions on incidence and duration; and some variables are statistically
significant in explaining only incidence or only duration.
2012-12-01T00:00:00Z

http://hdl.handle.net/2123/8933
Title: Forecast combination for U.S. recessions with real-time data
Authors: Vasnev, Andrey; Pauwels, Laurent
Abstract: This paper proposes the use of forecast combination to improve predictive accuracy
in forecasting the U.S. business cycle index as published by the Business Cycle
Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly
forecasts utilising the well-established coincident indicators and yield curve models,
allowing for dynamics and real-time data revisions. Forecast combinations use log-score
and quadratic-score based weights, which change over time. This paper finds
that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared to each model's
own forecasting performance.
2013-01-01T00:00:00Z

http://hdl.handle.net/2123/8932
Title: Practical considerations for optimal weights in density forecast combination
Authors: Vasnev, Andrey; Pauwels, Laurent
Abstract: The problem of finding appropriate weights to combine several density forecasts
is an important issue currently debated in the forecast combination literature.
Recently, a paper by Hall and Mitchell (IJF, 2007) proposes to combine density
forecasts with optimal weights obtained from solving an optimization problem.
This paper studies the properties of this optimization problem when the number
of forecasting periods is relatively small and finds that it often produces corner
solutions by allocating all the weight to one density forecast only. This paper’s
practical recommendation is to have an additional training sample period for the
optimal weights. While reserving a portion of the data for parameter estimation
and making pseudo-out-of-sample forecasts are common practices in the empirical
literature, employing a separate training sample for the optimal weights is novel,
and it is suggested because it decreases the chances of corner solutions. Alternative
log-score or quadratic-score weighting schemes do not have this training sample
requirement.
2013-01-01T00:00:00Z

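The Hall and Mitchell style optimization the abstract discusses can be sketched as follows: maximise the average log of the weighted mixture density over the unit simplex. The two density forecasts and the short 15-period sample are illustrative assumptions, chosen so the corner-solution behaviour the paper documents is visible:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def optimal_density_weights(dens):
    """Optimal log-score weights on the unit simplex.

    dens[t, k] = density forecast of model k evaluated at outcome t.
    """
    T, K = dens.shape
    neg_log_score = lambda w: -np.mean(np.log(dens @ w + 1e-300))
    res = minimize(neg_log_score, np.full(K, 1.0 / K),
                   bounds=[(0.0, 1.0)] * K,
                   constraints={'type': 'eq', 'fun': lambda w: w.sum() - 1.0},
                   method='SLSQP')
    return res.x

# stylised short evaluation sample: 15 outcomes, two density forecasts,
# one centred correctly and one badly mislocated
y = np.linspace(-2.0, 2.0, 15)
dens = np.column_stack([norm.pdf(y, 0.0, 1.0), norm.pdf(y, 5.0, 1.0)])
w = optimal_density_weights(dens)
# with few periods the optimum typically sits at a corner, w near (1, 0)
```

Reserving a separate training sample for the weights, as the abstract recommends, would mean running this optimization on a held-out block of periods before the forecast evaluation begins.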
http://hdl.handle.net/2123/8337
Title: Maximum likelihood estimation of time series models: the Kalman filter and beyond
Authors: Proietti, Tommaso; Luati, Alessandra
Abstract: The purpose of this chapter is to provide a comprehensive treatment of likelihood inference for state space models. These are a class of time series models relating an observable time series to quantities called states, which are characterized by a simple temporal dependence structure, typically a first order Markov process.
The states sometimes have a substantial interpretation. Key estimation problems in economics concern latent variables, such as the output gap, potential output, the non-accelerating-inflation rate of unemployment (NAIRU), core inflation, and so forth. Time-varying volatility, which is quintessential to finance, is an important feature also in macroeconomics. In the multivariate framework, relevant features can be common to different series, meaning that the driving forces of a particular feature and/or the transmission mechanism are the same.
The objective of this chapter is to review the Kalman filter and discuss maximum likelihood inference, starting from the linear Gaussian case and extending to nonlinear and non-Gaussian frameworks.
2012-05-01T00:00:00Z

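A minimal sketch of the Kalman filter and the prediction error decomposition of the likelihood, for the simplest state space model (the local level model); the data and variance values below are hypothetical, and a real application would maximise this log-likelihood over the variances:

```python
import numpy as np

def kalman_loglik(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model
        y_t = mu_t + eps_t,   mu_t = mu_{t-1} + eta_t,
    returning the Gaussian log-likelihood via the prediction error
    decomposition (large p0 approximates a diffuse initial state)."""
    a, p, ll = a0, p0, 0.0
    for yt in y:
        f = p + sigma2_eps                 # prediction error variance
        v = yt - a                         # one-step-ahead prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                          # Kalman gain
        a = a + k * v                      # filtered state estimate
        p = p * (1 - k) + sigma2_eta       # next prediction variance
    return ll

y = np.array([4.2, 4.4, 4.3, 4.8, 5.0])   # hypothetical observed series
ll = kalman_loglik(y, sigma2_eps=0.1, sigma2_eta=0.05)
```

The latent state here plays the role of the unobserved level; richer models (output gap, NAIRU, time-varying volatility) replace the scalar recursions with their matrix analogues.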
http://hdl.handle.net/2123/8173
Title: Margining Option Portfolios by Network Flows
Authors: Matsypura, D.; Timkovsky, V.G.
Abstract: As shown in [Rudd and Schroeder, 1982], the problem of margining option portfolios where option spreads with two legs are used for offsetting can be solved in polynomial time by network flow algorithms. However, spreads with only two legs do not provide sufficient accuracy in measuring risk. Therefore, margining practice also employs spreads with three and four legs. A polynomial time solution to the extension of the problem where option spreads with three and four legs are also used for offsetting is not known. In this paper we propose a heuristic network flow algorithm for this extension and present a computational study demonstrating the high efficiency of this algorithm in margining practice.
2010-09-01T00:00:00Z

http://hdl.handle.net/2123/8172
Title: Combinatorics of Option Spreads: The Margining Aspect
Authors: Matsypura, D.; Timkovsky, V.G.
Abstract: In December 2005, the U.S. Securities and Exchange Commission approved margin rules for complex option spreads with 5, 6, 7, 8, 9, 10 and 12 legs. Only option spreads with 2, 3 or 4 legs were recognized before. Taking advantage of option spreads with a large number of legs substantially reduces margin requirements and, at the same time, adequately estimates risk for margin accounts with positions in options. In this paper we present combinatorial models for known and newly discovered option spreads with up to 134 legs. We propose their full characterization in terms of matchings, alternating cycles and chains in graphs with bicolored edges. We show that the combinatorial analysis of option spreads reveals powerful hedging mechanisms in the structure of margin accounts, and that the problem of minimizing the margin requirement for a portfolio of option spreads can be solved in polynomial time using network flow algorithms. We also give recommendations on how to create more efficient margin rules for options.
2010-07-01T00:00:00Z

http://hdl.handle.net/2123/8171
Title: Portfolio Margining: Strategy vs Risk
Authors: Coffman, E.G. Jr; Matsypura, D.; Timkovsky, V.G.
Abstract: This paper presents the results of a novel mathematical and experimental analysis of two approaches to margining customer accounts, strategy-based and risk-based. Building combinatorial models of hedging mechanisms of these approaches, we show that the strategy-based approach is, at this point, the most appropriate one for margining security portfolios in customer margin accounts, while the risk-based approach can work efficiently for margining only index portfolios in customer margin accounts and inventory portfolios of brokers. We also show that the application of the risk-based approach to security portfolios in customer margin accounts is very risky and can result in a pyramid of debt in a bullish market and a pyramid of loss in a bearish market. The results of this paper support the thesis that the use of the risk-based approach to margining customer accounts with positions in stocks and stock options since April 2007 influenced and triggered the U.S. stock market crash in October 2008. We also provide recommendations on ways to set appropriate margin requirements to help avoid such failures in the future.
2010-03-01T00:00:00Z

http://hdl.handle.net/2123/8170
Title: Estimating Value At Risk
Authors: Lu, Zudi; Huang, Hai; Gerlach, Richard
Abstract: Driven in large part by JP Morgan's RiskMetrics system with its EWMA (exponentially weighted moving average) forecasting technique, value-at-risk (VaR) has become a popular measure of the degree of various risks in financial risk management. In this paper we propose a new approach, termed skewed-EWMA, to forecast changing volatility, and formulate an adaptively efficient procedure to estimate VaR. Unlike JP Morgan's standard-EWMA, which is derived from a Gaussian distribution, and Guermat and Harris's (2001) robust-EWMA, derived from a Laplace distribution, we motivate and derive our skewed-EWMA procedure from an asymmetric Laplace distribution, which accounts for both skewness and heavy tails in the return distribution and for their time-varying nature in practice. An EWMA-based procedure that adaptively adjusts the shape parameter controlling the skewness and kurtosis of the distribution is suggested. Backtesting results show that our proposed skewed-EWMA method offers a viable improvement in forecasting VaR.
2010-01-01T00:00:00Z

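For reference, the standard (Gaussian) EWMA baseline the abstract departs from can be sketched as follows; the decay factor 0.94 is the usual RiskMetrics convention, the returns are simulated, and the skewed-EWMA itself would replace the Gaussian quantile with machinery from the asymmetric Laplace distribution:

```python
import numpy as np
from scipy.stats import norm

def ewma_var(returns, lam=0.94, alpha=0.01):
    """Standard Gaussian EWMA volatility and one-step-ahead VaR:
    sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}^2."""
    s2 = np.var(returns[:20])              # initialise on a short window
    sig2 = []
    for r in returns:
        sig2.append(s2)
        s2 = lam * s2 + (1 - lam) * r * r
    sig2 = np.array(sig2)
    # next-period VaR at level alpha, reported as a positive number
    var = -norm.ppf(alpha) * np.sqrt(s2)
    return np.sqrt(sig2), var

rng = np.random.default_rng(42)
r = 0.01 * rng.standard_normal(500)        # hypothetical daily returns
sigma, next_var = ewma_var(r)
```

The skewed-EWMA of the paper additionally updates a shape parameter over time, so both the scale and the asymmetry of the forecast distribution adapt.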
http://hdl.handle.net/2123/8169
Title: Bayesian Semi-parametric Expected Shortfall Forecasting in Financial Markets
Authors: Gerlach, Richard; Chen, Cathy W.S.; Lin, Liou-Yan
Abstract: Bayesian semi-parametric estimation has proven effective for quantile estimation in general and specifically in financial Value at Risk forecasting. Expected shortfall is a competing tail risk measure, involving a conditional expectation beyond a quantile, that has recently been semi-parametrically estimated via asymmetric least squares and so-called expectiles. An asymmetric Gaussian density is proposed, allowing a likelihood to be developed that leads to Bayesian semi-parametric estimation and forecasts of expectiles and expected shortfall. Further, the conditional autoregressive expectile class of model is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation in these families. The proposed models are clearly favoured in an empirical study forecasting eleven financial return series: clear evidence is found of more accurate expected shortfall forecasting compared to a range of competing methods. Further, the most favoured models are those estimated by Bayesian methods.
2012-01-01T00:00:00Z

http://hdl.handle.net/2123/8168
Title: The Multistep Beveridge-Nelson Decomposition
Authors: Proietti, Tommaso
Abstract: The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
2011-10-01T00:00:00Z

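The standard (one-step, iterated) Beveridge-Nelson decomposition that the paper generalises has a closed form in the simplest case, where first differences follow an AR(1). A sketch under that assumption (the multistep version would instead fit a direct AR at each horizon):

```python
import numpy as np

def bn_decomposition_ar1(y, phi, mu):
    """Beveridge-Nelson trend/cycle when the first differences follow
    dy_t - mu = phi * (dy_{t-1} - mu) + e_t.
    The eventual forecast function gives
        trend_t = y_t + phi / (1 - phi) * (dy_t - mu),
    and the cycle is the remainder."""
    dy = np.diff(y)
    trend = y[1:] + phi / (1.0 - phi) * (dy - mu)
    cycle = y[1:] - trend
    return trend, cycle

# hypothetical integrated series
y = np.cumsum(np.array([1.0, 0.5, 2.0, 1.5, 0.8]))
trend, cycle = bn_decomposition_ar1(y, phi=0.3, mu=1.16)
```

With phi = 0 the differences are unpredictable, the cycle vanishes, and the trend is the series itself, which is a useful sanity check on the formula.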
http://hdl.handle.net/2123/8167
Title: Does the Box-Cox transformation help in forecasting macroeconomic time series?
Authors: Proietti, Tommaso; Lütkepohl, Helmut
Abstract: The paper investigates whether transforming a time series leads to an improvement in forecasting accuracy. The class of transformations that is considered is the Box-Cox power transformation, which applies to series measured on a ratio scale. We propose a nonparametric approach for estimating the optimal transformation parameter based on the frequency domain estimation of the prediction error variance, and also conduct an extensive recursive forecast experiment on a large set of seasonal monthly macroeconomic time series related to industrial production and retail turnover. In about one fifth of the series considered the Box-Cox transformation produces forecasts significantly better than the untransformed data at the one-step-ahead horizon; in most of the cases the logarithmic transformation is the relevant one. As the forecast horizon increases, the evidence in favour of a transformation becomes less strong. Typically, the naïve predictor that just reverses the transformation leads to a lower mean square error than the optimal predictor at short forecast leads. We also discuss whether the preliminary in-sample frequency domain assessment provides reliable guidance as to which series should be transformed to significantly improve the predictive performance.
2011-10-01T00:00:00Z

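The Box-Cox transformation itself is easy to demonstrate. The sketch below uses scipy's standard Gaussian maximum-likelihood choice of the transformation parameter on a simulated positive series; note the paper instead estimates the parameter nonparametrically in the frequency domain, which this snippet does not reproduce:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

# hypothetical positive, trending series in the style of industrial production
rng = np.random.default_rng(7)
x = np.exp(np.linspace(0.0, 1.0, 120) + 0.1 * rng.standard_normal(120))

# lambda chosen by Gaussian MLE; lambda near 0 means the log transform,
# lambda = 1 means no transformation is needed
transformed, lam_hat = boxcox(x)

# forecasts made on the transformed scale must be mapped back;
# the naive predictor mentioned in the abstract just inverts the transform
recovered = inv_boxcox(transformed, lam_hat)
```

The round trip through `inv_boxcox` is exactly the naive reversal the abstract refers to; the "optimal predictor" adds a correction for the nonlinearity of the back-transformation.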
http://hdl.handle.net/2123/8166
Title: Stochastic trends and seasonality in economic time series: new evidence from Bayesian stochastic model specification search
Authors: Proietti, Tommaso; Grassi, Stefano
Abstract: An important issue in modelling economic time series is whether key unobserved components representing trends, seasonality and calendar components are deterministic or evolutive. We address it by applying a recently proposed Bayesian variable selection methodology to an encompassing linear mixed model that features, along with deterministic effects, additional random explanatory variables that account for the evolution of the underlying level, slope, seasonality and trading days. Variable selection is performed by estimating the posterior model probabilities using a suitable Gibbs sampling scheme. The paper conducts an extensive empirical application on a large and representative set of monthly time series concerning industrial production and retail turnover. We find strong support for the presence of stochastic trends in the series, either in the form of a time-varying level, or, less frequently, of a stochastic slope, or both. Seasonality is a more stable component: in only 70% of the cases were we able to select at least one stochastic trigonometric cycle out of the six possible cycles. Most frequently the time variation is found at the fundamental and first harmonic cycles. An interesting and intuitively plausible finding is that the probability of estimating time-varying components increases with the sample size available. However, even for very large sample sizes we were unable to find stochastically varying calendar effects.
2011-09-01T00:00:00Z

http://hdl.handle.net/2123/8165
Title: Do External Political Pressures Affect the Renminbi Exchange Rate?
Authors: Pauwels, Laurent; Liu, Li-Gang
Abstract: This paper investigates whether external political pressure for faster renminbi (RMB) appreciation affects both the daily returns and the conditional volatility of the RMB central parity rate. We construct several political pressure indicators pertaining to the RMB exchange rate, with a special emphasis on US pressure, to test the hypothesis. After controlling for Chinese macroeconomic surprise news, we find that US and non-US political pressure does not have a significant influence on the RMB's daily returns. However, evidence suggests that political pressures, especially those from the US, have statistically significant impacts on the conditional volatility of the RMB. Furthermore, we conduct the same exercise on the 12-month RMB non-deliverable forward rate (NDF). We find that the NDF market is highly responsive to macroeconomic surprise news, and there is some evidence that Sino-US bilateral meetings affect the conditional volatility of the RMB NDF.
2011-09-01T00:00:00Z

http://hdl.handle.net/2123/8164
Title: Ranking games and gambling: When to quit when you're ahead
Authors: Anderson, E.J.
Abstract: It is common for rewards to be given on the basis of a rank ordering, so that relative performance amongst a cohort is the criterion. In this paper we formulate an equilibrium model in which an agent makes successive decisions on whether or not to gamble and is rewarded on the basis of a rank ordering of final wealth. This is a model of the behaviour of mutual fund managers, who are paid depending on funds under management, which in turn are largely determined by annual or quarterly rank orderings. In this model fund managers can elect either to pick stocks or to use a market-tracking strategy. In equilibrium the final distribution of rewards will have a negative skew. We explore how this distribution depends on the number of players, the probability of success when gambling, the structure of the rewards, and on information regarding other players' performance.
2011-08-01T00:00:00Z

http://hdl.handle.net/2123/8163
Title: The Two-sided Weibull Distribution and Forecasting Financial Tail Risk
Authors: Gerlach, Richard; Chen, Qian
Abstract: A two-sided Weibull distribution is developed to model the conditional financial return distribution, for the purpose of forecasting Value at Risk (VaR) and conditional VaR. A range of conditional return distributions are combined with four volatility specifications to forecast tail risk in four international markets, two exchange rates and one individual asset series, over a four-year forecast period that includes the recent global financial crisis. The two-sided Weibull performs at least as well as other distributions for VaR forecasting, but performs most favourably for conditional Value at Risk forecasting, prior to as well as during and after the recent crisis.
2011-01-01T00:00:00Z

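One simple way to build a two-piece density from two Weibull halves is sketched below: negative returns draw their magnitude from one Weibull, positive returns from another, with a mixing weight p on the negative side. This construction, and all parameter values, are illustrative assumptions; the paper's exact parameterisation (including any continuity restriction at zero) may differ:

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.integrate import quad

def two_sided_weibull_pdf(x, p, k_neg, lam_neg, k_pos, lam_pos):
    """Two-piece density: with probability p the return is negative with
    |x| ~ Weibull(k_neg, lam_neg), otherwise positive ~ Weibull(k_pos, lam_pos).
    Scalar x; illustrative construction only."""
    if x < 0:
        return p * weibull_min.pdf(-x, k_neg, scale=lam_neg)
    return (1.0 - p) * weibull_min.pdf(x, k_pos, scale=lam_pos)

# sanity check: the two pieces integrate to one
args = (0.5, 1.5, 1.0, 1.3, 1.0)
left, _ = quad(lambda t: two_sided_weibull_pdf(t, *args), -np.inf, 0.0)
right, _ = quad(lambda t: two_sided_weibull_pdf(t, *args), 0.0, np.inf)
total = left + right
```

Letting the two shape/scale pairs differ is what gives the distribution separate control over the left and right tails, which is why it suits asymmetric tail-risk measures such as VaR and conditional VaR.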
http://hdl.handle.net/2123/8162
Title: Mixed strategies in discriminatory divisible-good auctions
Authors: Anderson, E.J.; Holmberg, P.; Philpott, A.B.
Abstract: Using the concept of market-distribution functions, we derive general optimality conditions for discriminatory divisible-good auctions, which are also applicable to Bertrand games and non-linear pricing. We introduce the concept of offer distribution function to analyze randomized offer curves, and characterize mixed-strategy Nash equilibria for pay-as-bid auctions where demand is uncertain and costs are common knowledge; a setting for which pure-strategy supply function equilibria typically do not exist. We generalize previous results on mixtures over horizontal offers as in Bertrand-Edgeworth games, but more importantly we characterize novel mixtures over partly increasing supply functions.
2009-11-01T00:00:00Z

http://hdl.handle.net/2123/8161
Title: Survival Analysis for Credit Scoring: Incidence and Latency
Authors: Watkins, John; Vasnev, Andrey; Gerlach, Richard
Abstract: Duration analysis is an analytical tool for time-to-event data that has been borrowed from medicine and engineering and applied by econometricians to investigate typical economic and finance problems. In applications to credit data, times to pre-determined maturity events have been treated as censored observations for the events with stochastic latency. A methodology, motivated by the cure rate model framework, is developed in this paper to appropriately analyse a set of mutually exclusive terminal events where at least one event may have a pre-determined latency. The methodology is applied to a set of personal loan data provided by one of Australia's largest financial services institutions. This is the first framework to simultaneously model prepayment, write-off and maturity events for loans. Furthermore, in the class of cure rate models it is the first fully parametric multinomial model and the first to accommodate an event with pre-determined latency. The simulation study found this model performed better than the two most common applications of survival analysis to credit data. In addition, the result of the application to personal loans data reveals that particular explanatory variables can act in different directions upon incidence and latency of an event, and that variables exist that may be statistically significant in explaining only incidence or latency.
2009-11-01T00:00:00Z

http://hdl.handle.net/2123/8160
Title: Convergent learning algorithms for potential games with unknown noisy rewards
Authors: Chapman, Archie C.; Leslie, David S.; Rogers, Alex; Jennings, Nicholas R.
Abstract: In this paper, we address the problem of convergence to Nash equilibria in games with rewards that are initially unknown and which must be estimated over time from noisy observations. These games arise in many real-world applications, whenever rewards for actions cannot be prespecified and must be learned on-line. Standard results in game theory, however, do not consider such settings. Specifically, using results from stochastic approximation and differential inclusions, we prove the convergence of variants of fictitious play and adaptive play to Nash equilibria in potential games and weakly acyclic games, respectively. These variants all use a multi-agent version of Q-learning to estimate the reward functions and a novel form of the ε-greedy decision rule to select an action. Furthermore, we derive ε-greedy decision rules that exploit the sparse interaction structure encoded in two compact graphical representations of games, known as graphical and hypergraphical normal form, to improve the convergence rate of the learning algorithms. The structure captured in these representations naturally occurs in many distributed optimisation and control applications. Finally, we demonstrate the efficacy of the algorithms in a simulated ad hoc wireless sensor network management problem.
2011-08-01T00:00:00Z

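The two building blocks the abstract names, Q-learning of unknown noisy rewards and an ε-greedy decision rule, can be illustrated in the simplest single-agent (bandit) setting. The reward means, noise level and ε below are illustrative; the paper's contribution is the multi-agent game-theoretic convergence analysis, which this sketch does not attempt:

```python
import numpy as np

def epsilon_greedy(q_values, eps, rng):
    """Pick the greedy action with probability 1 - eps,
    otherwise explore uniformly at random."""
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def q_update(q, action, reward, n_action):
    """Running-average Q update for a stateless game:
    Q(a) <- Q(a) + (1 / n_a) * (r - Q(a))."""
    q[action] += (reward - q[action]) / n_action
    return q

rng = np.random.default_rng(3)
true_means = np.array([0.2, 0.8])      # unknown noisy rewards
q = np.zeros(2)
counts = np.zeros(2)
for t in range(2000):
    a = epsilon_greedy(q, eps=0.1, rng=rng)
    r = true_means[a] + 0.1 * rng.standard_normal()
    counts[a] += 1
    q = q_update(q, a, r, counts[a])
```

In the paper's setting each agent runs such an estimator over joint actions while following fictitious or adaptive play, and the graphical structure is used to shrink the set of joint actions each agent must track.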
http://hdl.handle.net/2123/8159
Title: Bayesian time-varying quantile forecasting for Value-at-Risk in financial markets
Authors: Gerlach, Richard; Chen, Cathy W.S.; Chan, Nancy Y. C.
Abstract: Recently, Bayesian solutions to the quantile regression problem, via the likelihood of a Skewed-Laplace distribution, have been proposed. These approaches are extended and applied to a family of dynamic conditional autoregressive quantile models. Popular Value at Risk models, used for risk management in finance, are extended to this fully nonlinear family. An adaptive Markov chain Monte Carlo sampling scheme is adapted for estimation and inference. Simulation studies illustrate favourable performance in finite samples, compared to the standard numerical optimization of the usual nonparametric quantile criterion function. An empirical study generating Value at Risk forecasts for ten major financial stock indices finds significant nonlinearity in dynamic quantiles and evidence favouring the proposed model family for lower-level quantiles, compared to a range of standard parametric volatility models, a semi-parametric smoothly mixing regression and some nonparametric risk measures from the literature.
2009-08-01T00:00:00Z

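The conditional autoregressive quantile recursions this abstract builds on follow the CAViaR pattern of Engle and Manganelli; a minimal sketch of the symmetric absolute value (SAV) specification with illustrative coefficients (the paper's nonlinear families generalise this base case):

```python
import numpy as np

def caviar_sav(returns, beta, q0):
    """Symmetric absolute value CAViaR recursion:
        q_t = b0 + b1 * q_{t-1} + b2 * |r_{t-1}|,
    where q_t is the (negative-valued) conditional VaR quantile."""
    b0, b1, b2 = beta
    q = np.empty(len(returns))
    q[0] = q0
    for t in range(1, len(returns)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1])
    return q

# illustrative coefficients and starting quantile, not estimated values
r = np.array([1.0, -2.0, 0.5])
q = caviar_sav(r, beta=(-0.1, 0.9, -0.3), q0=-1.0)
```

In the Bayesian treatment the coefficients receive priors and the Skewed-Laplace likelihood links this recursion to the data, with adaptive MCMC exploring the posterior.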
http://hdl.handle.net/2123/8158
Title: Forecast combination for discrete choice models: predicting FOMC monetary policy decisions
Authors: Pauwels, Laurent; Vasnev, Andrey
Abstract: This paper provides a methodology for combining forecasts based on several discrete choice models. This is achieved primarily by combining the one-step-ahead probability forecasts associated with each model. The paper applies well-established scoring rules for qualitative response models in the context of forecast combination. Log-scores and quadratic-scores are both used to evaluate the forecasting accuracy of each model and to combine the probability forecasts. In addition to producing point forecasts, the effect of sampling variation is also assessed. This methodology is applied to forecast the US Federal Open Market Committee (FOMC) decisions in changing the federal funds target rate. Several of the economic fundamentals influencing the FOMC decisions are nonstationary over time and are modelled in a similar fashion to Hu and Phillips (2004a, JoE). The empirical results show that combining forecasted probabilities using scores mostly outperforms both equal-weight combination and forecasts based on multivariate models.
2011-06-01T00:00:00Z

http://hdl.handle.net/2123/8157
Title: Supply Function Equilibria Always Exist
Authors: Anderson, Edward
Abstract: Supply function equilibria are used in the analysis of divisible-good auctions with a large number of identical objects to be sold or bought. An important example occurs in wholesale electricity markets. Despite the substantial literature on supply function equilibria, the existence of a pure-strategy Nash equilibrium for a uniform price auction in asymmetric cases has not been established in a general setting. In this paper we prove the existence of a supply function equilibrium for a duopoly with asymmetric firms having convex costs, with decreasing concave demand subject to an additive demand shock, provided the second derivative of the demand function is small enough. The proof is constructive and also gives insight into the structure of the equilibrium solutions.
2011-04-01T00:00:00Z

http://hdl.handle.net/2123/8156
Title: Bayesian Forecasting for Financial Risk Management, Pre and Post the Global Financial Crisis
Authors: Gerlach, Richard; Chen, Cathy W.S.; Lin, Edward M.H.; Lee, W.C.W.
Abstract: Value-at-Risk (VaR) forecasting via a computational Bayesian framework is considered. A range of parametric models are compared, including standard, threshold nonlinear and Markov switching GARCH specifications, plus standard and nonlinear stochastic volatility models, most considering four error probability distributions: Gaussian, Student-t, skewed-t and generalized error distribution. Adaptive Markov chain Monte Carlo methods are employed in estimation and forecasting. A portfolio of four Asia-Pacific stock markets is considered. Two forecasting periods are evaluated in light of the recent global financial crisis. Results reveal that: (i) GARCH models outperformed stochastic volatility models in almost all cases; (ii) asymmetric volatility models were clearly favoured pre-crisis, while at the 1% level during and post-crisis, for a 1-day horizon, models with skewed-t errors ranked best, while IGARCH models were favoured at the 5% level; (iii) all models forecast VaR less accurately and anti-conservatively post-crisis.
2011-03-01T00:00:00Z

http://hdl.handle.net/2123/8155
Title: Australian Residential Housing Market & Hedonic Construction of House Price Indices for Metropolitan
Authors: Knight, Eva; Cottet, Remy
Abstract: A semiparametric spatial model is used, as it allows nonlinear estimation of both the mean and the variance.
A Bayesian approach is used for inference via a Markov chain Monte Carlo sampling scheme. A distinct advantage of the Bayesian approach is the incorporation of prior information in the inferential process: the prior is updated as information arrives, and in the real world the modeller should have some idea of the outcome before the modelling process begins. Finite sample inference can be obtained and is more accurate than asymptotic approximation; in the case of the real estate market, transaction data are finite due to infrequent trading. Estimation is done via posterior distributions, which factor in the variability of estimators and therefore give improved confidence intervals.
Spatial variables such as longitude and latitude are modelled via the construction of a bivariate thin plate spline. These two variables provide a powerful lens for capturing the effect of demographic factors and for borrowing and lending information across neighbouring suburbs. Demographic factors and trends are just as important as economic factors in determining demand for residential housing, and they are also included in the model.
2011-02-01T00:00:00Z
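The bivariate thin plate spline over (longitude, latitude) can be sketched with scipy's radial basis function interpolator; the coordinates, the log-price surface and the smoothing value below are all hypothetical stand-ins, and the paper embeds such a spline in a Bayesian MCMC scheme rather than fitting it by penalised least squares as here:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical (longitude, latitude) -> log house price data
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1.0, size=(200, 2))        # stand-in lon/lat
log_price = (13.0 + np.sin(3 * coords[:, 0])
             + coords[:, 1] + 0.05 * rng.standard_normal(200))

# bivariate thin plate spline over the spatial coordinates;
# `smoothing` trades fit against roughness (a tuning choice, not from the paper)
spline = RBFInterpolator(coords, log_price,
                         kernel='thin_plate_spline', smoothing=1e-3)

# evaluate the fitted price surface along a transect at fixed latitude
grid = np.column_stack([np.linspace(0.0, 1.0, 25), np.full(25, 0.5)])
fitted = spline(grid)
```

Because the spline borrows strength across nearby points, suburbs with few transactions inherit information from their neighbours, which is the "borrowing and lending" effect described above.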