Business Analytics Working Paper Series
https://hdl.handle.net/2123/8124

Combining simple multivariate HAR-like models for portfolio construction
https://hdl.handle.net/2123/31836 (published 2023-01-01)
Clements, Adam; Vasnev, Andrey L.
Forecasts of the covariance matrix of returns are a crucial input into portfolio
construction. In recent years, multivariate versions of the Heterogeneous AutoRegressive
(HAR) model have been designed to utilise realised measures of the covariance
matrix to generate forecasts. This paper shows that combining forecasts
from simple HAR-like models provides more stable coefficient estimates and forecasts,
and lower portfolio turnover. The economic benefits of the combination approach
become crucial when transaction costs are taken into account. The combination
approach also provides benefits in the context of direct forecasts of the portfolio
weights. Economic benefits are observed at both 1-day and 1-week-ahead forecast
horizons.
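As a minimal sketch of the mechanics described above (not the authors' exact models), the snippet below averages two hypothetical covariance forecasts and maps the combined matrix to global minimum-variance portfolio weights; the numbers are illustrative.

```python
import numpy as np

# Two hypothetical covariance forecasts for 3 assets (stand-ins for the
# outputs of two simple HAR-like models fed by realised measures).
sigma_a = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.16]])
sigma_b = np.array([[0.05, 0.02, 0.01],
                    [0.02, 0.08, 0.01],
                    [0.01, 0.01, 0.15]])

# Equal-weight combination of the two forecasts
sigma = 0.5 * (sigma_a + sigma_b)

# Global minimum-variance portfolio: w = S^{-1} 1 / (1' S^{-1} 1)
ones = np.ones(3)
x = np.linalg.solve(sigma, ones)
w = x / (ones @ x)
print(w)  # fully invested: weights sum to 1
```

The min-variance map is the standard one; the paper's contribution concerns how combining forecasts stabilises the matrix that enters it.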

The role of data and priors in estimating climate sensitivity
https://hdl.handle.net/2123/31835 (published 2023-01-01)
Ikefuji, Masako; Magnus, Jan R.; Vasnev, Andrey L.
In Bayesian theory, the data together with the prior produce a
posterior. We show that it is also possible to follow the opposite route, that
is, to use data and posterior information (both of which are observable) to
reveal the prior (which is not observable). We then apply the theory to
equilibrium climate sensitivity as reported by the Intergovernmental Panel
on Climate Change in an attempt to get some insight into the prior beliefs of
the IPCC scientists. It appears that the data contain much less information
than one might think, due to the presence of correlation. We conclude that
the prior in the fifth IPCC report was too low, and in the sixth report too
high.
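In the conjugate normal-normal case, the "reverse route" described above reduces to arithmetic on precisions: the posterior precision is the sum of prior and data precisions, so observing the data summary and the posterior reveals the prior. A toy illustration (not the IPCC climate-sensitivity model):

```python
# Conjugate-normal sketch: prior N(m0, 1/t0), data summary N(xbar, 1/td),
# posterior N(m1, 1/t1) with
#   t1 = t0 + td                      (precisions add)
#   m1 = (t0*m0 + td*xbar) / t1       (precision-weighted mean)
# Given the observable (xbar, td) and (m1, t1), invert to reveal the prior.
def reveal_prior(xbar, td, m1, t1):
    t0 = t1 - td                      # prior precision
    m0 = (t1 * m1 - td * xbar) / t0   # prior mean
    return m0, t0

# Hypothetical numbers: the data point to 3.5, the posterior settled at 3.0
m0, t0 = reveal_prior(xbar=3.5, td=1.0, m1=3.0, t1=3.0)
print(m0, t0)  # prior mean 2.75, prior precision 2.0
```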

Base-Stock Policies with Constant Lead Time: Closed-Form Solutions and Applications
https://hdl.handle.net/2123/30211 (published 2023-03-15)
Li, Zhaolin (Erick); Liang, Guitian; Fu, Qi (Grace); Teo, Chung-Piaw
We study stationary base-stock policies for multiperiod dynamic inventory systems with a constant lead time and independently and identically distributed (iid) demands. When ambiguities in the underlying demand distribution arise, we derive the robust optimal base-stock level in closed forms using only the mean and variance of the iid demands. This simple solution performs exceptionally well in numerical experiments, and has important applications for several classes of problems in Operations Management.
More important, we propose a new distribution-free method to derive robust solutions for multiperiod dynamic inventory systems. We formulate a zero-sum game in which the firm chooses a base-stock level to minimize its cost while Nature (which is the firm’s opponent) chooses an iid two-point distribution to maximize the firm’s time-average cost in the steady state. By characterizing the steady-state equilibrium, we demonstrate how lead time can affect the firm’s equilibrium strategy (i.e., the firm’s robust base-stock level), Nature’s equilibrium strategy (i.e., the firm’s most unfavorable distribution), and the value of the zero-sum game (i.e., the firm’s optimized worst-case time-average cost). With either backorders or lost sales, our numerical study shows that superior performance can be obtained using our robust base-stock policies, which mitigate the consequence of distribution mis-specification.
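The single-period ancestor of this kind of mean-variance robust solution is Scarf's classic distribution-free newsvendor quantity, which also uses only the mean and standard deviation of demand. A sketch with hypothetical underage/overage costs (the paper's result extends this style of analysis to constant lead times):

```python
import math

# Scarf's distribution-free newsvendor order quantity:
#   q* = mu + (sigma/2) * (sqrt(cu/co) - sqrt(co/cu))
# where cu and co are the unit underage and overage costs.
def scarf_order_quantity(mu, sigma, cu, co):
    r = cu / co
    return mu + (sigma / 2.0) * (math.sqrt(r) - 1.0 / math.sqrt(r))

# Hypothetical demand with mean 100, sd 20; underage 4x costlier than overage
q = scarf_order_quantity(mu=100.0, sigma=20.0, cu=4.0, co=1.0)
print(q)  # orders above the mean when underage is costlier: 115.0
```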

Global combinations of expert forecasts
https://hdl.handle.net/2123/29354 (published 2022-01-01)
Qian, Yilin; Thompson, Ryan; Vasnev, Andrey L.
Expert forecast combination—the aggregation of individual forecasts from multiple subject-matter
experts—is a proven approach to economic forecasting. To date, research in this area
has exclusively concentrated on local combination methods, which handle separate but
related forecasting tasks in isolation. Yet, it has been known for over two decades in the
machine learning community that global methods, which exploit task-relatedness, can improve
on local methods that ignore it. Motivated by the possibility of improvement, this paper
introduces a framework for globally combining expert forecasts. Through our framework, we
develop global versions of several existing forecast combinations. To evaluate the efficacy of
these new global forecast combinations, we conduct extensive comparisons using synthetic
and real data. Our real data comparisons, which involve expert forecasts of core economic
indicators in the Eurozone, are the first empirical evidence that the accuracy of global
combinations of expert forecasts can surpass local combinations.

On the uncertainty of a combined forecast: The critical role of correlation
https://hdl.handle.net/2123/27307 (published 2021-01-01)
Magnus, Jan; Vasnev, Andrey
The purpose of this paper is to show that the effect of the zero-correlation assumption in combining forecasts can be huge, and that ignoring (positive) correlation can lead to confidence bands around the forecast combination that are much too narrow. In the typical case where three or more forecasts are combined, the estimated variance increases without bound when correlation increases. Intuitively, this is because similar forecasts provide little information if we know that they are highly correlated. Although we concentrate on forecast combinations and confidence bands, our theory applies to any statistic where the observations are linearly combined. We apply our theoretical results to explain why forecasts by Central Banks (in our case, the Bank of Japan) are so frequently misleadingly precise. In most cases, a correlation above 0.7 is required to produce reasonable confidence bands.
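A textbook identity makes the point concrete: for an equal-weight combination of n forecasts, each with variance s2 and common pairwise correlation rho, the combination variance is s2*(1 + (n-1)*rho)/n. Assuming rho = 0 when it is actually 0.7 understates the variance considerably (a toy calculation, not the paper's Bank of Japan application):

```python
# Variance of an equal-weight combination of n forecasts with common
# variance s2 and common pairwise correlation rho.
def combo_variance(n, s2, rho):
    return s2 * (1.0 + (n - 1) * rho) / n

v0 = combo_variance(n=3, s2=1.0, rho=0.0)  # naive: independence assumed
v7 = combo_variance(n=3, s2=1.0, rho=0.7)  # correlation acknowledged
print(v0, v7, v7 / v0)  # 0.333..., 0.8 -- the true variance is 2.4x larger
```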

Forecast combination puzzle in the HAR model
https://hdl.handle.net/2123/25045 (published 2021-01-01)
Clements, Adam; Vasnev, Andrey
The Heterogeneous Autoregressive (HAR) model of Corsi (2009) has become the
benchmark model for predicting realized volatility given its simplicity and consistent
empirical performance. Many modifications and extensions to the original model have
been proposed that often only provide incremental forecast improvements. In this
paper, we take a step back and view the HAR model as a forecast combination that
combines three predictors: previous day realization (or random walk forecast),
previous week average, and previous month average. When Ordinary Least
Squares (OLS) is applied to combine the predictors, the HAR model uses optimal weights
that are known to be problematic in the forecast combination literature. In fact, the
simple average forecast often outperforms the optimal combination in many empirical
applications. We investigate the performance of the simple average forecast for the
realized volatility of the Dow Jones Industrial Average equity index. We find dramatic
improvements in forecast accuracy across all horizons and different time periods. This
is the first time the forecast combination puzzle is identified in this context.
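The decomposition described above can be sketched in a few lines: build the three predictors (previous day, previous-week average, previous-month average) and compare the OLS combination (the HAR regression) with the simple average. Simulated data here, not the Dow Jones series; note that in sample OLS must win by construction, and the "puzzle" is that out of sample the simple average often does better.

```python
import numpy as np

# Simulated stand-in for a realised-volatility series
rng = np.random.default_rng(0)
rv = np.abs(rng.standard_normal(500)) + 1.0

# The three HAR predictors, each using information up to day t-1
day = rv[21:-1]                                            # previous-day value
week = np.array([rv[t - 5:t].mean() for t in range(22, len(rv))])
month = np.array([rv[t - 22:t].mean() for t in range(22, len(rv))])
y = rv[22:]                                                # next-day target

# HAR model = OLS forecast combination of the three predictors
X = np.column_stack([np.ones_like(day), day, week, month])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
har_fc = X @ beta

# Simple-average combination (equal weights, no intercept)
avg_fc = (day + week + month) / 3.0

mse = lambda f: np.mean((y - f) ** 2)
print(mse(har_fc), mse(avg_fc))  # in sample, OLS is lower by construction
```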

Two-Stage Stochastic and Robust Optimization for Non-Adaptive Group Testing
https://hdl.handle.net/2123/23695 (published 2020-10-28)
Ho-Nguyen, Nam
We consider the problem of detecting defective items amongst a large collection, by conducting tests of individual or groups of items. Group testing offers improvements over the naive individual testing scheme by potentially certifying multiple individual items as non-defective with a single test. The group testing problem aims to design a group testing plan to detect the defective items using as few tests as possible. We propose novel two-stage stochastic and robust optimization formulations for the design of group testing plans in the noiseless non-adaptive setting. Our formulations enable us to certify optimality for existing group testing schemes, as well as model complex grouping constraints, a feature that is not discussed in the existing literature.
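A standard baseline for the noiseless non-adaptive setting (the COMP decoder, not the paper's optimization formulation) shows how a single negative test certifies every item in the pool as non-defective:

```python
# COMP decoding for noiseless non-adaptive group testing: any item that
# appears in at least one negative test is certified non-defective; all
# remaining items are flagged as possibly defective.
def comp_decode(pools, outcomes, n_items):
    possibly_defective = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:                    # negative test clears everyone in it
            possibly_defective -= set(pool)
    return sorted(possibly_defective)

# Hypothetical design: 4 pooled tests over 5 items, item 2 defective
pools = [[0, 1, 2], [2, 3, 4], [0, 3], [1, 4]]
defectives = {2}
outcomes = [bool(set(p) & defectives) for p in pools]   # noiseless results
print(comp_decode(pools, outcomes, n_items=5))          # [2]
```

Here four tests resolve five items; choosing pool designs that guarantee such resolution with the fewest tests is the optimization problem the paper formulates.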

Too similar to combine? On negative weights in forecast combination
https://hdl.handle.net/2123/22956 (published 2020-01-01)
Radchenko, Peter; Vasnev, Andrey; Wang, Wendun
This paper provides the first thorough investigation of the negative weights that can emerge when combining forecasts. The usual practice in the literature is to ignore or trim negative weights, i.e., set them to zero. This default strategy has its merits, but it is not optimal. We study the problem from a variety of different angles, and the main conclusion is that negative weights emerge when highly correlated forecasts with similar variances are combined. In this situation, the estimated weights have large variances, and trimming reduces the variance of the weights and improves the combined forecast. The threshold of zero is arbitrary and can be improved. We propose an optimal trimming threshold, i.e., an additional tuning parameter to improve forecasting performance. The effects of optimal trimming are demonstrated in simulations. In the empirical example using the European Central Bank Survey of Professional Forecasters, we find that the new strategy performs exceptionally well and can deliver improvements of more than 10% for inflation, up to 20% for GDP growth, and more than 20% for unemployment forecasts relative to the equal-weight benchmark.
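The mechanism is easy to reproduce: with two highly correlated forecasts of similar variance, the variance-minimizing weights w = S^{-1}1 / (1'S^{-1}1) push one weight negative, and the default strategy trims it to zero and renormalizes. A numerical sketch with a hypothetical error covariance:

```python
import numpy as np

# Hypothetical forecast-error covariance: similar variances, correlation ~0.96
S = np.array([[1.00, 1.05],
              [1.05, 1.20]])
ones = np.ones(2)

# Optimal (variance-minimizing) combination weights
x = np.linalg.solve(S, ones)
w = x / (ones @ x)
print(w)            # [ 1.5 -0.5] -- the second weight is negative

# Default strategy in the literature: trim negatives to zero, renormalize
w_trim = np.clip(w, 0.0, None)
w_trim = w_trim / w_trim.sum()
print(w_trim)       # [1. 0.]
```

The paper's point is that zero is an arbitrary trimming threshold, and an optimized threshold can do better.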

Predicting China’s Monetary Policy with Forecast Combinations
https://hdl.handle.net/2123/20406 (published 2019-05-14)
Pauwels, Laurent
China’s monetary policy is unconventional and constantly evolving as a result of its rapid economic development. This paper proposes to use forecast combinations to predict the People’s Bank of China’s monetary policy stance with a large set of 73 macroeconomic and financial predictors covering various aspects of China’s economy. The multiple instruments utilised by the People’s Bank of China are aggregated into a Monetary Policy Index (MPI). The intention is to capture the overall monetary policy stance of the People’s Bank of China in a single variable that can be forecasted. Forecast combinations assign weights to predictors according to their forecasting performance to produce a consensus forecast. The out-of-sample forecast results demonstrate that optimal forecast combinations are superior in predicting the MPI over other models such as the Taylor rule and simple autoregressive models. The corporate goods price index and the US nominal effective exchange rate are the most important predictors.

Fundamental Moments
https://hdl.handle.net/2123/20386 (published 2019-05-08)
Imbs, Jean; Pauwels, Laurent
Global trade can give rise to global hubs, centers of activity whose influence on the global economy is large enough that local disturbances have consequences in the aggregate. This paper investigates the nature, existence, and rise of such hubs using the World Input-Output Tables (WIOT) to evaluate the importance of vertical trade in creating global hubs that significantly affect countries’ volatility and their co-movement. Our results suggest that the world has become more granular since 1995, with significant consequences on GDP volatility and co-movements, especially in developed countries. These consequences are well explained by international trade.

Moment Redundancy Test with Application to Efficiency-Improving Copulas
https://hdl.handle.net/2123/20204 (published 2019-03-25)
Hao, Bowen; Prokhorov, Artem; Qian, Hailong
Moment redundancy as defined by Breusch et al. (1999) is a testable hypothesis. We propose a simple test of the hypothesis in the context of copula-based pseudo-maximum likelihood estimation considered by Prokhorov and Schmidt (2009b). A robust and efficiency-improving parametric copula permits sizable improvement in precision at no cost in terms of bias and the proposed test can be used to select such copulas.

A New Family of Copulas, with Application to Estimation of a Production Frontier System
https://hdl.handle.net/2123/20203 (published 2019-03-25)
Amsler, Christine; Prokhorov, Artem; Schmidt, Peter
In this paper we propose a new family of copulas for which the copula arguments are uncorrelated but dependent. Specifically, if w1 and w2 are the uniform random variables in the copula, they are uncorrelated, but w1 is correlated with |w2 - ½|. We show how this family of copulas can be applied to the error structure in an econometric production frontier model. We also generalize the family of copulas to three or more dimensions, and we give an empirical application.

Equivalence of optimal forecast combinations under affine constraints
https://hdl.handle.net/2123/20176 (published 2019-03-19)
Chan, Felix; Pauwels, Laurent
Forecasts are usually produced from models and expert judgements. The reconciliation of different forecasts presents an interesting challenge for managerial decisions. Mean absolute deviation and mean squared error scoring rules are commonly employed as the criteria of optimality to aggregate or combine multiple forecasts into a consensus forecast. While much is known about mean squared errors in the context of forecast combination, little attention has been given to the mean absolute deviation. This paper establishes the first-order condition and the optimal solutions from minimizing mean absolute deviation. With this result, the paper derives the conditions under which the optimal solutions for minimizing mean absolute deviation and mean squared error loss functions are equivalent. More generally, this paper derives a sufficient condition which ensures the equivalence of optimal solutions of minimizing different loss functions under the same affine constraint that each feasible solution must sum to one. A simulation study and an illustration using expert forecast data corroborate the theoretical findings. Interestingly, the numerical analysis shows that even with skewness in the data, the equivalence is unaffected. However, when outliers are present in the data, mean absolute deviation is more robust than the mean squared error in small samples, which is consistent with the conventional belief relating the two loss functions.

Asymptotic Theory for Rotated Multivariate GARCH Models
https://hdl.handle.net/2123/20178 (published 2019-03-20)
Asai, Manabu; Chang, Chia-Lin; McAleer, Michael; Pauwels, Laurent
In this paper, we derive the statistical properties of a two-step approach to estimating multivariate GARCH rotated BEKK (RBEKK) models. By the definition of rotated BEKK, we estimate the unconditional covariance matrix in the first step in order to rotate observed variables to have the identity matrix for its sample covariance matrix. In the second step, we estimate the remaining parameters via maximizing the quasi-likelihood function. For this two-step quasi-maximum likelihood (2sQML) estimator, we show consistency and asymptotic normality under weak conditions. While second-order moments are needed for consistency of the estimated unconditional covariance matrix, the existence of finite sixth-order moments is required for convergence of the second-order derivatives of the quasi-log-likelihood function. We also show the relationship between the asymptotic distributions of the 2sQML estimator for the RBEKK model and the variance targeting (VT) QML estimator for the VT-BEKK model. Monte Carlo experiments show that the bias of the 2sQML estimator is negligible, and that the appropriateness of the diagonal specification depends on the closeness to either the Diagonal BEKK or the Diagonal RBEKK model.

Higher Moment Constraints for Predictive Density Combinations
https://hdl.handle.net/2123/20175 (published 2019-03-19)
Pauwels, Laurent; Radchenko, Peter; Vasnev, Andrey
The majority of financial data exhibit asymmetry and heavy tails, which makes forecasting the entire density critically important. Recently, a forecast combination methodology has been developed to combine predictive densities. We show that combining individual predictive densities that are skewed and/or heavy-tailed results in significantly reduced skewness and kurtosis. We propose a solution to overcome this problem by deriving optimal log score weights under Higher-order Moment Constraints (HMC). The statistical properties of these weights are investigated theoretically and through a simulation study. Consistency and asymptotic distribution results for the optimal log score weights with and without high moment constraints are derived. An empirical application that uses the S&P 500 daily index returns illustrates that the proposed HMC weight density combinations perform very well relative to other combination methods.

Speeding up MCMC by Efficient Data Subsampling
https://hdl.handle.net/2123/16205 (published 2016-01-01)
Quiroz, Matias; Villani, Mattias; Kohn, Robert; Tran, Minh-Ngoc
We propose Subsampling MCMC, a Markov Chain Monte Carlo (MCMC) framework where the likelihood function for n observations is estimated from a random subset of m observations. We introduce a general and highly efficient unbiased estimator of the log-likelihood based on control variates obtained from clustering the data. The cost of computing the log-likelihood estimator is much smaller than that of the full log-likelihood used by standard MCMC. The likelihood estimate is bias-corrected and used in two correlated pseudo-marginal algorithms to sample from a perturbed posterior, for which we derive the asymptotic error with respect to n and m, respectively. A practical estimator of the error is proposed and we show that the error is negligible even for a very small m in our applications. We demonstrate that Subsampling MCMC is substantially more efficient than standard MCMC in terms of sampling efficiency for a given computational budget, and that it outperforms other subsampling methods for MCMC proposed in the literature.
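The core idea can be sketched with the classic difference estimator: with cheap control variates q_i approximating the log-likelihood terms, sum(q) + (n/m) * sum over the subsample of (l_i - q_i) is unbiased for the full log-likelihood. The toy below uses a normal model and, for simplicity, takes q_i to be the term at a reference parameter (the paper derives q_i from clustering the data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100_000, 500
y = rng.normal(loc=1.0, scale=1.0, size=n)

def loglik_terms(theta):
    # Per-observation log-density of N(theta, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta) ** 2

theta, theta_ref = 0.9, 1.0
q = loglik_terms(theta_ref)        # control variates (cheap to pre-sum)
exact = loglik_terms(theta).sum()  # full log-likelihood (expensive in general)

# Unbiased subsampled estimator: sum(q) + (n/m) * sum of sampled residuals
idx = rng.integers(0, n, size=m)   # uniform subsample with replacement
estimate = q.sum() + (n / m) * (loglik_terms(theta)[idx] - q[idx]).sum()
print(exact, estimate)
```

With good control variates the residuals l_i - q_i are small, so even m much smaller than n gives a low-variance estimate.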

Matrix Neural Networks
https://hdl.handle.net/2123/15839 (published 2016-11-02)
Gao, Junbin; Guo, Yi; Wang, Zhiyong
Traditional neural networks assume vectorial inputs as the network is arranged as layers of a single line of computing units called neurons. This special structure requires the non-vectorial inputs such as matrices to be converted into vectors. This process can be problematic. Firstly, the spatial information among elements of the data may be lost during vectorisation. Secondly, the solution space becomes very large which demands very special treatments to the network parameters and high computational cost. To address these issues, we propose matrix neural networks (MatNet), which takes matrices directly as inputs. Each neuron senses summarised information through bilinear mapping from lower layer units in exactly the same way as the classic feed forward neural networks. Under this structure, back propagation and gradient descent combination can be utilised to obtain network parameters efficiently. Furthermore, it can be conveniently extended for multimodal inputs. We apply MatNet to MNIST handwritten digit classification and image super resolution tasks to show its effectiveness. Without too much tweaking, MatNet achieves comparable performance as the state-of-the-art methods in both tasks with considerably reduced complexity.
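A one-layer sketch of the bilinear mapping (shapes and the tanh activation are illustrative, not the paper's exact architecture) also shows the parameter saving versus flattening:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((28, 28))        # e.g. an MNIST digit kept as a matrix
U = rng.standard_normal((10, 28)) * 0.1  # left projection
V = rng.standard_normal((28, 10)) * 0.1  # right projection
B = np.zeros((10, 10))                   # bias matrix

# MatNet-style layer: sigma(U X V + B) consumes the matrix directly
H = np.tanh(U @ X @ V + B)
print(H.shape)                           # (10, 10)

# Parameter count vs a dense layer on the flattened 784-vector
bilinear = U.size + V.size + B.size      # 280 + 280 + 100 = 660
dense = 28 * 28 * (10 * 10) + 10 * 10    # 78,500
print(bilinear, dense)
```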

Estimation of Hierarchical Archimedean Copulas as a Shortest Path Problem
https://hdl.handle.net/2123/14745 (published 2016-04-16)
Matsypura, Dmytro; Neo, Emily; Prokhorov, Artem
We formulate the problem of finding and estimating the optimal hierarchical Archimedean copula as an amended shortest path problem. The standard network flow problem is amended by certain constraints specific to copulas, which limit scalability of the problem. However, we show in dimensions as high as twenty that the new approach dominates the alternatives which usually require recursive estimation or full enumeration.

Efficient estimation of parameters in marginals in semiparametric multivariate models
https://hdl.handle.net/2123/14641 (published 2016-03-01)
Panchenko, Valentyn; Prokhorov, Artem
We consider a general multivariate model where univariate marginal distributions are known up to a common parameter vector and we are interested in estimating that vector without assuming anything about the joint distribution, except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient estimate. If we assume a parametric copula (other than independence) we obtain a full MLE, which is efficient but only under correct copula specification and badly biased if the copula is misspecified. Instead we propose a sieve MLE estimator which improves over QMLE but does not suffer the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over QMLE and over misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as FMLE relative to QMLE provided there is enough dependence between the marginals. An application using insurance company loss and expense data demonstrates empirical relevance of the estimator.

Fast Inference for Intractable Likelihood Problems using Variational Bayes
https://hdl.handle.net/2123/14594 (published 2016-03-30)
Gunawan, David; Tran, Minh-Ngoc; Kohn, Robert
Variational Bayes (VB) is a popular statistical method for Bayesian inference. The existing VB algorithms are restricted to cases where the likelihood is tractable, which precludes their use in many interesting models. Tran et al. (2015) extend the scope of application of VB to cases where the likelihood is intractable but can be estimated unbiasedly, and name the method “Variational Bayes with Intractable Likelihood (VBIL)”. This paper presents a version of VBIL, named Variational Bayes with Intractable Log-Likelihood (VBILL), that is useful for cases, such as big data and big panel data models, where only unbiased estimators of the log-likelihood are available. In particular, we develop an estimation approach, based on subsampling and the MapReduce programming technique, for analysing massive datasets which cannot fit into a single desktop’s memory. The proposed method is theoretically justified in the sense that, apart from an extra Monte Carlo error which can be controlled, it is able to produce estimators as if the true log-likelihood or full data were used. The proposed methodology is robust in the sense that it works well when only highly variable estimates of the log-likelihood are available. The method is illustrated empirically using several simulated datasets and a big real dataset based on the arrival time status of U.S. airlines.
Keywords: Pseudo Marginal Metropolis-Hastings; Debiasing Approach; Big Data; Panel Data; Difference Estimator.

Block-Wise Pseudo-Marginal Metropolis-Hastings
https://hdl.handle.net/2123/14595 (published 2016-03-30)
Tran, M.-N.; Kohn, R.; Quiroz, M.; Villani, M.
The pseudo-marginal Metropolis-Hastings approach is increasingly used for Bayesian inference in statistical models where the likelihood is analytically intractable but can be estimated unbiasedly, such as random effects models and state-space models, or for data subsampling in big data settings. In a seminal paper, Deligiannidis et al. (2015) show how the pseudo-marginal Metropolis-Hastings (PMMH) approach can be made much more efficient by correlating the underlying random numbers used to form the estimate of the likelihood at the current and proposed values of the unknown parameters. Their proposed approach greatly speeds up the standard PMMH algorithm, as it requires a much smaller number of particles to form the optimal likelihood estimate. We present a closely related alternative PMMH approach that divides the underlying random numbers mentioned above into blocks so that the likelihood estimates for the proposed and current values of the likelihood only differ by the random numbers in one block. Our approach is less general than that of Deligiannidis et al. (2015), but has the following advantages. First, it provides a more direct way to control the correlation between the logarithms of the estimates of the likelihood at the current and proposed values of the parameters. Second, the mathematical properties of the method are simplified and made more transparent compared to the treatment in Deligiannidis et al. (2015). Third, blocking is shown to be a natural way to carry out PMMH in, for example, panel data models and subsampling problems. We obtain theory and guidelines for selecting the optimal number of particles, and document large speed-ups in a panel data example and a subsampling problem.

A New Measure of Vector Dependence, with an Application to Financial Contagion
https://hdl.handle.net/2123/14490 (published 2016-03-11)
Medovikov, Ivan; Prokhorov, Artem
We propose a new nonparametric measure of association between an arbitrary number of random vectors. The measure is based on the empirical copula process for the multivariate marginals, corresponding to the vectors, and is insensitive to the within-vector dependence. It is bounded by the [0, 1] interval, covering the entire range of dependence from vector independence to a vector version of a monotone relationship. We study the properties of the new measure under several well-known copulas and provide a nonparametric estimator of the measure, along with its asymptotic theory, under fairly general assumptions. To illustrate the applicability of the new measure, we use it to assess the degree of interdependence between equity markets in North and South America, Europe and Asia, surrounding the financial crisis of 2008. We find strong evidence of previously unknown contagion patterns, with selected regions exhibiting little dependence before and after the crisis and a lot of dependence during the crisis period.

Exact ABC using Importance Sampling
https://hdl.handle.net/2123/13839 (published 2015-09-23)
Tran, Minh-Ngoc; Kohn, Robert
Approximate Bayesian Computation (ABC) is a powerful method for carrying out Bayesian inference when the likelihood is computationally intractable. However, a drawback of ABC is that it is an approximate method that induces a systematic error because it is necessary to set a tolerance level to make the computation tractable. The issue of how to optimally set this tolerance level has been the subject of extensive research. This paper proposes an ABC algorithm based on importance sampling that estimates expectations with respect to the exact posterior distribution given the observed summary statistics. This overcomes the need to select the tolerance level. By exact we mean that there is no systematic error and the Monte Carlo error can be made arbitrarily small by increasing the number of importance samples. We provide a formal justification for the method and study its convergence properties. The method is illustrated in two applications and the empirical results suggest that the proposed ABC based estimators consistently converge to the true values as the number of importance samples increases. Our proposed approach can be applied more generally to any importance sampling problem where an unbiased estimate of the likelihood is required.
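The building block of such an estimator is self-normalised importance sampling: draw from a tractable proposal, weight each draw by target/proposal, and average. A toy Gaussian version (not the ABC setting, where the weights involve an unbiased likelihood estimate):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
draws = rng.normal(0.0, 2.0, size=N)        # proposal g = N(0, 2^2)

# Unnormalised log-densities of target N(1, 1) and proposal N(0, 2^2)
log_target = -0.5 * (draws - 1.0) ** 2
log_proposal = -0.5 * (draws / 2.0) ** 2 - np.log(2.0)

log_w = log_target - log_proposal
w = np.exp(log_w - log_w.max())             # subtract max for stability
w /= w.sum()                                # self-normalise

posterior_mean = np.sum(w * draws)          # should be close to 1.0
print(posterior_mean)
```

The Monte Carlo error shrinks as N grows, with no tolerance-induced systematic error, which is the property the paper transfers to the ABC setting.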

Generalized Information Matrix Tests for Copulas
https://hdl.handle.net/2123/13798 (published 2015-09-11)
Prokhorov, Artem; Schepsmeier, Ulf; Zhu, Yajing
We propose a family of goodness-of-fit tests for copulas. The tests use generalizations of the information matrix (IM) equality of White (1982) and so relate to the copula test proposed by Huang and Prokhorov (2014). The idea is that eigenspectrum-based statements of the IM equality reduce the degrees of freedom of the test's asymptotic distribution and lead to better size-power properties, even in high dimensions. The gains are especially pronounced for vine copulas, where additional benefits come from simplifications of score functions and the Hessian. We derive the asymptotic distribution of the generalized tests, accounting for the non-parametric estimation of the marginals and apply a parametric bootstrap procedure, valid when asymptotic critical values are inaccurate. In Monte Carlo simulations, we study the behavior of the new tests, compare them with several Cramer-von Mises type tests and confirm the desired properties of the new tests in high dimensions.

GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood Inference
https://hdl.handle.net/2123/13795 (published 2015-09-11)
Hill, Jonathan B.; Prokhorov, Artem
We construct a Generalized Empirical Likelihood estimator for a GARCH(1,1) model with a possibly heavy tailed error. The estimator imbeds tail-trimmed estimating equations allowing for over-identifying conditions, asymptotic normality, efficiency and empirical likelihood based confidence regions for very heavy-tailed random volatility data. We show the implied probabilities from the tail-trimmed Continuously Updated Estimator elevate weight for usable large values, assign large but not maximum weight to extreme observations, and give the lowest weight to non-leverage points. We derive a higher order expansion for GEL with imbedded tail-trimming (GELITT), which reveals higher order bias and efficiency properties, available when the GARCH error has a finite second moment. Higher order asymptotics for GEL without tail-trimming requires the error to have moments of substantially higher order. We use first order asymptotics and higher order bias to justify the choice of the number of trimmed observations in any given sample. We also present robust versions of Generalized Empirical Likelihood Ratio, Wald, and Lagrange Multiplier tests, and an efficient and heavy tail robust moment estimator with an application to expected shortfall estimation. Finally, we present a broad simulation study for GEL and GELITT, and demonstrate profile weighted expected shortfall for the Russian Ruble - US Dollar exchange rate. We show that tail-trimmed CUE-GMM dominates other estimators in terms of bias, MSE and approximate normality.
AMS classifications : 62M10 , 62F35. JEL classifications : C13 , C49.
2015-09-11T00:00:00ZBayesian Semi-parametric Realized-CARE Models for Tail Risk Forecasting Incorporating Range and Realized MeasuresGerlach, RichardWang, Chaohttps://hdl.handle.net/2123/138002015-09-11T14:06:23Z2015-09-11T00:00:00ZBayesian Semi-parametric Realized-CARE Models for Tail Risk Forecasting Incorporating Range and Realized Measures
Gerlach, Richard; Wang, Chao
A new framework named Realized Conditional Autoregressive Expectile (Realized-CARE) is proposed, through incorporating a measurement equation into the conventional CARE model, in a framework analogous to Realized-GARCH. The range and realized measures (Realized Variance and Realized Range) are employed as the dependent variables of the measurement equation, since they have proven more efficient than returns for volatility estimation. This measurement equation models the dependence between the range or realized measures and the expectile, and its introduction potentially improves the accuracy of the grid search for the expectile level. In addition, a quadratic-fit target search significantly improves the speed of the grid search. Bayesian adaptive Markov chain Monte Carlo is used for estimation, and demonstrates its superiority over maximum likelihood in a simulation study. Furthermore, we propose an innovative sub-sampled Realized Range and also adopt an existing scaling scheme, in order to deal with the micro-structure noise of high-frequency volatility measures. Compared to the CARE, the parametric GARCH and the Realized-GARCH models, Value-at-Risk and Expected Shortfall forecasting results for 6 index and 3 asset series favor the proposed Realized-CARE model, especially the Realized-CARE model with Realized Range and sub-sampled Realized Range.
2015-09-11T00:00:00ZSupplemental Material for GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood InferenceHill, Jonathan B.Prokhorov, Artemhttps://hdl.handle.net/2123/137972015-09-11T14:06:12Z2015-09-11T00:00:00ZSupplemental Material for GEL Estimation for Heavy-Tailed GARCH Models with Robust Empirical Likelihood Inference
Hill, Jonathan B.; Prokhorov, Artem
The following supplemental material contains an omitted simulation experiment, and omitted proofs of theorems and preliminary lemmata. Section S contains simulation results, and Section A contains an appendix with omitted proofs.
2015-09-11T00:00:00ZFat tails and copulas: limits of diversification revisitedIbragimov, RustamProkhorov, ArtemMo, Jingyuanhttps://hdl.handle.net/2123/137992015-09-11T14:06:20Z2015-09-11T00:00:00ZFat tails and copulas: limits of diversification revisited
Ibragimov, Rustam; Prokhorov, Artem; Mo, Jingyuan
We consider the problem of portfolio risk diversification in a Value-at-Risk framework with heavy-tailed risks and arbitrary dependence captured by a copula function. We use the power law for modelling the tails and investigate whether the benefits of diversification persist when the risks in consideration are allowed to have extremely heavy tails with tail indices less than one and when their copula describes wide classes of dependence structures. We show that for asymptotically large losses with the Eyraud-Farlie-Gumbel-Morgenstern copula, the threshold value of tail indices at which diversification stops being beneficial is the same as for independent losses. We further extend this result to a wider range of dependence structures which can be approximated using power-type copulas and their approximations. This range of dependence structures includes many well known copula families, among which there are comprehensive, Archimedean, asymmetric and tail dependent copulas. In other words, diversification increases Value-at-Risk for tail indices less than one regardless of the nature of dependence between portfolio components within these classes. A wide set of simulations supports these theoretical results.
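A minimal simulation sketch of the headline effect, under simplifying assumptions not in the paper (independent Pareto losses rather than copula-linked ones): with tail index above one, diversification lowers Value-at-Risk; with tail index below one, it raises it.

```python
import numpy as np

rng = np.random.default_rng(1)

def portfolio_var(alpha, n_assets, q=0.99, sims=200_000):
    # i.i.d. classical Pareto(alpha) losses (tail index alpha, minimum 1),
    # equally weighted portfolio; returns the q-level Value-at-Risk.
    losses = rng.pareto(alpha, size=(sims, n_assets)) + 1.0
    return np.quantile(losses.mean(axis=1), q)

# Moderately heavy tails (alpha > 1): diversification lowers VaR.
v1_mod, v20_mod = portfolio_var(2.0, 1), portfolio_var(2.0, 20)
# Extremely heavy tails (alpha < 1): diversification raises VaR.
v1_hvy, v20_hvy = portfolio_var(0.7, 1), portfolio_var(0.7, 20)
print(v1_mod, v20_mod, v1_hvy, v20_hvy)
```

The ordering flips at the tail-index threshold of one, matching the independent-losses benchmark the paper extends to copula dependence.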
2015-09-11T00:00:00ZGeneralized Variance: A Robust Estimator of Stock Price VolatilitySutton, MVasnev, AGerlach, Rhttps://hdl.handle.net/2123/132632015-04-30T14:05:39Z2015-04-30T00:00:00ZGeneralized Variance: A Robust Estimator of Stock Price Volatility
Sutton, M; Vasnev, A; Gerlach, R
This paper proposes an ex-post volatility estimator, called generalized variance, that uses high frequency data to provide measurements robust to the idiosyncratic noise of stock markets caused by market microstructures. The new volatility estimator is analyzed theoretically, examined in a simulation study and evaluated empirically against the two currently dominant measures of daily volatility: realized volatility and realized range. The main finding is that generalized variance is robust to the presence of microstructures while delivering accuracy superior to realized volatility and realized range in several circumstances. The empirical study features Australian stocks from the ASX 20.
2015-04-30T00:00:00ZEndogeneity in Stochastic Frontier ModelsAmsler, ChristineArtem, ProkhorovPeter, Schmidthttps://hdl.handle.net/2123/127552015-02-17T13:05:57Z2015-02-17T00:00:00ZEndogeneity in Stochastic Frontier Models
Amsler, Christine; Artem, Prokhorov; Peter, Schmidt
Stochastic frontier models are typically estimated by maximum likelihood (MLE) or corrected ordinary least squares. The consistency of either estimator depends on exogeneity of the explanatory variables (inputs, in the production frontier setting). We will investigate the case that one or more of the inputs is endogenous, in the simultaneous equation sense of endogeneity. That is, we worry that there is correlation between the inputs and statistical noise or inefficiency. In a standard regression setting, simultaneity is handled by a number of procedures that are numerically or asymptotically equivalent. These include 2SLS; using the residual from the reduced form equations for the endogenous variables as a control function; and MLE of the system that contains the equation of interest plus the unrestricted reduced form equations for the endogenous variables (LIML). We will consider modifications of these standard procedures for the stochastic frontier setting. The paper is mostly a survey and combination of existing results from the stochastic frontier literature and the classic simultaneous equations literature, but it also contains some new results.
2015-02-17T00:00:00ZForecasting risk via realized GARCH, incorporating the realized rangeRichard, GerlachChao, Wanghttps://hdl.handle.net/2123/122352014-12-17T01:29:48Z2014-11-07T00:00:00ZForecasting risk via realized GARCH, incorporating the realized range
Richard, Gerlach; Chao, Wang
The realized GARCH framework is extended to incorporate the realized range, and the intra-day range, as potentially more efficient series of information than realized variance or daily returns, for the purpose of volatility and tail risk forecasting in a financial time series. A Bayesian adaptive Markov chain Monte Carlo method is employed for estimation and forecasting. Compared to a range of well known parametric GARCH models, predictive log-likelihood results across six market index return series favor the realized GARCH models incorporating the realized range. Further, these same models also compare favourably for tail risk forecasting, both during and after the global financial crisis.
2014-11-07T00:00:00ZBayesian Tail Risk Forecasting using Realised GARCHContino, ChristianGerlach, Richardhttps://hdl.handle.net/2123/120602015-02-04T04:19:12Z2014-10-10T00:00:00ZBayesian Tail Risk Forecasting using Realised GARCH
Contino, Christian; Gerlach, Richard
A Realised Volatility GARCH model is developed within a Bayesian framework for the purpose of forecasting Value at Risk and Conditional Value at Risk. Student-t and Skewed Student-t return distributions are combined with Gaussian and Student-t distributions in the measurement equation in a GARCH framework to forecast tail risk in eight international equity index markets over a four year period. Three Realised Volatility proxies are considered within this framework. Realised Volatility GARCH models show a marked improvement compared to ordinary GARCH for both Value at Risk and Conditional Value at Risk forecasting. This improvement is consistent across a variety of data, volatility model specifications and distributions, and demonstrates that Realised Volatility is superior when producing volatility forecasts. Realised Volatility models implementing a Skewed Student-t distribution for returns in the GARCH equation are favoured.
2014-10-10T00:00:00ZBayesian Assessment of Dynamic Quantile ForecastsGerlach, RichardChen, Cathy W.S.Lin, Edward M.H.https://hdl.handle.net/2123/118162014-10-09T22:53:23Z2014-09-10T00:00:00ZBayesian Assessment of Dynamic Quantile Forecasts
Gerlach, Richard; Chen, Cathy W.S.; Lin, Edward M.H.
Methods for Bayesian testing and assessment of dynamic quantile forecasts are proposed. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of a time series of, quantile forecasts are developed. To evaluate the relevant marginal likelihoods involved, analytic integration methods are utilised when possible, otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for Value at Risk (VaR) in several financial market data series.
2014-09-10T00:00:00ZConsistent Estimation of Linear Regression Models Using Matched DataProkhorov, ArtemHirukawa, Masayukihttps://hdl.handle.net/2123/117732014-09-05T17:52:44Z2014-09-05T00:00:00ZConsistent Estimation of Linear Regression Models Using Matched Data
Prokhorov, Artem; Hirukawa, Masayuki
Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a non-standard convergence rate to its probability limit. If only a few variables are used to impute the missing data then it is possible to correct for the bias. We propose two semi-parametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation and their convergence rates depend on the number of variables used in matching. We can attain the parametric convergence rate if that number is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases.
2014-09-05T00:00:00ZSemi-parametric Expected Shortfall ForecastingGerlach, RichardChen, Cathy W.S.https://hdl.handle.net/2123/104572019-05-07T02:04:41Z2014-04-01T00:00:00ZSemi-parametric Expected Shortfall Forecasting
Gerlach, Richard; Chen, Cathy W.S.
Intra-day sources of data have proven effective for dynamic volatility and tail risk estimation. Expected shortfall is a tail risk measure, now recommended by the Basel Committee, involving a conditional expectation that can be semi-parametrically estimated via an asymmetric sum of squares function. The conditional autoregressive expectile class of model, used to indirectly model expected shortfall, is generalised to incorporate information on the intra-day range. An asymmetric Gaussian density model error formulation allows a likelihood to be developed that leads to semi-parametric estimation and forecasts of expectiles, and subsequently of expected shortfall. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation, while their performance is assessed via a simulation study. The proposed models compare favourably with a large range of competitors in an empirical study forecasting seven financial return series over a ten year period.
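A minimal sketch of the asymmetric-least-squares estimation underlying expectiles (a generic illustration, not the paper's CARE model): the tau-expectile solves a weighted first-order condition and can be found by a simple fixed-point iteration.

```python
import numpy as np

def expectile(y, tau, iters=200):
    # Asymmetric least squares: the tau-expectile m minimises
    # sum_i |tau - 1{y_i <= m}| * (y_i - m)^2, so m satisfies a weighted-mean
    # fixed point with weights tau above m and (1 - tau) below it.
    m = y.mean()
    for _ in range(iters):
        w = np.where(y <= m, 1.0 - tau, tau)
        m = np.sum(w * y) / np.sum(w)
    return m

rng = np.random.default_rng(6)
y = rng.standard_normal(50_000)
e50, e95 = expectile(y, 0.5), expectile(y, 0.95)
print(e50, e95)   # the 0.5-expectile is the sample mean; higher tau moves rightward
```

Expected shortfall can then be recovered indirectly from expectiles at an appropriately chosen tau, which is the route the conditional autoregressive expectile literature takes.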
2014-04-01T00:00:00ZConfidence Levels for CVaR Risk Measures and Minimax Limits*Anderson, EdwardXu, HuifuZhang, Dalihttps://hdl.handle.net/2123/99432019-05-07T02:04:41Z2014-01-01T00:00:00ZConfidence Levels for CVaR Risk Measures and Minimax Limits*
Anderson, Edward; Xu, Huifu; Zhang, Dali
Conditional value at risk (CVaR) has been widely used as a risk measure in finance. When the confidence level of CVaR is set close to 1, the CVaR risk measure approximates the extreme (worst scenario) risk measure. In this paper, we present a quantitative analysis of the relationship between the two risk measures and its impact on optimal decision making when we wish to minimize the respective risk measures. We also investigate the difference between the optimal solutions to the two optimization problems with identical objective function but under constraints on the two risk measures. We discuss the benefits of a sample average approximation scheme for the CVaR constraints and investigate the convergence of the optimal solution obtained from this scheme as the sample size increases. We use some portfolio optimization problems to investigate the performance of the CVaR approximation approach. Our numerical results demonstrate how reducing the confidence level can lead to a better overall performance.
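A small sketch of the sample average approximation of CVaR mentioned above (illustrative only, with a synthetic loss sample): as the confidence level approaches 1, the sample CVaR climbs toward the worst observed loss.

```python
import numpy as np

rng = np.random.default_rng(2)
losses = rng.standard_normal(100_000)   # a synthetic sample of portfolio losses

def sample_cvar(x, beta):
    # Sample average approximation of CVaR_beta: the mean of the losses at or
    # beyond the beta-quantile (the VaR) of the empirical distribution.
    var = np.quantile(x, beta)
    return x[x >= var].mean()

cvars = {b: sample_cvar(losses, b) for b in (0.90, 0.99, 0.999)}
worst = losses.max()
print(cvars, worst)
```

The monotone approach of CVaR to the worst-case loss as beta rises is the relationship the paper quantifies for optimization under CVaR constraints.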
2014-01-01T00:00:00ZTwo-Sample Nonparametric Estimation of Intergenerational Income MobilityMurtazashvili, IrinaLiu, DiProkhorov, Artemhttps://hdl.handle.net/2123/92932019-05-07T02:04:41Z2013-08-07T00:00:00ZTwo-Sample Nonparametric Estimation of Intergenerational Income Mobility
Murtazashvili, Irina; Liu, Di; Prokhorov, Artem
We estimate intergenerational income mobility in the USA and Sweden. To measure the degree to which income status is transmitted from one generation to another we propose a nonparametric estimator, which is particularly relevant for cross-country comparisons. Our approach allows intergenerational mobility to vary across observable family characteristics. Furthermore, it fits situations when data on fathers and sons come from different samples. Finally, our estimator is consistent in the presence of measurement error in fathers' long-run economic status. We find that family background captured by fathers' education matters for intergenerational income persistence in the USA more than in Sweden suggesting that the character of inequality in the two countries is rather different.
2013-08-07T00:00:00ZCompeting for contracts with buyer uncertainty: Choosing price and quality variablesAnderson, EdwardQian, Chenghttps://hdl.handle.net/2123/90712014-12-15T00:11:13Z2013-05-09T00:00:00ZCompeting for contracts with buyer uncertainty: Choosing price and quality variables
Anderson, Edward; Qian, Cheng
We model a situation in which a single firm evaluates competing suppliers and selects just one. Suppliers submit bids involving both price and quality variables. The buyer makes a choice which from the supplier's perspective appears to contain a stochastic element - for example the buyer may have information, which is not shared with the suppliers, and that gives one supplier an advantage in the final choice. We use a discrete choice model of buyer choice (e.g. multinomial logit). Our main result is that the supplier's choice of the quality variables is not affected by the competitive environment. Thus the suppliers compete only on price. We compare this with a second model in which the buyer's weighting on different quality variables is uncertain at the time bids are made.
2013-05-09T00:00:00ZPractical use of sensitivity in econometrics with an illustration to forecast combinationsVasnev, AndreyMagnus, Jan Rhttps://hdl.handle.net/2123/89642014-09-05T04:19:54Z2013-03-01T00:00:00ZPractical use of sensitivity in econometrics with an illustration to forecast combinations
Vasnev, Andrey; Magnus, Jan R
Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination and sensitivity-based weights are introduced. All concepts are illustrated through the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than other weights.
2013-03-01T00:00:00ZForecast combination for U.S. recessions with real-time dataVasnev, AndreyPauwels, Laurenthttps://hdl.handle.net/2123/89652014-12-15T00:12:05Z2013-03-01T00:00:00ZForecast combination for U.S. recessions with real-time data
Vasnev, Andrey; Pauwels, Laurent
This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts, utilising the well-established coincident indicators and yield curve models and allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared to each model's own forecasting performance.
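A hedged sketch of log-score based combination weights for probability forecasts of a binary event (a generic illustration; the paper's recession application uses real-time NBER data, not this toy input): each model's weight is proportional to the exponential of its cumulative log score.

```python
import numpy as np

def logscore_weights(probs, outcomes):
    # probs: (T, K) probability forecasts of a binary event from K models;
    # outcomes: (T,) realisations in {0, 1}. Weights are proportional to the
    # exponentiated cumulative log score of each model (a softmax over models).
    eps = 1e-12
    p = np.clip(probs, eps, 1 - eps)
    ls = np.sum(outcomes[:, None] * np.log(p)
                + (1 - outcomes[:, None]) * np.log(1 - p), axis=0)
    w = np.exp(ls - ls.max())
    return w / w.sum()

outcomes = np.array([1, 0, 0, 1, 0, 0, 0, 1])
probs = np.column_stack([
    np.where(outcomes == 1, 0.8, 0.2),   # a well-calibrated hypothetical model
    np.full(outcomes.size, 0.5),         # an uninformative benchmark
])
w = logscore_weights(probs, outcomes)
combined = probs @ w                     # combined probability forecast
print(w, combined[:3])
```

Recomputing the weights each period as new realisations arrive gives the time-varying weights the paper describes.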
2013-03-01T00:00:00ZMultiple Event Incidence and Duration Analysis for Credit Data Incorporating Non-Stochastic Loan MaturityVasnev, AndreyGerlach, RichardWatkins, Johnhttps://hdl.handle.net/2123/89632019-05-07T02:04:40Z2012-12-01T00:00:00ZMultiple Event Incidence and Duration Analysis for Credit Data Incorporating Non-Stochastic Loan Maturity
Vasnev, Andrey; Gerlach, Richard; Watkins, John
Applications of duration analysis in Economics and Finance exclusively employ methods for events of stochastic duration. In application to credit data, previous research incorrectly treats the time to pre-determined maturity events as censored stochastic event times. The medical literature has binary parametric ‘cure rate’ models that deal with populations that never experienced the modelled event. We propose and develop a Multinomial parametric incidence and duration model, incorporating such populations. In the class of cure rate models, this is the first fully parametric multinomial model and is the first framework to accommodate an event with pre-determined duration. The methodology is applied to unsecured personal loan credit data provided by one of Australia’s largest financial services organizations. This framework is shown to be more flexible and predictive through a simulation and empirical study that reveals: simulation results of estimated parameters with a large reduction in bias; superior forecasting of duration; explanatory variables can act in different directions upon incidence and duration; and, variables exist that are statistically significant in explaining only incidence or duration.
2012-12-01T00:00:00ZPractical considerations for optimal weights in density forecast combinationVasnev, AndreyPauwels, Laurenthttps://hdl.handle.net/2123/89322019-05-07T02:04:40Z2013-01-01T00:00:00ZPractical considerations for optimal weights in density forecast combination
Vasnev, Andrey; Pauwels, Laurent
The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Recently, a paper by Hall and Mitchell (IJF, 2007) proposes to combine density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions by allocating all the weight to one density forecast only. This paper’s practical recommendation is to have an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement.
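The corner-solution behaviour can be sketched in a toy version of the optimization (an illustration under assumed densities, not the paper's setup): maximise the combined log score over the weight on one of two candidate predictive densities.

```python
import numpy as np

rng = np.random.default_rng(5)

def norm_pdf(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def optimal_weight(x):
    # Grid search over w in [0, 1] maximising the combined log score
    # sum_t log( w * f1(x_t) + (1 - w) * f2(x_t) ) for f1 = N(0,1), f2 = N(0,2).
    grid = np.linspace(0.0, 1.0, 1001)
    f1, f2 = norm_pdf(x, 1.0), norm_pdf(x, 2.0)
    scores = [np.sum(np.log(w * f1 + (1.0 - w) * f2)) for w in grid]
    return grid[int(np.argmax(scores))]

# With a long evaluation sample drawn from N(0,2), nearly all weight goes to f2;
# with only a handful of periods the optimum often lands at a corner (0 or 1).
w_long = optimal_weight(rng.normal(0.0, 2.0, 2000))
short_ws = [optimal_weight(rng.normal(0.0, 2.0, 5)) for _ in range(100)]
corner_rate = np.mean([w in (0.0, 1.0) for w in short_ws])
print(w_long, corner_rate)
```

The short-sample corner rate printed here is the phenomenon that motivates the paper's recommendation of a separate training sample for the weights.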
2013-01-01T00:00:00ZForecast combination for U.S. recessions with real-time dataVasnev, AndreyPauwels, Laurenthttps://hdl.handle.net/2123/89332019-05-07T02:04:41Z2013-01-01T00:00:00ZForecast combination for U.S. recessions with real-time data
Vasnev, Andrey; Pauwels, Laurent
This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts, utilising the well-established coincident indicators and yield curve models and allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when combining the probability forecasts of both the coincident indicators model and the yield curve model, compared to each model's own forecasting performance.
2013-01-01T00:00:00ZMaximum likelihood estimation of time series models: the Kalman filter and beyondProietti, TommasoLuati, Alessandrahttps://hdl.handle.net/2123/83372019-05-07T02:04:40Z2012-05-01T00:00:00ZMaximum likelihood estimation of time series models: the Kalman filter and beyond
Proietti, Tommaso; Luati, Alessandra
The purpose of this chapter is to provide a comprehensive treatment of likelihood inference for state space models. These are a class of time series models relating an observable time series to quantities called states, which are characterized by a simple temporal dependence structure, typically a first order Markov process. The states sometimes have a substantial interpretation. Key estimation problems in economics concern latent variables, such as the output gap, potential output, the non-accelerating-inflation rate of unemployment, or NAIRU, core inflation, and so forth. Time-varying volatility, which is quintessential to finance, is an important feature also in macroeconomics. In the multivariate framework relevant features can be common to different series, meaning that the driving forces of a particular feature and/or the transmission mechanism are the same. The objective of this chapter is to review the Kalman filter and to discuss maximum likelihood inference, starting from the linear Gaussian case and moving to the extensions to a nonlinear and non-Gaussian framework.
2012-05-01T00:00:00ZPortfolio Margining: Strategy vs RiskCoffman, E.G. JrMatsypura, D.Timkovsky, V.G.https://hdl.handle.net/2123/81712019-05-07T02:04:40Z2010-03-01T00:00:00ZPortfolio Margining: Strategy vs Risk
Coffman, E.G. Jr; Matsypura, D.; Timkovsky, V.G.
This paper presents the results of a novel mathematical and experimental analysis of two approaches to margining customer accounts, strategy-based and risk-based. Building combinatorial models of hedging mechanisms of these approaches, we show that the strategy-based approach is, at this point, the most appropriate one for margining security portfolios in customer margin accounts, while the risk-based approach can work efficiently for margining only index portfolios in customer margin accounts and inventory portfolios of brokers. We also show that the application of the risk-based approach to security portfolios in customer margin accounts is very risky and can result in the pyramid of debt in the bullish market and the pyramid of loss in the bearish market. The results of this paper support the thesis that the use of the risk-based approach to margining customer accounts with positions in stocks and stock options since April 2007 influenced and triggered the U.S. stock market crash in October 2008. We also provide recommendations on ways to set appropriate margin requirements to help avoid such failures in the future.
2010-03-01T00:00:00ZCombinatorics of Option Spreads: The Margining AspectMatsypura, D.Timkovsky, V.G.https://hdl.handle.net/2123/81722019-05-07T02:04:40Z2010-07-01T00:00:00ZCombinatorics of Option Spreads: The Margining Aspect
Matsypura, D.; Timkovsky, V.G.
In December 2005, the U.S. Securities and Exchange Commission approved margin rules for complex option spreads with 5, 6, 7, 8, 9, 10 and 12 legs. Only option spreads with 2, 3 or 4 legs were recognized before. Taking advantage of option spreads with a large number of legs substantially reduces margin requirements and, at the same time, adequately estimates risk for margin accounts with positions in options. In this paper we present combinatorial models for known and newly discovered option spreads with up to 134 legs. We propose their full characterization in terms of matchings, alternating cycles and chains in graphs with bicolored edges. We show that the combinatorial analysis of option spreads reveals powerful hedging mechanisms in the structure of margin accounts, and that the problem of minimizing the margin requirement for a portfolio of option spreads can be solved in polynomial time using network flow algorithms. We also give recommendations on how to create more efficient margin rules for options.
2010-07-01T00:00:00ZStochastic trends and seasonality in economic time series: new evidence from Bayesian stochastic model specification searchProietti, TommasoGrassi, Stefanohttps://hdl.handle.net/2123/81662019-05-07T02:04:40Z2011-09-01T00:00:00ZStochastic trends and seasonality in economic time series: new evidence from Bayesian stochastic model specification search
Proietti, Tommaso; Grassi, Stefano
An important issue in modelling economic time series is whether key unobserved components representing trends, seasonality and calendar components, are deterministic or evolutive. We address it by applying a recently proposed Bayesian variable selection methodology to an encompassing linear mixed model that features, along with deterministic effects, additional random explanatory variables that account for the evolution of the underlying level, slope, seasonality and trading days. Variable selection is performed by estimating the posterior model probabilities using a suitable Gibbs sampling scheme. The paper conducts an extensive empirical application on a large and representative set of monthly time series concerning industrial production and retail turnover. We find strong support for the presence of stochastic trends in the series, either in the form of a time-varying level, or, less frequently, of a stochastic slope, or both. Seasonality is a more stable component: in only 70% of the cases were we able to select at least one stochastic trigonometric cycle out of the six possible cycles. Most frequently the time variation is found in correspondence with the fundamental and the first harmonic cycles. An interesting and intuitively plausible finding is that the probability of estimating time-varying components increases with the sample size available. However, even for very large sample sizes we were unable to find stochastically varying calendar effects.
2011-09-01T00:00:00ZDoes the Box-Cox transformation help in forecasting macroeconomic time series?Proietti, TommasoLütkepohl, Helmuthttps://hdl.handle.net/2123/81672019-05-07T02:04:40Z2011-10-01T00:00:00ZDoes the Box-Cox transformation help in forecasting macroeconomic time series?
Proietti, Tommaso; Lütkepohl, Helmut
The paper investigates whether transforming a time series leads to an improvement in forecasting accuracy. The class of transformations that is considered is the Box-Cox power transformation, which applies to series measured on a ratio scale. We propose a nonparametric approach for estimating the optimal transformation parameter based on the frequency domain estimation of the prediction error variance, and also conduct an extensive recursive forecast experiment on a large set of seasonal monthly macroeconomic time series related to industrial production and retail turnover. In about one fifth of the series considered the Box-Cox transformation produces forecasts significantly better than the untransformed data at the one-step-ahead horizon; in most of the cases the logarithmic transformation is the relevant one. As the forecast horizon increases, the evidence in favour of a transformation becomes less strong. Typically, the naïve predictor that just reverses the transformation leads to a lower mean square error than the optimal predictor at short forecast leads. We also discuss whether the preliminary in-sample frequency domain assessment conducted provides reliable guidance as to which series should be transformed for significantly improving the predictive performance.
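A minimal sketch of the naïve predictor discussed above (illustrative, with a synthetic series and an assumed AR(1) on the transformed scale): forecast after a Box-Cox transform, then simply reverse the transform.

```python
import numpy as np

def boxcox(y, lam):
    # Box-Cox power transform for a positive series (lam = 0 gives the log).
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def inv_boxcox(z, lam):
    return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

rng = np.random.default_rng(3)
y = np.exp(np.cumsum(rng.normal(0.01, 0.05, 200)))   # a positive, trending series

# Naive one-step predictor: fit an AR(1) on the transformed scale and just
# reverse the transform, ignoring the Jensen-type correction an optimal
# predictor would apply.
lam = 0.0                                             # log transform
z = boxcox(y, lam)
slope, intercept = np.polyfit(z[:-1], z[1:], 1)
y_hat = inv_boxcox(slope * z[-1] + intercept, lam)
print(y_hat)
```

The gap between this naïve back-transformed forecast and the optimal predictor is exactly the trade-off the paper evaluates at short forecast leads.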
2011-10-01T00:00:00ZThe Multistep Beveridge-Nelson DecompositionProietti, Tommasohttps://hdl.handle.net/2123/81682019-05-07T02:04:40Z2011-10-01T00:00:00ZThe Multistep Beveridge-Nelson Decomposition
Proietti, Tommaso
The Beveridge-Nelson decomposition defines the trend component in terms of the eventual forecast function, as the value the series would take if it were on its long-run path. The paper introduces the multistep Beveridge-Nelson decomposition, which arises when the forecast function is obtained by the direct autoregressive approach, which optimizes the predictive ability of the AR model at forecast horizons greater than one. We compare our proposal with the standard Beveridge-Nelson decomposition, for which the forecast function is obtained by iterating the one-step-ahead predictions via the chain rule. We illustrate that the multistep Beveridge-Nelson trend is more efficient than the standard one in the presence of model misspecification and we subsequently assess the predictive validity of the extracted transitory component with respect to future growth.
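A sketch of the standard (one-step) Beveridge-Nelson decomposition in the simplest case, assumed here for illustration: the first difference follows an AR(1) with known parameters, so the trend is the eventual forecast level in closed form.

```python
import numpy as np

def bn_trend_ar1(y, phi, mu):
    # Beveridge-Nelson trend when dy_t = mu + phi*(dy_{t-1} - mu) + e_t:
    # trend_t = y_t + (phi / (1 - phi)) * (dy_t - mu), the long-run forecast level.
    dy = np.diff(y)
    return y[1:] + (phi / (1.0 - phi)) * (dy - mu)

# Simulate a series whose first difference is an AR(1).
rng = np.random.default_rng(4)
mu, phi, T = 0.02, 0.4, 300
dy = np.empty(T)
dy[0] = mu
for t in range(1, T):
    dy[t] = mu + phi * (dy[t - 1] - mu) + rng.normal(0.0, 1.0)
y = np.cumsum(dy)

trend = bn_trend_ar1(y, phi, mu)
cycle = y[1:] - trend        # the transitory component
print(trend[-1], cycle[-1])
```

The multistep variant replaces the iterated AR forecast function above with direct multi-horizon autoregressions, which is the paper's contribution.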
2011-10-01T00:00:00ZBayesian Semi-parametric Expected Shortfall Forecasting in Financial MarketsGerlach, RichardChen, Cathy W.S.Lin, Liou-Yanhttps://hdl.handle.net/2123/81692019-05-07T02:04:40Z2012-01-01T00:00:00ZBayesian Semi-parametric Expected Shortfall Forecasting in Financial Markets
Gerlach, Richard; Chen, Cathy W.S.; Lin, Liou-Yan
Bayesian semi-parametric estimation has proven effective for quantile estimation in general and specifically in financial Value at Risk forecasting. Expected shortfall is a competing tail risk measure, involving a conditional expectation beyond a quantile, that has recently been semi-parametrically estimated via asymmetric least squares and so-called expectiles. An asymmetric Gaussian density is proposed allowing a likelihood to be developed that leads to Bayesian semi-parametric estimation and forecasts of expectiles and expected shortfall. Further, the conditional autoregressive expectile class of model is generalised to two fully nonlinear families. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation in these families. The proposed models are clearly favoured in an empirical study forecasting eleven financial return series: clear evidence of more accurate expected shortfall forecasting, compared to a range of competing methods is found. Further, the most favoured models are those estimated by Bayesian methods.
2012-01-01T00:00:00Z