Research Publications and Outputs
https://hdl.handle.net/2123/28510
Updated: Thu, 10 Oct 2024 08:19:35 GMT
https://hdl.handle.net/2123/32973
The Online Shortest Path Problem: Learning Travel Times Using a Multiarmed Bandit Framework
Lagos, Tomas; Auad, Ramon; Lagos, Felipe
In the age of e-commerce, logistics companies often operate within extensive road networks without accurate knowledge of travel times for their specific fleet of vehicles. Moreover, millions of dollars are spent on routing services that fail to capture the unique characteristics of a company's drivers and vehicles. In this work, we address the challenge faced by a logistics operator with limited travel time information, aiming to find the optimal expected shortest path between origin-destination pairs. We model this problem as an online shortest path problem, common to many last-mile routing settings: given a graph whose arcs' travel times are stochastic and follow an unknown distribution, the objective is to find a vehicle route of minimum travel time from an origin to a destination. The planner progressively collects travel condition data as drivers complete their routes. Inspired by the combinatorial multiarmed bandit and kriging literature, we propose three methods with distinct features to effectively learn the optimal shortest path, highlighting the practical advantages of incorporating spatial correlation in the learning process. Our approach balances exploration (improving estimates for unexplored arcs) and exploitation (executing the minimum expected time path) using the Thompson sampling algorithm. In each iteration, our algorithm executes the path that minimizes the expected travel time based on data from a posterior distribution of the speeds of the arcs. We conduct a computational study comprising two settings: a set of four artificial instances and a real-life case study. The case study uses empirical data of taxis in the 17-km-radius area of the center of Beijing, encompassing Beijing's "5th Ring Road." In both settings, our algorithms demonstrate efficient and effective balancing of the exploration-exploitation trade-off.
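As a concrete illustration of the exploration-exploitation loop, the sketch below runs Thompson sampling on a toy network with Gaussian arc travel times. The graph, priors, and known-noise conjugate model are illustrative assumptions, not the paper's instances or its exact algorithm:

```python
import heapq
import random

random.seed(0)

# Toy directed graph: arc -> true mean travel time (unknown to the learner).
arcs = {('s', 'a'): 4.0, ('s', 'b'): 2.0, ('a', 't'): 1.0,
        ('b', 't'): 4.0, ('b', 'a'): 1.0}
NOISE_SD = 1.0                       # observation noise, assumed known here
PRIOR_MEAN, PRIOR_VAR = 5.0, 25.0

# Normal-Normal conjugate posterior per arc: [mean, variance].
post = {a: [PRIOR_MEAN, PRIOR_VAR] for a in arcs}

def dijkstra(cost, source='s', sink='t'):
    """Shortest path under the given arc costs; returns the arc list."""
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == sink:
            break
        if d > dist.get(u, float('inf')):
            continue
        for (i, j), c in cost.items():
            if i == u and d + c < dist.get(j, float('inf')):
                dist[j] = d + c
                prev[j] = (i, j)
                heapq.heappush(pq, (dist[j], j))
    path, node = [], sink
    while node != source:
        path.append(prev[node])
        node = prev[node][0]
    return path[::-1]

for episode in range(200):
    # Thompson step: sample a mean travel time per arc from its posterior
    # (clipped at a small positive value so Dijkstra's assumptions hold)...
    sampled = {a: max(0.1, random.gauss(m, v ** 0.5))
               for a, (m, v) in post.items()}
    # ...and execute the path that is shortest under the sampled costs.
    for arc in dijkstra(sampled):
        obs = random.gauss(arcs[arc], NOISE_SD)        # realized travel time
        m, v = post[arc]                               # conjugate update
        v_new = 1.0 / (1.0 / v + 1.0 / NOISE_SD ** 2)
        post[arc] = [v_new * (m / v + obs / NOISE_SD ** 2), v_new]

best = dijkstra({a: m for a, (m, v) in post.items()})
print(best)   # should converge to s -> b -> a -> t (true cost 4.0)
```

With tight posteriors on the well-traveled arcs, the final path under posterior means should recover the true minimum-time route.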
Published: Mon, 01 Jan 2024
https://hdl.handle.net/2123/30290
Closed-Form Solutions for Distributionally Robust Inventory Management: A Controlled Relaxation Method
Li, Zhaolin; Qi (Grace), Fu; Chung-Piaw, Teo
When only the moments (mean, variance, or t-th moment) of the underlying distribution are known, many max-min optimization models choose actions to maximize the firm's expected profit against the most unfavorable distribution. We introduce relaxation scalars to reformulate the max-min model as a relaxed model and demonstrate that closed-form solutions (if they exist in the first place) can be quickly identified when we reduce the relaxation scalars to zero. To demonstrate the effectiveness of this new method, we provide closed-form solutions, hitherto unknown, for several distributionally robust inventory models, including the newsvendor problem with mean and t-th moment (for t > 1), the pricing model, the capacity planning model with multiple supply sources, and the two-product inventory system with a common component.
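For context, the best-known closed form of this type is Scarf's classical max-min newsvendor quantity under mean-variance ambiguity, which results of the mean/t-th-moment kind extend; a minimal sketch (the example numbers are made up):

```python
from math import sqrt

def scarf_order_quantity(mu, sigma, c, p):
    """Scarf's (1958) max-min newsvendor quantity under mean-variance
    ambiguity: order q* = mu + (sigma/2) * (sqrt((p-c)/c) - sqrt(c/(p-c))),
    and order nothing when the margin is too thin relative to demand
    variability (p/c <= 1 + sigma**2 / mu**2)."""
    if p / c <= 1 + (sigma / mu) ** 2:
        return 0.0
    ratio = (p - c) / c
    return mu + 0.5 * sigma * (sqrt(ratio) - sqrt(1.0 / ratio))

# Example: mean demand 100, std 30, unit cost 1, price 3.
q = scarf_order_quantity(mu=100.0, sigma=30.0, c=1.0, p=3.0)
print(round(q, 2))   # -> 110.61
```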
Published: Sun, 01 Jan 2023
https://hdl.handle.net/2123/29332.3
Closed-Form Solutions for Distributionally Robust Inventory Management: Extended Reformulation using Zero-Sum Game
Li, Zhaolin; Qi, Fu; Chung-Piaw, Teo
When only the moments (mean, variance, or t-th moment) of the underlying distribution are known, numerous max-min optimization models can be interpreted as a zero-sum game, in which the decision maker (DM) chooses actions to maximize her expected profit while Adverse Nature chooses a distribution, subject to the moment conditions, to minimize the DM's expected profit. We propose a new method to efficiently solve this class of zero-sum games under moment conditions. By applying the min-max inequality, our method reformulates the zero-sum game as a robust moral hazard model, in which Adverse Nature chooses both the distribution and the actions to minimize the DM's expected profit subject to incentive compatibility (IC) constraints. Under quasi-concavity, these IC constraints are replaced by the first-order conditions, which give rise to extra moment constraints. Interestingly, these extra moment constraints drastically reduce the number of corner points to be considered in the corresponding semi-infinite programming models. We show that in equilibrium these moment constraints are binding but have zero Lagrangian multipliers, and thus facilitate closed-form solutions in several application examples with different levels of complexity. The efficiency of the method enables us to solve a large class of zero-sum games and the corresponding max-min robust optimization models.
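The zero-sum-game view can be illustrated on a finite payoff matrix, where the adversary picks a column rather than a distribution; the standard linear-programming solution is sketched below (the matrix is made up for illustration and is not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix of a finite zero-sum game (row player maximizes).
A = np.array([[3.0, -1.0],
              [-2.0, 4.0]])

m, n = A.shape
# Variables: mixed strategy x (m entries) and the game value v.
# Maximize v  s.t.  x^T A e_j >= v for every column j, sum(x) = 1, x >= 0.
c = np.zeros(m + 1); c[-1] = -1.0                   # minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])           # v - x^T A e_j <= 0
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]           # v is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
x, v = res.x[:m], res.x[-1]
print(x, v)   # equilibrium row strategy (0.6, 0.4) and game value 1.0
```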
Published: Wed, 27 Jul 2022
https://hdl.handle.net/2123/29481
Managing Inventory and Financing Decisions Under Ambiguity
Li, Zhaolin; Qian, Cheng
Micro, small, and medium-sized enterprises (MSMEs) face persistent challenges in raising capital, and one practical reason is the high level of ambiguity in this sector. As many not-for-profit organizations and governmental agencies strengthen financial support to MSMEs, the important issue arises of stimulating growth while protecting fund providers under ambiguity. We propose a robust optimization framework to jointly determine the firm's production planning and financing decisions in a principal-agent model in the presence of distributional ambiguity. We apply the notion of absolute robustness to derive a financing agreement that is both feasibility-robust and performance-robust. We assume that both the firm and the investor base their decisions on two fundamental descriptive statistics: the mean and the variance of demand. The firm jointly determines the production quantity and financing agreement to maximize the worst-case expected profit, while the investor approves the financing agreement if the worst-case expected return covers the cost of capital. We show that equity financing is one of the robust optimal financing agreements. We also consider loan financing as an alternative and derive the firm's robust optimal interest rate and production quantity in closed form. Notably, the robust optimal interest rate depends on demand variability and the asset recovery ratio, which comprehensively reflects the value of collateral, initial capital, and production quantity.
Published: Tue, 30 Aug 2022
https://hdl.handle.net/2123/22140
Higher Moment Constraints for Predictive Density Combinations
Pauwels, Laurent; Radchenko, Peter; Vasnev, Andrey
The majority of financial data exhibit asymmetry and heavy tails, which makes forecasting the entire density critically important. Recently, a forecast combination methodology has been developed to combine predictive densities. We show that combining individual predictive densities that are skewed and/or heavy-tailed results in significantly reduced skewness and kurtosis. We propose a solution to overcome this problem by deriving optimal log score weights under Higher-order Moment Constraints (HMC). The statistical properties of these weights are investigated theoretically and through a simulation study. Consistency and asymptotic distribution results for the optimal log score weights with and without higher-moment constraints are derived. An empirical application that uses the S&P 500 daily index returns illustrates that the proposed HMC weight density combinations perform very well relative to other combination methods.
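A minimal sketch of log-score weight selection for a two-density pool, without the paper's higher-moment constraints; all data and both forecasters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Realized "returns" and two competing predictive densities evaluated at
# the realizations (everything here is synthetic, for illustration only).
y = rng.standard_t(df=5, size=500)           # heavy-tailed observations

def norm_pdf(x, s):
    """Density of N(0, s^2) evaluated at x."""
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

p1 = norm_pdf(y, 1.0)                        # too-light-tailed forecaster
p2 = norm_pdf(y, 2.0)                        # over-dispersed forecaster

def avg_log_score(w):
    """Average log score of the two-density mixture with weight w on p1."""
    return np.mean(np.log(w * p1 + (1 - w) * p2))

# Optimal log-score weight by grid search over the simplex (here, [0, 1]).
grid = np.linspace(0.0, 1.0, 1001)
w_star = grid[np.argmax([avg_log_score(w) for w in grid])]
print(w_star, avg_log_score(w_star))
```

The HMC approach additionally constrains higher moments of the combined density, which this unconstrained sketch omits.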
Published: Fri, 01 May 2020
https://hdl.handle.net/2123/21370
Python Language Companion to Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares
Leung, Jessica; Matsypura, Dmytro
This Python Language Companion is drafted as a supplement to the book Introduction to Applied Linear Algebra: Vectors, Matrices, and Least Squares by Stephen Boyd and Lieven Vandenberghe (referred to here as VMLS). It is meant to show how the ideas and methods in VMLS can be expressed and implemented in the programming language Python.
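In the spirit of the companion, a short numpy example solving a least squares problem via the QR factorization, one of the core methods covered in VMLS (the data are made up):

```python
import numpy as np

# Least squares: minimize ||Ax - b||^2 for a tall matrix A with
# linearly independent columns, here fitting a line to four points.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0], [1.0, 4.0]])
b = np.array([2.1, 2.9, 4.2, 4.8])

Q, R = np.linalg.qr(A)                 # A = QR, R upper triangular
x_hat = np.linalg.solve(R, Q.T @ b)    # back-substitute R x = Q^T b

# Same solution from numpy's built-in least squares solver.
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x_hat, x_np)   # both give intercept 1.15, slope 0.94
```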
Published: Wed, 13 Nov 2019
https://hdl.handle.net/2123/18063
Consistent Estimation of Linear Regression Models Using Matched Data
Hirukawa, Masayuki; Prokhorov, Artem
Economists often use matched samples, especially when dealing with earnings data where a number of missing observations need to be imputed. In this paper, we demonstrate that the ordinary least squares estimator of the linear regression model using matched samples is inconsistent and has a nonstandard convergence rate to its probability limit. If only a few variables are used to impute the missing data, then it is possible to correct for the bias. We propose two semiparametric bias-corrected estimators and explore their asymptotic properties. The estimators have an indirect-inference interpretation and they attain the parametric convergence rate if the number of matching variables is no greater than three. Monte Carlo simulations confirm that the bias correction works very well in such cases.
JEL Classification Codes: C13; C14; C31.
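The inconsistency can be reproduced in a small synthetic Monte Carlo: imputing the regressor by nearest-neighbor matching on a common variable attenuates the OLS slope. The design and numbers are illustrative only, and the paper's bias-corrected estimators are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_slope_matched(n=500):
    """One replication: OLS slope when the regressor is imputed by
    nearest-neighbor matching on a common variable z."""
    z = rng.normal(size=n)
    x = z + 0.5 * rng.normal(size=n)          # regressor correlated with z
    y = 1.0 + 2.0 * x + rng.normal(size=n)    # true slope: 2

    # Donor sample observes (x, z); the main sample observes (y, z) only,
    # so x is imputed from the donor with the closest z.
    z_d = rng.normal(size=n)
    x_d = z_d + 0.5 * rng.normal(size=n)
    idx = np.abs(z[:, None] - z_d[None, :]).argmin(axis=1)
    x_hat = x_d[idx]                          # matched (imputed) regressor

    X = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

slopes = [ols_slope_matched() for _ in range(200)]
print(np.mean(slopes))   # noticeably below 2: matching-induced attenuation
```

In this design the matched regressor carries an independent error component, so the slope converges to roughly 2/(1 + 0.25) = 1.6 rather than 2.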
Published: Thu, 16 Mar 2017
https://hdl.handle.net/2123/17877
Random Effects Models with Deep Neural Network Basis Functions: Methodology and Computation
Tran, Minh-Ngoc; Nguyen, Nghia; Nott, David; Kohn, Robert
Deep neural networks (DNNs) are a powerful tool for functional approximation. We describe flexible versions of generalized linear and generalized linear mixed models incorporating basis functions formed by a deep neural network. Neural networks with random effects appear little used in the literature, perhaps because of the computational challenges of incorporating subject-specific parameters into already complex models. Efficient computational methods for Bayesian inference are developed based on Gaussian variational approximation. A parsimonious but flexible factor parametrization of the covariance matrix is used in the Gaussian variational approximation. We implement natural gradient methods for the optimization, exploiting the factor structure of the variational covariance matrix to perform fast matrix-vector multiplications in the iterative conjugate gradient linear solvers used in natural gradient computations. The method can be implemented in high dimensions, and the use of the natural gradient allows faster and more stable convergence of the variational algorithm. In the case of random effects, we compute unbiased estimates of the gradient of the lower bound, with the random effects integrated out, by making use of Fisher's identity. The proposed methods are illustrated in several examples for DNN random effects models and high-dimensional logistic regression with sparse signal shrinkage priors.
Published: Sun, 01 Jan 2017
https://hdl.handle.net/2123/16763
Endogenous Environmental Variables In Stochastic Frontier Models
Amsler, Christine; Prokhorov, Artem; Schmidt, Peter
This paper considers a stochastic frontier model that contains environmental variables that affect the level of inefficiency but not the frontier. The model contains statistical noise, potentially endogenous regressors, and technical inefficiency that follows the scaling property, in the sense that it is the product of a basic (half-normal) inefficiency term and a parametric function of the environmental variables. The environmental variables may be endogenous because they are correlated with the statistical noise or with the basic inefficiency term. Several previous papers have considered the case of inputs that are endogenous because they are correlated with statistical noise, and if they contain environmental variables these are exogenous. One recent paper allows the environmental variables to be correlated with statistical noise. Our paper is the first to allow both the inputs and the environmental variables to be endogenous in the sense that they are correlated either with statistical noise or with the basic inefficiency term. Correlation of inputs or environmental variables with the basic inefficiency term raises non-trivial conceptual issues about the meaning of exogeneity, and technical issues of estimation of the model.
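A small simulation of a frontier model with the scaling property makes the structure concrete; all parameter values and functional choices below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Stochastic frontier with the scaling property: inefficiency is
# u = h(z; delta) * u0, with u0 a basic half-normal term.
x = rng.normal(1.0, 0.5, size=n)             # log input
z = rng.normal(size=n)                        # environmental variable
u0 = np.abs(rng.normal(0.0, 0.3, size=n))     # basic half-normal term
u = np.exp(0.5 * z) * u0                      # scaling function h = exp(0.5 z)
v = rng.normal(0.0, 0.2, size=n)              # statistical noise
y = 1.0 + 0.8 * x + v - u                     # log output: frontier minus u

# Under the scaling property, E[u | z] = h(z) * E[u0], so average
# inefficiency rises with z; check this in the simulated data.
lo, hi = u[z < 0].mean(), u[z > 0].mean()
print(lo, hi)   # mean inefficiency is larger for higher z
```

Here z is drawn independently; the paper's setting additionally allows z (and the inputs) to be correlated with the noise v or with the basic term u0, which is what creates the endogeneity problem.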
Published: Sun, 09 Apr 2017