Stochastic Dynamic Programming Heuristic for the (R, s, S) Policy Parameters Computation

17 Feb 2022  ·  Andrea Visentin, Steven Prestwich, Roberto Rossi, S. Armagan Tarim ·

The (R, s, S) policy is a stochastic inventory control policy widely used by practitioners. In an inventory system managed according to this policy, the inventory position is reviewed every R periods; if it is below the reorder level s, an order is placed. The order quantity is set to raise the inventory position to the order-up-to level S. This paper introduces a new heuristic, based on stochastic dynamic programming (SDP), to compute the (R, s, S) policy parameters for the non-stationary stochastic lot-sizing problem with backlogging of excess demand, fixed ordering and review costs, and linear holding and penalty costs. In a recent work, Visentin et al. (2021) presented an approach to compute optimal policy parameters under these assumptions. Our model combines a greedy relaxation of the problem with a modified version of Scarf's (s, S) SDP. A naive implementation of the model requires prohibitive computational effort to compute the parameters; however, the computation can be accelerated by exploiting the K-convexity property and memoisation techniques. The resulting algorithm is considerably faster than the state of the art, making the policy more practical to adopt. An extensive computational study compares our approach with the algorithms available in the literature.
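To make the policy mechanics concrete, here is a minimal simulation sketch of an (R, s, S) policy with backlogging. This illustrates only the control rule described above, not the paper's SDP heuristic; the function name, demand model, and parameter values are illustrative assumptions.

```python
import random

def simulate_RsS(R, s, S, demand_sampler, periods, initial_ip=0):
    """Simulate an (R, s, S) inventory policy over a finite horizon.

    Every R periods the inventory position is reviewed; if it is below
    the reorder level s, an order instantly raises it to the
    order-up-to level S. Unmet demand is backlogged, so the inventory
    position may go negative. Illustrative sketch only.
    """
    ip = initial_ip          # inventory position (on-hand minus backlog)
    history = []
    for t in range(periods):
        if t % R == 0 and ip < s:   # review instant: order up to S
            ip = S
        ip -= demand_sampler(t)     # demand realised after the review
        history.append(ip)
    return history

# Example run with uniform integer demand (an arbitrary choice).
random.seed(0)
trace = simulate_RsS(R=4, s=5, S=20,
                     demand_sampler=lambda t: random.randint(0, 6),
                     periods=12)
```

The non-stationary problem treated in the paper allows the demand distribution (here `demand_sampler`) to vary across periods, which is why the policy parameters must be computed per review cycle rather than once.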
