
Christopher Grob

Inventory

Management in

Multi-Echelon Networks

On the Optimization of Reorder Points

AutoUni – Schriftenreihe

Band 128

Volkswagen Aktiengesellschaft

AutoUni


The Volkswagen AutoUni offers scientists and PhD students of the Volkswagen Group the opportunity to publish their scientific results as monographs or doctoral theses within the “AutoUni Schriftenreihe” free of charge. The AutoUni is an international scientific educational institution of the Volkswagen Group Academy, which produces and disseminates current mobility-related knowledge through its research and tailor-made further education courses. The AutoUni’s nine institutes cover the expertise of the different business units, which is indispensable for the success of the Volkswagen Group. The focus lies on the creation, anchoring and transfer of new knowledge.

In addition to the professional expert training and the development of specialized skills and knowledge of Volkswagen Group members, the AutoUni supports and accompanies PhD students on their way to a successful graduation through a variety of offerings. The publication of doctoral theses is one of these offers. Publication within the AutoUni Schriftenreihe makes the results accessible to all Volkswagen Group members as well as to the public.

Volkswagen Aktiengesellschaft

AutoUni

Brieffach 1231

D-38436 Wolfsburg

http://www.autouni.de


Christopher Grob

Wolfsburg, Germany

Any results, opinions and conclusions expressed in the AutoUni – Schriftenreihe are

solely those of the author(s).

AutoUni – Schriftenreihe

ISBN 978-3-658-23374-7 ISBN 978-3-658-23375-4 (eBook)

https://doi.org/10.1007/978-3-658-23375-4

Library of Congress Control Number: 2018953778

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part

of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,

recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission

or information storage and retrieval, electronic adaptation, computer software, or by similar or

dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this

publication does not imply, even in the absence of a specific statement, that such names are exempt

from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this

book are believed to be true and accurate at the date of publication. Neither the publisher nor the

authors or the editors give a warranty, express or implied, with respect to the material contained

herein or for any errors or omissions that may have been made. The publisher remains neutral with

regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH

part of Springer Nature

The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany

Preface

I would like to use this opportunity to thank everybody who made this work possible and supported me. I developed this work during a three-year PhD program at Volkswagen AG in cooperation with the University of Kassel.

Foremost, my thanks go to Prof. Dr. Andreas Bley and my adviser at Volkswagen AG. Their support was essential; in numerous discussions they offered not only encouragement but, above all, invaluable ideas, feedback and inspiration. My thanks also go to Stefan Minner for being the second reviewer of this thesis.

I would also like to thank all my colleagues at Volkswagen who aided me wherever they could. Without them, this project would not have happened.

Finally, I would like to thank my parents and friends for their support. I want to highlight the contributions of Günther Emanuel and Tobias Berg, who gave valuable feedback, and especially Marie Perennes, who supported me in every step.

Christopher Grob

Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . V

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . IX

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XI

Nomenclature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . XV

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.1 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Inventory management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.1 Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.1.1 Supply chain structures . . . . . . . . . . . . . . . . . . . . . . . . 7

2.1.2 Inventory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.1.3 Customer response to stock-outs . . . . . . . . . . . . . . . . . . . 11

2.1.4 Lead time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.1.5 Demand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.1.6 The order quantity . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.1.7 Order policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.1.8 Service level deﬁnitions . . . . . . . . . . . . . . . . . . . . . . . 14

2.1.9 Cost structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2 Single-echelon systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

2.2.1 Stochastic Lead Time Demand . . . . . . . . . . . . . . . . . . . . 16

2.2.2 Insights on the inventory position and the inventory level . . . . . . 17

2.2.3 Calculating the ﬁll rate for some demand distributions . . . . . . . 18

2.2.4 Determining optimal reorder points . . . . . . . . . . . . . . . . . 20

3.1 Lead time demand and ﬁll rate at non-local warehouses . . . . . . . . . . . 23

3.1.1 Mean and variance of lead time demand . . . . . . . . . . . . . . . 23

3.1.2 Fitting a distribution and calculating the ﬁll rate . . . . . . . . . . . 25

3.2 Wait time approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.2.1 METRIC-type approximation . . . . . . . . . . . . . . . . . . . . 26

3.2.2 Kiesmüller et al. approximation . . . . . . . . . . . . . . . . . . . 27

3.2.3 Berling and Farvid approximation . . . . . . . . . . . . . . . . . . 28

3.2.4 Negative Binomial approximation . . . . . . . . . . . . . . . . . . . . 29

3.3 Summary of assumptions and classiﬁcation within the literature . . . . . . 30

4 Multi-echelon optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4.1 The 2-echelon case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4.1.1 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . 33

4.1.2 The general algorithm . . . . . . . . . . . . . . . . . . . . . . . . 34

4.1.3 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40


4.1.5 Adding new breakpoints . . . . . . . . . . . . . . . . . . . . . . . 42

4.1.6 Heuristic to speed up the algorithm by introducing additional step

size constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

4.2 The n-echelon case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

4.2.1 Wait time calculation in n-echelons . . . . . . . . . . . . . . . . . 51

4.2.2 Structure of the optimization problem . . . . . . . . . . . . . . . . 54

4.2.3 Optimization algorithm . . . . . . . . . . . . . . . . . . . . . . . . 55

4.2.4 Step size heuristic . . . . . . . . . . . . . . . . . . . . . . . . . . 64

5 Experimental results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.1 Set-up of simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

5.2 Comparison of wait time approximations . . . . . . . . . . . . . . . . . . . 72

5.2.1 Random data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

5.2.2 Real-world data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5.2.3 Summary of comparison . . . . . . . . . . . . . . . . . . . . . . . 94

5.3 2-echelon results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

5.3.1 Structure of the optimal solution . . . . . . . . . . . . . . . . . . . 95

5.3.2 Optimization and simulation using real-world data . . . . . . . . . 99

5.3.3 Performance of the algorithm . . . . . . . . . . . . . . . . . . . . 103

5.3.4 Analysis of heuristics . . . . . . . . . . . . . . . . . . . . . . . . . 104

5.4 n-echelon results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

5.4.1 Set-up of optimization and input data . . . . . . . . . . . . . . . . 107

5.4.2 Structure of the solution . . . . . . . . . . . . . . . . . . . . . . . 108

6.1 Selection of suitable wait time approximations . . . . . . . . . . . . . . . . 113

6.2 Selection of optimization algorithms and heuristics . . . . . . . . . . . . . 114

6.3 Key takeaways for implementation in real world applications . . . . . . . . 115

7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

List of Figures

2.2 Example of a divergent system . . . . . . . . . . . . . . . . . . . . . . . . 8

2.3 Example of a convergent system . . . . . . . . . . . . . . . . . . . . . . . 9

2.4 Example of a general system . . . . . . . . . . . . . . . . . . . . . . . . . 9

3.2 Effects of increasing a non-local reorder point . . . . . . . . . . . . . . . . 22

4.1 Example of a 2-echelon distribution network . . . . . . . . . . . . . . . . . 32

4.2 Flow chart of our optimization algorithm . . . . . . . . . . . . . . . . . . . 35

4.3 Example of an under- and overestimator . . . . . . . . . . . . . . . . . . . 37

4.4 Reﬁning the constraint of a warehouse i by adding new breakpoints . . . . 43

4.5 Local reorder point needed for a given central reorder point to fulfill fill rate

targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.6 Length of the step sizes and corresponding central reorder point . . . . . . 45

4.7 Actual local reorder points needed and underestimating constraints . . . . . 47

4.8 Actual local reorder points needed and estimating constraints with or

without adjusted slopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

4.9 Network with three echelons . . . . . . . . . . . . . . . . . . . . . . . . . 50

4.10 Reduction of Ri, j given different Rl,m ∈ Pi, j . . . . . . . . . . . . . . . . . 57

4.11 Reﬁnement of Constraint 4.38 for one warehouse (l, m) ∈ Pi, j . . . . . . . . 59

5.2 Process of calculating reorder points for the simulation given a prescribed

central ﬁll rate target, a wait time approximation and local ﬁll rate targets . 73

5.3 Errors of KKSL approximation for mean and standard deviation of wait

time for the “medium high” scenario and the test set of base

parametrization of Table 5.4 for the different warehouses . . . . . . . . . . 78

5.4 Errors of mean and standard deviation of the wait time for the different

approximations and network sizes in the “medium low” scenario . . . . . . 79

5.5 Average value of reorder points and inventory levels for local warehouses

for the different scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.6 Average value of reorder points for local warehouses for the different

scenarios excluding test cases not suitable for NB . . . . . . . . . . . . . . 82

5.7 Absolute error of mean of the different approximations for different classes 88

5.8 Absolute error of standard deviation of the different approximations for

different classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

5.9 Boxplot of central ﬁll rates for different wait time approximations . . . . . 96

5.10 Deviations of simulation results from ﬁll rate targets, real-world scenario . . 100

5.11 Real-world 3-echelon distribution network with expected transportation times . . 108


5.12 Boxplot of the share of stock at different echelons for results with non-zero

non-local reorder points . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

5.13 Scatter plot of the share of reorder points on ﬁrst and second echelon for

results with non-zero non-local reorder points . . . . . . . . . . . . . . . . 110

A.2 Varying variance of local demand . . . . . . . . . . . . . . . . . . . . . . 134

A.3 Varying central order quantity . . . . . . . . . . . . . . . . . . . . . . . . 135

A.4 Varying local order quantity . . . . . . . . . . . . . . . . . . . . . . . . . 136

A.5 Varying price at central warehouse . . . . . . . . . . . . . . . . . . . . . . 137

A.6 Varying lead time at central warehouse . . . . . . . . . . . . . . . . . . . . 138

A.7 Varying ﬁll rate targets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

A.8 Varying number of local warehouses . . . . . . . . . . . . . . . . . . . . . 140

List of Tables

3.1 Classiﬁcation within the literature as suggested by de Kok et al. [dKGL+ 18] 30

5.1 Warehouse input data needed for simulation . . . . . . . . . . . . . . . . . 70

5.2 Parameters of simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

5.3 Performance measures for each warehouse of the simulation . . . . . . . . 72

5.4 Sample network characteristics . . . . . . . . . . . . . . . . . . . . . . . . 74

5.5 Parameter variations and variation type, multiplicative (m) or absolute (a),

for the creation of the test cases . . . . . . . . . . . . . . . . . . . . . . . . 75

5.6 Prescribed central ﬁll rate, average simulated ﬁll rate and name of scenario . 76

5.7 Average wait time and standard deviation for different scenarios and wait

time approximations in comparison to the simulated results . . . . . . . . . 77

5.8 Average deviations from ﬁll rate targets for local warehouses, average of

all test cases except the test cases for the network size n and the ﬁll rate

targets β̄ (i) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

5.9 Average deviation from ﬁll rate target for local warehouses for NB

approximation excluding test cases not suitable for NB . . . . . . . . . . . 81

5.10 Prescribed central ﬁll rate and average simulated value . . . . . . . . . . . 84

5.11 Relative accuracy of the KKSL approximation of mean and standard

deviation (sd) for scenarios with medium to high ﬁll rate relative to the

other approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

5.12 Average simulated values and absolute error of the approximations for test

cases in different scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . 85

5.13 Error of the approximations in different scenarios . . . . . . . . . . . . . . 86

5.14 Relative accuracy of KKSL approximation compared to other

approximations for test cases with small and large differences between

local warehouses, with difference deﬁned as in eq. (5.5) . . . . . . . . . . . 87

5.15 Absolute error of approximated standard deviation (sd) of wait time for test

cases with different values of Qi /μi . . . . . . . . . . . . . . . . . . . . . 88

5.16 Average simulated values and error of the approximations of mean and

standard deviation (sd) for different scenarios, for test cases with Qi /μi ≤ 25 90

5.17 Relative accuracy of the NB approximation for the mean, the standard

deviation (sd) and a combined (comb.) compared to other approximations.

Evaluated for different scenarios regarding the central ﬁll rate (fr), only for

test cases with Qi /μi ≤ 25 . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5.18 Relative accuracy of the KKSL approximation for the mean, the standard

deviation (sd) and a combined (comb.) compared to other approximations.

Evaluated for different scenarios regarding the central ﬁll rate (fr), only for

test cases with Qi /μi ≤ 25 . . . . . . . . . . . . . . . . . . . . . . . . . . 90


for test cases with small and large differences between local warehouses,

for medium to high central ﬁll rate scenarios and Qi /μi ≤ 25 and with

difference deﬁned as in eq. (5.5) . . . . . . . . . . . . . . . . . . . . . . . 91

5.20 Relative accuracy of NB approximation compared to other approximations

for test cases with small and large differences between the order quantity

of local warehouses, for low central ﬁll rate scenarios and Qi /μi ≤ 25 . . . 92

5.21 Mean of simulation and error of approximations for the standard deviation

for results with different number of local warehouses . . . . . . . . . . . . 92

5.22 Relative accuracy of the BF approximation for the mean and standard

deviation (sd), comparison for high and low mean of central lead time

compared to other approximations . . . . . . . . . . . . . . . . . . . . . . 93

5.23 Relative accuracy of AXS approximation compared to other

approximations for test cases with small and large differences between

local warehouses, with difference deﬁned as in eq. (5.5) . . . . . . . . . . . 94

5.24 Average simulated values and errors of the approximations for the mean

and standard deviation (sd), for test cases with different values of σ 2 /μ . . 94

5.25 Sample network characteristics . . . . . . . . . . . . . . . . . . . . . . . . 95

5.26 Average daily inventory value, real-world scenario . . . . . . . . . . . . . 101

5.27 Total value of reorder points, real-world scenario . . . . . . . . . . . . . . 101

5.28 Deviation of the average daily inventory value from the optimal solution

given a prescribed central ﬁll rate . . . . . . . . . . . . . . . . . . . . . . . 101

5.29 Approximation and simulation results for the mean and standard deviation

of the wait time, real-world scenario . . . . . . . . . . . . . . . . . . . . . 102

5.30 Approximation and simulation result for the wait time for big differences

in replenishment order quantity, real-world scenario . . . . . . . . . . . . . 103

5.31 Comparison of runtime, objective values and share of optimal results of

heuristics and original optimization algorithm . . . . . . . . . . . . . . . . 105

5.32 Comparison of runtime, objective values and share of optimal results of

heuristics and original optimization algorithm for high demand instances

with long runtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

5.33 Distribution of stock in the network . . . . . . . . . . . . . . . . . . . . . 109

5.34 Distribution of stock in the network . . . . . . . . . . . . . . . . . . . . . 109

5.35 Pearson correlation coefﬁcients of different values with the share of

reorder points kept at respective warehouses . . . . . . . . . . . . . . . . . 111

6.1 Percentage share of number of parts and revenue in the classes of the

multi-criteria abc analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

6.2 Multi-criteria ABC analysis and selection of suitable algorithms . . . . . . 115

A.1 Data that was used to create the example in the 2-level heuristics section.

pi is equal for all warehouses. . . . . . . . . . . . . . . . . . . . . . . . . 128

A.2 Mean simulated ﬁll rates over all variations and instances for local

warehouses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129


A.3 Absolute error of the standard deviation for test cases with different ratios

of order quantity and mean daily demand, Analysis of H3 . . . . . . . . . . 130

A.4 Relative accuracy of the AXS approximation regarding the mean, standard

deviation (sd) and combined (comb.) for different central ﬁll rates (fr) and

test cases with Qi /μi ≤ 25 . . . . . . . . . . . . . . . . . . . . . . . . . . 130

A.5 Average simulated values and absolute errors of mean and standard

deviation (sd) of the approximations for different scenarios, for test cases

with Qi /μi ≤ 25 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

A.6 Mean of simulation and absolute error of the approximations for the

standard deviation for test cases with different number of local warehouses . 131

A.7 Average simulated values and absolute error of mean and standard

deviation (sd) of the approximations, for test cases with different values of

σ 2 /μ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

A.8 Approximation and simulation results for the mean and standard deviation

of the wait time, retrospective scenario . . . . . . . . . . . . . . . . . . . . 131

A.9 Approximation and simulation results for the wait time for large

differences in replenishment order quantities, retrospective scenario . . . . 132

Nomenclature

Roman Symbols

B Backorders

D Demand

L Lead time

Q Order Quantity

R Reorder point

T Transportation time

W Wait time

X Random variable

p Price of part or cost of holding an item in storage during one time unit

Abbreviations

cd complete deliveries

pd partial deliveries

fr ﬁll rate

sd standard deviation


Mathematical Symbols

bini (R0 ) Inverse ﬁll rate function, which returns the lowest Ri for a given R0 such that the

ﬁll rate target is fulﬁlled

pdf X (x) Probability distribution (or mass) function of the random variable X evaluated at x

1 Introduction

Inventory is one of the most important levers in supply chain management and is often used to balance competing goals. Rapid advances in information technology over the last decades have, in theory, enabled companies to plan inventory along their entire supply chain and to enter the field of multi-echelon inventory management, which looks at the supply chain holistically and takes inventory at every echelon into consideration. Simultaneously, academic research has developed algorithms to plan inventory and optimize safety stocks across all echelons. However, even today many companies perform only single-stage inventory planning, with little progress towards true network planning. Nevertheless, Silver et al. state that such methods are a “critical element of long-term survival and competitive advantage” [SPT16, p. 1]. The gap between the existence of these algorithms and their low penetration in industry was the motivation for this thesis.

The reasons for this gap are twofold. First, many of the proposed algorithms make unrealistic assumptions about real-world supply chains and focus on what is mathematically tractable rather than on what is found in practice. For many real-world settings, companies have to deal with “second best solutions” or can evaluate options by simulation only [Axs15]. Cattani et al. even claim that [traditional inventory literature is] “practically useless in an actual multi-echelon system” [CJS11]. Second, most companies do not fully understand the complexity of inventory management and prefer to rely on easy-to-understand heuristics or on the experience of their planners rather than on mathematically sound inventory management methods. In fact, many companies and commercial software solutions fall back on simple single-echelon calculations [CJS11].

Inventory is a means of dealing with uncertainty in demand; it allows a company to satisfy the demand of its customers for the respective goods. The two competing goals are usually to keep costs down while maintaining a high service level, the latter being a major driver of customer satisfaction. Companies therefore usually have a lot of capital tied up in inventory. With better tools, it is possible to reduce inventory without increasing other costs, while maintaining the same service level.

The aim of our work is to

1. develop efﬁcient algorithms to manage inventory that can be applied in real-world supply

chains,

2. use real-world data to point out the signiﬁcant savings that can be obtained by the use of

these algorithms,

3. and test the quality of the solutions with simulations based on historic demand data.

We supply companies with the needed algorithms to plan inventory in networks and show that an implementation is worth the effort. By running simulations based on historic demand data, we validate that our assumptions and theoretical insights are suitable for use in real-world supply chains.

Our work was inspired by the needs of the after-sales supply chain of a large automotive company. This company supplies customers all over the world with spare parts for their cars and operates a distribution network with more than 60 warehouses. This supply chain faces two particular challenges: demand is highly uncertain and customers expect a high service level. The algorithms, however, can be applied in all supply chains that operate with the same inventory policy, i.e., distribution networks where all locations use (R,Q)-policies. The (R,Q) order policy is probably the most widely used policy in practical applications. Our aim is to determine the optimal reorder points for given order quantities.
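The ordering logic of the (R,Q)-policy mentioned above can be sketched in a few lines. This is our own minimal illustration, not code from the thesis: under continuous review, whenever the inventory position drops to the reorder point R or below, the smallest multiple of the order quantity Q that raises the position back above R is ordered.

```python
def rq_policy_order(inventory_position: int, R: int, Q: int) -> int:
    """Continuous-review (R,Q)-policy: return the quantity to order.

    Whenever the inventory position (stock on hand + on order - backorders)
    is at or below the reorder point R, order in multiples of Q until the
    position is raised above R again.
    """
    orders = 0
    while inventory_position <= R:
        inventory_position += Q
        orders += 1
    return orders * Q

# A position above R triggers no order; a deep backlog triggers several batches.
print(rq_policy_order(15, R=10, Q=20))   # no order needed
print(rq_policy_order(5, R=10, Q=20))    # one batch of Q
print(rq_policy_order(-30, R=10, Q=20))  # three batches of Q
```

A single demand of one unit can thus trigger at most one batch, but large demands (or low starting positions) may require several, which is what makes the superposed order stream at the central warehouse hard to analyze.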

In the next section, we give a literature review and point out how and where our research

adds value to the ﬁeld of inventory management.

1.1 Literature review

The stochastic service approach was started in the late 1950s with the work of Clark and Scarf on serial systems [CS60] [CS62]. The first model for divergent systems is by Sherbrooke, who introduced the so-called METRIC model (Multi-Echelon Technique for Recoverable Item Control) [She68]. The other approach, known as guaranteed service, was started by Simpson [Sim58]. The approaches differ in a central assumption about lead time. In the stochastic service approach, each stage is subject to stochastic delays in deliveries caused by stock-outs, in the following referred to as wait time. The guaranteed service approach assumes that demand is bounded or that each stage has resources other than on-hand inventory to handle stock-outs, so that no additional delay caused by stock-outs occurs. These additional resources are usually summarized by the term operational flexibility and include, for example, expediting or overtime. For a comparison of the two approaches, we refer to Graves and Willems [GW03] as well as Klosterhalfen and Minner [KM10]. A recent review and typology of the entire field of multi-echelon optimization was done by de Kok et al. [dKGL+18]. Other recent reviews are by Gümüs et al. [GG07] and Simchi-Levi and Zhao [SLZ12]. In our work we focus solely on the stochastic service approach because it better fits the practical applications that motivated our work. In particular, the guaranteed service model's assumption that internal stock inefficiencies can be neglected is not realistic in many of the supply chains we deal with in this work.

The majority of the research focuses on 2-echelon networks only. For many models the extension to n echelons is not straightforward, which is why we especially highlight research that considers n-level networks. Customer demand that cannot be fulfilled from stock on hand is either backordered or lost. In this literature review and throughout our work we only consider the backorder case, which is motivated by a spare parts supply chain: in such a supply chain the assumption that customers wait until parts are delivered is realistic.


There are two major objectives in multi-echelon optimization. The first is cost minimization, which usually means minimizing the sum of holding and backorder costs in the network. This concept is difficult to implement in practice because backorder costs are not easy to estimate. The second is to minimize the holding cost, or the investment in stock, subject to service level constraints. In practice, service level constraints are easier to define than backorder costs. A less common objective is minimizing the number of expected backorders.
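The second, service-constrained objective can be stated schematically as follows. This is our sketch in the thesis's notation, where R_i denotes the reorder point, p_i the holding cost per unit and time unit, IL_i the inventory level and β̄(i) the fill-rate target of warehouse i:

```latex
\min_{R_0,\dots,R_n}\ \sum_{i} p_i \,\mathbb{E}\!\left[ IL_i^{+} \right]
\quad \text{subject to} \quad
\beta_i(R_0,\dots,R_n) \;\geq\; \bar{\beta}(i) \quad \text{for all local warehouses } i
```

The coupling between echelons enters through the fill rates β_i, which depend on the upstream reorder points via the wait time.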

A lot of research covers (s,S)-policies, which are also referred to as base-stock policies. The Clark and Scarf model and subsequent papers based on it use these policies with periodic inventory review. Clark and Scarf considered a serial system and showed that an (s,S)-policy is optimal in this setting [CS60]. The extension to distribution systems relies on the so-called balance assumption: upstream warehouses may allocate negative quantities to downstream warehouses, which implies that an optimal allocation of stock in the network is always possible. Eppen and Schrage used this in a 2-level network with identical retailers and normally distributed demand [ES81]. The central warehouse is not allowed to hold stock in their model. Various extensions exist, for example for non-identical retailers and for stock at the central warehouse ([Fed93], [DdK98] and many more). Dogru et al. studied the effects of the balance assumption extensively and conclude that it is not appropriate in many settings but leads to good results in some [DdKvH09]. The balance assumption is often violated especially if the lead time of the central warehouse is long or the coefficient of variation of demand is large.

The approximate METRIC approach ([She68]) and its ﬁrst extensions ([Muc73], [She86],

[Gra85]) use a continuous review base-stock policy and assume Poisson demand. Therefore,

the demand at upstream warehouses also follows a Poisson distribution. This approach is

suitable for low demand items with low variance. It was originally intended for repairable

items but can also be used for consumable items [AR93]. By applying Little’s formula,

the expected wait time is computed as an average for all preceding warehouses. Then, the

stochastic lead time is simply approximated by its mean, i.e., the sum of expected transport-

ation time and wait time. The original METRIC model does not model the variance of the

wait time but only considers the mean. Axsäter additionally included the variance of the

wait time with the help of a normal approximation [Axs03].
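The METRIC-style wait time computation via Little's formula can be sketched for a single supplying warehouse with Poisson demand and a base-stock policy. This is an illustrative sketch, not the original model's implementation; the function names, the base-stock level s, and the truncation bound kmax are our own assumptions.

```python
import math

def poisson_pmf_table(mean, kmax):
    # Poisson probabilities Pr(D = 0..kmax), computed iteratively for numerical stability
    probs = [math.exp(-mean)]
    for k in range(1, kmax + 1):
        probs.append(probs[-1] * mean / k)
    return probs

def expected_backorders_base_stock(s, mean_lead_demand, kmax=200):
    # E[B] = sum_{k > s} (k - s) * Pr(D(L) = k) for a warehouse with base-stock level s
    probs = poisson_pmf_table(mean_lead_demand, kmax)
    return sum((k - s) * probs[k] for k in range(s + 1, kmax + 1))

def metric_wait_time(s, lam, transport_time):
    # Little's formula: the expected delay of an order at the supplying warehouse
    # equals its expected backorders divided by the demand rate.
    return expected_backorders_base_stock(s, lam * transport_time) / lam
```

The downstream warehouses then face the deterministic effective lead time T + E[W], which is exactly the simplification the original METRIC model makes (mean only, no wait time variance).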

If a cost minimization approach is used, the optimal solution can be found by enumera-

tion (cp. [Axs03]). A different approach is to minimize the number of expected backorders

(or the expected backorder cost) for a given investment in inventory (for example [She13]).

METRIC is simple and easy to implement and is probably the most widely used approach

in practical applications. If local warehouses are non-identical, applying the METRIC approach means that the average delay of all retailers is used, which may lead to large errors

for individual local warehouses. An alternative to the METRIC solution is to analyze the

system by disaggregating the backorders of the central warehouse to the individual local

warehouses, i.e., to determine the distribution of outstanding orders of each local warehouse.

This way, an exact solution is possible (cp. [Gra85]).


Batch-ordering systems, especially systems where all warehouses use an (R,Q)-policy, are much more difficult to evaluate [Axs03]. The reason is that the demand process at a central warehouse is a superposition of the order processes of all succeeding warehouses. Most publications on such systems consider only policy evaluation, not optimization.

Deuermeyer and Schwarz were the ﬁrst to extend METRIC approximations to batch-order-

ing systems [DS81]. They consider a 2-level distribution system with identical retailers and

Poisson demand. Schwarz et al. developed heuristics based on this model which maximize

system ﬁll-rate subject to a constraint on system safety stock [SDB85]. Svoronos and Zipkin

additionally derived an approximation of the variance of the wait time [SZ88]. Axsäter used

a similar technique [Axs03]. He presents a simple search algorithm for optimization that

may end up in a local optimum and also sketches how his model can be extended to n levels.

Andersson et al. suggested an iterative coordination procedure for the 2-level case with identical local warehouses and a METRIC-type approach [AAM98]. They decomposed the cost

minimization model and introduced a shortage penalty cost at the central warehouse level

for delayed orders of succeeding warehouses. If the lead time demand is normally dis-

tributed, they can guarantee optimality. Andersson and Marklund generalized this for non-

identical local warehouses [AM00]. Berling and Marklund analyzed the optimal shortage penalty cost for various system parameters [BM06]. Later, Berling and Marklund

also extended the model and introduced ﬁll rate constraints at local warehouses ([BM13],

[BF14]).

Disaggregating the backorders at the central warehouse is also possible for batch-ordering

systems. Determining the exact distribution is difﬁcult and was only done for Poisson de-

mand by Chen and Zheng [CZ97]. Approximate procedures were developed by Lee and

Moinzadeh ([LM87a], [LM87b]). Another possibility is tracking each supply unit. Due to

computational limitations, this is only possible for relatively small problems [Axs00]. For

a short overview of this line of research we refer to Axsäter [Axs15].

More recently, Kiesmüller et al. developed an approximation for the first two moments of the

wait time due to stock-outs in general n-level distribution systems by using a two-moment

distribution ﬁtting technique [KdKSvL04]. They assumed compound Poisson demand. Ber-

ling and Farvid considered a similar system but only deal with the 2-level case [BF14].

To our knowledge, no techniques exist so far to find optimal reorder points for the approximations of Kiesmüller et al. and Berling and Farvid.

It is clear from the literature review that multiple models exist to characterize multi-echelon inventory systems. They vary widely in the assumptions they make and produce results of varying quality in different circumstances. However, the lack of suitable techniques to obtain the optimal solution is striking. A notable exception is the

line of research by Andersson, Berling and Marklund, who supply optimization algorithms

for the 2-level case and a METRIC-type approach. Many papers omit the topic of ﬁnding

the optimal parametrization for an order policy completely. The few that deal with it either offer some kind of heuristic or simply suggest an enumeration or simulation approach to


obtain good results. Most available algorithms make assumptions, such as identical local warehouses or Poisson demand, that are hardly valid in practice. Furthermore, most papers focus on 2-level networks with no ready means to extend the algorithms to the n-level case.

In our work, we ﬁll this research gap and introduce a generic method for the 2-level case,

which only relies on mild assumptions and can handle different types of wait time approx-

imations. Additionally, we develop a method for the n-level case which builds on the work

by Kiesmüller et al. [KdKSvL04].

In Chapter 2, we present the basic concepts of inventory management, derive the necessary

equations for a single-echelon system and summarize the assumptions of our work. We then

extend the ﬁndings to multi-echelon distribution networks and discuss different wait time

approximations as well as introducing a new approximation in Chapter 3. The core of our

work are the optimization algorithms, which we develop in Chapter 4 for the 2-echelon case

and the general n-echelon case. For both cases, we develop additional heuristics, which cannot guarantee optimality but are expected to find good solutions fast and often end up in the optimum. In Chapter 5, we report on extensive numerical experiments and simulations. First, we compare how accurate different wait time approximations are in different settings and develop guidelines on how to choose a suitable distribution. Then, we analyze

results of our 2-level optimization algorithm with test as well as real-world data to gain in-

sights into the structure of an optimal solution given the different wait time approximations.

Finally, we perform a similar but somewhat more limited analysis for the n-level algorithm by computing results for a real-world 3-level distribution network. In Chapter 7, we summarize

our ﬁndings and highlight future research opportunities.

2 Inventory management

In this chapter we introduce the basic deﬁnitions and concepts of inventory management in

a multi-echelon setting. We focus on the content needed for our thesis, but also highlight other essentials of inventory management. Then, we give a comprehensive overview

of the assumptions made in this thesis. In the last section of this chapter, we discuss the

characteristics of a single-echelon system as many results carry over to the multi-echelon

case.

The diversity of supply chains has led to the establishment of different structures in the

literature, which cover a wide range of problem settings. We present the four basic types:

serial, convergent, divergent and general structures.

The most essential nodes in our work are the warehouses (indicated by triangles in the

following figures), where inventory is kept. Note that warehouses could also be considered as production or transformation nodes in a more general setting. They are under the control of the planner, which is illustrated by the area encircled by the dotted line in the following figures. The suppliers (indicated by a rectangle) and the customers (indicated by circles)

are outside of the system and exogenous factors. The arrows indicate the ﬂow of material.

By ordering stock from the supplier, material enters the system. When goods are demanded

by the customer and the order is fulﬁlled, stock exits the system. Material moves through

the system and we will refer to each stage or level as an echelon. We enumerate the echelons

(levels) of the system from top to bottom along the ﬂow of material.

The nodes of the last echelon, which fulfill external customer demand, are often called retailers or local warehouses. In a 2-level system, i.e., a system with two echelons, the warehouse in the first echelon is commonly referred to as the central warehouse. In all networks, warehouses

on the ﬁrst (up-most) echelon, which receive goods from external suppliers, are sometimes

referred to as master warehouses or entry nodes. Only the last echelon in the network

fulﬁlls customer demand. If a network is considered where this constraint is violated, virtual

warehouses with zero lead time can be added. This procedure is explained in more detail at

the beginning of Chapter 3.

We will ﬁrst explain our notation in more detail with the simplest system, a serial system,

and then carry it over to more complicated systems.

Deﬁnition 1 (Serial system). A serial system is a system of warehouses where each node

in the network may only have one predecessor and one successor.

C. Grob, Inventory Management in Multi-Echelon Networks,

AutoUni – Schriftenreihe 128, https://doi.org/10.1007/978-3-658-23375-4_2


An example of this kind of system with two echelons is shown in Figure 2.1. Each echelon

consists of only one warehouse (indicated by a triangle) with the up-most warehouse in the

chain being the ﬁrst echelon and the succeeding warehouse being part of the second echelon.

The supplier (rectangle) and the customers (circles) are outside of the planner's control.

Deﬁnition 2 (Divergent system). A divergent system is a system of warehouses where each node in the network may only have one predecessor but many successors. A divergent system is arborescent.

From the entry node, inventory is distributed along the network. This is the classical system of spare parts inventory management. Figure 2.2 shows a divergent system with two echelons.

Deﬁnition 3 (Convergent system). A convergent system is a system of warehouses where each node in the network may have many predecessors but only one successor.

Convergent systems have the contrary deﬁnition compared to divergent systems. This type of structure is mainly used for manufacturing supply chains where certain materials are processed through several echelons until the end product is manufactured.


Deﬁnition 4 (General system). A general system is a system of warehouses with no restrictions on the number of predecessors and successors.

A general system allows for any kind of relationship and has no restrictions whatsoever.

Examples of this are a generalization of distribution systems which permits lateral transshipment, or assembly systems where a node has more than one successor, i.e., a good that

is processed in one node is needed in more than one node in the next echelon (cp. Figure 2.4

for an example).

The supply chain considered in this work has a divergent structure and we will only deal

with structures of this type unless noted otherwise.

2.1.2 Inventory

In order to discuss inventory, we introduce different concepts of how to think of stock or the

lack thereof at each individual warehouse in our network.


Deﬁnition 5 (Stock on hand). Stock on hand is the physical inventory in a warehouse that

is immediately available to fulﬁll orders.

Deﬁnition 6 (Outstanding orders). Outstanding orders are the total quantity of stock that has been ordered by a warehouse but has yet to arrive.

Deﬁnition 7 (Backorders). Backorders are demand that already occurred but could not yet

be delivered to a succeeding warehouse or customer, respectively.

Backorders do not occur in the case of lost sales (cp. Section 2.1.3).

Deﬁnition 8 (Inventory level). The inventory level I Lev is the stock on hand minus the

backorders.

The inventory level describes how much inventory is available to satisfy incoming demand.

It can also be negative if backorders are larger than the stock on hand.

Deﬁnition 9 (Inventory position). The inventory position I Pos is deﬁned as stock on hand

plus outstanding orders minus backorders and characterizes the current stock situation at a

warehouse.

The inventory position is a useful concept for implementing order policies, as an ordering decision cannot be based on the stock on hand alone (see Section 2.1.7). It also has to take into consideration backorders as well as any outstanding orders that have not yet arrived. In a multi-echelon context, two different concepts of the inventory position

exist. The installation stock concept considers only the stock at the current installation. In

this case, the inventory position is determined as described in Deﬁnition 9.

The alternative is the echelon stock concept. The echelon inventory position is the installa-

tion inventory position of the current warehouse plus the installation inventory position of

all succeeding downstream warehouses. Here, the idea is to capture the inventory situation

in the respective warehouse itself and all dependent warehouses.
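The relationship between installation and echelon inventory positions can be sketched as a simple recursion over the successor structure of the network. This is our own illustration; the data layout (dicts keyed by warehouse name) and all names are assumptions.

```python
def echelon_position(node, installation_pos, successors):
    # Echelon inventory position = installation inventory position of the node
    # plus the echelon positions of all direct downstream warehouses (recursively,
    # this sums the installation positions of every dependent warehouse).
    return installation_pos[node] + sum(
        echelon_position(s, installation_pos, successors)
        for s in successors.get(node, []))

# 2-level divergent example: central warehouse "c" supplying locals "l1" and "l2"
pos = {"c": 10, "l1": 4, "l2": 6}
succ = {"c": ["l1", "l2"]}
# echelon_position("c", pos, succ) -> 20, echelon_position("l1", pos, succ) -> 4
```

For a local warehouse with no successors, echelon and installation position coincide, which matches the definitions above.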

While echelon stock reorder point policies dominate installation stock reorder point policies

for serial and assembly systems, this is not the case for divergent systems. For divergent

systems, it depends on the characteristics of the system which stock concept performs better

([Axs93], [AJ96], [AJ97]). For 2-level divergent systems, echelon stock seems to outper-

form installation stock for long central warehouse lead times, while installation stock seems

to be better for short lead times, but the cost difference was small in many cases [AJ96].

We only consider installation stock in this thesis, which is commonly used in practical

applications. The concept is much more easily understood and handled by practitioners.


If demand cannot be satisﬁed immediately, there are three different possible customer reac-

tions: backorders, lost sales and guaranteed service. Backorders imply that customers wait

until additional stock arrives to satisfy their order in case of a stock-out. In contrast, lost sales imply that the demand and therefore the possible associated sale cannot be realized and is therefore lost. As already pointed out in Section 1.1, we focus on the stochastic

service approach which allows either backordering or lost sales.

To contrast the two approaches and for the sake of completeness, we nevertheless provide

some details about guaranteed service: Here, the assumption is made that all demand can be

satisﬁed in any case, which is either obtained by bounding demand or by introducing some

unspeciﬁed ﬂexibility into the model. This unspeciﬁed ﬂexibility provides additional inventory, for example, through some emergency measure if a customer demand occurs during

a stock-out situation. Strictly speaking, the customer response in stock-out situations is

unspeciﬁed, as such a situation does not occur.

We assume full backordering, i.e., all demand that occurs during stock-outs is backordered

and fulﬁlled at some later point in time. This is a common assumption in the literature (cp.

[dKGL+ 18]), and especially valid for spare parts networks. If a customer requires a part for

a repair, the assumption that he waits for the part to arrive and the sale is not lost is much more plausible than it is, for example, for consumable goods.

Lead time L is the time it takes for an order of a warehouse to be fulﬁlled, i.e., from the time

the order is placed until the goods arrive at the warehouse and are available to fulﬁll the

demand. Lead time therefore consists of the transportation time T and some additional delay due to stock-outs at the supplying unit, the so-called wait time W. Note that the

transportation time T does not only capture the pure transit time from supplier to warehouse.

It is the entire time frame from when the stock is available at the supplier to fulﬁll the order

until it is available for customer orders at the warehouse. Therefore, it also includes, among other things, inspection time at the warehouse and the time it takes to store the goods in the warehouse after they arrive.

The transportation time can be deterministic or stochastic. In the literature it is often assumed to be deterministic [dKGL+ 18]. However, it is stochastic in the real world due to a number of different factors such as trafﬁc, delays in customs, cutoff times for sea or rail transport

and many more. We will therefore assume stochastic transportation time. Many models

require the additional assumption that orders cannot cross each other in time if transporta-

tion time is stochastic (for example [KdKSvL04]). An extensive discussion of this issue can

be found in the textbook of Axsäter [Axs15]. Our optimization algorithms do not require

any assumption about order crossing. Our novel wait time approximation, however, is a

modiﬁcation of the model of [KdKSvL04] and therefore also requires this assumption.


The wait time is either a stochastic variable or 0. It is 0 if the assumption is that the

respective supplier has ample stock. This assumption is often made for external suppliers

when no insight into the stocking situation and decision of the supplier is available. Any

variation in lead time is then modeled as part of a stochastic transportation time. Wait time

is only 0 for the entry node/master warehouse in the supply chain, which is supplied by an

external supplier.

2.1.5 Demand

Throughout our work stochastic, stationary customer demand D is considered. For many

formulas and the experimental results, we focus on a compound Poisson process. Customers

arrive according to a Poisson process with intensity λ and the demand size of each incoming

customer is a stochastic variable. However, our optimization algorithms themselves make very few assumptions about the type of stochastic demand and can be applied to other

settings as well.

In practical applications, the properties of the stochastic demand are not given. Usually a

forecast value for a time period and a historical forecast error such as the root mean squared

error (RMSE) are available. As inventory can always be replenished within the lead time, the lead time is usually considered “the period of risk”. Often, forecast and forecast error are used to

derive mean and variance during lead time. Then, a suitable distribution is selected and a

two-moment ﬁtting is done. We will detail this in Section 2.2.1. An alternative to this

is to use the historic demand data to do a more sophisticated distribution ﬁtting or construct

an empirical distribution.

If demand during a time period is large, the discrete demand is commonly approximated by

a continuous distribution in the literature as well as in practical applications [Axs15]. This either

allows more analytical insights into models or simply offers computational beneﬁts.

The order quantity (or lot size) Q is the quantity a warehouse orders from its predecessor

or supplier respectively if it has decided to place a replenishment order. Lot sizes are either

ﬁxed or dynamic. In our work, the order quantity for an item is ﬁxed for the considered time

period and given exogenously.

The underlying assumption for ﬁxed order quantities is usually a constant demand. The best-known approach for ﬁxed lot sizes is the so-called economic order quantity, which derives an optimal order quantity, i.e., one that minimizes holding and ordering costs. It was ﬁrst developed

by Harris [Har13]. It is very easy to apply, but relies on restricting assumptions which are

usually violated in practical applications. There are numerous extensions that include all

kinds of additional factors. A common approach in practice is to determine an economic

order quantity and then round to the nearest package size, which is often a limitation on


how to set order quantities in practical applications. For a more in-depth discussion of order quantities, we refer to Silver et al. [SPT16, pp. 145ff.].

An order policy prescribes how and when new stock is ordered by a warehouse. A number

of different order policies exist in research as well as in practice. One essential difference

is whether discrete or continuous time is assumed. We assume continuous time throughout

our thesis and omit all policies based on discrete time. We refer to Silver et al. and Axsäter

instead ([SPT16], [Axs15]). Note that the policies presented here can also be used in dis-

crete time if a review period is additionally speciﬁed. A common inventory management

policy in practice and in the literature is an order point policy due to its simplicity [SPT16].

Two essential order policies are the (R,Q)-policy and the (s,S)-policy.

Deﬁnition 10 ((R,Q)-policy). An order of size Q is placed every time the inventory position

is at or below the reorder point R until the inventory position is above R again.

In case of continuous review and continuous or unit-sized demand, the order is always made

when the inventory position is exactly at R. In all other cases this is not guaranteed. It may

be necessary to order multiples of Q until the inventory position is sufﬁciently high again. This policy is therefore sometimes also denoted as an (R,nQ)-policy.
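For integer demand, the (R,nQ) ordering rule just described can be stated compactly. This sketch is our own illustration of the rule, assuming a continuous-review check after each demand occurrence; the function name is ours.

```python
def rq_order(inventory_position, R, Q):
    # (R,nQ)-policy: if the inventory position is at or below the reorder
    # point R, order the smallest multiple of Q that raises it strictly above R.
    if inventory_position > R:
        return 0
    n = (R - inventory_position) // Q + 1
    return n * Q
```

With R = 10 and Q = 5, a position of exactly 10 triggers an order of one batch (5 units), while a position of 3, e.g., after a large compound demand, triggers an order of two batches (10 units); the resulting position always lies in R+1, ..., R+Q.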

In this work we do not allow for negative reorder points, which is in line with many practical applications. Our algorithm, however, is not restricted to positive reorder points. If implemented with negative reorder points, we advise carefully checking the implications and the effect that negative reorder points should have on the objective function.

An alternative to the (R,Q)-policy is the (s,S)-policy.

Deﬁnition 11 ((s,S)-policy). Every time the inventory position is at or below the reorder point s, enough stock is ordered to ﬁll the inventory position back up to the order-up-to level S.

In contrast to the (R,Q)-policy, the order quantity is not constant; it is equal to the difference between the inventory position and S. The (s,S)- and (R,Q)-policies are equivalent if an order is always placed exactly at s and R, respectively, with s = R and S = R + Q.

A special case of the (s,S)-policy is the so-called base-stock policy, where s = S − 1, i.e.,

an order is always made when a demand occurs (in case of integer demand and continuous

review). The base-stock policy could alternatively also be interpreted as a special case of a

(R,Q)-policy with Q = 1.


In this section, we focus on service level towards external customers, i.e., the service level

of local warehouses. In general, three different measures for service level are considered in

research as well as in practical applications with slightly varying deﬁnitions (cp. among oth-

ers [Axs15, pp. 79-80], [SPT16, pp. 248-250] and [Tem99, pp. 368-371]). We differentiate

between demand and orders. Demand is the total number of parts requested during a time

period, while an order is the number of parts requested by an individual customer at a speciﬁc

point in time.

Deﬁnition 12 (Fill rate). The ﬁll rate is the fraction of demand (or orders) that can be

satisﬁed directly from stock on hand. In the ﬁrst case we refer to it as volume ﬁll rate while

in the second case we refer to it as order ﬁll rate. Fill rate is also called type 2 service level

or β service level.

Fill Ratevolume = (Demand satisﬁed from stock on hand) / (Total demand)  (2.1)

Fill Rateorders = (Number of orders satisﬁed from stock on hand) / (Total number of orders)  (2.2)

The order ﬁll rate is the most commonly used deﬁnition in practical applications and also

the deﬁnition we will use throughout this work. The literature focuses mainly on the volume

ﬁll rate. This implies that partial deliveries are possible. A notable exception is Larsen and Thorstenson, who also consider order ﬁll rates and discuss in detail the difference between

the measurements and the underlying implications [LT08]. The other two service level

measures are the cycle service level and the ready rate.

Deﬁnition 13 (Cycle service level). The cycle service level is the probability that no stock-

out occurs during an order cycle, i.e., the probability that no stock-out occurs between two consecutive replenishment orders. It is also called type 1 or α service level. Assuming a

(R,Q)-order policy and that we order the batch size Q exactly when the inventory level hits

the reorder point R, we can denote the cycle service level as the probability that the reorder

point R is greater or equal than the demand during the lead time L:

Cycle service level = Pr (R ≥ D(L)) . (2.3)

Deﬁnition 14 (Ready rate). The ready rate is the fraction of time with a positive inventory

level:

Ready rate = Pr(I lev > 0). (2.4)

Recall that the inventory level is deﬁned as stock on hand minus backorders (Deﬁnition 8).

The ready rate describes how often the system is “ready” to fulﬁll new incoming orders

because it has stock on hand. In case of continuous demand the ready rate is equal to the ﬁll

rate.


The ﬁrst type is the holding cost, i.e., the cost of holding an item in storage. Holding cost is usually expressed as a percentage of the item's value and applies for the time that an item is kept in storage. It comprises the foregone return on an alternative investment, handling, storage, damage, obsolescence, insurance, taxes, etc.

The second type is the penalty cost, i.e., the cost of backordering an item or the cost of a

lost sale, depending on the type of customer the supply chain is dealing with. This cost is

difﬁcult to estimate in practice. Therefore, it is often replaced by a service constraint (see

Section 2.1.8).

The third type is the order cost, i.e., a cost that occurs every time a (replenishment) order

is made. It may incorporate administrative, handling, transportation, machine set-up costs,

etc.

The common assumption in the literature is that holding and penalty costs are linear in time and linear in the amount of stock on hand or the number of units on backorder, respectively, while order cost

is a ﬁxed cost for each order made [dKGL+ 18].

We use service level constraints throughout this work and therefore do not consider penalty

cost. Furthermore, as our order quantities are exogenously given (see Section 2.1.6), we

cannot inﬂuence order cost. The number of orders needed to satisfy demand is ﬁxed as we

assume full backordering (cp. Section 2.1.3).

We are left with the holding cost as our sole cost parameter. An alternative to using some kind of holding cost is to simply consider the investment in stock. We will use the parameter p, which can either stand for the price of the considered part or the (absolute) cost of holding one item in storage during one time period. The latter can be derived as the product of the holding cost rate and the price of the item.

We now derive some insights on single-echelon systems. Those insights can later be used for local warehouses, i.e., warehouses on the last echelon of a multi-echelon system, with minor modiﬁcations. They will form the basis with which we model all warehouses with

external customer demand.

A single-echelon system is a warehouse that orders parts from an external supplier and

fulﬁlls demand from external customers. We assume that the lead time L as well as the demand during a single time period D are stochastic. The ﬁrst two moments of those random

variables are given, while the distribution is not necessarily known. The warehouse has to

fulﬁll a given ﬁll rate target with minimum inventory.


First, we derive some demand and inventory characteristics. Then, we detail how the ﬁll rate is determined for several demand distributions and how a reorder point can be determined for a given ﬁll rate target.

We determine mean and variance of the demand during lead time D(L). Given the expected demand μ and the variance σ² of demand for one time unit, we can compute

E[D(L)] = μ E[L]  (2.5)

and

Var[D(L)] = E[L] σ² + μ² Var[L]  (2.6)

[Axs15, p.100].

Now, we ﬁt a suitable distribution to the mean and variance. Many research papers assume

a normal or Poisson distribution [dKGL+ 18]. We believe that the choice of these two distributions was made because they are widely known and easy to handle. Both, however,

have some elementary ﬂaws when applied to model lead time demand.

A severe limitation of the normal distribution is that its domain is in the real numbers R.

It always has a positive probability mass on negative values. Unless customer returns are

possible and should be accounted for in the model, this concept of negative demand is not

realistic. Truncating the distribution at 0 increases computational complexity dramatically

[Bur75]. The quality of the approximation depends on the size of the probability mass on

negative values. Most researchers that use the normal distribution argue that it is a good

estimate if the probability mass on negative values is sufﬁciently small (for a discussion

of this claim cp. among others [TO97]). As the normal distribution is usually used for high demand, this translates to a low coefﬁcient of variation. The normal distribution also

leads to analytical issues, for example, that the uniformity property (Lemma 15) does not

hold anymore, but is just an approximation. Lau and Lau furthermore argue that the normal

approximation is not very robust in many situations and leads to non-optimal solutions even

if the coefﬁcient of variation is very low (i.e., ≤ 0.3) [LL03]. It is even more unsuitable in

situations where the coefﬁcient of variation is high [GKR07].

The Poisson distribution has the limitation that mean and variance are equal by deﬁnition,

which is not true in many applications. The demand is therefore not adequately modeled

and the speciﬁc uncertainty of demand not captured if this distribution is used.

Experience in industrial settings shows that an approximation with a negative binomial (cp.

Appendix A.2.4) or gamma distribution (cp. Appendix A.2.2) is more ﬁtting. This is in

line with Rossetti and Ünlü, who show in a numerical analysis that those two distributions

are suitable to approximate lead time demand and lead to low errors [RÜ11]. They still

perform reasonably well even if the true distribution is different. In their simulations, the

negative binomial performs slightly better than the gamma distribution. Both distributions


are only deﬁned for non-negative values and therefore do not suffer the drawbacks of the

normal distribution. Furthermore, they are not symmetrical like the normal distribution but skewed to the right, which often better represents demand patterns observed in practice. An alternative to settling on a single distribution is the use of distribution selection rules, which outperform any single distribution in many cases.

Our algorithms do not rely on the assumption of a speciﬁc distribution. For the experimental

results, we choose to approximate the lead time demand by a negative binomial distribution with the correct mean and variance as stated in eqs. (2.5) and (2.6).
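The two-moment fit of a negative binomial distribution to the lead-time demand can be sketched as follows. This is an illustrative sketch: the parametrization (r successes, success probability p, with mean r(1−p)/p and variance r(1−p)/p²) is one common convention, and the function names are ours.

```python
def lead_time_demand_moments(mu, sigma2, mean_l, var_l):
    # Mean and variance of demand during a stochastic lead time L,
    # given per-period demand mean mu and variance sigma2 (cp. eqs. (2.5), (2.6)).
    mean = mu * mean_l
    var = mean_l * sigma2 + mu**2 * var_l
    return mean, var

def fit_negative_binomial(mean, var):
    # Two-moment fit: with mean = r(1-p)/p and variance = r(1-p)/p^2,
    # it follows that p = mean/var and r = mean*p/(1-p).
    if var <= mean:
        raise ValueError("negative binomial fit requires variance > mean")
    p = mean / var
    return mean * p / (1 - p), p
```

The guard reflects the overdispersion requirement: the negative binomial can only represent lead-time demand whose variance exceeds its mean, which is exactly the case the Poisson distribution cannot capture.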

If an (R, Q)-policy (see Deﬁnition 10) is used and under the assumption that demand is non-

negative and not all realizations of demand are multiples of some integer larger than one,

the following lemma can be shown:

Lemma 15 (Uniformity property). The inventory position I^Pos as deﬁned by Deﬁnition 9 is uniformly distributed on the integers R + 1, R + 2, ..., R + Q in steady state.

Let t be an arbitrary time and assume the system is in steady state; then the inventory level (cp. Deﬁnition 8) can be expressed as

I^Lev = I^Pos(t) − D(L).  (2.7)

Now we can determine the expected number of units in stock E[(I^Lev)^+] for integer demand:

E[(I^Lev)^+] = E[(I^Pos(t) − D(L))^+]
             = (1/Q) ∑_{j=R+1}^{R+Q} E[(j − D(L))^+]
             = (1/Q) ∑_{j=R+1}^{R+Q} ∑_{k=0}^{j} (j − k) pdf_D(L)(k),  (2.8)

where pdf_D(L) is the probability distribution function of the demand during the lead time L, and the expected number of units backordered (see Deﬁnition 7), E[B] = E[(I^Lev)^−]:

E[B] = (1/Q) ∑_{j=R+1}^{R+Q} ∑_{k=j+1}^{∞} (k − j) pdf_D(L)(k).  (2.9)


If demand is continuous, a modified version of the uniformity property is still valid as long as the probability for negative demand is 0. The inventory position is then uniformly distributed on the interval [R, R + Q]. In cases where the probability for negative demand is non-zero, for example when using the normal distribution, this is a good approximation as long as that probability is small ([Axs15]). Equations (2.8) and (2.9) can also be modified to accommodate continuous demand.

In this section, we detail how the fill rate is calculated for two demand distributions, compound Poisson and gamma, for a given reorder point R. We denote the fill rate function as β(R).

If demand during lead time follows a continuous distribution, this implies that we have

a continuous demand stream over the considered time period. The concept of order size

therefore does not exist and ﬁll rate and ready rate are equal. Those two insights can be

used to calculate the ﬁll rate.

Compound Poisson demand assumes that customers arrive according to a Poisson process and that the probability for the order size k of each incoming customer is Pr(K = k). If the order size K follows a logarithmic distribution, the demand during a time period (for example the lead time) is negative binomially distributed.

If partial deliveries (pd) are possible and a customer order arrives, the minimum between

the inventory level j and the order size K will be delivered and we can calculate the ﬁll rate

as (cp. [Axs15]):

β_pd(R) = ( ∑_{k=1}^{∞} ∑_{j=1}^{R+Q} min(j, k) Pr(K = k) Pr(I^{lev} = j) ) / ( ∑_{k=1}^{∞} k Pr(K = k) )   (2.10)

        = ( ∑_{k=1}^{∞} ∑_{j=1}^{R+Q} min(j, k) Pr(K = k) Pr(I^{lev} = j) ) / E[K].   (2.11)

The probability that the inventory level is equal to a value j can be expressed as:

Pr(I^{lev} = j) = (1/Q) ∑_{l=R+1}^{R+Q} Pr(D(L) = l − j).   (2.12)

This follows from the uniformity property of the inventory position (cp. Lemma 15).

2.2 Single-echelon systems 19

If only complete deliveries (cd) are permitted, an order can only be fulﬁlled if the inventory

level j is at least as big as the order size k:

β_cd(R) = ∑_{k=1}^{R+Q} ∑_{j=k}^{R+Q} Pr(K = k) Pr(I^{lev} = j).   (2.13)

The above formula for complete orders weights all orders equally, independent of size. If

orders should be weighted by size (cdw - complete deliveries weighted), the following for-

mula can be used:

β_cdw(R) = ( ∑_{k=1}^{R+Q} ∑_{j=k}^{R+Q} k Pr(K = k) Pr(I^{lev} = j) ) / ( ∑_{k=1}^{∞} k Pr(K = k) )   (2.14)

         = ( ∑_{k=1}^{R+Q} k Pr(K = k) Pr(I^{lev} ≥ k) ) / E[K],   (2.15)

where

Pr(I^{lev} ≥ k) = (1/Q) ∑_{l=R+1}^{R+Q} Pr(D(L) ≤ l − k).   (2.16)

Equation (2.16) gives the probability that the inventory level is larger than or equal to a certain value k. We sum over all possible starting values l of the uniformly distributed inventory position (cp. Lemma 15), each weighted by its probability 1/Q. We multiply each of these values by the probability that the demand during the lead time is less than or equal to the inventory position, i.e., Pr(D(L) ≤ l − k), and obtain the steady-state probability for the inventory level.
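A minimal sketch of eqs. (2.10), (2.12) and (2.13); the caller supplies pmfs for the order size K and the lead time demand D(L), and the function names and the truncation bound k_max are our own:

```python
def ilev_pmf(j, R, Q, dL_pmf):
    """Eq. (2.12): condition on the uniform inventory position l = R+1, ..., R+Q."""
    return sum(dL_pmf(l - j) for l in range(R + 1, R + Q + 1)) / Q

def fill_rate_pd(R, Q, K_pmf, dL_pmf, k_max=50):
    """Eq. (2.10): partial deliveries, min(j, k) units of each order served from stock."""
    num = sum(min(j, k) * K_pmf(k) * ilev_pmf(j, R, Q, dL_pmf)
              for k in range(1, k_max + 1)
              for j in range(1, R + Q + 1))
    mean_K = sum(k * K_pmf(k) for k in range(1, k_max + 1))
    return num / mean_K

def fill_rate_cd(R, Q, K_pmf, dL_pmf):
    """Eq. (2.13): complete deliveries, an order is served only if j >= k."""
    return sum(K_pmf(k) * ilev_pmf(j, R, Q, dL_pmf)
               for k in range(1, R + Q + 1)
               for j in range(k, R + Q + 1))
```

For unit order sizes (Pr(K = 1) = 1) both definitions coincide with the ready rate Pr(I^{lev} ≥ 1), which provides a simple consistency check.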

Gamma demand

For continuous demand, where the fill rate equals the ready rate, the fill rate can be written as

β(R) = 1 − (1/Q) ( E[backorders at the end of an order cycle] − E[backorders at the start of an order cycle] ).   (2.17)

An order cycle is defined as the time between two subsequent orders. The average backorders during one cycle can be calculated as the difference between the expected backorders at the end and the start of the cycle.

E[backorders at the start of an order cycle] = ∫_{R+Q}^{∞} (y − R − Q) pdf_{D(L)}(y) dy   (2.18)

E[backorders at the end of an order cycle] = ∫_{R}^{∞} (y − R) pdf_{D(L)}(y) dy   (2.19)

20 2 Inventory management

Equation (2.17) works for any continuous distribution and can be calculated for the gamma distribution as:

β(R) = (1/Q) ( (R + Q) γ[α, (R + Q)λ]/Γ(α) − R γ[α, Rλ]/Γ(α) − (α/(λ Γ(α + 1))) (γ[α + 1, (R + Q)λ] − γ[α + 1, Rλ]) ),   (2.20)

where γ(·, ·) is the lower incomplete gamma function, Γ(·) is the gamma function, α = E[D(L)]²/Var[D(L)] and λ = E[D(L)]/Var[D(L)] (derived from [Bur75] and [Tem99, p. 381]). Equation (2.20) is only defined for R ≥ 0.
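As a cross-check, eq. (2.17) can also be evaluated directly from the loss integrals (2.18) and (2.19). The sketch below uses a simple series expansion of the regularized lower incomplete gamma function, which we assume suffices for moderate arguments; it is not a general-purpose implementation, and all names are ours:

```python
from math import exp, lgamma, log

def reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x) = gamma(a, x) / Gamma(a),
    via the standard power series (adequate for moderate a and x)."""
    if x <= 0.0:
        return 0.0
    term = exp(a * log(x) - x - lgamma(a + 1.0))
    total = term
    n = a
    while term > 1e-15 * total:
        n += 1.0
        term *= x / n
        total += term
    return total

def gamma_loss(r, alpha, lam):
    """E[(Y - r)^+] for Y ~ Gamma(shape alpha, rate lam), cf. eqs. (2.18)-(2.19)."""
    mean = alpha / lam
    return (mean * (1.0 - reg_lower_gamma(alpha + 1.0, lam * r))
            - r * (1.0 - reg_lower_gamma(alpha, lam * r)))

def fill_rate_gamma(R, Q, alpha, lam):
    """Eq. (2.17): one minus the expected shortage per cycle divided by Q."""
    return 1.0 - (gamma_loss(R, alpha, lam) - gamma_loss(R + Q, alpha, lam)) / Q
```

For the exponential special case (α = 1, λ = 1) with R = 0 and Q = 1 this yields β = e^{−1}, which agrees with evaluating the integrals by hand.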

Assuming an (R, Q) order policy (Definition 10) with a given order quantity Q and a fill rate target β̄, we can determine the smallest reorder point R that fulfills β̄ using a binary search. The fill rate (eqs. (2.10) and (2.13)) is increasing in R. Furthermore, for R ≤ −Q the inventory level is always non-positive and therefore β = 0. Sometimes, practical considerations require R ≥ 0. We therefore choose either −Q or −1 as a lower bound for the binary search and a sufficiently high value for the upper bound.

The general outline of the binary search is explained in Algorithm 16.

1. Choose suitable initial lower and upper bounds lb and ub for the binary search.

2. If ub − lb ≤ 1, terminate the search and return ub.

3. Set m = ⌊(lb + ub)/2⌋.

4. If β(m) < β̄, set lb = m. Otherwise, set ub = m. Go to Step 2.

Note that, because we have chosen to update the lower bound in Step 4 only if β(m) < β̄, we can terminate the search and return ub as soon as ub − lb ≤ 1 in Step 2. Choosing tight bounds is imperative for keeping the runtime low.
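Algorithm 16 can be sketched as follows, assuming β is non-decreasing in R and the initial bounds satisfy β(lb) < β̄ ≤ β(ub); the function name is ours:

```python
def smallest_reorder_point(beta, target, lb, ub):
    """Algorithm 16: binary search for the smallest R with beta(R) >= target.

    Assumes beta is non-decreasing and beta(lb) < target <= beta(ub).
    """
    while ub - lb > 1:
        m = (lb + ub) // 2          # Step 3: midpoint of the current interval
        if beta(m) < target:
            lb = m                  # Step 4: target not met, search upper half
        else:
            ub = m                  # target met, ub remains a feasible candidate
    return ub                       # Step 2: smallest feasible reorder point
```

The invariant "β(lb) < β̄ ≤ β(ub)" is preserved in every iteration, so the returned ub is indeed the smallest reorder point meeting the target.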

3 Multi-echelon distribution networks

In this chapter we extend the results for single-echelon systems in Section 2.2 to divergent distribution networks (cp. Definition 2). We describe how demand and fill rate can be modeled on the different echelons and how stock on the echelons is connected via the wait time. Figure 3.1 shows an exemplary 3-echelon distribution network.

We assume that only the last echelon fulﬁlls customer demand. We refer to customer serving

warehouses on the last echelon as local warehouses and non-customer serving warehouses

on higher echelons as non-local warehouses to distinguish these two types of warehouses.

The assumption of customer demand only at the last echelon can be overcome, for example, by the introduction of virtual warehouses with 0 lead time, as sometimes suggested in the literature ([Axs07], [BM13]). Stock is then kept separately for local customers and for succeeding warehouses, leading to a potential overstock. We advise to introduce the virtual warehouses only to apply the optimization algorithms presented in this thesis. Subsequently, an inventory rationing strategy for two demand classes could be introduced (as described for example by Teunter and Haneveld [TH08] or references therein).

For local warehouses i, the properties and formulas of single-echelon systems apply as

described in Section 2.2, but the lead time Li may depend on the stocking decisions at

preceding warehouses. The lead time Li is not a given random variable but is the sum of the

endogenous wait time Wi , which depends on the characteristics of the supplying warehouse,

and the given exogenous transportation time Ti .

For non-local warehouses, we have to aggregate the demand from all succeeding ware-

houses to compute the lead time demand and subsequent measures like ﬁll rate. We derive

the necessary formulas for this step after discussing the basic mechanics in multi-echelon

distribution networks.

Figure 3.1: An exemplary 3-echelon distribution network with a supplier, warehouses and customers

C. Grob, Inventory Management in Multi-Echelon Networks,

AutoUni – Schriftenreihe 128, https://doi.org/10.1007/978-3-658-23375-4_3

While we focus on distribution networks with (R,Q)-order policies and ﬁll rate targets for

local warehouses, many of our results can be extended to other order policies and service

level concepts. In the following we consider networks of a structure as deﬁned by Deﬁni-

tion 17.

Definition 17 (Reference network). We consider a divergent distribution network. All warehouses operate according to an (R, Q)-policy. A service level target is given for each local warehouse. Furthermore, Qi is considered a fixed input. We are therefore concerned with determining the reorder point Ri for each warehouse i, such that the service level target at each local warehouse is fulfilled.

The central mechanism that connects stocking decisions at different echelons is the so-called wait time:

Deﬁnition 18 (Wait time). Wait time is the time an order has to wait at a non-local ware-

house due to a stock-out until sufﬁcient stock is available again to fulﬁll the order.

In Section 3.2 a more detailed discussion of wait time and its approximation is given. A

stocking decision at a warehouse has consequences for all succeeding warehouses, as the

wait time increases with decreasing reorder point and the lead time is subsequently increased

as well. To illustrate the mechanics at work, an overview of the trade-off of selecting suitable

reorder points is given in Figure 3.2.

After increasing or decreasing a non-local reorder point, we can always ﬁnd a suitable (local)

reorder point at each local warehouse such that service level targets are met. The same is

not necessarily true if we change local reorder points. Non-local reorder points can only

inﬂuence the wait time and not the transportation time. We always need sufﬁciently high

local reorder points to cope with demand during the transportation time. We cannot compensate the fill rate loss from a decrease of a local reorder point by an increase of a non-local reorder point.

Lemmas 19 and 20 state two main properties of our system which may sound trivial at ﬁrst,

but supply us with useful information about the dynamics in our system. We later use those

properties to construct our optimization algorithms in Chapter 4.

Lemma 19 (Monotonicity of the service level). The service level of a local warehouse in the reference network (Definition 17) is non-decreasing in the reorder points of the local warehouse itself and in the reorder points of all its predecessors.

3.1 Lead time demand and ﬁll rate at non-local warehouses 23

While Lemma 19 is intuitively true, we prove it for the special case of order fill rate and compound Poisson demand in Appendix A.1.1. Similar proofs can be constructed for other settings.

Lemma 20 (Reorder points substitution). Consider the reference network (Deﬁnition 17).

Each warehouse has a parametrization (Ri , Qi ), such that the local service level targets

are fulﬁlled. The reorder points Ri are the minimal reorder points needed to fulﬁll the

service level target: No reorder point Ri of this base parametrization can be lowered without

violating the targets. Starting from this parametrization, an increase of a reorder point at

any warehouse by 1 permits at best a decrease of a reorder point at a succeeding warehouse

by 1 and an increase of a reorder point at a succeeding warehouse by 1 permits at best a

decrease of the reorder point at a preceding warehouse by 1.

Lemma 20 can easily be veriﬁed by starting from a serial system with two warehouses as

a special case of Deﬁnition 17 and zero transportation time between the two warehouses.

This is the ideal case where we have perfect substitution between the two reorder points and

an exchange ratio of 1:1. Any extension or change to the network or transportation time

worsens the possibility of substitution between reorder points.

To determine the lead time demand at non-local warehouses, we extend the results for single-

echelon systems (cp. Section 2.2.1) to non-local warehouses. These results differ from local

warehouses as downstream demand has to be aggregated and the order quantities Q of all

succeeding warehouses have to be taken into consideration.

We assume a 2-level network for ease of notation and refer to the central warehouse with the index 0, while local warehouses have an index i > 0. An extension to an n-level network is straightforward.

We derive the mean and variance of central lead time demand by extending the work of Berling and Marklund to allow for stochastic lead time at the central warehouse [BM13]. Let L0 be the given random lead time. Furthermore, we assume that the support of this distribution is positive.

Due to the uniformity property of the inventory position (cp. Lemma 15), (I_i^{Pos}(t) − R_i) is uniformly distributed on the integers 1, 2, ..., Q_i. The probability of warehouse i placing at most k orders conditioned on L_0 = l, δ_i(k|L_0 = l), is:

δ_i(k|L_0 = l) = (1/Q_i) ∑_{x=1}^{Q_i} Pr(D_i(l) ≤ kQ_i + x − 1)   for k = 0, 1, 2, ...   (3.1)

We get a shift of −1 in eq. (3.1) because we consider the probability that the demand during the next lead time l stays strictly below kQ_i + x, i.e., that no more than k orders of size Q_i are triggered.

Now we can compute the probability by conditioning (cp. [Ros14, p.115]) and obtain for a

continuous distribution of the lead time:

δ_i(k) = ∫_0^{∞} (1/Q_i) ∑_{x=1}^{Q_i} Pr(D_i(l) ≤ kQ_i + x − 1) pdf_{L_0}(l) dl   (3.2)

Directly from eq. (3.1) or eq. (3.2) respectively, we get the probability that exactly k sub-

batches of size Qi are placed from warehouse i:

f_i^{ord}(k) =
  δ_i(0),               if k = 0
  δ_i(k) − δ_i(k − 1),  if k > 0   (3.3)
  0,                    otherwise.

Lead time demand is a random variable which describes the demand during the lead time

that occurs at a speciﬁc warehouse. The mean and variance of lead time demand at the

central warehouse can now be determined as

E[D_0(L_0)] = ∑_{i=1}^{N} μ_i E[L_0]   and   (3.4)

Var[D_0(L_0)] = ∑_{i=1}^{N} ∑_{k=0}^{∞} (μ_i E[L_0] − kQ_i)² f_i^{ord}(k).   (3.5)

Equation (3.4) sums the expected demand per time unit μ_i, multiplied by the expected lead time E[L_0], over all local warehouses 1, ..., N, where N is the number of local warehouses. For eq. (3.5), we have to assume that the variances induced by the order processes of the individual retailers are independent. The variance in the order process of local warehouse i is ∑_{k=0}^{∞} (μ_i E[L_0] − kQ_i)² f_i^{ord}(k), i.e., the sum of all possible squared deviations from the mean, weighted by their probabilities. Only multiples k of the order quantity Q_i can be ordered and we can have between 0 and (theoretically) ∞ orders during a lead time. Due to the independence assumption, the variances from all local warehouses can then be summed up.

If a common subbatch q, i.e., a greatest common divisor, of all order quantities Qi exists, the

calculation should be done in units of q as this improves the quality of the approximation.
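For a constant central lead time, eqs. (3.1) and (3.3)-(3.5) can be sketched as follows; the caller supplies, for each local warehouse, the cdf of its demand during that lead time, and the function names and truncation bound k_max are our own:

```python
def delta_i(k, Qi, dem_cdf):
    """Eq. (3.1): probability that warehouse i places at most k orders
    during the lead time; dem_cdf is the cdf of D_i over that lead time."""
    return sum(dem_cdf(k * Qi + x - 1) for x in range(1, Qi + 1)) / Qi

def order_pmf(k, Qi, dem_cdf):
    """Eq. (3.3): probability that exactly k batches of size Qi are ordered."""
    if k == 0:
        return delta_i(0, Qi, dem_cdf)
    return delta_i(k, Qi, dem_cdf) - delta_i(k - 1, Qi, dem_cdf)

def central_moments(mus, Qs, dem_cdfs, EL0, k_max=100):
    """Eqs. (3.4) and (3.5) for a constant central lead time EL0."""
    mean = sum(mu * EL0 for mu in mus)
    var = sum((mu * EL0 - k * Qi) ** 2 * order_pmf(k, Qi, cdf)
              for mu, Qi, cdf in zip(mus, Qs, dem_cdfs)
              for k in range(k_max))
    return mean, var
```

With Q_i = 1 the ordered quantity equals the demand, so eq. (3.5) collapses to the plain demand variance, which gives a convenient consistency check.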

3.2 Wait time approximation 25

A commonly used measure of multi-echelon inventory systems is the ﬁll rate at the central

warehouse. In the previous section, we have already determined the mean and variance

of lead time demand in eqs. (3.4) and (3.5). We can now ﬁt standard distributions to the

mean and the variance and calculate the ﬁll rate as described in Section 2.2.3. The ﬁrst two

moments of lead time demand can also be used to compute any other service level measure

by assuming a distribution.

We use the following distribution selection rule, which is similar to, for example, Berling

and Marklund [BM13]. If Var[D0 (L0 )] > E[D0 (L0 )], we use the negative binomial distri-

bution. This is a necessary precondition for the distribution to exist. In all other cases, we

suggest the gamma distribution.

This calculation can only give an estimate of the actual ﬁll rate obtained, as we do not ex-

plicitly consider the order quantities of the succeeding warehouses. The larger the common

subbatch q is, the better the approximation of the ﬁll rate will be as the effect of neglecting

the order quantities gets smaller.

If the order quantities of local warehouses are large, real demand is often more intermittent than the distributions proposed here suggest. In the numerical sections, we compare the calculated fill rate of central warehouses with the fill rate obtained in simulations on several occasions (for example in Tables 5.6 and 5.10). These results confirm that the fill rate calculation proposed here is not suitable if order quantities play a significant role and should only be used as an indication of how high the fill rate may be.

To the best of our knowledge, there is only one approach that exactly computes the wait times in a distribution system using (R, Q)-policies, namely the one by Axsäter [Axs00]. Following each supply unit through the system, Axsäter is able to derive an exact evaluation. However, he permits partial deliveries, and the necessary evaluations are computationally very demanding and feasible only for very small problem sizes. Therefore, we have to use approximations of

wait times in our algorithm. In this section, we brieﬂy review three wait time approximations

from the literature and additionally introduce a new approach. These approximations are

more suitable for computations with real-world size networks than the above mentioned

exact method.

We have chosen an approximation approach by Axsäter as a representative of the “classic” METRIC line of research (in the following referred to as AXS) [Axs03]. Furthermore, we

use the more recent approximation by Kiesmüller et al. [KdKSvL04] and Berling and Farvid

[BF14], respectively referred to as KKSL or BF. Each approximation has shortcomings and

weaknesses, which we will discuss in more detail later in this section. These shortcomings

served as the motivation for creating a new kind of approximation, the NB approximation.

All approximations are presented for the 2-level case to simplify notation. The approxi-

mation by Berling and Farvid is only applicable for 2-levels while the approximation by

Kiesmüller et al. is shown for the general n-level case in their paper. Axsäter presents the

2-level case but also sketches an extension to n levels at the end of his paper. Our own approach is in theory extendable to n levels, but in limited experiments we found that the quality of the approximation deteriorates; we therefore cannot recommend our approximation for the n-level case.

Later, in Section 5.2, we will consider each of these approximations and develop guidelines

for choosing the most suitable approximation for given circumstances. Furthermore, we

analyze in Section 5.3 how the chosen approximation inﬂuences the structure of the resulting

optimal solutions in the 2-level case. For the experimental results in the n-level case in

Section 5.4, we only consider the KKSL approximation.

For METRIC-type approximations, an estimate of the average delay at the supplying warehouse based on Little's law is used. It was originally introduced by Sherbrooke [She68]. Deuermeyer and Schwarz were the first to adapt it for batch ordering systems and it has been widely used in research publications since then [DS81]. They assume identical demand and order quantities at the local warehouses. If the characteristics are not identical, the formula yields the average delay over all warehouses. This means that we may get significant errors if the order quantities of local warehouses differ substantially. Nevertheless, it is still used as an approximation for the (real) expected value of the wait time. Due to their simplicity, the METRIC-type approximations are probably the most widely used approach in practical applications.

Some approaches exist which also model the variance of the average delay. Svoronos and

Zipkin derive the variance based on an extension of Little’s law for higher moments, but it

is only exact if the order quantity of all local warehouses is 1 [SZ88].

Axsäter, whose approach is our “representative” for the METRIC-type approximations, derives not only the mean but also the variance of the average delay based on a normal approximation of lead time demand [Axs03]. He derives mean and variance based on Little's law, the normal approximation of lead time demand, a constant central lead time and a continuous uniform approximation of the (discrete) central inventory position. The resulting moments are identical for all local warehouses; we therefore drop the subscript i of W_i. To apply the formulas also for random central lead time, we replace the constant central lead time in the original formula of Axsäter by the expected lead time E[L_0]. Under these assumptions the mean can be obtained as the expected number of backorders at the central warehouse, E[B_0] = E[(I_0^{Lev})^−], divided by the sum of the expected demand per time unit at the local warehouses, ∑_{i=1}^{N} μ_i:

E[W] = E[B_0] / ∑_{i=1}^{N} μ_i = G(k(R_0)) √(E[L_0] ∑_{i=1}^{N} σ_i²) / ∑_{i=1}^{N} μ_i,   (3.6)

Var[W] = (E[W]/G(k(R_0)))² (1 − Φ(k(R_0))) − (E[W])² k(R_0)/G(k(R_0)) − (E[W])²,   (3.7)

where G(·) is the normal loss function, Φ(·) is the standard normal distribution function and

k(R_0) = (R_0 + Q_0 − ∑_{i=1}^{N} μ_i E[L_0]) / √(E[L_0] ∑_{i=1}^{N} σ_i²).

Axsäter compares the approximation to the exact solution for a problem instance with very

low demand, as the exact solution is computable in this case. He concludes that the devi-

ations of the approximation from the exact solution would be acceptable in most practical

situations.
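The normal loss function G and the mean wait of eq. (3.6) can be sketched as follows, under the assumption that central lead time demand is normal with mean ∑ μ_i E[L_0] and variance E[L_0] ∑ σ_i² (our reconstruction of the formula; all function names are ours):

```python
from math import erf, exp, pi, sqrt

def std_normal_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def std_normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def normal_loss(k):
    """Normal loss function G(k) = phi(k) - k (1 - Phi(k)), i.e. E[(Z - k)^+]."""
    return std_normal_pdf(k) - k * (1.0 - std_normal_cdf(k))

def axs_mean_wait(R0, Q0, mus, sigmas, EL0):
    """Sketch of eq. (3.6): E[W] = E[B0] / sum(mu_i), with E[B0] from the
    normal approximation of central lead time demand."""
    mu_L = sum(mus) * EL0
    sigma_L = sqrt(EL0 * sum(s * s for s in sigmas))
    k = (R0 + Q0 - mu_L) / sigma_L
    return sigma_L * normal_loss(k) / sum(mus)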

Kiesmüller et al. suggest approximations for the ﬁrst two moments of the wait time in a

system with compound renewal customer demand [KdKSvL04]. They use the stationary

interval method of Whitt to superpose the renewal processes with mixed Erlang distributions

[Whi82]. Their model only applies for non-negative central reorder points. They introduce

an additional random variable Oi for the actual replenishment order size of warehouse i

and state the proposition that the distribution function of W_i conditioned on T_0 = t can be approximated for 0 ≤ x ≤ t by

Pr(W_i ≤ x | T_0 = t) ≈ 1 − (1/Q_0) ( E[(D_0(t − x) + O_i − R_0)^+] − E[(D_0(t − x) + O_i − (R_0 + Q_0))^+] ).   (3.8)

Equation (3.8) is again based on the idea of starting from all possible values of the inventory position, which is uniformly distributed (Lemma 15). They consider the inventory position I_0^{pos}(x_0) at time x_0 just before the order of warehouse i is placed. Let D_0(x_1, x_2) be the aggregate demand at warehouse 0 during the interval (x_1, x_2]. If T_0 = t and 0 ≤ x ≤ t, then I_0^{pos}(x_0 + x − t) − D_0(x_0 + x − t, x_0) is at least O_i(x_0) if and only if i has to wait at most x. Subsequently, Pr(W_i ≤ x | T_0 = t) = Pr(I_0^{pos}(x_0 + x − t) − D_0(x_0 + x − t, x_0) ≥ O_i(x_0)). After conditioning and several reformulations, they obtain eq. (3.8). For details we refer to the paper of Kiesmüller et al.

The first moment can then be approximated as

E[W_i] ≈ (E[L_0]/Q_0) ( E[(D_0(L̂_0) + O_i − R_0)^+] − E[(D_0(L̂_0) + O_i − (R_0 + Q_0))^+] ),   (3.9)

where L̂_0 is a random variable representing the residual lifetime of L_0 with the cumulative distribution function

cdf_{L̂_0}(y) = (1/E[L_0]) ∫_0^y (1 − cdf_{L_0}(z)) dz.   (3.10)

The second moment is approximated analogously as

E[W_i²] ≈ (E[L_0²]/Q_0) ( E[(D_0(L̃_0) + O_i − R_0)^+] − E[(D_0(L̃_0) + O_i − (R_0 + Q_0))^+] ),   (3.11)

where L̃_0 is a random variable with the distribution function

cdf_{L̃_0}(y) = (2/E[L_0²]) ∫_0^y ∫_x^{∞} (z − x) dcdf_{L_0}(z) dx.   (3.12)

For further details of the calculation we refer to the original paper of Kiesmüller et al.

[KdKSvL04].

Kiesmüller et al. report that the performance of their approximation is excellent if order quantities are large. This applies especially to the order quantities of the local warehouses. The error of the approximation reportedly also decreases with increasing demand variability.

Unfortunately, their formulas sometimes yield negative variances, an observation also made

by Berling and Farvid [BF14].

We also encountered numerical instabilities due to the two-moment ﬁtting technique of

Kiesmüller et al. using the mixed Erlang distribution. This happens if the squared coefﬁcient

of variation is very low, for example in situations where the order quantity of the local

warehouse is very large compared to lead time demand.

Berling and Farvid estimate mean and variance by replacing stochastic lead time demand with a stochastic demand rate and offer several methods for approximating this demand rate [BF14]. Their methods differ in how the demand rate is estimated. We use method “E5” from their paper, which is an approximation based on the lognormal distribution. Berling and Farvid obtained good results with this method in their numerical analysis. Additionally, it is easy to implement and does not require challenging computations. The authors, however, assume a constant central lead time.

Berling and Farvid compared their methods to the ones by Kiesmüller et al., Axsäter, An-

dersson and Marklund as well as Farvid and Rosling ([KdKSvL04], [Axs03], [AM00],

[FR14]). Their method showed superior results for the cases considered. Kiesmüller et

al. claim that their approximation works best with highly variable demand and large order

quantities. Berling and Farvid use very low order quantities (compared to the demand rate)

and a demand process with low variance. Thus, the superior results are not surprising.

The Berling and Farvid approximation does not work if the order quantity of a local warehouse is larger than the sum of the order quantity of the central warehouse and the central reorder point. In this case a negative expected wait time is calculated. However, this scenario is not common. We recommend setting the wait time equal to the central lead time in these cases.

The idea of our NB approximation is to approximate the demand during the lead time at the central warehouse by a negative binomial distribution and then compute the first two moments of the wait time based on Kiesmüller et al. [KdKSvL04]. The negative binomial distribution is a flexible distribution and suitable to model lead time demand, as already pointed out in Section 2.2.1. Our approach does not face the numerical instabilities and negative variances encountered with the approach of Kiesmüller et al.

We determine mean and variance of D_0(L̂_0) and D_0(L̃_0) using eqs. (3.4) and (3.5), replacing L_0 with the respective random variables. Instead of deriving the real distribution function, we fit a negative binomial distribution to the mean and variance. If the variance is smaller than the mean, the negative binomial distribution is not defined. This can only happen if local order quantities and the coefficient of variation of local demand are small. In this case, we propose using the gamma distribution instead.

In our industrial application the order quantity is large compared to lead time demand. We

therefore assume that the actual order size is always equal to the order quantity. If order

quantities are too small for this assumption to hold, it is easy to incorporate the changes in

the model. Adapting eqs. (3.9) and (3.11) given by Kiesmüller et al., we can express the first two moments of the wait time as

E[W_i] ≈ (E[L_0]/Q_0) ( E[(D_0(L̂_0) + Q_i − R_0)^+] − E[(D_0(L̂_0) + Q_i − (R_0 + Q_0))^+] )   (3.13)

and

E[W_i²] ≈ (E[L_0²]/Q_0) ( E[(D_0(L̃_0) + Q_i − R_0)^+] − E[(D_0(L̃_0) + Q_i − (R_0 + Q_0))^+] ),   (3.14)

where L̂_0 and L̃_0 are defined in eqs. (3.10) and (3.12).

For a discrete distribution with positive support such as the negative binomial distribution, it is easy to show that

E[(X − z)^+] = E[X] − ∑_{x=0}^{z} x pdf_X(x) − z (1 − cdf_X(z)),   z ≥ 0.   (3.15)
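Equation (3.15), which supplies the (·)^+ expectations needed in eqs. (3.13) and (3.14), translates directly into code; as an illustration we check it against a geometric distribution, which is our choice and not part of the model:

```python
def discrete_loss(pmf, cdf, mean, z):
    """Eq. (3.15): E[(X - z)^+] for a discrete distribution on 0, 1, 2, ..., with z >= 0.

    Only the first z + 1 pmf values are needed, which avoids truncating an
    infinite tail sum."""
    return mean - sum(x * pmf(x) for x in range(z + 1)) - z * (1.0 - cdf(z))
```

The appeal of this identity is that the loss value is obtained from a finite sum plus one tail probability, regardless of how heavy the upper tail of X is.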

Given the issues with the original wait time approximation of Kiesmüller et al., our motivation was to develop a more stable approach for use in practical applications. Our negative binomial approximation, although inspired by the model of Kiesmüller et al., does not suffer from the same instabilities. However, the approximation faces problems if Q_i ≫ Q_0. In this case, the variance can become negative as well, but such a setting is illogical and unlikely in real-world applications.

We use the typology introduced by de Kok et al. [dKGL+ 18] to place our research in the

general ﬁeld of multi-echelon optimization and summarize the assumptions we make. Our

dissertation can be classiﬁed as: 2n, D, C, G | I, G | B, B | q, Q, N | S || AEFS, O. This

translates to:

Table 3.1: Classiﬁcation within the literature as suggested by de Kok et al. [dKGL+ 18]

System speciﬁcation:

Echelons 2n two echelons, general number of echelons

Structure D divergent

Time C continuous

Information G global

Resources speciﬁcation:

Capacity I inﬁnite

Delay G general stochastic

Market speciﬁcation:

Demand B compound “batch” Poisson

Customer B backordering

Control type speciﬁcation:

Policy q installation (s,nQ)

Lot sizing Q ﬁxed order quantity

Operational ﬂexibility N none

Performance speciﬁcation:

Performance indicator S meeting operational service requirements

Direction of the paper:

Methodology AEFS approximate, exact, ﬁeld study, simulation

Research goal O optimization

4 Multi-echelon optimization

In this chapter we are concerned with ﬁnding optimal reorder points in multi-echelon distri-

bution systems.

In the literature it is often assumed that service level targets are given for each warehouse,

including non-local warehouses. Then, the system can be decoupled and a parametriza-

tion of a given order policy can be determined for each warehouse individually as in the

single-echelon case. However, the only important service level for companies in the end

is the service of (external) customer demand, while (internal) service levels of non-local

warehouses are only means to an end. Therefore, we consider systems where service level

targets are only given for local warehouses. In that case the optimal parameters of a chosen

order policy cannot be determined separately for each warehouse and we have to optimize

the network as a whole.

This complicates the optimization drastically but provides significantly better stocking decisions. We start by solving the easier 2-level case before applying the insights gained to develop an algorithm that finds optimal reorder points in the general n-level case.

In this section, we present and describe our optimization algorithm for the 2-echelon case. Section 4.1 is in publication as joint work with Andreas Bley [GBS18], with the exception of the heuristics presented in Section 4.1.6. We start with a general outline of the algorithm and afterwards discuss its main components, namely its initialization and the refinement and addition of breakpoints to the models used, in detail. Subsequently, we analyze experimental results using the 2-echelon algorithm with test as well as real-world data in Section 5.3 of the next chapter.

In our model, we consider a 2-level divergent system as deﬁned in Deﬁnition 2. A central

warehouse is supplied with parts from an external source. The external supplier has unlim-

ited stock, but the transportation time is random with known mean and variance. The central

warehouse distributes the parts further on to n local warehouses, which fulﬁll the stochastic

stationary customer demands. Figure 4.1 shows a sample network. The central as well as

the local warehouses order according to a continuous review (R, Q)-inventory policy (cp.

Deﬁnition 10).

The local warehouses i = 1, 2, ..., N order from the central warehouse 0, whereas the central

warehouse orders from the external supplier. If sufﬁcient stock is on hand, the order arrives

after a random transportation time with known distribution, T0 or Ti , respectively. We only

allow for complete deliveries. If stock is insufﬁcient to fulﬁll the complete order, it is back-

ordered, i.e., the order is fulﬁlled as soon as enough stock is available again. Orders and

Figure 4.1: A sample network with an external supplier, the central warehouse, local warehouses and customer demand
backorders are served first come, first served. If a stock-out occurs at the central warehouse,

orders of local warehouses have to wait for an additional time until sufﬁcient stock is avail-

able again. This additional time is modeled as a random variable, the wait time Wi . The

stochastic lead time therefore is Li = Ti + Wi . Ti and Wi are independent random variables.

We assume unlimited stock at the supplier. Hence, the wait time of orders of the central

warehouse is 0.

The local warehouses fulﬁll external customer demand. Unfulﬁlled customer orders are

backordered until sufficient stock is on hand and are served on a first come, first served basis.

Throughout this section we restrict ourselves to a type 2 service level (cp. Definition 12). This is

one of the most widely used service levels in practical applications [LT08]. Yet, our model

also works with other service levels, such as those deﬁned in Section 2.1.8. For each node

i > 0 we want to fulﬁll some given ﬁll rate target β̄i . The ﬁll rate target is an externally set

goal specifying how many orders should be fulﬁlled from stock on hand. The ﬁll rate βi ,

that is actually achieved, depends on both the local reorder point Ri and the central reorder

point R0 . The higher the central reorder point is set, the lower the resulting wait time and, therefore, the lead time.

The goal of the problem considered throughout this section is to determine the reorder points

Ri , i = 0, . . . , N, of all warehouses such that customer demands are satisfied at least with the probability specified by the service quality requirements while, at the same time, the investment in stock held at the warehouses is minimized. We introduce pi as the price of

the considered part in node i.


Note that we assume that the order quantity Qi of each warehouse i is ﬁxed and given as

already pointed out in Sections 2.1.6 and 3.3. In the practical application that motivated

this work, and in many other applications we are aware of, specialized algorithms are used

to determine an order quantity that permits smooth and efﬁcient handling operations in

the warehouse and cost-efﬁcient transport. These algorithms usually take many factors

into account, including cost considerations, storage limitations for speciﬁc parts, volumes

and weights for ergonomic handling, package sizes, hazardous goods handling restrictions,

limited shelf life, and many more. Hence, we take the order quantity as a given and ﬁxed

input parameter and focus on the optimization of the reorder points, although the two are

closely connected. The assumption of a ﬁxed order quantity is common in literature [Axs15,

p.107][SPT16, p.257].

The goal of our optimization model is to simultaneously determine the reorder points Ri

of the central warehouse i = 0 and all local warehouses i = 1, . . . , N such that the working

capital is minimized while the ﬁll rate target β̄i is fulﬁlled at each local warehouse. This

problem can be formulated as

min  ∑_{i=0}^{N} (pi · Ri )                                    (4.1)

s.t. βi (R0 , Ri ) ≥ β̄i   for all i = 1, . . . , N .

In industrial applications, it is common practice to determine reorder points such that the

investment in capital, the so-called working capital, is minimized. By minimizing the

sum of the reorder points Ri , weighted by the price pi at the respective location, we implicitly minimize the weighted sum of the inventory positions due to the uniformity property.

The inventory position includes stock on hand as well as stock on order. The weighted sum

of the inventory positions thus can be seen as the total value of stock in the distribution net-

work, including the investment that is either tied by stock on hand, stock in transit between

warehouses or stock already ordered from the supplier.

Note that this objective function differs from those typically used in the literature (for ex-

ample in [Axs03], [BM13]). Normally, the aim is to minimize costs such as holding, ordering, and backordering costs (cp. Section 2.1.9). Backordering costs are implicitly accounted for in our model via the fill rate constraints. Ordering costs are irrelevant due to the given and fixed

order quantities. While an alternative interpretation of pi could be the cost of stocking a unit

of this item in the location or the holding cost, normally the positive parts of the inventory

levels, (Ii )+ , are minimized and not the reorder points. This was not the focus of our work,

but we did some limited numerical experiments and found that minimizing reorder points

is a good proxy for minimizing the positive parts of the inventory levels. When minimizing the positive part of the inventory level, stock in transit between warehouses is no longer considered and therefore would not carry a cost penalty.


Due to the nonlinear nature of the constraints, Model (4.1) is not easy to solve.

If the central reorder point R0 is ﬁxed at some chosen value, then the optimal local reorder

points Ri for this particular value of R0 and the given ﬁll rate targets β̄i can be computed,

in principle, by solving an inverse ﬁll rate computation problem for each local warehouse

independently. Hence, the problem could be considered as a 1-dimensional problem whose

only decision variable is R0 . However, even for a given value R0 = R̄0 , computing the

corresponding optimal Ri -values is difﬁcult. In general, it is not possible to analytically

determine the inverse of the ﬁll rate function βi (R̄0 , .), i.e., to derive a closed formula for

the minimal value of Ri for which βi (R̄0 , Ri ) ≥ β̄i . To compute this value Ri numerically,

one has to use binary search, which becomes cumbersome if the mean and the variance

of the demands or the order quantities are large. Throughout this section, we will refer to

the inverse ﬁll rate function, which returns the lowest Ri for a given R0 such that the ﬁll

rate target is fulﬁlled, as bini (R0 ). As this function must be evaluated via binary search

in practice, enumerative solution approaches quickly become computationally infeasible,

especially if the optimization problem has to be solved for many different parts, which is

the case in our application. In order to solve real-world instances of the problem, it is

absolutely necessary to reduce the number of evaluations of this function and to reduce

the computational costs within the binary search using additional bound strengthening and

adaptive speedup techniques.

The central idea of the optimization algorithm is to construct an under- as well as an over-

estimating integer linear programming (ILP) model. In these models, we approximate the

non-linear constraints by piecewise linear functions that either under- or overestimate the

inverse ﬁll rate functions bini (R0 ) of the original constraints. These two linear programs

will then be iteratively reﬁned and resolved until a given target optimality gap is reached or

an optimal solution is found. In every iteration the underestimating model yields a global

lower bound for the objective value, while the overestimating model yields a valid solution,

i.e., reorder points that satisfy the ﬁll rate constraints. In each iteration, we thus can decide

to either stop the optimization, if the found solution is good enough, or to continue until,

ultimately, the two solution values of the under- and the overestimating model are identical

and an optimal solution is found. The solution values of the under- and overestimating in-

teger linear programs converge towards the optimal solution value of the original Model

(4.1). As the solution space of the original model is ﬁnite, both approximations eventually

reach this global optimum.

Figure 4.2 shows a ﬂow chart of our optimization algorithm. In general, it works as follows:

In order to construct the piecewise linear over- or underestimators of the original non-linear

constraints, we evaluate these constraints at certain breakpoints, i.e., compute Ri = bini (R0 )

for some given values of R0 . Instead of letting the binary search run until it terminates with

the exact value of Ri , we propose to use a truncated binary search: After a ﬁxed number of

iterations, we abort the binary search and use the lower and upper bounds computed for Ri so

far to construct less tight piecewise linear under- and overestimators of the constraints to be


used in the under- and overestimating ILP models. This substantially reduces the computing

time necessary to set up the initial ILP models. In later iterations of our algorithm, we then

iteratively reﬁne the piecewise linear functions in an adaptive way, adding either further

breakpoints or improving the lower and upper bounds on the Ri values by continuing the

binary search only if and where necessary. This leads to a substantial reduction of the overall

computational effort compared to an a-priori discretization of the original constraints with

an exact evaluation of the constraints via binary search.

Note that this approach relies on only two properties of the system: Firstly, we require that

bini (R0 ) is monotonically decreasing in R0 (cp. Lemma 19). Secondly, we use Lemma 20, i.e., that bini (R0 ) decreases by at most 1 if we increase the central reorder point R0 by

1. This means bini (R0 ) has a “gradient” in the range [−1, 0] for all i. These two bounds

on the gradient are used in the construction of the piecewise linear functions to under- and


overestimate bini (R0 ). It is easy to show that both properties apply for divergent 2-level in-

ventory systems. Relying only on these mild properties, our algorithm can directly be used

for a number of different set-ups of divergent 2-level inventory systems. For example, it can

handle different types of service level constraints as well as different wait time approxim-

ations. This ﬂexibility allows the application of the algorithm in a wide range of practical

applications. Moreover, by adjusting the piecewise linear under- and overestimating functions, our algorithm can easily be adapted to different bounds for the gradients of bini (R0 ).

However, in certain situations the wait time approximations discussed in Section 3.2 do not

satisfy these properties: The KKSL approximation has certain instabilities that may lead

to violations of both properties, as already mentioned in Section 3.2.2. The BF approxima-

tion as well as the NB approximation cause problems if the expected lead time demand, the variance-to-mean ratio of the lead time demand, and the fill rate target of a local warehouse are very high. In these cases, the gradient of these approximations might be smaller than −1. This, however, is only an artifact of the approximations.

For the latter case, our algorithm still produces excellent results, but we might end up in a

local optimum. In our numerical experiments, however, our algorithms actually terminated

with the optimal solution also in these cases (veriﬁed by enumeration). The irregularities

only occur when the central reorder point R0 is close to 0. As long as the global and all

locally optimal values for R0 are not close to 0, the algorithm ends up with the optimal

solution. In fact, our algorithm can be easily extended to handle correctly also cases where

the gradient of the functions bini (R0 ) is within [−z, 0] for some ﬁxed z ∈ N.

We approximate the non-linear constraints of the original Model (4.1) by piecewise linear

functions, which either over- or underestimate the functions bini (R0 ) for all warehouses i.

Assume we are given a set Vi = {ri1 , . . . , rik } of k candidate values for the central reorder

point R0 , which shall be used to approximate the local reorder point at warehouse i. For each

ri j ∈ Vi , we then calculate the local reorder point yi j := bini (ri j ) for the given ﬁll rate target

β̄i . For each individual candidate value ri j , we obtain an overestimating function oi j with

oi j (R0 ) ≥ bini (R0 ) for all R0 as well as an underestimating function ui j with ui j (R0 ) ≤ bini (R0 )

for all R0 based on the value yi j and the monotonicity and gradient property of bini (R0 ):

oi j (R0 ) =  (yi j + ri j ) − R0 ,   if R0 < ri j
             yi j ,                  if R0 ≥ ri j                     (4.2)

ui j (R0 ) =  yi j ,                 if R0 ≤ ri j
             (yi j + ri j ) − R0 ,   if R0 > ri j .                   (4.3)
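In code, the single-candidate estimators of eqs. (4.2) and (4.3) read as follows (function names are ours):

```python
def o_ij(R0, r_ij, y_ij):
    # Overestimator, eq. (4.2): left of r_ij, bin_i can exceed y_ij by at
    # most r_ij - R0 (slope bounded below by -1); right of r_ij it is at
    # most y_ij by monotonicity.
    return (y_ij + r_ij) - R0 if R0 < r_ij else y_ij

def u_ij(R0, r_ij, y_ij):
    # Underestimator, eq. (4.3): left of r_ij, bin_i is at least y_ij;
    # right of r_ij it can drop by at most 1 per unit of R0.
    return y_ij if R0 <= r_ij else (y_ij + r_ij) - R0
```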

Note that the set of central reorder points Vi is the same for the over- and underestimating

functions, but may depend on the local warehouse i.

Figure 4.3: The function bini (R0 ) and its piecewise linear underestimator fiu (R0 ) and overestimator fio (R0 ) (local reorder point plotted over the central reorder point)

Combining eq. (4.3) for all central reorder point candidates ri j ∈ Vi , we get a piecewise linear function fiu (R0 ) underestimating

bini (R0 ) for warehouse i:

fiu (R0 ) =  yi1 ,                                if R0 ≤ ri1
             max(yi1 + ri1 − R0 , yi2 ),          if ri1 < R0 ≤ ri2
             max(yi2 + ri2 − R0 , yi3 ),          if ri2 < R0 ≤ ri3
             ...
             max(yi(k−1) + ri(k−1) − R0 , yik ),  if ri(k−1) < R0 ≤ rik
             max(yik + rik − R0 , 0),             if R0 > rik .       (4.4)
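Equivalently, fiu is the upper envelope of the single-candidate underestimators of eq. (4.3), clipped at 0, and fio the lower envelope of the overestimators of eq. (4.2). A small sketch with our own naming, assuming exact values yij = bini(rij):

```python
def f_u(R0, cands):
    """Piecewise linear underestimator of bin_i, eq. (4.4).
    cands: (r_ij, y_ij) pairs sorted by r_ij, with y_ij = bin_i(r_ij).
    Taking the max over all single-candidate underestimators (clipped at 0)
    reproduces the case distinction of eq. (4.4)."""
    return max(0, max(y if R0 <= r else (y + r) - R0 for r, y in cands))

def f_o(R0, cands):
    """Piecewise linear overestimator of bin_i: the min over all
    single-candidate overestimators of eq. (4.2)."""
    return min((y + r) - R0 if R0 < r else y for r, y in cands)
```

At every candidate rij both envelopes coincide with yij, which is why the breakpoints are exactly the positions where the three functions agree.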

Analogously, one obtains a piecewise linear overestimating function fio (R0 ) for each ware-

house i based on eq. (4.2). Figure 4.3 illustrates these under- and overestimators for one

warehouse. The black line shows the original function bini (R0 ), the green line the underestimating function, and the red line the overestimating function. The ap-

proximate functions are based on ﬁve breakpoints, which are the positions where the three

functions are identical. Between these breakpoints, the under- and overestimating functions

are implicitly deﬁned by the two properties we assumed, the monotonicity and the bounded

“gradient” of bini (R0 ): If we reduce the central reorder point, then the local reorder point

does not increase, and, if we increase the central reorder point by 1, then the local reorder

point will decrease by 1 at most.

In order to model these piecewise linear functions, we use so-called special ordered set

constraints of type 2 (SOS 2) for each node i [VKN08]. Let buij , j ∈ {1, . . . , 2k − 1}, denote


the breakpoints of fiu (R0 ), i.e., the values of R0 where the gradient of fiu (R0 ) changes, and let

fiuj := fiu (buij ). Note that the breakpoints buij are equal to the central reorder points ri,( j+1)/2

for all odd j, while they are derived from ri, j/2 and ri, j/2+1 and their corresponding yi j -values

as buij = ri, j/2 + yi, j/2 − yi, j/2+1 for all even j.
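The 2k − 1 breakpoints can be generated directly from the sorted candidate pairs. A small helper (ours) illustrating the odd/even rule, assuming exact yij-values:

```python
def breakpoints_u(cands):
    """Breakpoints of f_i^u from sorted (r_ij, y_ij) pairs: the r_ij
    themselves (odd positions, 1-indexed) interleaved with
    r_ij + y_ij - y_i(j+1) (even positions), i.e. the point where the
    slope -1 segment starting at r_ij meets the next plateau."""
    bps = [cands[0][0]]
    for (r, y), (r2, y2) in zip(cands, cands[1:]):
        bps.extend([r + y - y2, r2])
    return bps  # 2k - 1 values for k candidates
```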

The piecewise linear underestimator fiu (R0 ) then can be modeled in a straightforward way,

introducing continuous variables ui1 , ..., uik ≥ 0, which form the special ordered set and de-

scribe which segment of the piecewise linear function is binding at the current solution.

Constraints (4.6) and (4.7) connect the central reorder point to each of the local warehouses

via the SOS 2 variables u. By specifying the variables u to form an SOS 2 set in constraint

(4.10), at most two of these variables may be non-zero and, furthermore, if two variables are

non-zero they must be consecutive. Two consecutive u variables represent a line segment

in the piecewise linear function and the values of the u variables indicate an exact position

on that line segment. Together with constraints (4.8) and (4.9) this ensures that exactly one

line segment and one point on this segment is chosen. The additional constraints (4.11) en-

force global lower bounds τi ≥ 0 on the Ri -variables, which can be computed by assuming

Li = Ti . This can also be interpreted as choosing R0 so large that the wait time Wi is 0. In

this situation, increasing central stock does not lead to any further decrease of local stock.

Modeling optimization problems that involve non-convex piecewise linear functions, as is the case here, as MIPs and solving them with a general-purpose MIP solver is a well-known technique in optimization ([VAN10] and references therein). The advantage of this approach is that it leverages the state-of-the-art technology of MIP solvers. In the tests of Vielma et al., SOS 2-based formulations are always among the five best formulations.

Altogether, the underestimating integer linear program can be written as:

min   ∑_{i=0}^{N} (pi · Ri )                                             (4.5)

s.t.  R0 − ∑_{j=1}^{k} buij ui j ≥ 0       for all i = 1, . . . , N      (4.6)

      Ri − ∑_{j=1}^{k} fiuj ui j ≥ 0       for all i = 1, . . . , N      (4.7)

      ui1 + ... + uik = 1                  for all i = 1, . . . , N      (4.8)

      ui j ≥ 0                             for all i = 1, . . . , N, j = 1, . . . , k   (4.9)

      {ui1 , ..., uik } : SOS 2 set        for all i = 1, . . . , N      (4.10)

      Ri ≥ τi                              for all i = 0, . . . , N      (4.11)

Note that the SOS 2 constraints in this model implicitly involve integrality constraints to

restrict the number and the pattern of the non-zero u variables. However, it is not neces-

sary to explicitly enforce integrality of the reorder point variables R0 and Ri as long as all

candidate central reorder points ri j used to build the piecewise linear approximations are

integer. Due to the structure of the optimization model, the reorder points will always be


integer in an optimal solution of the model. One easily veriﬁes that any non-integer solution

can be turned into a nearby integer solution without increasing cost. In practice, omitting

the integrality constraints for the reorder point variables speeds up the solution of the ILP

models signiﬁcantly.
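For intuition only, the underestimating model can be brute-forced on tiny instances without an ILP solver: for a fixed R0, the cheapest feasible choice is Ri = max(fiu(R0), τi), so one may simply scan integer values of R0. The sketch below (all names ours; the bound R0 ≥ τ0 from (4.11) is omitted for brevity) is exactly the enumeration that the SOS 2 formulation avoids at realistic problem sizes:

```python
def f_u(R0, cands):
    # piecewise linear underestimator of bin_i, cf. eq. (4.4)
    return max(0, max(y if R0 <= r else (y + r) - R0 for r, y in cands))

def solve_under_brute(p0, p, cands_per_wh, tau, R0_max):
    """Minimize p0*R0 + sum_i p_i*R_i subject to R_i >= max(f_i^u(R0), tau_i)
    by scanning integer central reorder points 0..R0_max."""
    best = None
    for R0 in range(R0_max + 1):
        Ri = [max(f_u(R0, c), t) for c, t in zip(cands_per_wh, tau)]
        cost = p0 * R0 + sum(pi * r for pi, r in zip(p, Ri))
        if best is None or cost < best[0]:
            best = (cost, R0, Ri)
    return best
```

The same scan with fio in place of fiu yields the overestimating counterpart and hence a feasible solution of the original model.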

Analogously, we deﬁne the overestimating integer linear program using fio (R0 ). Here the

breakpoints boij for odd j are equal to the central reorder points ri,( j+1)/2 , while the break-

points boij with even j are derived from ri, j/2 and ri, j/2+1 as boij = ri, j/2+1 − yi, j/2 + yi, j/2+1 .

Solving these ILPs, we obtain a lower and an upper bound for the optimal solution of the

original model. Furthermore, the solution of the overestimating ILP yields reorder points

that are also valid in the original model: The ﬁll rates in a solution of the overestimating

ILP are higher than the ﬁll rate targets, as we overestimate all inverse ﬁll rate functions and,

thus, underestimate the true ﬁll rates of the solution.

In practice, the integer linear programs are solved by standard ILP solvers very quickly. In

later iterations of our algorithm, the solutions of previous ones can be used to create good

starting solutions and further speed up the solution of the modiﬁed ILPs.

As mentioned before, we perform a binary search on the ﬁll rate βi (Ri , R0 ) to compute

bini (R0 ), which is computationally very expensive. One goal therefore is to reduce the num-

ber of breakpoints in the piecewise linear over- and underestimators fio (R0 ) and fiu (R0 ) of

the nonlinear functions bini (R0 ). In order to reduce the overall computational effort, we in-

terrupt each evaluation of bini (R0 ) after several iterations of the binary search and construct

or update the piecewise linear over- and underestimators using the bounds on bini (R0 ) found

by the halted binary search until then. We resume the binary search of the function evalu-

ation to compute more precise bounds or even the exact function value in later iterations of

the overall algorithm only when necessary. Our results show that limiting the number of

binary search iterations in each reﬁnement step of the overall algorithm actually improves

the performance of the algorithm dramatically. Note that, as a result of interrupting the bin-

ary search, the yi j -values used to construct fiu (R0 ) and fio (R0 ) are not necessarily the same

anymore. If the binary search has been interrupted, we have to use the current upper bound

of the search for the overestimator and the current lower bound for the underestimator. Thus,

we differentiate between values yuij used to construct the underestimator fiu (R0 ) and values

yoij used to construct the overestimator fio (R0 ) in the following.

A summary of the overall algorithm is shown in Algorithm 21. In a nutshell, the algorithm

approximates the non-linear constraints using piecewise linear under- and overestimating

functions, which are iteratively reﬁned in the proximity of the current solution until the gap

between the over- and underestimator is sufﬁciently small. For values of R0 that are far

away from the optimum, we do not care about the size of the gap as long as the estimators

are good enough to prove that these R0 values will not be optimal. In each iteration, we

reﬁne the under- and overestimating models in one of the two following ways: If some lin-

ear constraint that is binding at the solution of the current under- or overestimating model

involves bounds yuij = fiu (ri j ) or yoij = fio (ri j ) that have been computed by a prematurely in-

terrupted binary search for yi j = bini (ri j ), then we ﬁrst resume this binary search to tighten


these bounds in Step 3 and then update the corresponding over- and underestimating piece-

wise linear functions with the tighter bounds (or even the exact value of yi j ) in Step 5 of the

algorithm. Otherwise, if all constraints that are binding at the current solutions are based

on exact values for yi j , i.e., yi j = yuij = yoij = bini (ri j ), we reﬁne the piecewise linear under-

and overestimators by adding further breakpoints near the respective solutions’ R0 -values to

these functions in Steps 4 and 5.

1. Choose initial sets of breakpoints and determine fiu (R0 ) and fio (R0 ) (Section 4.1.3).
2. Solve the resulting under- and overestimating ILPs.
3. If possible, refine the breakpoints on which both solutions are based by continuing the binary searches (Section 4.1.4).
4. Otherwise, add additional breakpoints to the model (Section 4.1.5).
5. Update and re-solve the under- and overestimating ILPs.
6. Repeat Steps 3 to 5 until the gap between the over- and underestimating ILP is sufficiently small or the optimal solution is found.

In the following we describe our strategies for initializing, reﬁning and adding new break-

points to the over- and underestimating ILPs.

4.1.3 Initialization

The algorithm is initialized with a set of central reorder points, which deﬁne the breakpoints

of the initial over- and underestimating piecewise linear functions. The choice of these

reorder points is important to steer the algorithm towards the area of optimality and thereby

limit runtime. We denote by Dlw = ∑_{i=1}^{N} E[D(Ti )] the sum of the expected demand during the transportation time at all local warehouses. As suggested in the literature, the reorder points at the central warehouse tend to be low in the optimal solution ([Wah99], [Axs15], [BM06], [MY08]). We thus decided to choose the four values 0, 1/16 · Dlw , 1/8 · Dlw , and 1/4 · Dlw as initial central reorder points.

For the chosen initial central reorder points, breakpoints for all local warehouses in the network are then computed. In principle, we compute bini (ri j ) for all ri j ∈ Vi via binary search.

To speed up the binary search, the local reorder point computed for the ﬁrst (lowest) central

reorder point, bini (ri1 ), can be used as an upper bound in the binary search to compute bini (ri2 ), and so forth. Vice versa, the result bini (R0 ) of a larger central reorder point can

be used as a lower bound later on. A trivial lower bound used during initialization is 0. We

only require these bounds for the ﬁrst binary search done in our algorithm to speed up the

initial calculation. After this, we always have bounds available from previous computations

for the binary searches themselves as well as for the reorder points.


Choosing a suitable upper bound b̄ui to evaluate bini (R0 ) at the first breakpoint is quite

challenging, even for ri j = 0. This bound mainly depends on the mean and the variance

of the demand during the lead time, the order quantity, and the ﬁll rate target. The higher

any of these values except the order quantity is, the higher the local reorder point will be

and the higher our initial upper bound should be set. A higher order quantity allows for a

lower initial upper bound. On the other hand, choosing an upper bound much higher than

the actual reorder point leads to a substantially larger running time of the binary search. In

our numerical experiments, we obtained good results with the heuristic shown in Algorithm

22.

Set b̄ui = max(E[D(Li )] + k(β̄i ) · σ (D(Li )) − Qi , 0)
Calculate β (b̄ui )                      ▷ fill rate for reorder point at upper bound
while β (b̄ui ) < β̄i do                 ▷ check if the upper bound is high enough
    b̄ui = b̄ui + 0.5 · E[D(Li )]        ▷ if not, increase it
    Recalculate β (b̄ui )
end while

The intention of Algorithm 22 is to guess a small but valid upper bound b̄ui for bini (R0 ) based

on the mean and the standard deviation of the lead time demand as well as the order quantity,

check if this bound is valid and, if not, iteratively increase it. In this procedure, the standard

deviation is adjusted by a factor k(β̄i ), which depends on the ﬁll rate target. Promising

values of this parameter have been determined empirically and are shown below:

k(β̄i ) :=  0,    if β̄i < 0.3
           0.5,  if 0.3 ≤ β̄i < 0.6
           2,    if 0.6 ≤ β̄i < 0.8
           3,    if 0.8 ≤ β̄i < 0.9
           3.5,  if 0.9 ≤ β̄i < 0.95
           4,    if β̄i ≥ 0.95 .
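A sketch of Algorithm 22 together with the k(β̄i) lookup; the fill-rate function is model-specific and passed in, and all names are ours:

```python
def k_factor(beta):
    # empirical safety factor k(beta) from the table above
    for limit, val in [(0.3, 0.0), (0.6, 0.5), (0.8, 2.0),
                       (0.9, 3.0), (0.95, 3.5)]:
        if beta < limit:
            return val
    return 4.0

def initial_upper_bound(mu_L, sigma_L, Q, beta_target, fill_rate):
    """Algorithm 22: guess a small valid upper bound for bin_i and enlarge
    it in steps of half the expected lead time demand until the fill rate
    at that reorder point reaches the target. `fill_rate(R)` must be
    increasing in R."""
    b = max(mu_L + k_factor(beta_target) * sigma_L - Q, 0)
    while fill_rate(b) < beta_target:
        b += 0.5 * mu_L   # bound too low: increase it
    return b
```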

In Step 3 of our algorithm, we check and reﬁne the breakpoints bi j . For each local ware-

house i, we check which of the variables ui j are non-zero. For each warehouse i, these

variables form an SOS 2 set, so either only one variable ui j or two neighboring variables ui j

and ui, j+1 are non-zero in an optimal solution. The corresponding breakpoints bi j or bi j and

bi, j+1 have been derived from either one or two central reorder points ri j (see Section 4.1.2).

For these ri j , we check if yuij and yoij can be reﬁned.

Let k be the index of the reorder point in question. If the binary search used to evaluate

bini (rik ) was interrupted prematurely, we resume it for another round of iterations and then

use the new results to reﬁne the corresponding breakpoints and function values. Let su be


the lower bound and so the upper bound returned by the resumed binary search for bini (rik ).

Note that su < so if the binary search was truncated again prematurely after several iterations and su = so if it terminated with the exact value. Then, we set the new bounds to

yuik = su and yoik = so . Furthermore, these new bounds are also used to tighten the bounds

{yuij } and {yoij } at the other candidate values ri j ∈ Vi as follows:

• For all ri j with j < k, we set yuij = max(yuij , su ).

• For all ri j with j > k, we set yoij = min(yoij , so ).

The validity of this operation again follows directly from the monotonicity of bini (R0 ) (cp.

Lemma 19).
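The propagation step can be sketched as follows (the lists y_u and y_o hold the current lower and upper bounds at the candidates of Vi, in the same order; names ours):

```python
def propagate_bounds(y_u, y_o, k, s_u, s_o):
    """Refined bounds s_u <= bin_i(r_ik) <= s_o tighten the other candidates:
    bin_i is non-increasing, so every candidate left of r_ik is at least s_u
    and every candidate right of r_ik is at most s_o."""
    y_u[k], y_o[k] = s_u, s_o
    for j in range(k):                    # j < k: raise lower bounds
        y_u[j] = max(y_u[j], s_u)
    for j in range(k + 1, len(y_o)):      # j > k: cap upper bounds
        y_o[j] = min(y_o[j], s_o)
    return y_u, y_o
```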

With the reﬁned sets {yuij } and {yoij }, we then update bi j and the corresponding fi j as de-

scribed in Section 4.1.2.

In this subsection we present our strategies for adding new breakpoints to the over- and

underestimating ILP models in Step 4 of the master algorithm.

Before we discuss these strategies, we explain the general methodology for inserting a new

ri j into the set Vi of an existing model. Note that we always improve the under- and over-

estimating constraints for a local warehouse i in parallel, i.e., both ILP models are always

based on the same set Vi . Yet, the sets of breakpoints {bi j } of the under- and overestim-

ating functions fiu (R0 ) and fio (R0 ) differ due to the different construction of the functions

(eqs. (4.2) and (4.3)) for the same set Vi .

Assume we have selected a local warehouse i for reﬁnement. The constraints associated

with i are altered based on the solution of the under- as well as the overestimating model. In

either case, we take Rˆ0 := ∑kj=1 buij ui j , implied by eq. (4.6), as a starting point for adding a

new breakpoint.

In the underestimating model, we add R̂0 to Vi if it is not already a member of Vi . If R̂0 is already a member, we do not need to add additional breakpoints as the underestimating model is already exact at the current value R̂0 .

For the overestimating model, we have to distinguish a number of cases:

• If R̂0 is not a member of Vi , then we add R̂0 to Vi .
• If R̂0 is a member of Vi and
  – R̂0 is the largest member of Vi , then we add R̂0 + 1/2 · Dlw to Vi .
  – otherwise, if R̂0 ∈ Vi but R̂0 is not the largest member of Vi , let k be the position of R̂0 in Vi , i.e., rik = R̂0 . In this case we add the value 1/2(rik + ri(k+1) ), rounded to the nearest integer, as a new candidate to Vi . If it is not possible to add a new candidate between rik and ri(k+1) , we try to add a new candidate to the left of rik instead, i.e., 1/2(rik + ri(k−1) ) rounded to the nearest integer.
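The case distinction above can be sketched as follows (V is the sorted candidate set; the names and the None return for "no spot left" are our own additions):

```python
def new_candidate(R0_hat, V, D_lw):
    """Pick the next candidate central reorder point to add to the sorted
    set V for the overestimating model; returns None if no spot is found."""
    if R0_hat not in V:
        return R0_hat
    k = V.index(R0_hat)
    if k == len(V) - 1:                       # largest member of V
        return R0_hat + D_lw // 2
    mid = round((V[k] + V[k + 1]) / 2)
    if mid not in V:                          # room between r_ik and r_i(k+1)
        return mid
    if k > 0:
        left = round((V[k] + V[k - 1]) / 2)   # otherwise try to the left
        if left not in V:
            return left
    return None
```

Note that Python's `round` resolves ties to the nearest even integer, which is one concrete reading of "rounded to the nearest integer".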


To illustrate this, consider Figure 4.4. Assume we add new breakpoints based on the edge

marked by the green circle for the underestimating model and the edge marked by the red

circle for the overestimating model. In the underestimating model, the candidate point is

not yet a member of Vi and is therefore added. In the overestimating model, it is already

a member, so we add 1/2(rik + ri(k+1) ) as a new candidate to Vi . Note that we update

the over- as well as the underestimating constraints based on both new additions to Vi , as

they are always based on the same set Vi . The two models will therefore be exact at the

positions of the newly added members of Vi after the respective piecewise linear functions have been updated, assuming that bini (·) is computed exactly at the new breakpoints.

We now describe this update process. Assume we have added a new member r̂i j to Vi . We

first determine lower and upper bounds ŝuij and ŝoij for bini (r̂i j ) by running the binary search for

a specified number of iterations. If neighboring breakpoints j − 1 and j + 1 exist, then the bounds yoi( j−1) and yui( j+1) already known for these breakpoints are used as initial upper and lower bounds for the new binary search. This way, we get tighter bounds and are able to decrease the time spent within the binary search. Afterwards, we update fiu (R0 ) (eq. (4.4))

and fio (R0 ) at r̂i j with values ŝuij and ŝoij , respectively. Consequently, we update the set of

associated breakpoints {buij } and {boij } as well as the function values { fiuj } and { fioj }.

If a new element was added to Vi , we can use the new information gained to potentially reﬁne

all other elements of Vi that originate from unﬁnished binary searches. We therefore propag-

ate the new result through the set of central reorder points as described in Section 4.1.4. We


then construct improved over- and underestimating functions as a new basis for our ILPs as

described in Section 4.1.2.

The following strategies are used to decide for which warehouses i the over- and underes-

timating linear functions are reﬁned by adding new breakpoints.

In the naive approach, we reﬁne all under- and overestimating functions for each local

warehouse in every iteration. The new breakpoints to be added are derived from both the

solution of the underestimating ILP as well as the solution of the overestimating ILP.

In the greatest delta approach, we also choose the new breakpoints based on both the solu-

tion of the underestimating and the solution of the overestimating program. However, we

reﬁne the over- and underestimating piecewise linear functions only for some of the ware-

houses. The goal of our algorithm is to minimize the gap between over- and underestimating

ILP. Looking at (|Roi − Rui |)pi , we try to identify those constraints that have the largest im-

pact on the gap. More formally, we reﬁne the constraints of the

N2 warehouses with the

largest values (|R̂oi − R̂ui |)pi , where R̂ui is the reorder point in the solution of the underes-

timating ILP model and R̂oi that of the solution of the overestimating ILP for warehouse i.

If this deviation is 0 for all local warehouses, we insert new breakpoints for all local warehouses, as in the naive approach. The deviation can be 0 for all local warehouses while the algorithm has not yet terminated, namely if the local reorder points of the solutions of the over- and underestimating ILP are equal but the central reorder points differ.
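The greatest delta selection rule can be sketched as follows (n_refine corresponds to the N/2 warehouses above; names ours):

```python
def greatest_delta(R_u, R_o, p, n_refine):
    """Pick the warehouses whose price-weighted reorder-point gap
    |R_o[i] - R_u[i]| * p[i] between the two ILP solutions is largest;
    if all gaps vanish, fall back to refining every warehouse,
    as in the naive approach."""
    deltas = [abs(ro - ru) * pi for ru, ro, pi in zip(R_u, R_o, p)]
    if all(d == 0 for d in deltas):
        return list(range(len(p)))
    return sorted(range(len(p)), key=lambda i: -deltas[i])[:n_refine]
```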

In the ﬁll rate violation approach, we select those local warehouses for reﬁnement, for

which the solution of the underestimating ILP violates the original model’s ﬁll rate con-

straints the most. If the ﬁll rate constraint is not violated by the current solution, then the

underestimator is actually accurate at this position and no reﬁnement is needed. Similar

to the greatest delta approach, we only refine the constraints for the N/2 local warehouses with the largest deviation between the actual fill rate of the underestimating solution and the fill

rate target. The disadvantage of the ﬁll rate violation approach is that ﬁll rates need to be

computed in every iteration, which adds to runtime.

4.1.6 Heuristic to speed up the algorithm by introducing additional step size constraints

During the work on our 2-level algorithm, we noticed a pattern in the relationship between

local and central reorder points that can be exploited to introduce additional constraints,

which strengthen the model and possibly speed up computations. The downside of this is

that optimality cannot be guaranteed anymore.

As an example, consider the relationship between the reorder point at a local warehouse and the reorder point at the central warehouse depicted in Figure 4.5. The slope of the curve first decreases before increasing again; the curve has an inverted S-shape. We observed this shape over and over again. Recall, however, that the curve is not a smooth curve but a step function. The data used to generate this example is shown in Table A.1.

4.1 The 2-echelon case 45

Figure 4.5: Local reorder point needed for a given central reorder point to fulfill fill rate targets

Figure 4.6 shows the length of each step in combination with the associated central reorder

point. The step size, i.e., how much the central reorder point has to be increased before

the local reorder point can be decreased by 1, ﬁrst decreases, then stays relatively level

before increasing again. Here, we often get a characteristic U-shape. The zigzagging effect that can be observed, especially where the step size is relatively level, is due to a fill rate “stack-up”: we select the lowest local reorder point such that the fill rate target is satisfied.

Due to the integrality of the reorder point, we are sometimes close on target and sometimes

further above target. If we are further above target, the next step may be shorter and we are

subsequently close on target again.

Our 2-echelon optimization algorithm uses the results of Lemma 20, i.e., a best-case step

size of 1, which we then use to construct our under- and overestimators (cp. Figure 4.3). In

the example shown in Figure 4.6, however, the lowest step size is 11. This results in a large

initial gap between our under- and overestimators and the actual function. We subsequently

need many iterations for reﬁnement until we have found the optimal solution.

Figure 4.6: Length of the step sizes and corresponding central reorder point

46 4 Multi-echelon optimization

Unfortunately, it is not easy to find the smallest step size. We were not able to analytically derive the smallest step size nor the inflection points of the curve. Both Figures 4.5 and 4.6 were derived by enumeration. We had to run a binary search 3000 times to get the results for these graphs, which is the same as solving the optimization problem by enumeration and computationally not feasible except for very small problem sizes.

To still be able to use the general insights, our intention is the following. We try to select a

point which is probably in the area where the step size curve is relatively level. In Figure 4.6

this would be approximately between step 30 and 80 or, alternatively, for a central reorder

point between about 500 and 1200. We then compute the step size and use it to strengthen

our underestimating model by introducing additional constraints based on this step size.

We use a central reorder point as a starting point for Algorithm 23 to determine the step size.

In this algorithm, we perform a simple search in both directions from the starting point to

determine the length of the corresponding step size.

1. Choose an initial central reorder point R_0^init.

2. Compute the local reorder point R_i^init needed to fulfill the fill rate target using Algorithm 16.

3. Decrease R_0 starting from R_0^init as long as the fill rate target is still fulfilled with R_i^init. Denote the last such position as R_0^start.

4. Increase R_0 starting from R_0^init + 1 until R_i^init can be lowered and the fill rate target is still fulfilled. Denote this position as R_0^end.

5. Return s_i = R_0^end − R_0^start as step size.
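The search of Algorithm 23 can be sketched as follows; `local_rp_needed` stands in for Algorithm 16, which we do not reproduce here, and is assumed to be a non-increasing step function of the central reorder point:

```python
def step_size(r0_init, local_rp_needed):
    """Algorithm 23 (sketch): length of the step of the local-reorder-point
    curve that contains the initial central reorder point r0_init.
    local_rp_needed(r0) returns the smallest local reorder point that still
    fulfills the fill rate target for the central reorder point r0."""
    ri_init = local_rp_needed(r0_init)
    r0 = r0_init                      # walk left: find the start of the step
    while r0 > 0 and local_rp_needed(r0 - 1) == ri_init:
        r0 -= 1
    r0_start = r0
    r0 = r0_init + 1                  # walk right: find the end of the step
    while local_rp_needed(r0) == ri_init:
        r0 += 1
    r0_end = r0
    return r0_end - r0_start
```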

To limit runtime, it is essential to integrate Algorithm 23 well within the 2-level optimization

algorithm (Algorithm 21). The initial central reorder point in Step 1 should be chosen from

the set of central reorder points used during initialization in Section 4.1.3. This way we

avoid an additional binary search. If the search was truncated at that central reorder point

we have to, however, continue it until the bounds are tight (but we can also use the tighter

bounds for the construction of constraints in our under- and overestimator). Step 3 can then

be executed by using the binary search with lower bound R_i^init and upper bound R_i^init + 1; Step 4 is executed analogously in the opposite direction.

For Figure 4.7, we have computed s_i = 11 for R_0 = 648, which was the initialization value R_0 = 14 D_lw (cp. Section 4.1.3), to construct an additional constraint for that warehouse in our underestimating model (shown as “Step size constraint”). From the underestimating breakpoint at 0, we added a straight line with slope −1/(s_i − 1). The corresponding constraint in the ILP would be R_i ≥ y^u_{i0} − (1/(s_i − 1)) R_0. The reason for subtracting 1 is that we try to capture the lowest step size to ensure we underestimate the actual model. As observed before, we have a zigzagging effect due to a fill rate stack-up. Therefore, we generally subtract 1 in case we have hit a step which benefited from that effect. If we consider

the “original underestimator”, i.e., the underestimating step size constraint obtained after

initialization, it is obvious that our model becomes a lot stronger by adding the step size constraint. Recall, however, that we had to continue the binary search at the chosen central reorder point (which was 648 in this instance). As one could argue that this additional

computational effort would have also proﬁted the original underestimator, we display the

effects of this as “strengthened underestimator”. Even the model with this strengthened

underestimator beneﬁts greatly from adding the additional step size constraint.
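The resulting constraint can be sketched as a simple linear bound (function and argument names are ours, not the thesis notation):

```python
def step_constraint_rhs(r0, y_i0, s_i):
    """Right-hand side of the underestimating step size constraint
    R_i >= y_i0 - R_0 / (s_i - 1) (sketch). Dividing by s_i - 1 instead of
    s_i keeps the line below the true step function despite the fill rate
    stack-up; y_i0 is the underestimating value for R_i at R_0 = 0."""
    return y_i0 - r0 / (s_i - 1)
```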

Figure 4.7: Actual local reorder points needed and underestimating constraints

By introducing the additional step size constraints for every local warehouse right from initialization, the intention is to strengthen the underestimating model and speed up runtime. The downside is that we cannot guarantee optimality anymore. If we pick a step size from a wrong region, one that is too high, our underestimating model does not necessarily underestimate the original constraints. In the example we picked for Figure 4.7, s_i = 18 would be sufficient for a violation to occur. This s_i would be even lower if y^u_{i0}, i.e., the underestimating value for R_i at 0, had not been based on a prematurely finished binary search.

In the following, we will specify how to implement the step size constraints within our 2-

level optimization algorithm, Algorithm 21. A part of this is also a correction mechanism,

which tries to detect violations as described above during each iteration.

Algorithm 24 (Adjustment procedure for Algorithm 21 to introduce step size constraints).

1. During initialization, select one of the initial central reorder points to compute the step size for each local warehouse using Algorithm 23. Add the constraint R_i ≥ y^u_{i0} − (1/(s_i − 1)) R_0 to the underestimating ILP (eqs. 4.5–4.11).


2. Each time the underestimating ILP is solved, check for each local warehouse if the step

size constraint was a binding constraint.

3. For each local warehouse with a binding step size constraint, check if a violation occurred. We calculate the actual local reorder point R_i^act needed to fulfill the fill rate target given the solution of the underestimating ILP for the central reorder point, R_0^∗. If this local reorder point is smaller than the solution of the underestimating model, R_i^∗, we adjust the slope of the step size constraint to (R_i^act − y^u_{i0})/R_0^∗. The step size constraint will then be exact at this point and not force an overestimation of the local reorder point.
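Step 3 can be sketched as follows (function and argument names are ours, not the thesis notation):

```python
def corrected_slope(r0_star, ri_star, ri_act, y_i0):
    """Correction mechanism of Algorithm 24, Step 3 (sketch): if the binding
    step size constraint forced the local reorder point above the actually
    needed value, flatten the slope so the constraint becomes exact at the
    current central reorder point r0_star; otherwise keep the old slope."""
    if ri_act < ri_star:                  # constraint forced an overestimation
        return (ri_act - y_i0) / r0_star  # negative slope, exact at r0_star
    return None                           # no violation detected
```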

For the initial central reorder point, we have chosen R_0 = 14 D_lw, in line with the initialization of our optimization algorithm (Section 4.1.3). While the correction mechanism

described in Step 3 of Algorithm 24 can prevent some non-optimal results, it is not sufficient to guarantee optimality. If the step size constraint does lead us away from the optimal

region and we never have a result of the underestimating ILP in that region, we never get

the chance to correct the constraint.

To apply the idea to the overestimating constraints, we would normally need to derive a maximal step size. As can be seen in Figure 4.6, the maximal step size over the entire region is very large. In fact, the length of the last “step”, once the local reorder point is at its lower bound, can be interpreted as infinite. We do not have enough analytical insights to derive

a maximal step size in a region and how to deﬁne those regions. If we want to numerically

establish some good candidates for a maximal step size and the corresponding regions, a

lot of additional binary searches and therefore computational effort is needed. We believe

that strengthening just the underestimator is sufﬁcient and the additional effort would not be

justified. Recall that the underestimating ILP alone already converges to the optimal solution. We basically keep the overestimator in every iteration to have an optimality gap and

to additionally steer the algorithm in the right direction while the additional computational

effort for the overestimator is small.

An alternative to introducing additional step size constraints would be to use the slope derived from the step size directly in the construction of the linear piecewise functions that

underestimate the ﬁll rate constraints in our model. There, we use the constant slope of −1

based on Lemma 20. These constraints could be directly strengthened by using −1/(si − 1)

instead. The adjustment procedure is described in Algorithm 25. Instead of using the slope −1, we will consequently use −1/(s_i − 1) for the construction of the underestimating model.

However, for the correction mechanism we cannot simply check for a binding constraint, as we do not have a dedicated step size constraint anymore. Instead, we check

for each underestimating constraint if two SOS 2 variables of the underestimating solution

are non-zero. If this is the case, the combination of central and local reorder point is on a

segment of the underestimating constraints, which was constructed using the slope derived

from the step size. Then, we check for a violation as described in Step 3 of Algorithm 24.

If we detect a violation, we will adjust the underestimating constraint based on the updated slope. For the calculation of the slope in Step 2 of Algorithm 25, we cannot use y^u_{i0} and R_0^∗ as in Algorithm 24. Instead, we need to calculate the slope from the next smaller neighboring element in V_i. If we calculated the slope starting from 0, we could end up with an even higher slope, which may lead to further violations (recall that the slope is negative) due to the potential S-shape of the curve (cp. Figure 4.5).

Algorithm 25 (Adjustment procedure of optimization algorithm to modify the slope of underestimating constraints).

1. During initialization, select one of the initial central reorder points to compute the step size for each local warehouse using Algorithm 23. For the construction of the underestimating model, use −1/(s_i − 1) as a slope instead of −1 for each local warehouse i.

2. Each time the underestimating ILP is solved, check for each local warehouse if two SOS 2 variables of the underestimating solution are non-zero. If so, verify if a violation occurred. We calculate the actual local reorder point R_i^act needed to fulfill the fill rate target given the solution of the underestimating ILP for the central reorder point, R_0^∗. Let k be the index of r_{ik} ∈ V_i, which is the next smaller reorder point to R_i^act in the set V_i. If the reorder point R_i^act is smaller than the solution of the underestimating model, R_i^∗, we adjust the slope of the step size constraint to (R_i^act − y_{ik})/(R_0^∗ − r_{ik}) and adjust the underestimating constraint accordingly.

Figure 4.8: Actual local reorder points needed and estimating constraints with or without adjusted slopes

One could argue for using the new slope based on the step size of the underestimating constraints also for the overestimating constraints. We have again constructed the constraints with the adjusted slopes and the original constraints after initialization for our example in Figure 4.8. In this instance, the underestimating constraint with adjusted slope does capture the original relationship very well and does not violate the postulate of underestimation.

The overestimator also benefits from the adjusted slope. However, we use the overestimator in our algorithm to have a valid solution in every iteration. If we use the step size,

we cannot guarantee this anymore. We would then need to check if we have a valid solution

when we want to terminate the algorithm. In the instance of Figure 4.8, the overestimator

with adjusted slope is in fact no valid overestimator anymore (at a central reorder point of

about 550).

In Section 5.3.4, we test the runtime improvements of these algorithms and how often the optimal result is not found because of the additional constraints.

4.2 The n-echelon case

In this section, we develop an optimization algorithm for the general n-level case. Major

parts of this section are in publication as a joint work together with Andreas Bley [GB18b],

with the exception of Section 4.2.4.

To extend our network from 2 to n-level, we have to adjust our notation to allow for a general

number of echelons. Reorder points are denoted as Ri, j , where i is the echelon, i = 1, ..., n

and j is the index of the warehouses in the respective echelon. In Figure 4.9, the reorder

point of the warehouse in the ﬁrst echelon, that gets supplied externally, is R1,1 , whereas for

the last warehouse in the third echelon the notation would be R3,6 . Accordingly, the order

quantity is denoted as Qi, j and μi, j and σi, j are the mean and standard deviation of demand

at a warehouse (i, j) per time unit. The lead time of warehouse j at echelon i from its predecessor is L_{i,j} = T_{i,j} + W_{i,j}, where T_{i,j} is the transportation time and W_{i,j} is the wait time

due to stock-outs at the predecessor. For i = 1 we assume L1,1 = T1,1 .

Figure 4.9: Example of a network with a first, second, and third echelon

C denotes the set of all local warehouses, i.e., warehouses without successors. We assume, as in the 2-level case, that only local warehouses fulfill external customer demand and have fill rate targets β̄_{i,j}. The set of all predecessors of a warehouse (i, j) is referred to as P_{i,j}.

For the n-level case, we focus solely on the KKSL approximation, as the other approximations considered in Section 3.2 are not suitable for more than 2 levels or would have to be extended, as already discussed there.

We introduce pi, j as the price of the considered part in warehouse (i, j). Then, we have to

solve the following optimization model. The reasoning for this type of optimization model

is the same as in the 2-level case (cp. Section 4.1.1).

min ∑_i ∑_j p_{i,j} R_{i,j}  (4.12)

s.t. β_{i,j}(R_{i,j}, {R_{l,m}}_{(l,m)∈P_{i,j}}) ≥ β̄_{i,j}  for all (i, j) ∈ C  (4.13)

We rewrite the fill rate formula given by eq. (2.13) for the n-level case as

β_{i,j}(R_{i,j}, {R_{l,m}}_{(l,m)∈P_{i,j}})
= ∑_{k=1}^{R_{i,j}+Q_{i,j}} ∑_{y=k}^{R_{i,j}+Q_{i,j}} Pr(K_{i,j} = k) Pr(I^lev = y)
= ∑_{k=1}^{R_{i,j}+Q_{i,j}} ∑_{y=k}^{R_{i,j}+Q_{i,j}} Pr(K_{i,j} = k) (1/Q_{i,j}) ∑_{z=R_{i,j}+1}^{R_{i,j}+Q_{i,j}} Pr(D_{i,j}(L_{i,j}) = z − y),  (4.14)

where Li, j is a function of the reorder points of all preceding warehouses {Rl,m }(l,m)∈Pi, j .

The calculation of the wait time has to be done from top to bottom in the network as the

lead time of the predecessor has to be taken into consideration.
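Numerically, eq. (4.14) can be evaluated directly from the discrete distributions; the sketch below assumes Poisson lead time demand and unit-sized individual demands purely for illustration:

```python
from math import exp, factorial

def poisson_pmf(lam, n):
    # pmf of a Poisson distribution, returning 0 for negative arguments
    return exp(-lam) * lam ** n / factorial(n) if n >= 0 else 0.0

def fill_rate(R, Q, demand_pmf, k_pmf):
    """Fill rate of eq. (4.14) (sketch): demand_pmf is the pmf of the lead
    time demand D(L), k_pmf the pmf of an individual demand size K."""
    def ilev_pmf(y):
        # inventory level I_lev = z - D(L), with z uniform on {R+1, ..., R+Q}
        return sum(demand_pmf(z - y) for z in range(R + 1, R + Q + 1)) / Q
    return sum(k_pmf(k) * sum(ilev_pmf(y) for y in range(k, R + Q + 1))
               for k in range(1, R + Q + 1))
```

With unit demands (K = 1 with probability 1) the expression collapses to Pr(I^lev ≥ 1), so the fill rate increases in R.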

Each warehouse (i, j) has to take a wait time in addition to the transportation time from

its directly preceding warehouses (i − 1, m) ∈ Pi, j into consideration. We use the KKSL

approximation and rewrite the formulas already presented in Section 3.2.2 for the n-level

case. Note that we assume Q_{i,j} ≫ E[D^ind_{i,j}], where D^ind_{i,j} is the size of an individual demand, and therefore assume that the average replenishment order size is Q_{i,j}. This is the case in our real-world data and simplifies notation. This condition can easily be relaxed as in the paper of Kiesmüller et al. [KdKSvL04] and does not influence our subsequent analysis.

E[W_{i,j}(R_{i−1,m})] ≈ (E[L_{i−1,m}]/Q_{i−1,m}) · ( E[(D_{i−1,m}(L̂_{i−1,m}) + Q_{i,j} − R_{i−1,m})^+]
− E[(D_{i−1,m}(L̂_{i−1,m}) + Q_{i,j} − (R_{i−1,m} + Q_{i−1,m}))^+] ),  (4.15)


where L̂_{i−1,m} is a random variable representing the residual lifetime of L_{i−1,m} with the cumulative distribution function (cdf)

cdf_{L̂_{i−1,m}}(y) = (1/E[L_{i−1,m}]) ∫_0^y (1 − cdf_{L_{i−1,m}}(z)) dz  (4.16)

and

E[W²_{i,j}(R_{i−1,m})] ≈ (E[(L_{i−1,m})²]/Q_{i−1,m}) · ( E[(D_{i−1,m}(L̃_{i−1,m}) + Q_{i,j} − R_{i−1,m})^+]
− E[(D_{i−1,m}(L̃_{i−1,m}) + Q_{i,j} − (R_{i−1,m} + Q_{i−1,m}))^+] ),  (4.17)

where L̃_{i−1,m} has the cdf

cdf_{L̃_{i−1,m}}(y) = (2/E[(L_{i−1,m})²]) ∫_0^y ∫_x^∞ (z − x) dcdf_{L_{i−1,m}}(z) dx.  (4.18)

We define X = D_{i−1,m}(L̂_{i−1,m}) + Q_{i,j} and obtain

E[W_{i,j}(R_{i−1,m})] ≈ (E[L_{i−1,m}]/Q_{i−1,m}) · ( ∑_{x=R_{i−1,m}+1}^{R_{i−1,m}+Q_{i−1,m}} x pdf_X(x) − R_{i−1,m}(1 − cdf_X(R_{i−1,m}))
+ (R_{i−1,m} + Q_{i−1,m})(1 − cdf_X(R_{i−1,m} + Q_{i−1,m})) ).  (4.19)
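As a numerical sanity check (modelling X as a Poisson variable purely for illustration), the pdf/cdf sum form of eq. (4.19) agrees with the difference of the two positive-part expectations in eq. (4.15) from which it was derived:

```python
from math import exp, factorial

# truncated pmf/cdf of an illustrative Poisson X (mass beyond N_MAX is negligible)
N_MAX = 60
pdf_X = [exp(-8.0) * 8.0 ** n / factorial(n) for n in range(N_MAX)]
cdf_X = [sum(pdf_X[:n + 1]) for n in range(N_MAX)]

def pos_part(a):
    """E[(X - a)^+] computed directly from the pmf."""
    return sum((x - a) * pdf_X[x] for x in range(a + 1, N_MAX))

def wait_time(R, Q, EL=1.0):
    """E[W] via the pdf/cdf sum form of eq. (4.19), with E[L] = EL."""
    s = sum(x * pdf_X[x] for x in range(R + 1, R + Q + 1))
    s -= R * (1 - cdf_X[R])
    s += (R + Q) * (1 - cdf_X[R + Q])
    return EL / Q * s
```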

The first two moments of the wait time are a function of the reorder point of the preceding warehouse. We analyze how big the decrease δ(E[W_{i,j}(R_{i−1,m})]) in the two moments of the wait time is if we increase the respective reorder point R_{i−1,m} by one, and obtain the following equation after numerous rearrangements:

δ(E[W_{i,j}(R_{i−1,m})]) = (E[L_{i−1,m}]/Q_{i−1,m}) (cdf_X(R_{i−1,m} + Q_{i−1,m}) − cdf_X(R_{i−1,m})).  (4.20)


Proof. Define X = D_{i−1,m}(L̂_{i−1,m}) + Q_{i,j}. Then we can rewrite eq. (4.19) using eq. (3.15) as

E[W_{i,j}(R_{i−1,m})]
≈ (E[L_{i−1,m}]/Q_{i−1,m}) ( E[X] − ∑_{x=0}^{R_{i−1,m}} x pdf_X(x) − R_{i−1,m}(1 − cdf_X(R_{i−1,m})) − E[X]
+ ∑_{x=0}^{R_{i−1,m}+Q_{i−1,m}} x pdf_X(x) + (R_{i−1,m} + Q_{i−1,m})(1 − cdf_X(R_{i−1,m} + Q_{i−1,m})) )
= (E[L_{i−1,m}]/Q_{i−1,m}) ( ∑_{x=R_{i−1,m}+1}^{R_{i−1,m}+Q_{i−1,m}} x pdf_X(x) − R_{i−1,m}(1 − cdf_X(R_{i−1,m}))
+ (R_{i−1,m} + Q_{i−1,m})(1 − cdf_X(R_{i−1,m} + Q_{i−1,m})) ).  (4.21)

The decrease caused by increasing R_{i−1,m} by one then is

δ(E[W_{i,j}(R_{i−1,m})])
= (E[L_{i−1,m}]/Q_{i−1,m}) ( ∑_{x=R_{i−1,m}+1}^{R_{i−1,m}+Q_{i−1,m}} x pdf_X(x) − R_{i−1,m}(1 − cdf_X(R_{i−1,m}))
+ (R_{i−1,m} + Q_{i−1,m})(1 − cdf_X(R_{i−1,m} + Q_{i−1,m})) − ∑_{x=R_{i−1,m}+2}^{R_{i−1,m}+1+Q_{i−1,m}} x pdf_X(x)
+ (R_{i−1,m} + 1)(1 − cdf_X(R_{i−1,m} + 1)) − (R_{i−1,m} + 1 + Q_{i−1,m})(1 − cdf_X(R_{i−1,m} + 1 + Q_{i−1,m})) )
= (E[L_{i−1,m}]/Q_{i−1,m}) ( (R_{i−1,m} + 1) pdf_X(R_{i−1,m} + 1)
− (R_{i−1,m} + Q_{i−1,m} + 1) pdf_X(R_{i−1,m} + Q_{i−1,m} + 1)
+ (1 − cdf_X(R_{i−1,m})) − (1 − cdf_X(R_{i−1,m} + Q_{i−1,m}))
+ (R_{i−1,m} + 1)(1 − cdf_X(R_{i−1,m} + 1) − 1 + cdf_X(R_{i−1,m}))
+ (R_{i−1,m} + Q_{i−1,m} + 1)(1 − cdf_X(R_{i−1,m} + Q_{i−1,m}) − 1 + cdf_X(R_{i−1,m} + Q_{i−1,m} + 1)) )
= (E[L_{i−1,m}]/Q_{i−1,m}) (cdf_X(R_{i−1,m} + Q_{i−1,m}) − cdf_X(R_{i−1,m})).  (4.22)
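The statement of eq. (4.20) can also be verified numerically, again with an illustrative Poisson X:

```python
from math import exp, factorial

# truncated pmf/cdf of an illustrative Poisson X
N = 60
pdf = [exp(-8.0) * 8.0 ** n / factorial(n) for n in range(N)]
cdf = [sum(pdf[:n + 1]) for n in range(N)]

def expected_wait(R, Q, EL=1.0):
    """E[W] via the sum form of eqs. (4.19)/(4.21)."""
    s = sum(x * pdf[x] for x in range(R + 1, R + Q + 1))
    return EL / Q * (s - R * (1 - cdf[R]) + (R + Q) * (1 - cdf[R + Q]))

def delta_wait(R, Q, EL=1.0):
    """Decrease of E[W] when R is increased by one, eq. (4.20)."""
    return EL / Q * (cdf[R + Q] - cdf[R])
```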

Since cdf_X is a cumulative distribution function, we obtain directly from the definition

(cdf_X(R_{i−1,m} + Q_{i−1,m}) − cdf_X(R_{i−1,m})) ∈ [0, 1] for all R_{i−1,m} and  (4.23)

lim_{R_{i−1,m}→∞} (cdf_X(R_{i−1,m} + Q_{i−1,m}) − cdf_X(R_{i−1,m})) = 0.  (4.24)


From eq. (4.23) we know that (cdf_X(R_{i−1,m} + Q_{i−1,m}) − cdf_X(R_{i−1,m})) in eq. (4.20) can be at most 1. The upper bound for the decrease in mean wait time therefore is

δ(E[W_{i,j}(R_{i−1,m})]) ≤ E[L_{i−1,m}]/Q_{i−1,m}.  (4.25)

For the second moment, eq. (4.17) can be rewritten analogously and we obtain

δ(E[W²_{i,j}(R_{i−1,m})]) ≤ E[(L_{i−1,m})²]/Q_{i−1,m}  (4.26)

as an upper bound.

What we are ultimately interested in is the effect on warehouses that do not have successors, i.e., that fulfill external customer demand. Therefore, let (i, j) ∈ C in the following. Then, for all (l, m) ∈ P_{i,j},

δ(E[W_{i,j}(R_{l,m})]) ≤ E[L_{l,m}] / ∑_{(n,o)∈P_{l,m}} Q_{n,o} and  (4.27)

δ(E[W²_{i,j}(R_{l,m})]) ≤ E[(L_{l,m})²] / ∑_{(n,o)∈P_{l,m}} Q_{n,o}, respectively.  (4.28)

This follows directly from eqs. (4.25) and (4.26). Ll,m is deﬁned as the lead time to (l, m),

but we still have to consider the effects of all order quantities on the path and therefore have

∑(n,o)∈Pl,m Qn,o in the denominator. We collapse the chain up to (l, m) on this warehouse and

consider it as a warehouse that faces the entire lead time up to itself and orders the combined

order quantities up to itself.

From eqs. (4.27) and (4.28) we can derive the maximum possible impact on the variance:

δ(Var[W_{i,j}(R_{l,m})]) ≤ δ(E[W²_{i,j}(R_{l,m})]) − δ(E[W_{i,j}(R_{l,m})])²
≤ E[(L_{l,m})²]/( ∑_{(n,o)∈P_{l,m}} Q_{n,o} ) − ( E[L_{l,m}]/ ∑_{(n,o)∈P_{l,m}} Q_{n,o} )².  (4.29)

Our constraint is to achieve the ﬁll rate target at the last echelon of our supply chain. Each

change of reorder points at higher echelons leads to a changed wait time and therefore

potentially requires a change in local reorder points. The aim of this section is to quantify

the size of this change.

We can quantify the impact of a change of a reorder point at higher echelons on expected

wait time and variance of wait time by taking the derivative with respect to the non-local

reorder point Rl,m .


The fill rate formula at local warehouses (i, j) ∈ C is driven by the mean and variance of demand during lead time. Let D_{i,j}(L_{i,j}) be the demand during the lead time of a warehouse j at the last echelon i. The correct mean and variance can be computed as in eqs. (2.5) and (2.6). A change in expected wait time by 1 therefore changes E[D_{i,j}(L_{i,j})] by μ_{i,j} and accordingly Var[D_{i,j}(L_{i,j})] by σ²_{i,j}. Likewise, a change in Var[W_{i,j}] by 1 changes Var[D_{i,j}(L_{i,j})] by μ²_{i,j}.

If we plug in the maximum possible values from eqs. (4.27) and (4.29), we obtain the highest possible change of mean and variance of demand during lead time. The largest possible decrease in mean and variance of lead time demand of a local warehouse, if we increase a reorder point R_{l,m} of any preceding warehouse (l, m) ∈ P_{i,j}, is

δ(E[D_{i,j}(L_{i,j})]) ≤ μ_{i,j} E[L_{l,m}] / ∑_{(n,o)∈P_{l,m}} Q_{n,o} and  (4.30)

δ(Var[D_{i,j}(L_{i,j})]) ≤ σ²_{i,j} E[L_{l,m}] / ∑_{(n,o)∈P_{l,m}} Q_{n,o}
+ μ²_{i,j} ( E[(L_{l,m})²]/( ∑_{(n,o)∈P_{l,m}} Q_{n,o} ) − ( E[L_{l,m}]/ ∑_{(n,o)∈P_{l,m}} Q_{n,o} )² ).  (4.31)

We construct an approximate linear relationship between non-local and local reorder points based on eqs. (4.30) and (4.31) in the next section.

We construct an underestimating ILP for the model defined by eqs. (4.12) and (4.13) by creating an approximate linear relationship between the reorder point of the local warehouse

and each of its predecessors. For each of the local warehouses, we introduce an underestimating piecewise linear function which assumes that an increase of a predecessor's reorder point has a maximal impact on the wait time as described in the previous section.

To determine Ll,m in eqs. (4.30) and (4.31), we need in principle to set all reorder points of

warehouses in Pl,m and calculate the respective wait times. To avoid this, we assume that

we always have to wait at every preceding stage, i.e., we incorporate the full transportation

time of all predecessors in Ll,m . By doing so, we overestimate the possible impact.

For each (i, j) and (l, m) ∈ P_{i,j}, the range of R_{l,m} in which the wait time of (i, j) is influenced is limited and smaller than the actual range if we assume maximum impact. With this assumption, we calculate for all predecessors (l, m) ∈ P_{i,j} the upper bounds up to which the reorder point of the respective predecessor can influence the mean of the wait time and therefore implicitly the variance. Using eqs. (4.30) and (4.31) we obtain:

ū^{i,j}_{l,m} = E[L_{l,m}] / ( E[L_{l,m}] / ∑_{(n,o)∈P_{l,m}} Q_{n,o} ) = ∑_{(n,o)∈P_{l,m}} Q_{n,o}.  (4.32)

In eq. (4.32) we simply divide the expected lead time E[L_{l,m}] by the upper bound on the reduction of the wait time if we increase R_{l,m} by 1. If this upper bound was actually realized with every increase, the lead time for the local warehouse (i, j) would be 0 by the time we increase R_{l,m} to ū^{i,j}_{l,m}. We also assume that the lower bound for each R_{l,m} is 0. R_{l,m} then is in the range [0, ū^{i,j}_{l,m}].

Let Ri, j |(Li, j = L) be the reorder point needed at a local warehouse (i, j) assuming Li, j = L,

such that the ﬁll rate constraint is fulﬁlled. We calculate a general lower bound for all

local warehouses as li, j := Ri, j |(Li, j = Ti, j ). This assumes that the wait time caused by all

preceding warehouses (l, m) ∈ Pi, j is 0. We also calculate an upper bound by assuming the

order has to wait at every stage, i.e., R̄i, j := Ri, j |(Li, j = ∑(l,m)∈Pi, j Tl,m ).

Algorithm 26 (Construction of the underestimating functions). For each predecessor (l, m) ∈ P_{i,j} of a local warehouse (i, j), we calculate the following:

1. Set the lead time Ll,m of the predecessor (l, m) to its upper bound, i.e., as the sum of

all transportation times to this predecessor, and assume all intermediate warehouses

between (l, m) and (i, j) act as cross-docks only, i.e., full transportation time on this path

applies.

2. Calculate R_{i,j} assuming R_{l,m} = 0. Call this reorder point R̂^{l,m}_{i,j}.

3. Calculate the slope b^{i,j}_{l,m} := (R̂^{l,m}_{i,j} − l_{i,j}) / ū^{i,j}_{l,m}.

4. With the above assumptions, the following function is an underestimator for R_{i,j}, given R_{l,m}, (l, m) ∈ P_{i,j} and all R_{n,o} = −1, (n, o) ∈ P_{i,j} \ {(l, m)}:

e^{i,j}_{l,m}(R_{l,m}) = { (R̂^{l,m}_{i,j} − 1) − b^{i,j}_{l,m} R_{l,m},  if ū^{i,j}_{l,m} ≥ R_{l,m} ≥ 0
(R̂^{l,m}_{i,j} − 1) − b^{i,j}_{l,m} ū^{i,j}_{l,m},  if R_{l,m} > ū^{i,j}_{l,m}  (4.33)
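Equation (4.33) is straightforward to state in code (a sketch; r_hat, b, and u_bar correspond to R̂^{l,m}_{i,j}, b^{i,j}_{l,m}, and ū^{i,j}_{l,m}):

```python
def e_underestimator(r_lm, r_hat, b, u_bar):
    """eq. (4.33) (sketch): piecewise linear underestimator of the local
    reorder point as a function of a predecessor's reorder point r_lm."""
    if 0 <= r_lm <= u_bar:
        return (r_hat - 1) - b * r_lm
    return (r_hat - 1) - b * u_bar   # constant beyond the influence range
```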

Step 1 ensures that we exclude all effects of the intermediate warehouses except the transportation time. We therefore allow (l, m) to “buffer” the whole transportation time in the network for the local warehouse (i, j). Each increase in R_{l,m} can therefore have the highest possible impact on R_{i,j}: the numerator in the upper bound of the effect of an increase of R_{l,m} on R_{i,j} is largest, cp. eqs. (4.27) and (4.28).

Note that we try to approximate a step function with the help of e^{i,j}_{l,m}(R_{l,m}). If a local reorder point is decreased by 1 right at the start, when R_{l,m} is increased from 0 to 1, e^{i,j}_{l,m}(R_{l,m}) can intersect the true step function. By subtracting 1 from the y-intercept in eq. (4.33), we prevent this from happening.


Combining eq. (4.33) for all (l, m) ∈ P_{i,j}, we get an underestimator for R_{i,j} given the reorder points of all predecessors, expressed through the reduction functions g^{i,j}_{l,m}(R_{l,m}), (l, m) ∈ P_{i,j}. Figure 4.10 shows how the function g^{i,j}_{l,m}(R_{l,m}) may look.

In the construction of the linear relationship between (i, j) and (l, m), we ensured that the slope b^{i,j}_{l,m} overestimates the real decrease of R_{i,j} if R_{l,m} is increased. We limited the range in which R_{l,m} influences R_{i,j} by assuming that the wait time is always decreased by the maximum possible value (eq. (4.32)) and, in a second step, calculated an upper bound for the decrease of R_{i,j} caused by R_{l,m} and applied this upper bound to the limited range.

Additionally, we need to impose an upper bound on the sum of the reductions of a local reorder point. Each non-local reorder point R_{l,m} can decrease the wait time to 0 for a local reorder point R_{i,j}. In Step 1 of Algorithm 26, we assume that the lead time to each warehouse (l, m) is the full transportation time from the supplier. We therefore need to introduce an additional constraint to combine the reductions of all non-local warehouses, such that the same transportation time is not (implicitly) buffered several times for a local warehouse. The local reorder point would otherwise be reduced too much.

g^{i,j}_{l,m}(R_{l,m}) + ∑_{(n,o)∈P_{l,m}} g^{i,j}_{n,o}(R_{n,o}) ≤ b̄^{i,j}_{l,m}  for all (i, j) ∈ C, (l, m) ∉ C ∪ {(1, 1)}.  (4.35)

Here b̄^{i,j}_{l,m} is the maximum reduction of a local reorder point that can be achieved by the warehouses (l, m) and (n, o) ∈ P_{l,m}. We define H^{i,j}_{l,m} as the set of all warehouses that connect (i, j) and (l, m), including (i, j) and excluding (l, m). Also, let R_{i,j}[L_{i,j} = l] be the reorder point needed at the local warehouse (i, j) to fulfill the fill rate target given the lead time L_{i,j} = l. We can then calculate an upper bound on the reduction a warehouse (l, m) can achieve on the reorder point of warehouse (i, j) as

b̄^{i,j}_{l,m} ≤ ( R_{i,j}[ L_{i,j} = ∑_{(n,o)∈P_{i,j}} T_{n,o} + T_{i,j} ] ) − ( R_{i,j}[ L_{i,j} = ∑_{(n,o)∈H^{i,j}_{l,m}} T_{n,o} ] ).  (4.36)

Equation (4.36) calculates the local reorder point needed to fulfill the fill rate target given the full transportation time from the supplier to the local warehouse, minus the local reorder point needed to fulfill the target if the wait time at warehouse (l, m) were 0.

Now we can construct an ILP that is an underestimator of the model defined by eqs. (4.12) and (4.13):

min ∑_i ∑_j p_{i,j} R_{i,j}  (4.37)

s.t. R_{i,j} ≥ R̄_{i,j} − ∑_{(l,m)∈P_{i,j}} g^{i,j}_{l,m}(R_{l,m})  for all (i, j) ∈ C  (4.38)

R_{i,j} ≥ l_{i,j}  for all (i, j) ∈ C  (4.39)

g^{i,j}_{l,m}(R_{l,m}) ≤ b̄^{i,j}_{l,m} − ∑_{(n,o)∈P_{l,m}} g^{i,j}_{n,o}(R_{n,o})  for all (i, j) ∈ C, (l, m) ∈ P_{i,j}  (4.40)

The above optimization model is an ILP because we require special ordered sets of type 2 to describe the functions g^{i,j}_{l,m}(R_{l,m}) (cp. Figure 4.10). After solving the ILP, we have two options to refine our model.

For each local warehouse (i, j) ∈ C, we can refine the function g^{i,j}_{l,m}(R_{l,m}) for all (l, m) ∈ P_{i,j} based on the optimal solution determined for the ILP defined by eqs. (4.37) to (4.40). Recall that the function g^{i,j}_{l,m}(R_{l,m}) gives an upper bound for the reduction of the local reorder point R_{i,j} by setting a reorder point R_{l,m} of a predecessor to a certain value. With the solution R^∗_{l,m} of the ILP, we can update the function g^{i,j}_{l,m}(R_{l,m}), i.e., we recalculate the reduction of g^{i,j}_{l,m} at R^∗_{l,m} and update the function as illustrated in Figure 4.11.

The solution of the ILP defined by eqs. (4.37) to (4.40) overestimates the impact of the non-local warehouses, and therefore our solution is an underestimating solution, which may violate fill rate constraints. The actual reduction of the local reorder points by non-local reorder points is smaller than expressed by g^{i,j}_{l,m} for all (i, j), (l, m).

We have to charge the reduction to the different predecessors because our functions g only

express a relationship between one local and one non-local warehouse. We therefore assume


Figure 4.11: Refinement of Constraint (4.38) for one warehouse (l, m) ∈ P_{i,j}

that Rn,o = 0, (n, o) ∈ Pl,m , and that all intermediate warehouses between (i, j) and (l, m) act

as cross-docks only. Furthermore, we set the variance of the wait time caused by R∗l,m to

0. With these settings, we make sure that we still overestimate the reduction of Ri, j by an

increase of Rl,m and our overall model is still underestimating the original problem. First,

we recalculate the local reorder point Ri, j given R∗l,m for each (l, m) ∈ Pi, j such that the ﬁll

rate constraint is met.

Then we calculate the reduction as r̄^{i,j}_{l,m} = R̂^{l,m}_{i,j} − R_{i,j} + 1 and refine g^{i,j}_{l,m}(R_{l,m}) by inserting the new point (R^∗_{l,m}, r̄^{i,j}_{l,m}), using the following two properties: if R_{l,m} increases by 1, the maximum possible decrease of R_{i,j} is b^{i,j}_{l,m}; if R_{l,m} decreases, a decrease of R_{i,j} is not possible. Finally, we have to update the upper bound ū^{i,j}_{l,m} = max( ū^{i,j}_{l,m}, (R̄_{i,j} − r̄^{i,j}_{l,m})/b^{i,j}_{l,m} + R^∗_{l,m} ).
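The refinement can be sketched as an update of a breakpoint list for g^{i,j}_{l,m} (the data layout is our own; the two capping rules implement the two properties stated above):

```python
def refine_g(breakpoints, r_star, r_bar, b):
    """Insert the recalculated point (R*_{l,m}, r_bar) into the sorted
    (reorder point, reduction) breakpoints of g and cap all reductions:
    left of R* no reduction may exceed r_bar (a decrease of R_{l,m} cannot
    decrease R_{i,j}); right of R* the reduction may grow by at most b per
    unit (maximum possible decrease per unit increase of R_{l,m})."""
    pts = sorted(set(breakpoints) | {(r_star, r_bar)})
    out = []
    for r, red in pts:
        cap = r_bar if r <= r_star else r_bar + b * (r - r_star)
        out.append((r, min(red, cap)))
    return out
```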

We can calculate the actual local reorder points R^act_{i,j}, (i, j) ∈ C, needed to fulfill the fill rate targets given the non-local reorder points from the optimal solution. If we reduce the reorder points R_{l,m}, (l, m) ∈ P_{i,j}, the required local reorder point of (i, j) cannot be smaller:

R_{i,j} ≥ R^act_{i,j},  if R_{l,m} < R^∗_{l,m} for all (l, m) ∈ P_{i,j}.  (4.41)

Furthermore, we can introduce constraints for all other cases, i.e., if we increase some or many R_{l,m}, (l, m) ∈ P_{i,j}:

R_{i,j} ≥ R^act_{i,j} − ∑_{(l,m): R_{l,m} ≥ R^∗_{l,m}} g^{i,j}_{l,m}(R_{l,m}).  (4.42)


Equations (4.41) and (4.42) can be modeled in the ILP as indicator constraints or with the help of auxiliary binary variables. By introducing these new constraints, our ILP is now exact at {R*_{l,m}}, much tighter in the area around this point, and we can guarantee optimality of our algorithm. However, especially if multiple constraints of this type are introduced, runtime increases significantly. The combination of general constraints and SOS constraints of type 2, with the SOS variables appearing in different constraints, seems to be difficult for the solver to tackle.

Because of this computational complexity, we use a somewhat weaker but less complex formulation. For this we solely use the property that a local reorder point can be reduced by at most 1 if a non-local reorder point of a predecessor is increased by 1 (cp. Lemma 20). If we reduce the reorder points R_{l,m}, (l, m) ∈ P_{i,j}, the required local reorder point of (i, j) cannot be smaller:

R_{i,j} ≥ R^{act}_{i,j}, if R_{l,m} < R*_{l,m} f. a. (l, m) ∈ P_{i,j}. (4.43)

Furthermore, we can again introduce constraints for all other cases, i.e., if we increase some

or many Rl,m , (l, m) ∈ Pi, j :

R_{i,j} ≥ R^{act}_{i,j} − ∑_{(l,m): R_{l,m} ≥ R*_{l,m}} (R_{l,m} − R*_{l,m}). (4.44)

Equations (4.43) and (4.44) again have to be modeled with indicator constraints, but we no longer have any entanglement with SOS constraints or variables. Furthermore, if we introduce constraints based on multiple actual solutions, we can use the dependencies between the different constraints to reduce the number of variables and to convey information about the structure of our optimization problem to the solver:

First, we have to introduce a big-M-type constraint and two binary variables b^{smaller}_{l,m} and b^{larger}_{l,m} to model eq. (4.44). For this, we can use the maximum of our already derived upper bounds Ū_{(l,m)} := max_{(i,j)} ū^{i,j}_{l,m} + 1 as a sufficiently large but still small M number:

b^{smaller}_{l,m} + b^{larger}_{l,m} = 1 (4.45)

R_{l,m} ≤ (R*_{l,m} − 1) + Ū_{(l,m)} b^{larger}_{l,m}. (4.46)

With the binary variables b^{smaller}_{l,m} and b^{larger}_{l,m} for each non-local warehouse (l, m) as well as the help of indicator constraints, we can now model eqs. (4.43) and (4.44).

AND{b1 , b2 , ..., bn } denotes an and-constraint which is equal to 1 if all binary variables

b1 , b2 , ..., bn are 1 and 0 otherwise. IND{b → ax ≤ c} denotes an indicator-constraint. The

constraint on the right side, ax ≤ c, only applies if b is 1.
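The AND- and indicator-constraints used here can be linearized in standard ways if a solver does not support them natively; the sketch below checks one common linearization by brute force. The helper names are our own, and the big-M formulation is one standard option, not necessarily the solver feature used in the thesis.

```python
from itertools import product

def and_linearization_holds(bits, y):
    """y = AND{b_1..b_n} linearized as y <= b_i for all i and
    y >= sum(b_i) - (n - 1)."""
    n = len(bits)
    return all(y <= b for b in bits) and y >= sum(bits) - (n - 1)

# For every assignment of the b_i there is exactly one feasible y,
# and it equals the logical AND.
for bits in product([0, 1], repeat=3):
    feasible_y = [y for y in (0, 1) if and_linearization_holds(bits, y)]
    assert feasible_y == [min(bits)]

# IND{b -> a*x <= c} via big-M: a*x <= c + M*(1 - b) with M large enough.
def indicator_holds(b, a, x, c, M):
    return a * x <= c + M * (1 - b)

# If b = 1 the constraint a*x <= c is enforced; if b = 0 it is relaxed.
assert not indicator_holds(1, a=1, x=5, c=3, M=100)   # violated when active
assert indicator_holds(0, a=1, x=5, c=3, M=100)       # relaxed when inactive
```

The text's observation about choosing Ū_{(l,m)} as a "sufficiently large but still small" M is exactly what keeps such big-M relaxations numerically well-behaved.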

Then, eq. (4.43) can be written as

b^{smaller} = AND{b^{smaller}_{l,m} f. a. (l, m)} (4.47)

and

IND{b^{smaller} → R_{i,j} ≥ R^{act}_{i,j}}. (4.48)

Note that we have to introduce eq. (4.47) only once for all local warehouses with the same

predecessors, while we set up eq. (4.48) for each local warehouse.

For eq. (4.44), we have to introduce a similar set of constraints for each combination of Rl,m

being smaller or larger than R∗l,m , except where all Rl,m are smaller, as this is already covered

by eqs. (4.47) and (4.48). In a 3-level network, for example, we have 3 additional possible

combinations. Let m be the index of the predecessors of a local warehouse (i, j) at the ﬁrst

level and n be the index of the predecessor at the second level. Then,

b^{case1} = AND{b^{smaller}_{1,m}, b^{larger}_{2,n}}, (4.49)

b^{case2} = AND{b^{larger}_{1,m}, b^{smaller}_{2,n}}, (4.50)

b^{case3} = AND{b^{larger}_{1,m}, b^{larger}_{2,n}} (4.51)

and

IND{b^{case1} → R_{i,j} ≥ R^{act}_{i,j} − R_{2,n}}, (4.52)

IND{b^{case2} → R_{i,j} ≥ R^{act}_{i,j} − R_{1,m}}, (4.53)

IND{b^{case3} → R_{i,j} ≥ R^{act}_{i,j} − R_{1,m} − R_{2,n}}. (4.54)

Again, eqs. (4.49) to (4.51) only have to be introduced once for each possible set of prede-

cessors, while eqs. (4.52) to (4.54) are set up for each local warehouse.

Because eqs. (4.47) and (4.49) to (4.51) are linking all local warehouses for which some or

all predecessors are the same, we replicate the network structure and the dependencies of

our distribution network within the ILP model.

If we introduce constraints of this sort based on more than one solution {R*_{l,m}}, R*_{l,m} can be equal to the same value r for some warehouses in some solutions. We can then reuse the same b^{smaller}_{l,m} and b^{larger}_{l,m} and thereby convey information about the interdependence between the constraints.

We start the construction of our algorithm with some key observations. After obtaining

an optimal solution for the ILP deﬁned by eqs. (4.37) to (4.40), which underestimates the

real problem deﬁned by eqs. (4.12) and (4.13), we can always obtain a valid solution by

recalculating all local reorder points, such that the ﬁll rate targets are met. We can use this

solution to calculate an optimality gap:

Lemma 27. Let Z ∗ be the objective value for the solution of the ILP and let Z act be the

objective value taking into consideration the recalculated local reorder points. Z act − Z ∗ is

then a bound for the absolute optimality gap of the solution of the current ILP.


Our algorithm (Algorithm 28) iteratively strengthens the ILP by refining the piecewise linear functions and introducing new constraints. It is subdivided into two parts: during the first part we use the structural properties derived in the previous section to narrow down the area in which the optimal solution may be located. In the second part, we use the actual local reorder points needed to fulfill eq. (4.13) to strengthen our model. By doing so we reduce the optimality gap and can even guarantee optimality.

1. Construct initial ILP (eqs. (4.37) to (4.40)) as described in Section 4.2.3.

2. Complete ﬁrst reﬁnement process:

a) Solve the ILP and obtain the solution R*_{i,j} for all (i, j) and the resulting objective value z*.

b) If the solution of the ILP R∗i, j for all (i, j) did not change compared to the previous

iteration, go to Step 3a.

c) Otherwise, refine the functions g^{i,j}_{l,m}(R_{l,m}) as described in Section 4.2.3 and continue with Step 2a.

3. Complete second reﬁnement process:

a) Based on all non-local reorder points R*_{i,j}, (i, j) ∉ C, calculate the actual local reorder points R̃_{i,j}, (i, j) ∈ C needed to fulfill the fill rate targets of eq. (4.13) and calculate the objective value z^{act} of this solution.

b) If z^{act} = z* or the gap between the two values is as small as desired, return R*_{i,j}, (i, j) ∉ C, R̃_{i,j}, (i, j) ∈ C as the solution. Otherwise, go to Step 3c.

c) Use R̃i, j , (i, j) ∈ C to introduce new constraints to the ILP based on actual solutions

as described in Section 4.2.3.

d) Solve the ILP and obtain the solution R*_{i,j} for all (i, j) and the resulting objective value z*. Go to Step 3a.

In Step 3 the right implementation is key to keeping runtime low. By setting up the model in a solver once, solving it to optimality, then adding the additional constraints to this model and re-solving, information from the previous solving process is retained by the solver.
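The solve-recalculate-cut structure of the algorithm can be illustrated on a deliberately small 2-level toy instance. The numbers, cost weights and the actual-reorder-point function below are made up for illustration; the "ILP" is replaced by a brute-force search so the sketch stays self-contained. It is a sketch of the loop's logic, not the thesis' real model.

```python
H_CENTRAL, H_LOCAL = 1, 3          # illustrative holding cost weights
R_RANGE = range(0, 31)             # search grid for the central reorder point

def r_actual(rc):
    """Toy actual local reorder point needed to meet the fill rate target;
    it decreases by at most 1 per unit increase of the central point."""
    return max(0, 10 - rc // 2)

# Lower-bound functions on the local reorder point (valid underestimates).
cuts = [lambda rc: max(0, 10 - rc)]          # initial maximum-impact bound

def solve_model():
    """Brute-force 'ILP': cheapest grid point respecting all cuts."""
    best = min((H_CENTRAL * rc + H_LOCAL * max(c(rc) for c in cuts), rc)
               for rc in R_RANGE)
    return best[1], best[0]                   # (central point, objective)

while True:
    rc_star, z_model = solve_model()          # underestimating solution
    z_actual = H_CENTRAL * rc_star + H_LOCAL * r_actual(rc_star)
    if z_actual == z_model:                   # gap closed: provably optimal
        break
    r_act = r_actual(rc_star)                 # add a cut around rc_star:
    # constant for rc < rc* (cp. eq. (4.43)), slope -1 beyond (cp. (4.44))
    cuts.append(lambda rc, rs=rc_star, ra=r_act: ra - max(0, rc - rs))

true_opt = min(H_CENTRAL * rc + H_LOCAL * r_actual(rc) for rc in R_RANGE)
assert z_model == z_actual == true_opt
```

Because every cut is a valid underestimate, z* stays a lower bound throughout, and the loop stops exactly when Lemma 27's gap bound reaches zero.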

One special case has to be taken into consideration: if the first solution found in Step 2a is 0 for all R_{i,j}, (i, j) ∉ C, the functions g cannot be refined because of their initial design; they are already constructed from a starting point of 0. Then, new constraints as in Step 3 should be introduced for the R_{i,j} = 0, (i, j) ∉ C before continuing with the first refinement process (Step 2).


Performance improvements

Tighter bounds of binary search Recall that we have to run binary searches on eq. (4.13) in both stages of Algorithm 28 to refine our ILP. By doing so, we determine the minimal R_{i,j} that leads to a fill rate higher than the fill rate target for given demand characteristics. Those demand characteristics stem either from the set-up or refinement of eq. (4.34), or from a given set of non-local reorder points when we calculate actual local reorder points as described in Section 4.2.3.

During the execution of Algorithm 28 we keep a list for each local warehouse (i, j) that links each R_{i,j} computed by a binary search to the mean and variance of lead

time demand used to compute it. At the beginning of each call to a binary search, we then

check if that list has entries where mean and variance are larger than what we currently

consider and take the Ri, j with the smallest mean as an upper bound. Analogously, we

check if the list has entries where mean and variance are smaller and take the Ri, j with the

largest mean as lower bound. The mean and variance of lead time demand are the two

decisive input factors besides the non-local reorder points needed to run the binary search

(cp. Section 2.2.4) as they are needed to calculate the ﬁll rate (cp. eq. (2.13)).

We are left with the case of determining good bounds when no usable information from

previous searches is available. For this, we reuse the heuristic developed for the 2-level case

(cp. Section 4.1.3), which can easily be modiﬁed for the n-level case. For the lower bound

we can always use 0 by deﬁnition in absence of any more usable information.

With the help of these two techniques, we can decrease the runtime of the binary searches

and therefore of our algorithm signiﬁcantly.
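The warm-started binary search can be sketched as follows, assuming a toy fill rate function that increases in the reorder point and decreases in the mean and variance of lead time demand. The function names and the dominance rule shown are a simplification of the bookkeeping described above.

```python
# Sketch: memoize every computed (mean, variance) -> R and use dominance to
# derive bounds for the next search. A dominated (easier) instance yields a
# lower bound, a dominating (harder) instance an upper bound.

def min_reorder_point(fill_rate, target, mean, var, memo, r_max=10_000):
    lo, hi = 0, r_max
    for m, v, r in memo:                      # tighten bounds via dominance
        if m <= mean and v <= var:
            lo = max(lo, r)                   # easier instance: lower bound
        if m >= mean and v >= var:
            hi = min(hi, r)                   # harder instance: upper bound
    while lo < hi:                            # smallest R with rate >= target
        mid = (lo + hi) // 2
        if fill_rate(mid, mean, var) >= target:
            hi = mid
        else:
            lo = mid + 1
    memo.append((mean, var, lo))
    return lo

# Toy monotone fill rate: increases in R, decreases in mean and variance.
def toy_fill_rate(r, mean, var):
    return r / (r + mean + var)

memo = []
r1 = min_reorder_point(toy_fill_rate, 0.9, mean=10, var=5, memo=memo)
r2 = min_reorder_point(toy_fill_rate, 0.9, mean=8, var=4, memo=memo)
assert r2 <= r1        # smaller demand never needs a larger reorder point
```

The second search starts with the upper bound 135 inherited from the first, memoized instance instead of r_max, which is exactly the runtime saving described above.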

Tweaks for large solution spaces If the solution space is large, the runtime of our al-

gorithm may still not be sufﬁciently short. In that case, a number of performance tweaks

are possible.

If a greatest common divisor q > 1 of the order quantities of all warehouses exists, the entire

system should be expressed in units of q and this shrinks the solution space by the factor

q. This is a common method in inventory management. However, if no such q exists and order quantities are large, we may still express the entire system in terms of some factor q̂ if the rounding error is small. In our experience, the objective function is flat around the

optimum and if the “rounding” error is small, we will deviate little from the optimal value,

especially if the rounding errors occur only at a limited number of warehouses. When

looking at real-world data, we often found that ﬁnding a greatest common divisor q > 1

would have been possible if not for one or two warehouses which had “odd” order quantities. This

may be better illustrated with an example: Imagine a situation where the order quantities

of all warehouses except one have the greatest common divisor of q = 10. The missing

warehouse has an order quantity of Q_{i,j} = 78. Expressing the system in units of q would lead to Q_{i,j} = 7.8, which is not possible due to the integrality constraint. We have to round

up to Qi, j = 8. However, this is only a deviation of 2.6% from the original order quantity. By


rounding we speed up the computation of our algorithm dramatically, but we have to check

the found solution for ﬁll rate violations and increase local reorder points if necessary.
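The rescaling tweak can be sketched with the example numbers above (a common divisor of 10 for all but one warehouse, whose order quantity is 78); the function name is our own.

```python
from math import gcd
from functools import reduce

# Sketch: express all order quantities in units of a common factor q̂ and
# report the relative rounding error of each rescaled quantity.

def rescale(quantities, q_hat):
    scaled = [max(1, round(q / q_hat)) for q in quantities]
    errors = [abs(s * q_hat - q) / q for s, q in zip(scaled, quantities)]
    return scaled, errors

quantities = [20, 50, 130, 78]
q = reduce(gcd, quantities)            # the exact gcd is only 2 here
assert q == 2
scaled, errors = rescale(quantities, q_hat=10)   # accept a small error
assert scaled == [2, 5, 13, 8]                   # 78 -> 8 units of 10
assert round(max(errors) * 100, 1) == 2.6        # the 2.6% deviation above
```

As noted in the text, a solution found on the rescaled system must afterwards be checked for fill rate violations in original units.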

Especially during the second refinement process (Step 3 of Algorithm 28), solving the ILP for some iterations may take a long time. In these situations, the solver usually finds a very good solution within a very short time and then takes a long time to prove optimality of that solution or to find a solution which is only slightly better. It is then valuable to introduce an acceptable optimality gap and/or a runtime limit. Recall that during this refinement process we use the same optimization model during each iteration and only add additional constraints to it. Therefore, the solver retains the information from previous iterations. If we are interested in the optimal solution or cannot find a solution within the desired solution gap (between actual and ILP solution), we can decrease the optimality gap and/or increase the runtime limit gradually during later iterations.

The heuristics based on the idea of strengthening the underestimating model by using insights on the step size, as done for the 2-echelon case in Section 4.1.6, can also be applied to the n-echelon case. But, as before, the extension to n echelons is not straightforward and has to be done carefully.

Once we have derived a candidate for the minimal step size, we have three different possibilities for how to use this information.

Firstly, we can introduce additional step size constraints, similar to Algorithm 24 for the

2-echelon case. In this case, the expectation is that our underestimating ILP, described by

eqs. (4.37) to (4.40), is tighter from the beginning and we move faster in the direction of the

optimal solution.

Secondly, we can use the step size directly to derive a slope that can be used in the second

reﬁnement process (Step 3) of the original Algorithm 28. Instead of using a slope of 1 in the

construction of constraints based on the optimal solution (cp. Section 4.2.3), we use the new

smaller slope. This option can be combined with the additional step size constraints, such

that we already beneﬁt from the step size information during the ﬁrst reﬁnement process of

Algorithm 28.

Thirdly, we can build a completely new algorithm that is solely based on the step size.

The challenge here is to model the connections between different preceding warehouses,

as the step size constraint only models a 1 : 1 connection between a local and a non-local

warehouse.

All three possibilities can implement a correction mechanism as done for the 2-echelon case.

We start by describing the design of the “guessing process” of the minimal step size before

sketching the three different possible heuristics.

Derive candidate for minimal step size

For the n-echelon case it is in general difficult to gain insights into the structure, as was possible for the 2-echelon case, because it is no longer feasible to explore sample problems by enumeration. We are therefore left with applying the insights gained from the 2-echelon analysis and from the construction of the original n-echelon optimization model.

From the analysis of the n-echelon optimization problem in Section 4.2.2, we know that a

preceding warehouse has the biggest impact on a local warehouse if the warehouses above

the preceding warehouse are empty. Experience from the computation of a step size in the

2-echelon case shows that the smallest step sizes are obtained if the reorder point Rl,m of

the preceding warehouse is in a medium range. In the 2-level case, we have simply chosen

a suitable Rl,m from the initialization values.

However, we do not have initial values for the non-local reorder points in the n-echelon

algorithm. Instead, we construct the functions g^{i,j}_{l,m}(R_{l,m}) during initialization as described in “Constructing an underestimating ILP” in Section 4.2.3. Recall that those functions

overestimate the impact of the warehouse (l, m) on the local warehouse (i, j) and therefore

underestimate the actually needed reorder points. This underestimation is usually quite

drastic in the beginning due to the assumption of a maximum impact of an increase in Rl,m ,

which is never obtained in reality.

We therefore propose using the initial upper bound ū^{i,j}_{l,m}, up to which R_{l,m} has an impact, as a starting point to compute the step size, and using the upper bound of the lead time L_{l,m} as done during the initialization of the n-echelon optimization algorithm (Algorithm 26). It is then straightforward to apply Algorithm 23 from the 2-level case.

Alternatively, the starting point of the step size computation can be based on the sum of the

expected local demand during local lead time of all local warehouses (i, j) that have the non-

local warehouse (l, m) as a predecessor. We calculate D_{l,m} = ∑_{(i,j)∈C | (l,m)∈P_{i,j}} E[D_{i,j}(L_{i,j})]. We can then take this value or a multiple of it as a starting point. If we want to mimic the logic of the 2-level case, we take (1/4)·D_{l,m} as a starting point.
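The computation of D_{l,m} can be sketched on a small made-up network; the warehouse names and demand values below are purely illustrative.

```python
# Sketch: D_{l,m} is the sum of expected local lead time demand E[D_{i,j}(L_{i,j})]
# over all local warehouses (i, j) that have (l, m) as a predecessor.

predecessors = {            # local warehouse -> its non-local predecessors
    "local_1": ["regional_A", "central"],
    "local_2": ["regional_A", "central"],
    "local_3": ["regional_B", "central"],
}
expected_lt_demand = {"local_1": 12.0, "local_2": 8.0, "local_3": 20.0}

def aggregate_demand(predecessors, demand):
    d = {}
    for local, preds in predecessors.items():
        for p in preds:
            d[p] = d.get(p, 0.0) + demand[local]
    return d

D = aggregate_demand(predecessors, expected_lt_demand)
assert D["regional_A"] == 20.0          # 12 + 8
assert D["central"] == 40.0             # all three local warehouses
# Starting point mimicking the 2-level logic: one quarter of D_{l,m}.
assert D["regional_B"] / 4 == 5.0
```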

Algorithm 29 (Computation of step size for the n-echelon case). For each predecessor

(l, m) of a local warehouse (i, j), we calculate the following:

1. Set the lead time Ll,m of the predecessors (l, m) to its upper bound, i.e., as the sum of

all transportation times to this predecessor, and assume all intermediate warehouses

between (l, m) and (i, j) act as cross-docks only, i.e., full transportation time on this path

applies.

2. Calculate the step size s^{i,j}_{l,m} assuming R^{init}_{l,m} = ū^{i,j}_{l,m}, using Algorithm 23.


Option 1: Heuristic to introduce additional step size constraints in the original model

Once we have derived the step sizes for each connection between local and non-local ware-

houses as described in the previous section, it is straightforward to introduce similar con-

straints as done in the 2-echelon case in Section 4.1.6.

We also do not need to add any additional constraint to prevent an overreduction of local

reorder points by increasing several non-local reorder points too much, as this is already

taken care of by eq. (4.40) in the original ILP.

We therefore only introduce the constraint

R_{i,j} ≥ R̄_{i,j} − ∑_{(l,m)∈P_{i,j}} (1/(s^{i,j}_{l,m} − 1)) R_{l,m} for all (i, j) ∈ C. (4.55)

While we can compute the actual solution based on the non-local reorder points and there-

fore detect a violation of the assumption of underestimation, we have no easy way to correct

this violation as in the 2-echelon case. We need to charge the violation to the different pre-

ceding warehouses of a local warehouse to do so. Due to the lack of any sophisticated

model for the charging process, we simply suggest increasing (1/(s^{i,j}_{l,m} − 1)) in parallel for all preceding warehouses until the violation is resolved.

To check for a violation, we have to perform additional binary searches during every itera-

tion of the ﬁrst reﬁnement process, while no additional effort is needed during the second

refinement process. It is therefore an open question when the constraints should be added, especially as we expect that eq. (4.55) will dominate eq. (4.38) during at least the first iterations of the first refinement process. It will therefore often be the binding constraint of the solution, and the check for a violation would be essential. Further analysis is needed to determine whether this additional computational effort during the first refinement process is justified by a faster solving process.

Option 2: Heuristic to use the slope derived from the step size in the original model

During the second refinement process of our two-step optimization Algorithm 28, we implicitly use a slope of −1 when we introduce constraints starting from an actual solution in case we increase one or several of the non-local reorder points that precede a local warehouse (cp. eq. (4.44)). We can improve eq. (4.44) by using s^{i,j}_{l,m}:

R_{i,j} ≥ R^{act}_{i,j} − ∑_{(l,m): R_{l,m} ≥ R*_{l,m}} (1/(s^{i,j}_{l,m} − 1)) R_{l,m}. (4.56)

Even if we use the more complex approach of eq. (4.42), we recommend replacing the

constraint by eq. (4.56). We expect the constraint to be tighter and it is easier to handle by a

solver.


If eq. (4.56) is a binding constraint, we can check for a violation using the actual solution, which is computed during each iteration of the second refinement process anyway. We can then use the same correction mechanism as described for option 1.

Option 2 can also be easily combined with option 1, no matter if the constraints of option 1

are already added during the ﬁrst reﬁnement process or only during the second reﬁnement

process.

The last and most radical option is to construct an optimization heuristic from scratch based on the step size and to drop the original idea. The original model uses the approach of assuming the maximal impact of each non-local reorder point on a local reorder point. We iteratively improved this underestimating solution by adding new constraints.

Similarly, we can start with an ILP with eq. (4.55) as constraints. However, we now need to find a new way to decide in which area to compute the step size, as ū^{i,j}_{l,m}, which was used in “Derive candidate for minimal step size” in Section 4.2.4, is not available anymore. We therefore fall back to the alternative starting point using the sum of expected local lead time demand in the respective subnetwork.

Our new initial underestimating ILP is described by eqs. (4.57) to (4.59), with the constraint

R_{i,j} ≥ R̄_{i,j} − ∑_{(l,m)∈P_{i,j}} (1/(s^{i,j}_{l,m} − 1)) R_{l,m} for all (i, j) ∈ C. (4.58)

As in the original model, a way to prevent an over-reduction of a local reorder point by several non-local reorder points is at least beneficial. We can no longer fall back on eq. (4.40) to take care of this issue. However, as we could not come up with another approach, we

rely only on the lower bound (eq. (4.59)). Any over-reduction will be taken care of by the

reﬁnement process, which is described in the following and which basically renders every

solution based on an over-reduction infeasible in the subsequent reﬁnement step.

Our new proposed heuristic is summarized by Algorithm 30 and makes heavy use of the

insights obtained in this and the previous section. We compute the step sizes and start

by solving our new initial ILP. Then we calculate the actual local reorder points needed

to fulﬁll the ﬁll rate targets. We use the actual solution to ﬁrst check for a violation of

the underestimating assumption by the step size constraints and correct the constraints if

necessary. We then use the actual reorder points to reﬁne our ILP similar to the second

reﬁnement process of Algorithm 28 using the improved eq. (4.56), which is based on the

step size as well. We can terminate our algorithm if the “optimality gap” is 0 or as small as


desired. Recall that we cannot guarantee optimality anymore; the optimality gap only describes the gap between the solution of the approximate ILP and the actual solution at the current position.

Algorithm 30 (n-echelon step size heuristic).

1. Calculate D_{l,m} = ∑_{(i,j)∈C | (l,m)∈P_{i,j}} E[D_{i,j}(L_{i,j})] for each non-local warehouse.

2. Compute the step sizes s^{i,j}_{l,m} using the starting points derived from D_{l,m}.

3. Set up and solve the initial ILP described by eqs. (4.57) to (4.59).

4. Complete refinement process:

a) Based on all non-local reorder points R*_{i,j}, (i, j) ∉ C, calculate the actual local reorder points R̃_{i,j}, (i, j) ∈ C needed to fulfill the fill rate targets of eq. (4.13) and calculate the objective value z^{act} of this solution.

b) Check for a violation and, if a violation is detected, correct the size of the step size as described in “Option 1: Heuristic to introduce additional step size constraints in the original model”.

c) If no violation was detected and z^{act} = z* or the gap between the two values is as small as desired, return R*_{i,j}, (i, j) ∉ C, R̃_{i,j}, (i, j) ∈ C as the solution. Otherwise, go to Step 4d.

d) Use R̃i, j , (i, j) ∈ C to introduce new constraints to the ILP based on the actual solution

as described in “Reﬁning the ILP based on actual solutions” in Section 4.2.3 using

the improved eq. (4.56).

e) Solve the ILP and obtain the solution R*_{i,j} for all (i, j) and the resulting objective value z*. Go to Step 4a.

5 Experimental results

In this chapter, we compare the quality of the different wait time approximations presented

in Section 3.2. We focus especially on the conditions in which each approximation performs

best in approximating the mean and variance of the wait time.

We also test the algorithms developed in Chapter 4: We analyze in-depth the structure of

the optimal solution for the 2-level and the n-level case. For the 2-level case, we also

investigate the runtime of the different algorithms and especially compare the heuristics

with the original optimization algorithm.

To be able to obtain all these experimental results, we have developed a simulation tool, which was built to imitate the behavior of the considered real-world supply chain as closely as possible. One major strength of the results presented here is the use of real-world data: on the one hand we solve real-world problems, and on the other hand we test the obtained solutions with real-world historical demand data.

5.1 Set-up of simulation

Our network simulates a divergent inventory system where all warehouses use an (R,Q)-order

policy. In the real-world network considered for this thesis ordering is done in continuous

time, for end customers as well as warehouses. However, the transports between the ware-

houses internally in the network usually have speciﬁed cut-off times, i.e., the time when the

truck, freight car or container is closed and leaves. To mimic this behavior, we have therefore chosen to work with discrete time steps and to implement a modified first come, first served (FCFS) rule for the handling of orders. Orders are generally served in the order of appearance. However, as we only allow complete orders to be fulfilled, it is possible that a large earlier order can no longer be fulfilled while a smaller later order still can. In this case we do not want the large order to block all subsequent orders. We therefore deviate from the FCFS rule and use the remaining stock on hand to fulfill smaller orders. As long as the replenishment order size Q_i is sufficiently large, this does not hurt the customer with the large order, who has to wait for the next replenishment anyway, while all customers with smaller orders benefit from this rule.
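The modified FCFS rule can be sketched as follows; the function and variable names are our own.

```python
# Sketch: orders are served in order of appearance, but an unfillable large
# order does not block smaller later orders, which may still be served from
# the remaining stock on hand.

def serve_orders(stock, orders):
    """Return (fulfilled order sizes, backorders, remaining stock)."""
    fulfilled, backorders = [], []
    for qty in orders:                 # orders in sequence of appearance
        if qty <= stock:               # only complete orders are fulfilled
            stock -= qty
            fulfilled.append(qty)
        else:
            backorders.append(qty)     # waits for the next replenishment
    return fulfilled, backorders, stock

fulfilled, backorders, rest = serve_orders(8, [4, 6, 3])
assert fulfilled == [4, 3]             # the later, smaller order is served
assert backorders == [6]               # the large order waits
assert rest == 1
```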

Figure 5.1 shows the schematic process diagram of our simulation. Our goal was to simulate the behavior of real-world warehouses, which is why we have chosen to move in discrete time steps and have a specified order of receiving shipments, fulfilling demands and placing new orders. In our application the discrete time step is one day, but it could also be any other time unit.

The input data shown in Table 5.1 is needed for each warehouse (i, j) in the network. In the following, we describe the simulation and especially the process steps S1-S4 of Figure 5.1 in more detail.

C. Grob, Inventory Management in Multi-Echelon Networks,

AutoUni – Schriftenreihe 128, https://doi.org/10.1007/978-3-658-23375-4_5


Table 5.1: Input data for each warehouse (i, j)

Reorder points R_{i,j}
Order quantities Q_{i,j}
Expected transportation time E[T_{i,j}]
Variance of transportation time Var[T_{i,j}]

We initialize inventory on hand at each warehouse with R_{i,j} + 1. Then, new orders are triggered as soon as we have the first incoming demands at the local warehouses, and orders are propagated throughout the network. This ensures that our system moves towards a “steady state” quickly. All other variables are initially set to 0 and there are no shipments on the way at the start of the simulation.

First, in process step S1, we check for each warehouse in our network if there are any

incoming shipments for that speciﬁc day and add them to the inventory on hand.

In process step S2, we check for each warehouse if there are any outstanding backorders. Here, we have to follow a different procedure for local warehouses, which fulfill customer demand, and for non-local warehouses, which only handle internal demand. For local warehouses,

we use our inventory on hand to serve backorders as far as possible. For non-local ware-

houses, we use our inventory on hand to fulﬁll orders of succeeding warehouses. In this

case, if we fulﬁll an order of a successor, we also have to schedule a shipment. For this we


have to draw a random number t given the mean and variance of the transportation time and an assumed distribution. In our simulations, we assumed a gamma distribution of transportation times and round the drawn random number to the nearest integer. The order is then scheduled to arrive t days in the future. The minimal possible transportation time is one day.

Afterwards, in step S3, we iterate over all local warehouses and try to fulfill the customer demand of that day first come, first served. Here, our simulation allows for two modes of operation: the first one is based on historical demand data, while the second one is based on artificial demand data drawn from random distributions.

In the ﬁrst case, the historical demand data for all local warehouses for a certain period of

time is given and on each day we try to fulﬁll all (historical) demand of that day in the order

it occurred. In the second case, we assume compound Poisson demand. Therefore, we need

the mean of the Poisson arrival process and mean and variance of the order size of each

arrival. We ﬁrst draw the number of customers arriving from the Poisson distribution (cp.

Appendix A.2.5) and then draw the order size for each arriving customer. The logarithmic

distribution (cp. Appendix A.2.3) is used as an order size distribution. We then have a

negative binomial distribution of lead time demand and can use the ﬁll rate formulas from

Section 2.2.3. When using real-world data, estimates for mean and variance are available

from the inventory planning system.
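The artificial demand mode can be sketched with stdlib-only samplers for the Poisson arrivals and the logarithmic order sizes; the parameter values and function names below are illustrative, and we only check the sample mean against the analytical mean of the resulting compound distribution.

```python
import math
import random

def sample_poisson(rng, lam):
    """Knuth's multiplication method; fine for small arrival rates."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def sample_logarithmic(rng, p):
    """Inverse-CDF sampling of the logarithmic distribution on {1, 2, ...},
    using P(k)/P(k-1) = p*(k-1)/k with P(1) = -p/ln(1-p)."""
    u = rng.random()
    k, pk = 1, -p / math.log(1 - p)
    cdf = pk
    while u > cdf:
        k += 1
        pk *= p * (k - 1) / k
        cdf += pk
    return k

def daily_demand(rng, lam, p):
    """Compound Poisson demand of one period (one day)."""
    n = sample_poisson(rng, lam)
    return sum(sample_logarithmic(rng, p) for _ in range(n))

rng = random.Random(42)
lam, p = 3.0, 0.5
samples = [daily_demand(rng, lam, p) for _ in range(100_000)]
mean = sum(samples) / len(samples)
expected = lam * (-p / ((1 - p) * math.log(1 - p)))   # lam * E[order size]
assert abs(mean - expected) / expected < 0.02
```

Poisson arrivals with logarithmically distributed order sizes yield negative binomial demand per period, which is why the fill rate formulas from Section 2.2.3 apply.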

In the last step S4, we check if the inventory position, i.e., inventory on hand plus inventory on order minus backorders, at the respective warehouse is below or equal to the reorder point. If this is the case, we place orders of size Q_{i,j} at the preceding warehouse until the inventory position is raised above R_{i,j} again. If the preceding warehouse has enough inventory on hand, we fulfill the demand and schedule a shipment in the future as done in S2 for non-local warehouses.
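Process step S4 can be sketched as follows; the helper name is our own.

```python
# Sketch: when the inventory position (on hand + on order - backorders) is at
# or below the reorder point R, orders of size Q are placed until the
# position is raised above R again.

def orders_to_place(on_hand, on_order, backorders, R, Q):
    position = on_hand + on_order - backorders
    n = 0
    while position <= R:
        position += Q
        n += 1
    return n

assert orders_to_place(on_hand=2, on_order=0, backorders=0, R=5, Q=3) == 2
assert orders_to_place(on_hand=9, on_order=0, backorders=0, R=5, Q=3) == 0
assert orders_to_place(on_hand=4, on_order=3, backorders=2, R=5, Q=3) == 1
```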

Table 5.2 summarizes all parameters that can be configured for the simulation and gives the default values that are used if not mentioned otherwise. We use a warm-up period, which should be a multiple of the longest transportation time in the network to allow for a few order cycles to happen. We discard all performance measures obtained during the warm-up period.

As initial inventory we choose R + 1. One could argue that this creates a bias and it should

rather be set uniformly between R + 1 and R + Q. However, setting the initial inventory at

R + 1 causes the ordering cycle to start with the next demand at the respective warehouse

and steady state will be reached faster. Setting initial inventory uniformly between R + 1

and R + Q, on the other hand, would also create a bias in a different direction, as the inventory position and not the inventory level would be uniformly distributed between the two. The reorder

points are computed prior to the simulation based on forecasting data. During the course

of the simulation we can update reorder points based on updated forecasting data but we

never use the results of the simulation to modify reorder points. Therefore, we do not need to differentiate between in-sample and out-of-sample data. We only use the warm-up period to get the system close to steady-state behavior.

For each warehouse we obtain a number of performance measures as a result of the simulation, as shown in Table 5.3. Note that all performance measures exclude the warm-up period.


Table 5.2: Simulation parameters (with units and default values)

Transportation time distribution: gamma
Initial inventory (pieces): R + 1
Warm-up period (time): –
Runtime (time): –
Demand type: random or historical
Order size distribution: logarithmic

The order ﬁll rate of a warehouse can be directly computed as orders fulﬁlled/total orders

(cp. Deﬁnition 12).

Table 5.3: Performance measures per warehouse

Average inventory on hand: average inventory on hand over all time periods
Average inventory on order: average of outstanding orders over all time periods
Average backorders: average of backorders over all time periods
Total orders: sum of incoming orders
Orders fulfilled: sum of incoming orders that were fulfilled on the same day
Wait time: mean and variance of the times a warehouse had to wait for replenishment orders, from the time the order was made until it was shipped

5.2 Comparison of wait time approximations

In this section, we run simulations solely to test the performance of the different wait time

approximations. The results of the numerical experiments presented here are part of a joint

publication with Andreas Bley [GB18a]. In our experiments, we considered a 2-level net-

work and prescribed several central ﬁll rate targets. We then calculated the reorder points

for all warehouses such that the given local ﬁll rate targets are met. The local reorder points

depend on the wait time, so different wait time approximations will lead to different local

reorder points. Figure 5.2 illustrates the process of calculating reorder points given a pre-

scribed central ﬁll rate, a wait time approximation and local ﬁll rate targets. We want to

emphasize that for the same prescribed central ﬁll rate the resulting central reorder point

is the same for all wait time approximations. The choice of approximation method only

affects the local reorder points. Hence, the results are directly comparable.

The presentation of our experimental results is split in two sections. First, we try to establish

a general understanding in which situation which approximation has the highest accuracy.

For this purpose, we have set up experiments using an artificial network and random data drawn from certain distributions.

5.2 Comparison of wait time approximations 73

Figure 5.2: Process of calculating reorder points for the simulation given a prescribed central fill rate target, a wait time approximation and local fill rate targets

In a second step, we try to verify our findings using the

network and historical demand data of a real-world industrial supply chain. The second

step is much more challenging than the ﬁrst one, as the real-world demand data is “dirty”.

Wagner coined this term to describe that real-world demand data usually contains many

irregularities, which often cannot be adequately represented by drawing random numbers

based on a ﬁxed probability distribution [Wag02]. The real-world demand data therefore

can be seen as a more challenging environment and, consequently, a worse accuracy of the

distribution-based wait time approximations should be expected. Nevertheless, any method

that claims to have value in practical applications should be veriﬁed within a practical set-

ting. To the best of our knowledge, we are the ﬁrst to report on numerical tests of wait time

approximations in a real-world setting with real-world demand data.

To evaluate the accuracy of the wait time approximations regarding the mean and standard

deviation, we use two measures which we will deﬁne in the following for the NB approx-

imation and which will be analogously used for all other approximations. Let μNB (t) and

σNB (t) be the computed estimators of the NB approximation for the mean and standard

deviation of the wait time for test case t. Analogously, let μSIMU (t) and σSIMU (t) be the

simulated mean and standard deviation of the wait time for test case t. In Section 5.2.1, we

consider random demand data. Here, μSIMU and σSIMU are the averages of the simulated

values of all instances (in the respective group of instances) that were simulated. We calcu-

late the error and the absolute error of the mean of the wait time for the NB approximation

for a set T of test cases as

Error = (1/|T|) ∑_{t∈T} (μNB(t) − μSIMU(t))   (5.1)

and

Absolute Error = (1/|T|) ∑_{t∈T} |μNB(t) − μSIMU(t)|.   (5.2)

In the same fashion, we calculate error measures for the standard deviation and all other

approximation methods. While eq. (5.1) conveys information about the average direction of

the errors, eq. (5.2) conveys information about the average size of the errors.
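A minimal sketch of eqs. (5.1) and (5.2), with `computed` and `simulated` as illustrative lists of per-test-case wait time means:

```python
def error_measures(computed, simulated):
    """Average error (5.1) and average absolute error (5.2) of a wait
    time approximation over a set of test cases."""
    n = len(computed)
    error = sum(c - s for c, s in zip(computed, simulated)) / n
    abs_error = sum(abs(c - s) for c, s in zip(computed, simulated)) / n
    return error, abs_error
```

With computed = [2, 4] and simulated = [3, 3], the error is 0 while the absolute error is 1, illustrating that (5.1) captures the average direction and (5.2) the average size of the deviations.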


Our objective in the experiments with random demand data is two-fold: First, we want

to evaluate the accuracy of the given wait time approximations in different situations. In

practice, however, managers do not care much about the wait time as a performance measure; they care much more about whether the resulting fill rate targets are fulfilled. As a second objective, we

thus also want to ﬁnd out if using the wait time approximations to determine reorder points

leads to small or big errors in the resulting ﬁll rates. For this, we ﬁrst compute the local

reorder points for each test scenario and target ﬁll rate using a wait time approximation, and

then, in a second step, compare this with the actual ﬁll rate observed for these reorder points

in a simulation.

Set-up of experiments

In order to compare the quality of the different wait time approximations for random demand

data, we consider a 2-level network, where warehouse 0 is the central warehouse. The

demand characteristics of the default case are shown in Table 5.4. In this table, μi and σi² denote the mean and the variance of the demand for one day.

Table 5.4: Characteristics of the warehouses in the default case

Warehouse   μi   σi²   Qi    β̄i    Lead time mean   Lead time variance   pi
0           -    -     500   -     60               30                   0.5
1           2    4     50    0.9   5                3                    1
2           3    6     50    0.9   5                3                    1
3           4    8     100   0.9   5                3                    1
4           5    10    100   0.9   5                3                    1
5           6    12    150   0.9   5                3                    1
6           7    14    150   0.9   5                3                    1
7           8    16    200   0.9   5                3                    1
8           9    18    200   0.9   5                3                    1

From these values, one can

easily derive the necessary parameters θ of the logarithmic order size and λ of the Poisson

arrivals as

θi = 1 − μi/σi²   (5.3)

and

λi = −μi (1 − θi) log(1 − θi) / θi.   (5.4)

We have chosen to represent daily demand by its mean and variance, as this representation

is probably more intuitive for practitioners. One can easily grasp the size and the variability

of demand in the network.
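The fit of (5.3) and (5.4) can be sketched as follows; the check at the end verifies that the Poisson rate times the mean logarithmic order size reproduces the daily demand mean (a sketch, assuming σi² > μi so that θi is a valid parameter):

```python
import math

def fit_compound_poisson(mu, var):
    """Fit the logarithmic order size parameter theta (5.3) and the
    Poisson arrival rate lambda (5.4) from the mean and variance of
    daily demand; requires var > mu."""
    theta = 1.0 - mu / var                                       # (5.3)
    lam = -mu * (1.0 - theta) * math.log(1.0 - theta) / theta    # (5.4)
    return theta, lam

# warehouse 1 of the default case: mu = 2, sigma^2 = 4
theta, lam = fit_compound_poisson(2, 4)
# mean of the logarithmic order size distribution with parameter theta
order_size_mean = -theta / ((1.0 - theta) * math.log(1.0 - theta))
# lam * order_size_mean reproduces the daily demand mean mu
```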


From this default case we derive our set of test cases by varying the parameters. Table 5.5 summarizes the test cases we consider. If the variation type is multiplicative, indicated by 'm' in Table 5.5, we multiply the respective parameter value from Table 5.4 by the variation value. If the variation type is absolute ('a'), we replace it. A special case is the number n of warehouses, where we vary the size of the network by considering the central warehouse 0 and n identical copies of warehouse 1. Note that we only vary one parameter at a time and keep all other parameters fixed to the values shown in Table 5.4. Therefore, we have a total of 40 test cases, including the base scenario as shown in Table 5.4.

Table 5.5: Parameter variations and variation type, multiplicative (m) or absolute (a), for the creation of the test cases

Parameter   No. of variations   Variation values                  Type
μi          2                   0.25, 0.5                         m
σi²         4                   2, 4, 8, 16                       m
Qi, i > 0   5                   0.25, 0.5, 2, 4, 8                m
Q0          5                   0.25, 0.5, 2, 4, 8                m
β̄i          4                   0.25, 0.5, 0.8, 0.95              a
T0          5                   0.0625, 0.125, 0.25, 0.5, 2       m
p0          3                   2, 4, 8                           m
n           10                  2, 3, 4, 5, 6, 7, 8, 10, 15, 20   -
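The one-parameter-at-a-time scheme of Table 5.5 can be sketched as below. The base dictionary is a simplification: it carries the values of warehouse 1 from Table 5.4, whereas in the experiments each local warehouse keeps its own values; the variation of n is treated as absolute here:

```python
# Base parameter values (simplified to those of warehouse 1 in Table 5.4)
# and the variation scheme of Table 5.5.
base = {"mu": 2, "var": 4, "Q_local": 50, "Q0": 500,
        "beta": 0.9, "T0": 60, "p0": 0.5, "n": 8}

variations = {
    "mu":      ("m", [0.25, 0.5]),
    "var":     ("m", [2, 4, 8, 16]),
    "Q_local": ("m", [0.25, 0.5, 2, 4, 8]),
    "Q0":      ("m", [0.25, 0.5, 2, 4, 8]),
    "beta":    ("a", [0.25, 0.5, 0.8, 0.95]),
    "T0":      ("m", [0.0625, 0.125, 0.25, 0.5, 2]),
    "p0":      ("m", [2, 4, 8]),
    "n":       ("a", [2, 3, 4, 5, 6, 7, 8, 10, 15, 20]),
}

def make_test_cases():
    """Vary one parameter at a time ('m' multiplies the base value, 'a'
    replaces it); all other parameters stay at their base values, and
    the base scenario itself is included."""
    cases = [dict(base)]
    for param, (typ, values) in variations.items():
        for v in values:
            case = dict(base)
            case[param] = base[param] * v if typ == "m" else v
            cases.append(case)
    return cases
```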

Our objective is to test all wait time approximations over a wide range of central stocking

quantities, from storing little at the central warehouse to storing a lot. We therefore prescribe

different values for the central ﬁll rate target. Based on the prescribed central ﬁll rate, we

then calculate the central reorder point that is necessary to fulﬁll the speciﬁed ﬁll rate target

as illustrated in Figure 5.2 to obtain sets of reorder points for the simulation.

The different wait time approximations suggest different ways to compute the central ﬁll

rate. To obtain comparable results, we use the procedure described in Section 3.1 to approx-

imate the central lead time demand and then calculate the ﬁll rate using (2.13).
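Given any procedure that evaluates the central fill rate for a candidate reorder point, e.g. via (2.13), the smallest reorder point meeting a prescribed target can be found by binary search, since the fill rate is non-decreasing in the reorder point. A generic sketch, where `fill_rate` is a placeholder for the evaluation of (2.13):

```python
def smallest_reorder_point(fill_rate, target, lo=-1, hi=1):
    """Binary search for the smallest integer reorder point R with
    fill_rate(R) >= target. Assumes fill_rate is non-decreasing in R
    and fill_rate(lo) < target; `fill_rate` is a placeholder for
    evaluating eq. (2.13)."""
    # grow the search bracket until the target is reachable
    while fill_rate(hi) < target:
        lo, hi = hi, 2 * hi
    # invariant: fill_rate(lo) < target <= fill_rate(hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if fill_rate(mid) >= target:
            hi = mid
        else:
            lo = mid
    return hi
```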

In the simulations, we considered a time horizon of 2000 days and used a warm-up period of

500 days. Complementary numerical experiments revealed that 100 independent runs of the

simulation for each instance are sufﬁcient to get reliable numerical results, so all reported

simulation results are averages over 100 runs per instance. For further details on these tests

we refer to A.3.2.

In our initial experiment, we considered ﬁve scenarios where we prescribed a central ﬁll

rate of 20%, 40%, 70%, 90% and 95%, respectively. We then calculated the central reorder

point based on the prescribed central ﬁll rate and equation (2.13) using a binary search and

obtained the local reorder points using the procedure shown in Figure 5.2. Finally, we ran

simulations to check if the intended central ﬁll rate was actually obtained. As shown in

Table 5.6, especially in the high ﬁll rate scenarios, the ﬁll rates observed in the simulation

are much lower than those anticipated. The ﬁve initial scenarios only cover the low and

medium range of central stocking quantities. As our objective is to cover a wide range of central stocking quantities in our tests, we create a new scenario that leads to a high central fill rate of 95.16% in the simulation. We start from the "medium high" scenario, increase the reorder points of all test cases with a fill rate lower than 95%, obtain a new average simulated fill rate, and repeat this procedure until an average simulated central fill rate of more than 95% is reached.

Table 5.6: Prescribed central fill rate, average simulated fill rate and name of scenario

Prescribed   Simulated   Scenario name
20%          10.86%      low
40%          44.77%      medium low
70%          56.06%      /
90%          63.59%      /
95%          67.80%      medium high
new          95.16%      high
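The trial-and-error construction of this high fill rate scenario can be sketched as a simulate-and-adjust loop; `fill_rates_fn` stands in for running the simulation, and the unit step is an illustrative choice:

```python
def build_high_scenario(reorder_points, fill_rates_fn, step=1, target=0.95):
    """Increase the central reorder points of all test cases whose
    simulated fill rate is below the target until the average simulated
    central fill rate reaches the target. `fill_rates_fn` is a
    placeholder for running the simulation; it maps a list of reorder
    points to the per-case simulated fill rates."""
    rates = fill_rates_fn(reorder_points)
    while sum(rates) / len(rates) < target:
        reorder_points = [r + step if fr < target else r
                          for r, fr in zip(reorder_points, rates)]
        rates = fill_rates_fn(reorder_points)
    return reorder_points
```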

In the remaining experiments, we then only consider the scenarios 20%, 40%, 95% and

“new” shown in Table 5.6, which have been renamed for clarity to align with the central

ﬁll rates actually observed in the simulations. Note that the “high” scenario is somewhat

special: In this scenario, the central ﬁll rate is so high that the chance an order has to wait

at the central warehouse is really low. Hence, the effect of the wait time on the network is

very small in this case. Nevertheless, we found it worthwhile to also consider this case for

completeness.

For brevity, we refer to numbers obtained from the wait time approximations as computed

numbers and to the numbers observed in the simulation as simulated numbers in the follow-

ing.

In this section, we discuss the results of the experiments with random data described in the

previous section. We compare simulated and computed wait time results. Based on this

analysis, we give recommendations when to use which approximation. Then, we analyze if

the local ﬁll rate targets were satisﬁed and have a look at the inventory level for the different

scenarios and approximations.

Wait time results Table 5.7 shows the average simulated and computed mean and standard deviation for each scenario and approximation. The KKSL approximation seems to perform best overall, while all other approximations seem to have large errors according to the simulated results. However, we will observe later in this section that each approximation performs best for at least some scenarios and test cases. Hence, the aggregated results can only give a rough overview. Moreover, some of the aggregated results are strongly influenced by a few individual cases in which an approximation performed very poorly. For example, the standard deviation is badly overestimated by the NB approximation for a small number of parameter settings, although this approximation performs very well for many other cases. In the aggregate view, this is not properly represented.

Table 5.7: Average wait time and standard deviation for different scenarios and wait time approximations in comparison to the simulated results

             low             medium low      medium high     high
             Mean    SD      Mean    SD      Mean    SD      Mean   SD
Simulation   24.76   15.63   9.11    11.35   4.12    7.21    0.42   1.90
KKSL         29.80   22.13   8.83    11.77   3.30    5.84    0.88   2.37
NB           35.78   91.54   5.83    28.57   1.20    7.45    0.05   0.99
BF           1.18    6.08    0.67    4.70    2.27    4.98    0.38   3.76
AXS          25.15   9.60    5.35    6.96    0.73    1.63    0.11   1.05

In the following, we will analyze which approximation works best in which circumstances.

When we refer to certain input parameters being high or low, this is always with respect to

our base setting shown in Table 5.4.

Evaluation of the different approximations The KKSL approximation has good accuracy for a wide range of input parameters. It is rather accurate if the central fill rate is medium to high and if there are many local warehouses. It performs best if differences between local warehouses are small and if local order quantities are in a medium range, i.e., Qi/μi ≈ 20. Consider for example Figure 5.3, which shows the errors for the mean and the standard deviation of the wait time for the "medium high" scenario and one test instance, i.e., the base parametrization shown in Table 5.4. While the approximation performs well overall, the differences in accuracy between the local warehouses are large. The approximation is very good if the order quantity is neither too high nor too low, for example for warehouses 5 and 6, which have an order quantity of 150. The accuracy is worst for warehouses with low demand and low order quantity.

The NB approximation is very good for many input settings, especially at approximating the mean of the wait time. However, it badly overestimates the standard deviation if Qi/μi or Q0/Qi is large. On the other hand, we observed very good results with the NB approximation when the central fill rate is medium to high and when the local warehouses are heterogeneous.

In scenarios where the local order quantity and the central lead time are very low, BF is a

suitable approximation. In all other cases, it is not. We believe that the main reason for this

is that it assumes a very smooth and nearly steady demand at the central warehouse. Berling

and Farvid state that this assumption is valid only for small Qi [BF14]. In the numerical

analysis presented in their paper, increasing Qi causes larger errors. In our data, the ratio

Qi /μi is much larger than what Berling and Farvid considered. Moreover, our central lead

times are much longer, so the system is not reset as frequently and the error of the linear

approach supposedly adds up.



Figure 5.3: Errors of KKSL approximation for mean and standard deviation of wait time for the

“medium high” scenario and the test set of base parametrization of Table 5.4 for the

different warehouses

By design, the AXS approximation computes only the average of the mean and the standard deviation of the wait time over all local warehouses. Consequently, it is only suitable if differences between local warehouses are not too large. Furthermore, it performs very well if the central fill rate is low or if the variance of the demand is high.

Next, we will have a closer look at the effect of the network size on the quality of the

approximations. For this experiment, we consider a network with n identical warehouses

with the characteristics of warehouse 1 in Table 5.4. Note that this construction favors

approximations that beneﬁt from homogeneous networks, namely KKSL and AXS. We

therefore focus on the trend and analyze how the accuracy of each individual approximation

changes if we increase the number of local warehouses.

Figure 5.4 shows the average error of the computed values relative to the simulated mean and standard deviation of the wait time for the "medium low" scenario. For the mean of the wait time, we see no clear trend for n < 10, but for n ≥ 10 the accuracy of the approximation increases for all methods except NB, which levels off for n ≥ 10.

For the standard deviation, we see an interesting behavior for the NB approximation: The approximation is bad for small networks, which is consistent with our previous analysis (as Qi/μi = 25), but the quality dramatically improves with more local warehouses. This indicates

that the mechanism that overestimates standard deviations for warehouses with those input

parameters diminishes with increasing network size. The other methods show a slightly

improving quality with increasing network size. This behavior is very similar for all other

central ﬁll rate scenarios.


(a) Mean

Figure 5.4: Errors of mean and standard deviation of the wait time for the different approximations

and network sizes in the “medium low” scenario

Finally, we look at the last test case, i.e., how the different wait time approximations perform if we change the local fill rate target (β̄i in Table 5.5). The local fill rate target has no effect on the accuracy of the approximations in our numerical experiments. This is hardly surprising, as the local fill rate target only influences the local reorder point. Therefore, only the timing of orders changes, but neither the quantity nor the frequency of the ordering process.

Local fill rates and inventory in network Ultimately, a planner does not care about the wait time itself but is interested in whether the fill rate targets at the local warehouses are met or violated. Additionally, as a secondary objective, she or he might prefer fill rates not to be much higher than the targets, as higher fill rates require additional stock.


Table 5.8 gives an overview of the average deviation of the local warehouses' fill rates from the fill rate targets. It shows the average over all test cases except the test cases regarding the network size n and the fill rate targets β̄i (cp. Table 5.5).

Table 5.8: Average deviations from fill rate targets for local warehouses, average of all test cases except the test cases for the network size n and the fill rate targets β̄i

Scenario      KKSL     NB       BF        AXS
low           7.31%    9.04%    -46.64%   -7.04%
medium low    3.36%    5.10%    -13.83%   -8.39%
medium high   2.75%    1.70%    -2.15%    -6.24%
high          5.13%    2.51%    5.19%     0.22%

AXS and BF on average violate the ﬁll rate constraints considerably, except for the “high”

scenario, where the quality of the wait time approximation is hardly important. This con-

ﬁrms our previous ﬁnding that these two wait time approximations are only suitable in some

rather restrictive cases.

For KKSL and NB, the ﬁll rate constraints are – on average – fulﬁlled in most cases. How-

ever, for the “low” and the “medium low” scenario the observed average ﬁll rates are much

higher than the ﬁll rate targets, indicating a substantial overstock.

Figure 5.5 shows the average value of the reorder points and inventory levels at local ware-

houses. As both bar charts show a very similar pattern, we discuss them together and more

generically refer to stock instead of reorder points and inventory levels.

Naturally, the local stock value is decreasing if more stock is kept at a central level.

The stock situation for AXS and BF reﬂects the violation of ﬁll rate targets: Whenever the

violation is high, stock is low. Both the ﬁll rate and the stock levels indicate that these two

approximations underestimate the wait time.

The much higher stock for NB in the "low" scenario is striking. In contrast, for the "high" scenario, results based on NB have the lowest stock. The higher the central fill rate, the more similar the stock situation for NB is to the other approximations. The KKSL approximation behaves more moderately than the NB approximation. We already found that the NB approximation dramatically overestimates the standard deviation of the wait time for some input parameter constellations. For these instances, way too much stock is kept. If we remove those instances from the analysis, the dramatic spike in the "low" scenario disappears. The NB approximation still overstocks for low central fill rates, but the accuracy for higher central fill rates is much better. One could assume that the deviation from the average fill rate targets shown in Table 5.8 is also highly influenced by this and that NB may violate the fill rate constraints if these instances are excluded from the analysis. However, this is not the case: Although the average fill rate is lower, the fill rate targets are still met, see Table 5.9.



Figure 5.5: Average value of reorder points and inventory levels for local warehouses for the differ-

ent scenarios

If we vary the local ﬁll rate targets, the behavior of all approximations is not surprising: The

higher the ﬁll rate target, the more local stock is needed. The more accurate the approxima-

tion, the more precise the ﬁll rate target is met and the less over- or understock, respectively,

is observed.

Table 5.9: Average deviation from fill rate target for local warehouses for NB approximation excluding test cases not suitable for NB

Scenario    low     medium low   medium high   high
Deviation   8.46%   3.04%        1.15%         2.42%

Summary of the analysis based on random data The KKSL approximation seems to be a good default choice, unless local order quantities differ too much or the central fill rate is low. The NB approximation is a good choice if the central fill rate is medium to high and if Qi/μi < 25 or if there are many local warehouses. In these situations, NB often outperforms the KKSL approximation.

Figure 5.6: Average value of reorder points for local warehouses for the different scenarios excluding test cases not suitable for NB

The AXS approximation should only be used if the characteristics of local warehouses are

similar. If, additionally, the central ﬁll rate is low or the variance of demand is very high, it

is an excellent choice.

Only for very low central lead times and local order quantities should BF be chosen as the approximation.

The aim of the experiments presented in this section was to verify the ﬁndings for random

data also in a real-world setting with real-world data. For this purpose, we considered a

real-world distribution network with one central and 8 local warehouses and 445 parts from

the total stock assortment in this network. For all parts, we were given ﬁll rate targets,

prices, order quantities and historical daily demand for all warehouses over a period of 1

year. The 445 parts were selected to make a small but representative sample of the total

stocking assortment of over 80,000 parts. For the majority of the parts, the variance of the

demand is very high compared to its mean.

In the simulation runs, the ﬁrst 100 days were used as a warm-up period and results from

those days were discarded. The lead times were generated from gamma distributions with

given means and variances. For all scenarios considered here, the same random seed was

chosen.

In order to mimic a realistic real-world scenario, the reorder points were (re-)computed

every 3 months based on (updated) demand distributions, order quantities, ﬁll rate targets


and prices. The demand distributions used as input in these computations were also updated

every 3 months based on the future demand forecast data from the productive inventory

planning system.

In the previous section, where we used artiﬁcial demand data, we followed an exploratory

approach, manually searching for patterns in the data obtained in the computations and sim-

ulations. This was possible, because the size of the data set to be analyzed was sufﬁciently

small. For the real-world data, this is not possible anymore. Therefore, we have chosen

to take a different approach here: We state the key ﬁndings from the previous sections as

hypotheses and try to conﬁrm or falsify them here.

The mean and the variance of the transportation time from the central warehouse to each

local warehouse is the same for all parts, but it differs between local warehouses. For each

local warehouse, the mean is between 4 and 18 days and the variance is small, about 30%

of the mean. The lead time from the supplier to the central warehouse depends on the

supplier and is therefore not the same for all parts. The mean is between 45 and 100 days.

The variance is much higher; for some parts it is several times larger than the mean.

These characteristic values limit our analysis, so we cannot replicate all investigations from

the previous sections.

Also, the evaluation of the results contains an additional challenge. We constructed the simu-

lation to mimic the behavior of a real-world distribution system, where demands change and

reorder points are recalculated and adopted every 3 months. Every time the reorder points

are updated, also the wait time approximation will change implicitly. The stock situation at

the time of change, however, is determined by the results of the past. In the simulation, it

will take some time until the updated reorder points will become visible in the actual stock

and, thus, in the actual wait time. This behavior distorts the comparison of the approxima-

tion and the simulation, but this is something that would also happen if the approximations

were implemented in a real-world system. To cope with this difﬁculty, we will compare the

mean and the standard deviation of wait time computed by the respective wait time approx-

imation, averaged over all re-computations, to the mean and the standard deviation observed

in the simulation over the entire time horizon excluding the warm-up period.

We start by looking at the central ﬁll rate in order to verify the ﬁndings from Table 5.6,

namely that the simulated ﬁll rate is much lower than the prescribed ﬁll rate for high ﬁll

rate scenarios. The results shown in Table 5.10 clearly conﬁrm this observation. For the

scenarios with a prescribed ﬁll rate of 90% and 95%, the simulated ﬁll rate is indeed much

lower than those values. For low prescribed ﬁll rates, the simulated ﬁll rate is higher than

the prescribed value. Again we created an additional scenario by adjusting central reorder

points by trial and error to obtain a high central fill rate scenario named "new", similar to what was done for Table 5.6.

In total we have 18,396 test cases. A test case contains all results of our experiments for

each combination of scenario, part number and warehouse. For example, the test case for

scenario 20%, part “a” and warehouse “1” contains the results of the simulation as well as

the a-priori calculated moments of the wait time for the four different approximations.


Table 5.10: Prescribed central fill rate and average simulated value

Prescribed   Simulated
20%          35.22%
40%          41.87%
70%          56.55%
90%          70.30%
95%          76.93%
new          93.73%

We use this big data set to test the following hypotheses, derived in the previous section for

random demand data. Note that our goal is not to conﬁrm these hypotheses by any statistical

means, but only to detect patterns in the data that either conﬁrm or reject those ﬁndings.

H1. The KKSL approximation is suitable if the central ﬁll rate is medium to high.

H2. The KKSL approximation performs best if differences between local warehouses are

small and if local order quantities are in a medium range.

H3. The NB approximation overestimates the standard deviation signiﬁcantly if

Qi /μi > 25, but it has a good accuracy if Qi /μi ≤ 25.

H4. The NB approximation is good if the central ﬁll rate is medium to high and when local

warehouses are heterogeneous.

H5. The error of the NB approximation of the standard deviation reduces if the network

becomes larger.

H6. The BF approximation performs well if local order quantities and the central lead time are small.

H7. The AXS approximation performs well if the network is homogeneous, the central fill rate is low or the variance of demand is high.

Hypothesis H1: KKSL is suitable if the central ﬁll rate is medium to high While

hypothesis H1 explicitly states that KKSL is suitable for medium to high central ﬁll rates,

this also implies that it is not as suitable for low central ﬁll rates. We will therefore analyze

both situations: the accuracy of KKSL for medium to high central fill rates, corresponding to the four scenarios 70%, 90%, 95% and "new" in Table 5.10, and its accuracy for low central fill rates, corresponding to the scenarios 20% and 40%.

Table 5.11 shows the ranking of the KKSL approximation compared to the other approxim-

ations. The columns “mean” and “standard deviation” show the respective percentages of

the test cases, for which KKSL produced the best, at least the second best, or the worst of

the 4 approximations for the value of the mean or the standard deviation of the wait time

that was observed in the simulation. The column “combined” shows the percentages of the


test cases, where KKSL produced the best, at least second best, or worst values for both,

mean and standard deviation.
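The best/second-or-better/worst shares reported in this kind of ranking could be computed from per-test-case absolute errors roughly as follows (a sketch; names are illustrative and ties are resolved in favor of the ranked method):

```python
def rank_shares(errors_by_method, method):
    """Rank `method` among all methods by absolute error for every test
    case (rank 1 = smallest error) and return its share of cases as
    (best, second or better, worst). `errors_by_method` maps method
    names to lists of errors, one entry per test case."""
    n_cases = len(errors_by_method[method])
    best = second = worst = 0
    for t in range(n_cases):
        errs = sorted(abs(e[t]) for e in errors_by_method.values())
        own = abs(errors_by_method[method][t])
        rank = errs.index(own) + 1      # ties resolved in method's favor
        best += rank == 1
        second += rank <= 2
        worst += own == errs[-1]
    return best / n_cases, second / n_cases, worst / n_cases
```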

For medium to high central ﬁll rates, KKSL is at least the second best approximation for

the mean in about 50% of the cases, but in 21.49% of the cases it is also the worst method.

The accuracy is better for the standard deviation. In more than 22% of the cases, KKSL is better at approximating both the mean and the standard deviation than any other method, as shown in the combined ranking. Only in less than 6% of all cases is it the worst overall

method. Looking at the results for low central ﬁll rates, we see that KKSL performs even

better.

Table 5.11: Relative accuracy of the KKSL approximation of mean and standard deviation (sd) for scenarios with medium to high and with low central fill rates, relative to the other approximations

                Medium to high fill rates       Low fill rates
Measurement     Mean     Sd       Combined      Mean     Sd       Combined
best            28.80%   42.06%   22.19%        25.38%   48.25%   14.66%
2nd or better   51.37%   71.33%   40.93%        56.80%   92.14%   49.77%
worst           21.49%   9.17%    5.72%         10.36%   0.61%    0.18%

Table 5.12 shows the absolute error of the approximated values from the values observed in

the simulation for the different methods, averaged over all test cases. Also these numbers

support our observation that KKSL is even better for low central ﬁll rates than for medium

to high central ﬁll rates. For medium to high central ﬁll rates, the accuracy of KKSL for the

mean is comparable to other methods, but its accuracy for the standard deviation is better.

Table 5.12: Average simulated values and absolute error of the approximations for test cases in different scenarios

             Medium to high fill rates      Low fill rates
             Mean    Standard deviation     Mean     Standard deviation
Simulation   5.22    4.27                   16.72    10.35
KKSL         4.43    4.94                   8.85     7.11
NB           4.30    11.93                  9.08     43.92
AXS          4.37    6.36                   9.86     15.11
BF           4.86    5.72                   11.29    6.93

In order to see if certain approximations tend to generally over- or underestimate the mean or the standard deviation of the wait time, we also analyze the average positive or negative error of the approximations in Table 5.13 instead of the averaged absolute errors. This additionally conveys information about the direction of the inaccuracies. For medium to high central fill rates, the mean is underestimated by all methods, and KKSL seems to perform especially badly. For the standard deviation, there is a general overestimation tendency in all approximations and KKSL is very accurate.

Table 5.13: Average error of the approximations for test cases in different scenarios

         Medium to high fill rates      Low fill rates
         Mean    Standard deviation     Mean     Standard deviation
KKSL     -2.31   1.83                   -1.56    4.61
NB       -0.94   11.23                  3.03     43.83
AXS      -2.11   4.33                   3.25     14.92
BF       -2.01   2.73                   -5.21    2.99

In total, KKSL seems to be a suitable choice for medium to high central ﬁll rates. There are

high inaccuracies, but these are not worse than with the other methods. For low central ﬁll

rates KKSL was very accurate compared to the other methods for the real-world data. We

ﬁnd that KKSL is suitable across all central ﬁll rates and seems to be a good general choice

relative to the other approximations.

Hypothesis H2: KKSL performs best if differences between local warehouses are

small and if local order quantities are in a medium range In order to evaluate hy-

pothesis H2, we have to more formally deﬁne what a small difference between local ware-

houses means. By talking about how different local warehouses are, we refer to the differ-

ence in order quantities as well as demand, i.e., the mean and variance of demand per time

unit. We formally deﬁne how much the value xi of an individual local warehouse differs

from the mean of all xi as

δxi := |xi − (1/n) ∑_{j=1}^{n} xj| / ((1/n) ∑_{j=1}^{n} xj),   (5.5)

where xi may be the order size Qi, the mean demand μi or the variance of demand σi².
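Equation (5.5) and the homogeneity check derived from it can be sketched as:

```python
def delta(values, i):
    """Relative deviation of warehouse i's value from the mean over all
    local warehouses, following eq. (5.5)."""
    m = sum(values) / len(values)
    return abs(values[i] - m) / m

def is_homogeneous(values, threshold):
    """Small differences: delta(x, i) <= threshold for every warehouse i."""
    return all(delta(values, i) <= threshold for i in range(len(values)))
```

For the order quantities of the default case, Q = (50, 50, 100, 100, 150, 150, 200, 200), the mean is 125, so δQ1 = |50 − 125|/125 = 60% and the network does not count as homogeneous under a 20% threshold.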

First, we analyze the cases where the difference between warehouses is small only with respect to one measure and unrestricted with respect to the other measures, i.e., the three cases where δQi ≤ 20% for all i = 1, ..., n, δμi ≤ 20% for all i = 1, ..., n, or δσi² ≤ 100% for all i = 1, ..., n. The reason for the latter 100% is that we have to use a larger difference to have enough test cases in this class.

Afterwards, we consider the case where the difference among the warehouses is small with respect to all three measures. For this combined evaluation of all three metrics, we have to use larger differences for δQi and δμi as well. There we consider all test cases for which δQi ≤ 40%, δμi ≤ 40% and δσi² ≤ 100% for all i = 1, ..., n.

The second part of hypothesis H2 is that the local order quantity is in a medium range relative to the size of demand. Insights from Section 5.2.1 show that the ratio Qi/μi should be around 20.

We start by evaluating test cases where differences in order quantity and demand of local

warehouses are small as deﬁned above compared to test cases where those differences are

large. For the large differences, we only evaluate test cases where δ Qi > 20% for all i =


1, ..., n for the order quantity and analogously for the other two metrics. Therefore, in

those test cases the respective metric of each local warehouse differs substantially from

its mean.

Table 5.14 summarizes the results of this comparison. Again, it shows the proportion of

cases for which KKSL was best, second or better, or worst of all approximations for the

mean, the standard deviation or both measures combined for different subsets of our test

cases. With these results, the ﬁrst part of Hypothesis H2 cannot be conﬁrmed. For our real-

world data sets, the KKSL approximation does not perform signiﬁcantly better or worse if

differences between local warehouses are larger or smaller. There is even some indication

that the relative accuracy of the KKSL approximation improves if differences are larger.

The combined measure of mean and standard deviation is better for large differences of

order quantity and mean demand while it is slightly worse for the variance of demand. Es-

pecially for the mean demand, KKSL is the best approximation in 19.83% of the cases for

large differences, compared to 12.65% for small differences.
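The shares of cases in which an approximation is best, second or better, or worst can be computed by ranking the per-case absolute errors, along the following lines (a minimal sketch of the bookkeeping; the thesis's evaluation code is not published, and ties are broken arbitrarily here):

```python
def rank_shares(errors_by_method):
    """Share of test cases in which each method's absolute error is the
    best, second or better, or worst among all methods.
    `errors_by_method` maps a method name to a list of signed errors,
    one entry per test case. Ties are broken arbitrarily."""
    methods = list(errors_by_method)
    n_cases = len(next(iter(errors_by_method.values())))
    counts = {m: {"best": 0, "2nd_or_better": 0, "worst": 0} for m in methods}
    for i in range(n_cases):
        ranked = sorted(methods, key=lambda m: abs(errors_by_method[m][i]))
        for pos, m in enumerate(ranked):
            if pos == 0:
                counts[m]["best"] += 1
            if pos <= 1:
                counts[m]["2nd_or_better"] += 1
            if pos == len(methods) - 1:
                counts[m]["worst"] += 1
    return {m: {k: v / n_cases for k, v in c.items()} for m, c in counts.items()}

# Hypothetical wait time errors of three approximations on two test cases.
shares = rank_shares({"KKSL": [-0.1, 0.5], "NB": [0.3, -0.2], "AXS": [0.9, 0.1]})
```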

Table 5.14: Relative accuracy of KKSL approximation compared to other approximations for

test cases with small and large differences between local warehouses, with difference

deﬁned as in eq. (5.5)

Diff. of . . .                Mean                          Standard deviation            Combined
for all i = 1, ..., n         Best    2nd or bet.  Worst    Best    2nd or bet.  Worst    Best    2nd or bet.  Worst
δ Qi  ≤ 20%                   33.95%  54.76%       25.19%   38.42%  71.19%       3.77%    17.47%  39.78%       1.88%
δ Qi  > 20%                   26.68%  52.93%       16.62%   45.01%  79.37%       6.71%    20.03%  44.51%       4.18%
δ μi  ≤ 20%                   24.38%  45.37%       34.88%   42.28%  70.06%       7.41%    12.65%  32.10%       5.86%
δ μi  > 20%                   27.73%  53.34%       17.42%   44.16%  78.44%       6.29%    19.83%  44.12%       3.83%
δ σi2 ≤ 100%                  28.88%  53.60%       18.02%   45.88%  79.12%       6.02%    20.54%  44.18%       3.68%
δ σi2 > 100%                  24.28%  52.01%       17.10%   39.25%  75.91%       7.14%    17.31%  43.03%       4.41%
δ Qi ≤ 40%, δ μi ≤ 40%,
δ σi2 ≤ 100%                  26.25%  45.80%       32.41%   37.66%  67.45%       7.22%    13.39%  29.92%       4.72%

Regarding the order quantity, Figure 5.7 shows the average error from the simulated mean

for different ratios of order quantity and mean demand. Figure 5.8 shows similar results

for the standard deviation. Again, it cannot be concluded that the KKSL approximation

performs better or worse for order quantities in a medium range.

Based on the ﬁndings using the real-world data, we cannot conﬁrm hypothesis H2. However,

note that we could not intensively test the combination of only small differences between

local warehouses and of order quantities in a medium range, because the number of test

cases satisfying both conditions was too small.

88 5 Experimental results


Figure 5.7: Absolute error of the mean of the different approximations for different classes of Qi /μi

Hypothesis H3: The NB approximation overestimates the standard deviation significantly if Qi /μi > 25, but it has a good accuracy if Qi /μi ≤ 25. To test hypothesis H3, we first compare all test cases for which Qi /μi > 25 to all test cases for which Qi /μi ≤ 25.

We want to emphasize that this overapproximation only occurs for the standard deviation

and not for the mean. We look at the errors of the approximation, i.e., the difference

between approximated standard deviation and simulated standard deviation, for these res-

ults. Table 5.15 shows the absolute average error, separately for cases where Qi /μi > 25 or

Qi /μi ≤ 25. Additionally, we state how many of the results fall in each category and what

the average standard deviation of these results was. We also supply the results for the other

approximations to conﬁrm or reject that this is solely an issue for the NB approximation.

Table 5.15: Absolute error of approximated standard deviation (sd) of wait time for test cases with

different values of Qi /μi

Qi /μi    #test cases   Simulated sd   NB      KKSL   AXS    BF

> 25 15450 6.35 22.97 5.73 9.37 6.18

≤ 25 306 3.80 3.55 2.59 4.73 3.31
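The split used in Table 5.15 boils down to partitioning the test cases at the threshold Qi /μi = 25 and averaging the absolute errors in each part; a sketch with illustrative numbers (not the thesis's actual data):

```python
def split_mean_abs_error(ratios, errors, threshold=25.0):
    """Average absolute approximation error of the wait time standard
    deviation, split by whether Q_i/mu_i exceeds the threshold."""
    above = [abs(e) for r, e in zip(ratios, errors) if r > threshold]
    below = [abs(e) for r, e in zip(ratios, errors) if r <= threshold]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(above), mean(below)

# Illustrative values: errors grow markedly above the threshold.
hi, lo = split_mean_abs_error([30.0, 40.0, 10.0], [20.0, 26.0, 3.0])
```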

For the majority of our test cases, we have Qi /μi > 25. Note that the number of test cases

does not add up to the 18,396 sets in total, because we have to exclude the sets for the central

warehouse, where wait time does not apply. The results from the simulation with real-world

data allows us to conﬁrm hypothesis H3: The error is much larger for the standard deviation

of the wait time if the NB approximation is used in the case Qi /μi > 25. For Qi /μi ≤ 25,


NB performs similarly to the other approximations. Note that the other approximations also get worse if Qi /μi is large, but they still perform much better than NB in that regime.

A second interesting question is whether the threshold is indeed best placed at 25. Figure 5.8

shows the accuracy of the standard deviation predicted by the four approaches for different

classes of Qi /μi . Detailed results including the number of observations in each class are

shown in Table A.3 in the Appendix. The error for the NB approximation gets large once

Qi /μi > 30. In all classes with Qi /μi < 30, the quality of NB is comparable to the other

approximations. So, 25 seems to be a good threshold value to decide whether NB should be

used or not. Interestingly, the accuracy of AXS also becomes worse for increasing values

of Qi /μi , although at a much higher threshold. For KKSL and BF, we only see a slight deterioration in the approximation quality as Qi /μi increases.


Figure 5.8: Absolute error of the standard deviation of the different approximations for different classes of Qi /μi

We now focus on test cases where Qi /μi ≤ 25 and try to establish if the NB approximation

can be recommended in these circumstances. Table 5.16 offers several insights: For the NB

approximation, the approximation of the standard deviation is indeed much more accurate

for Qi /μi ≤ 25 than it is in general. The accuracy of KKSL and AXS does also signiﬁcantly

beneﬁt from excluding results with high local order quantity compared to mean demand.

These three approximations all perform better than in the general case shown in Table 5.13.

On average, the NB approximation performs worse than KKSL. It is better than AXS

for the standard deviation, but worse than AXS for the mean.

A similar analysis was done also with the average absolute error instead of the average

positive and negative error in order to compare the absolute sizes of the approximation errors.

The results of this analysis, which are shown in Table A.5 in the Appendix, conﬁrm the main

observations from Table 5.16. The remarkably good accuracy of AXS in approximating the mean for medium to high central fill rates is due to errors canceling each other out.


Table 5.16: Average simulated values and error of the approximations of mean and standard devi-

ation (sd) for different scenarios, for test cases with Qi /μi ≤ 25

Mean Sd Mean Sd Mean Sd

Simulation 5.01 3.80 3.23 2.87 8.57 5.66

KKSL -1.72 -0.16 -1.47 -0.24 -2.23 0.00

NB -2.34 1.09 -2.12 -0.40 -2.79 4.07

AXS 0.81 4.24 0.09 3.16 2.24 6.39

BF -4.26 0.73 -2.58 1.42 -7.62 -0.67

Additionally, we repeat the analysis done in Table 5.11 for the NB approximation and the

limited subset of results with Qi /μi ≤ 25. Table 5.17 shows the results. Overall, the NB

approximation seems to be suitable if Qi /μi ≤ 25. As we have already pointed out, KKSL

also beneﬁts from this condition and therefore may be an overall better choice. Table 5.18

replicates the results for KKSL. The share of cases where KKSL is best or second best is

much lower than for NB. We report the results for AXS in Table A.4 in the Appendix. AXS

does not perform better than NB. In particular, it has very little middle ground, especially

for the mean: It is either the best approximation or the worst with a roughly even split.

Table 5.17: Relative accuracy of the NB approximation for the mean, the standard deviation (sd)

and a combined (comb.) compared to other approximations. Evaluated for different

scenarios regarding the central ﬁll rate (fr), only for test cases with Qi /μi ≤ 25

Mean sd Comb. Mean sd Comb. Mean sd Comb.

Best 36.60% 38.89% 27.45% 41.18% 43.14% 29.90% 27.45% 30.39% 22.55%

2nd or better 61.11% 60.78% 48.69% 61.76% 70.10% 55.88% 59.80% 42.16% 34.31%

Worst 9.80% 17.32% 5.88% 11.27% 12.75% 7.35% 6.86% 26.47% 2.94%

Table 5.18: Relative accuracy of the KKSL approximation for the mean, the standard deviation

(sd) and a combined (comb.) compared to other approximations. Evaluated for differ-

ent scenarios regarding the central ﬁll rate (fr), only for test cases with Qi /μi ≤ 25

Mean sd Comb. Mean sd Comb. Mean sdn Comb.

Best 14.38% 25.16% 8.17% 14.22% 25.98% 8.33% 14.71% 23.53% 7.84%

2nd or better 56.54% 68.95% 38.89% 58.82% 69.12% 41.18% 51.96% 68.63% 34.31%

Worst 11.11% 14.71% 9.80% 13.24% 15.69% 11.27% 6.86% 12.75% 6.86%

Hypothesis H4: The NB approximation is good if the central fill rate is medium to high and when local warehouses are heterogeneous. To evaluate hypothesis H4, we have to repeat the first part of the analysis of H2 for the subset of scenarios with medium to high

ﬁll rates, i.e., scenarios 70%, 90%, 95% and “new” in Table 5.10. Based on the ﬁndings


for hypothesis H3, we only consider cases with Qi /μi ≤ 25. Because of these restrictions,

however, we can rely only on a relatively small number of test cases in the following.

One could argue that by only considering cases with Qi /μi ≤ 25, we exclude all test cases where the NB approximation has a poor accuracy and would then naturally obtain good results. We have chosen to do so to exclude the distorting effect of those test cases in which the accuracy of the NB approximation for the standard deviation is very poor, as established in the analysis of hypothesis H3. As we are looking at trends, i.e., whether the accuracy improves if warehouses are more heterogeneous and if the fill rate is higher, we are still able to gain

general insights. However, we also repeated the analysis done here for all test cases. All

ﬁndings reported in the following are supported by this data as well.

The results are summarized in Table 5.19, which has the same structure as Table 5.14. There

is an indication that the NB approximation should be used if the differences between local order quantities δ Qi are large, as it is then more often more accurate than the other approximations for the mean and standard deviation of the wait time.

For the differences in the mean of demand δ μi , the relative accuracy is worse for larger differences. For the variance of demand, the results are mixed, and there is no clear indication whether the NB approximation profits from larger differences.

Table 5.19: Relative accuracy of NB approximation compared to other approximations for test

cases with small and large differences between local warehouses, for medium to high

central ﬁll rate scenarios and Qi /μi ≤ 25 and with difference deﬁned as in eq. (5.5)

                       Mean                         Standard deviation           Combined
for all i = 1, ..., n  Best   2nd or better  Worst  Best   2nd or better  Worst  Best   2nd or better  Worst

δ Qi

≤ 20% 37.50% 64.29% 19.64% 41.96% 69.64% 17.86% 25.00% 58.04% 13.39%

> 20% 45.65% 58.70% 1.09% 44.57% 70.65% 6.52% 35.87% 53.26% 0.00%

δ μi

≤ 20% 50.00% 78.13% 7.81% 42.19% 81.25% 9.38% 31.25% 70.31% 6.25%

> 20% 37.14% 54.29% 12.86% 43.57% 65.00% 14.29% 29.29% 49.29% 7.86%

δ σi2

≤ 100% 34.38% 68.75% 9.38% 45.31% 67.19% 10.94% 25.00% 59.38% 3.13%

> 100% 44.29% 58.57% 12.14% 42.14% 71.43% 13.57% 32.14% 54.29% 9.29%

All in all, we have some supporting evidence that the NB approximation is good if the

central fill rate is medium to high and local warehouses have a large difference in order quantity, while the differences in mean demand should be rather small. This seems

to hold also for the low central ﬁll rate scenarios, but here also the share of cases where NB

is the worst approximation increases, as shown in Table 5.20.

Hypothesis H5: The error of the NB approximation of the standard deviation reduces

if the network becomes larger. This hypothesis is based on the findings in Figure 5.4.

There, we see a decreasing error of the NB approximation of the standard deviation. We have only limited means to test this with the real-world data, where we have 8 local warehouses. However, we do not have demand for all parts at all warehouses and, therefore, have results for which fewer than 8 warehouses were considered.

Table 5.20: Relative accuracy of NB approximation compared to other approximations for test cases with small and large differences between the order quantity of local warehouses, for low central fill rate scenarios and Qi /μi ≤ 25

                       Mean                         Standard deviation           Combined
for all i = 1, ..., n  Best   2nd or better  Worst  Best   2nd or better  Worst  Best   2nd or better  Worst
δ Qi ≤ 20%             21.43% 53.57%         0.00%  28.57% 50.00%         21.43% 16.07% 35.71%         0.00%
δ Qi > 20%             34.78% 67.39%         15.22% 32.61% 32.61%         32.61% 30.43% 32.61%         6.52%

Table 5.21 shows the error of the different approximations for all parts that ship to at most

3 local warehouses and for all parts that ship to all 8 local warehouses. Table A.6 in the

Appendix shows the absolute error with similar ﬁndings. There is hardly a difference in

the accuracy of the NB approximation between parts with at most 3 or exactly 8 local

warehouses. We also checked if the share of cases where the NB approximation is the

best or second best approximation changes (not presented here), but without ﬁnding any

signiﬁcant results.

Therefore, we cannot conﬁrm hypothesis H5. The only approximation that seems to suffer

from fewer warehouses in the real-world data is AXS. We suspect that this is caused by

the fact that this approximation considers only the mean across all warehouses and that

variability is higher for fewer warehouses.

Table 5.21: Mean of simulation and error of approximations for the standard deviation for results

with different number of local warehouses

#Local warehouses 8 ≤3

Simulation 5.59 8.61

KKSL 2.35 2.98

NB 21.23 21.20

AXS 6.34 10.17

BF -4.13 -0.90

Hypothesis H6: The BF approximation performs well if local order quantities and central lead time are small. Unfortunately, we cannot evaluate the two conditions of hypothesis H6 at the same time, as all test cases with a relatively low local order quantity also

have a relatively low central lead time. As the minimal mean central lead time in our data set is about 45 days, it is questionable whether we see effects at all. Nonetheless, we can evaluate the

two conditions separately and compare mean central lead times of about 45 days to about

100 days.


Table 5.22 shows the relative accuracy of the BF approximation for results with the two

different lead times. The relative accuracy of the BF approximation is indeed better for the

shorter central lead time.

Table 5.22: Relative accuracy of the BF approximation for the mean and standard deviation (sd),

comparison for high and low mean of central lead time compared to other approxima-

tions

Measurement            Central lead time ≈ 45 days       Central lead time ≈ 100 days
                       Mean     sd      Combined         Mean     sd      Combined

Best 23.78% 29.49% 12.99% 11.11% 35.42% 9.38%

2nd or better 40.51% 62.41% 30.63% 30.56% 57.99% 22.57%

Worst 41.75% 9.63% 7.65% 47.57% 9.38% 7.99%

To compare results for low and high local order quantities, we proceed as for hypothesis H3.

In Table 5.15, we have already seen an improvement in the approximation of the standard

deviation of BF for smaller local order quantities. Repeating the same analysis for the

mean, we ﬁnd that the quality of the BF approximation does not change. In fact, it stays

remarkably constant. However, our limit of Qi /μi ≤ 25 is already quite high compared to

the ratios in the artiﬁcial data used in the previous section. Considering Figure 5.8 again,

we see that the accuracy already deteriorates if the ratio is above 10, but that it is quite good for ratios between 0 and 10.

Thus, we have at least a good indication that BF proﬁts from small order quantities and

from smaller central lead times.

Hypothesis H7: The AXS approximation performs well if the network is homogeneous, the central fill rate is low, or the variance of demand is high. Hypothesis H7 consists of three conditions that we analyze separately. The statement concerning homogeneous networks is closely related to hypothesis H2. We therefore repeat the analysis that we

did for Table 5.14 in Table 5.23 for the AXS approximation. We see that the accuracy of

AXS relative to the other approximations does not improve for the real-world data. In fact,

and in contrast to the hypothesis, the relative accuracy of AXS decreases for more homogeneous networks. This indicates that the other approximations, especially KKSL, benefit

even more from similar warehouses.

A comparison of the different approximations for a low central ﬁll rate is presented in

Tables 5.16 and A.5 in the Appendix. These results show that AXS does not perform

signiﬁcantly better than other approximations if central ﬁll rates are low.

Table 5.24 shows the average errors of the approximations from the simulated values for

different ratios of variance to mean of demand. Table A.7 in the appendix shows the absolute

errors. It seems that AXS performs best if the ratio σ 2 /μ is between 1 and 5 and its accuracy

decreases for higher variance.

In total, we did not ﬁnd evidence to conﬁrm hypothesis H7.


Table 5.23: Relative accuracy of AXS approximation compared to other approximations for test

cases with small and large differences between local warehouses, with difference

deﬁned as in eq. (5.5)

                       Mean                         Standard deviation           Combined
for all i = 1, ..., n  Best   2nd or better  Worst  Best   2nd or better  Worst  Best   2nd or better  Worst

δ Qi

≤ 20% 21.19% 53.63% 18.08% 13.32% 28.06% 31.50% 6.69% 17.42% 7.06%

> 20% 21.28% 48.72% 24.33% 18.71% 43.28% 8.92% 7.05% 26.02% 4.44%

δ μi

≤ 20% 18.21% 44.44% 21.30% 11.42% 29.01% 19.44% 3.09% 12.35% 5.25%

> 20% 21.33% 49.49% 23.53% 18.12% 41.49% 11.81% 7.08% 25.12% 4.78%

δ σi2

≤ 100% 21.16% 50.98% 22.21% 17.96% 41.45% 10.34% 6.74% 25.06% 3.15%

> 100% 21.58% 44.97% 27.01% 18.06% 40.61% 16.48% 7.73% 24.31% 9.34%

δ Qi ≤ 40%,

δ μi ≤ 40%, 20.47% 49.74% 20.87% 10.76% 28.48% 23.36% 3.02% 12.07% 3.94%

δ σi2 ≤ 100%

Table 5.24: Average simulated values and errors of the approximations for the mean and standard

deviation (sd), for test cases with different values of σ 2 /μ

σ 2 /μ < 1 σ 2 /μ < 5 σ 2 /μ ≥ 1 σ 2 /μ ≥ 5

Mean sd Mean sd Mean sd Mean sd

Simulation 7.58 5.81 8.78 6.18 9.20 6.35 10.47 6.93

KKSL -0.44 3.65 -1.73 3.08 -2.23 2.67 -3.77 1.06

NB 2.16 24.52 0.78 23.01 0.20 21.86 -1.67 17.38

AXS 1.37 8.77 0.05 8.23 -0.49 7.77 -2.25 6.00

BF -0.67 3.87 -2.45 3.20 -3.32 2.71 -6.34 0.88

Summary of the analysis based on real-world data. For our real-world data, the results concerning the accuracy of the wait time approximations compared to the simulation are sobering. We often see large errors for all considered approximations. We have (at least some) supporting evidence for hypotheses H1, H3, H4, and H6, but we can neither clearly

conﬁrm nor clearly reject the other hypotheses H2, H5, and H7.

Our experiments show that there is no generally “best” approximation; none of the wait time approximations has a better accuracy than the others in all situations. In fact, it depends

heavily on the characteristics of demand and the network which approximation performs

best. In our experiments, we also observed rather large errors of the wait time approxima-

tions in some speciﬁc situations. This clearly shows room for improved models, at least for

these situations, and the need for further research in this area.


Nevertheless, we were able to derive a few simple guidelines that describe which approximation is likely to perform accurately in which situation: The BF approximation should only

be used if the local order quantities and central lead times are low. The AXS approximation

is suitable for homogeneous networks, i.e., networks in which local warehouses are very

similar with regard to demand structures and order quantities. The KKSL and NB approximations are generally good choices in all other circumstances. For the NB approximation, however, the

ratio Qi /μi is critical for the quality of the approximation of the standard deviation of wait

time. NB should be used only if this ratio is smaller than 25.

5.3 2-echelon results

In this section, we apply and test the algorithms for 2-echelon distribution networks developed in Section 4.1. This section is part of joint work with Andreas Bley

[GBS18]. We analyze the structure of the optimal solution with artiﬁcial as well as real-

world data given the different wait time approximations. We test the performance of our

original algorithm and then investigate how much faster the heuristics developed in Sec-

tion 4.1.6 are and how much the solutions differ from the optimal values.

As a starting point for our analysis, we consider a network consisting of one central and

eight local warehouses. The characteristics of the random demands are shown in Table 5.25.

Here, Di is a random variable describing the size of an individual customer demand at

the respective local warehouse, while μ and σ refer to the demand per time unit. The

variable Di is relevant only for the KKSL approximation, where the size of the individual

customer orders is taken into account. We assume that the central lead time follows a gamma

distribution with the given means and variances.

Table 5.25: Parameters of the example network

Warehouse  E[Di ]  Var(Di )  Qi   μi  σi2  Fill rate target  E[Li ]  Var(Li )  Price

0 - - 500 - - - 60 30 0.5

1 2 4 50 1 2 0.9 5 3 1

2 3 6 50 1 2 0.9 5 3 1

3 4 8 100 1 2 0.9 5 3 1

4 5 10 100 1 2 0.9 5 3 1

5 6 12 150 1 2 0.9 5 3 1

6 7 14 150 2 4 0.9 5 3 1

7 8 16 200 2 4 0.9 5 3 1

8 9 18 200 2 4 0.9 5 3 1
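Sampling such a gamma-distributed central lead time requires converting the given mean and variance into shape and scale parameters (mean = kθ, variance = kθ²). A small sketch with illustrative numbers, not tied to the thesis's implementation:

```python
import random

def gamma_params(mean, variance):
    """Shape k and scale theta of a gamma distribution with the given
    mean and variance (mean = k * theta, variance = k * theta**2)."""
    scale = variance / mean              # theta
    shape = mean * mean / variance       # k
    return shape, scale

# Illustrative central lead time: mean 60 time units, variance 30.
shape, scale = gamma_params(60.0, 30.0)

# One random lead time realization for a simulation run.
random.seed(1)
lead_time = random.gammavariate(shape, scale)
```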


In the following, we will vary these problem parameters and analyze the effect this has on

the optimal solutions obtained for the four different wait time models. On one hand, we are

interested in seeing how changes in these parameters affect the values of the central reorder

point, the central ﬁll rate, and the total stocking cost for each wait time model. On the other

hand, we want to analyze the inﬂuence of the chosen wait time model on the results. Due to

the requirements that order quantities must be at least one and that the central reorder point

is at least zero, we implicitly have a lower bound on the central ﬁll rate.

Figures A.1a to A.6a show the results when varying the parameters from Table 5.25, while

Figures A.7a and A.7b and Figures A.8a and A.8b illustrate the changes observed in the

results when varying the local ﬁll rate targets and the number of local warehouses, respect-

ively.

Main results

Axsäter already points out that the optimal central fill rate is typically much lower than most practitioners would expect, with only a few exceptions [Axs15]. Our first observation is that

his statement holds true even for the newer, more sophisticated wait time approximations.

While the central ﬁll rate is well above 90% in many practical applications, our results

suggest that the optimal ﬁll rate is much lower for many scenarios, often in the range of 70%.

Figure 5.9 shows a boxplot of the central ﬁll rates for the different approximations. More

precisely, our results based on the BF approximation suggest central ﬁll rates close to 0 and

those based on the NB approximation suggest slightly higher central ﬁll rates than those

based on the KKSL and the AXS approximations. All models, however, lead to central ﬁll

rates well below the industrial standards in these scenarios. These results emphasize that it

is very important to study the wait time in a distribution network and to choose a suitable

approximation. As the central ﬁll rate in an inventory-optimized network is relatively low,

the probabilities for stock-outs and orders that need to wait are signiﬁcant.

Figure 5.9: Boxplot of central ﬁll rates for different wait time approximations


A striking second observation for all graphs in this section is that the solutions obtained

using the BF approximation are considerably different from those obtained with the other

wait time models. In general, the BF approximation leads to very low central ﬁll rates

and central reorder points in the optimal solution. Moreover, this wait time approximation

leads to very little overall stock in the network, so the objective values are considerably

lower than those obtained with the other wait time approximations. This either implies that

solutions obtained for the BF approximation stock too little and will subsequently violate

the ﬁll rate targets, or that the solutions for the other wait time approximations overstock.

When the central reorder point is increased, the wait time decreases significantly faster in the BF

approximation than in all other models. In Section 5.3.2, where we report on the results of

our simulations with real-world data, we see that the ﬁll rate constraints are heavily violated

for the optimal solutions based on the BF approximation. Thus, we conclude that the BF

wait time approximation largely underestimates the real wait time and, consequently, leads

to too little stock in the network (at least) in our test cases.

For most instances, the overall stock in the network obtained for the NB approximation is

comparable to that obtained with the AXS or the KKSL approximation and all wait time

models render objective values in the same range as Figures A.1c to A.8c in the Appendix

show.

Finally, the AXS and the KKSL approximations lead to overall very similar results regard-

ing the central ﬁll rate and the central reorder points. However, the local reorder points

obtained with these two wait time approximations differ signiﬁcantly. The AXS approxim-

ation leads to the same wait time for all local warehouses and this wait time is lower than the

one obtained using the KKSL approximation. Consequently, the solutions obtained using

the AXS approximation have lower objective values than those for the KKSL approxima-

tion, except for very low ﬁll rate targets (see Figure A.7c).

Next, we discuss the inﬂuence of the different input parameters on the results. Note that our

algorithm supplies an optimal solution, but the optimal solution is not necessarily unique. In

the graphs (shown in Appendix A.4), we sometimes observe little irregularities in otherwise

consistent patterns. We believe that these are mainly due to the fact that the algorithm

returned optimal solutions from different areas of the solution space in these cases.

In Figure A.1a, where we vary the expected value of the local demands, there is a general

downward trend of the ﬁll rate when the expected demand increases, except for the KKSL

approximation. Increasing the variance of local demand has very little effect on the results,

except for an initial increase in the ﬁll rate for the KKSL approximation. The central reorder

point in Figures A.1b and A.2b increases with increasing local expected demand, while it

stays essentially constant when increasing the variance of the local demands.

The latter phenomenon can be explained by an implicit “buffering effect”: As the local

order quantities are large compared to the lead time demand, the order quantities already

buffer most of the demand variance. Larger variances increase the local reorder points


only slightly, so also the objective value as shown in Figure A.2c increases very little. The

objective value and the central reorder point in Figure A.1c both depend almost linearly on

the expected local demands.

Figure A.3 shows the effect of varying the central order quantity. As expected, increasing the

central order quantity causes the central ﬁll rate to increase too. Due to the non-negativity

constraint for the central reorder point, the central ﬁll rate cannot be lower than a certain

threshold value. For high central order quantities, the central ﬁll rate is high by default.

The results for varying local order quantities, which are presented in Figure A.4, show

an interesting mechanism. For all wait time approximations except AXS, the local order

quantity is an input parameter for the wait time calculation. Nevertheless, the central ﬁll

rate is only slightly decreasing for very large local order quantities. Both the central ﬁll

rate as well as the reorder point essentially stay in the same range. The objective value in

Figure A.4c is decreasing until the local order quantity reaches approximately 2Qi , which is caused by the increased fill rate resulting from higher order quantities. Increasing the local order quantity

further beyond this value, the objective value only reduces marginally. For the NB wait

time approximation, it even starts to increase, as additional central stock is added.

Increasing the stocking price at the central warehouse and, thereby, changing the ratio of

the central versus the local stocking price strongly affects the central ﬁll rate and reorder

point of the optimal solutions in Figures A.5a and A.5b. Already at x = 2, where a part has

the same stocking price at the local and at the central level, the central ﬁll rate and reorder

points decrease drastically. Only with the NB approximation do we see a more moderate and

more steady decrease in central ﬁll rate and central reorder point. For all other wait time

approximations, almost no stock should be kept at the central warehouse if there is no cost

benefit compared to stocking at the local warehouses. Recall that we consider demand

with a relatively high variability (Table 5.25) as is common for the practical applications we

are concerned with. Previous research suggests that the beneﬁts of risk pooling by shifting

stock from the local to central warehouses decrease as demand variability increases, see for

example the work by Berman et al. and references therein [BKM11]. Our results clearly

conﬁrm these ﬁndings.

Increasing the central lead time leads to increasing central reorder points and slightly de-

creasing central ﬁll rates for the optimal solutions based on the NB and the AXS wait time

approximations, as shown in Figures A.6a and A.6b. The risk of a longer central lead time

is offset by a higher central reorder point.

The results for varying local ﬁll rate targets in Figures A.7a to A.7c show a consistent

picture. We see an increase in the central ﬁll rate, in the reorder point, and in the objective

value when increasing the local ﬁll rate targets. Increasing the local ﬁll rate targets naturally

also leads to increasing local reorder points and costs.

Finally, we analyze the inﬂuence of the number of local warehouses, considering networks

with several identical copies of “warehouse 1” and varying the number of these warehouses.

Here, we see a number of different trends. The central ﬁll rate slightly decreases when the

number of warehouses increases for all wait time approximations except BF. Our results


show that, for all approximations except BF, it is optimal to increase the central reorder

points with increasing network size in order to keep the central ﬁll rate and the wait time

stable, compare Figure A.8b. With the BF approximation, on the other hand, the central

ﬁll rate is strongly decreasing when the number of warehouses increases. Surprisingly, here

also the local reorder points decrease slightly, while the central reorder point increases only

very little.

Considering different parts in a real-world network with one central and eight local ware-

houses, we evaluated the practicability of our optimization algorithm and the quality of the

four wait time approximations described above. In the ﬁrst step, we determine the optimal

reorder points using our algorithm for each of the four different wait time approximations,

fitting each approximation as well as possible to historical demand data. Then we compare

the optimal solutions for the different approximations among each other and, in addition,

with the approach most commonly used in practice, which is to ﬁrst prescribe a central ﬁll

rate target and then, in a second step, compute all reorder points for this ﬁxed central ﬁll

rate target.

In the second step, the optimized networks were simulated with the real historical demand data. The goal of this simulation was twofold: First, we wanted to examine if the fill

rate constraints assumed in the optimization with the respective wait time approximations

were indeed satisﬁed also in the simulation with the real demands. Secondly, we wanted to

analyze and compare the performance of the different wait time approximations.

For our study, we selected 440 parts that form a representative sample of the total stocking

assortment in our industrial partner’s real network. For each of these parts, we were given ﬁll

rate targets, stocking prices, order quantities, and historical demand data for about two years.

For the majority of the parts, the variance of demand is high compared to the mean demand.

Stocking prices were the same at the central and local level. In the simulations, the initial stock on hand was set to Ri + 1, where Ri is the optimal local reorder point at warehouse

i. The ﬁrst 100 days were used as a warm-up period for the simulation and the results

from those days were discarded. Lead times were generated from gamma distributions with

given means and variances. For all scenarios considered here, the same random seed was

chosen.
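The gamma lead times are fully determined by their given means and variances. A minimal sketch of this parametrization (method of moments; the numbers below are hypothetical, not taken from the study):

```python
import random

def gamma_lead_time(mean, variance, rng=random):
    """Draw one lead time from a gamma distribution matched to the
    given mean and variance via the method of moments."""
    shape = mean ** 2 / variance   # k = mu^2 / sigma^2
    scale = variance / mean        # theta = sigma^2 / mu
    return rng.gammavariate(shape, scale)

# Hypothetical values: mean 10 days, variance 25; fixed seed as in the experiments
rng = random.Random(42)
samples = [gamma_lead_time(10.0, 25.0, rng) for _ in range(100_000)]
est_mean = sum(samples) / len(samples)
est_var = sum((x - est_mean) ** 2 for x in samples) / len(samples)
```

With shape k = μ²/σ² and scale θ = σ²/μ, the drawn lead times reproduce the prescribed mean and variance (and are always positive, as a lead time must be).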

We used forecast data from the productive inventory planning system as input data for the

wait time approximations and our optimization algorithm. We considered a time period of

one year and tried to replicate a real-world scenario. In order to mimic the actual productive

system, the reorder points have been re-optimized periodically every three months based on

updated forecast data, order quantities, ﬁll rate targets, and prices.

As one could argue that the forecasting error and the volatility introduced into the system by

the re-optimization may cause distortions, we also considered a more stable “retrospective”

setting. Here, we derive the necessary demand data directly from the historical demand data

and use a longer simulation period of two years. The results obtained for this experiment are summarized in Appendix A and confirm the findings we report in the following.

Figure 5.10: Deviations of simulation results from fill rate targets, real-world scenario

Figure 5.10 shows the deviation of the simulated ﬁll rates of local warehouses from the

overall ﬁll rate target. The overall ﬁll rate target was computed as the weighted average of

the individual ﬁll rate targets of all parts, weighted with the forecast value. All ﬁll rates

observed in the simulation are below ﬁll rate targets. The large deviations of the BF ap-

proximation are mainly due to the underestimation of the wait time. We believe that the

deviations of the other approximations are caused by two factors: First, demand in reality

is not stationary, and we recomputed the optimal reorder points only every three months.

Second, we considered only a small subset of the total assortment of parts.
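The forecast-weighted overall fill rate target described above can be computed directly; a small sketch with hypothetical targets and forecast values:

```python
def overall_fill_rate_target(targets, forecasts):
    """Weighted average of per-part fill rate targets,
    weighted with the parts' forecast values."""
    total = sum(forecasts)
    return sum(t * f for t, f in zip(targets, forecasts)) / total

# Hypothetical example: three parts
targets = [0.95, 0.90, 0.99]
forecasts = [100.0, 50.0, 10.0]
target = overall_fill_rate_target(targets, forecasts)  # ≈ 0.9369
```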

The reorder points based on the AXS approximation achieve the highest ﬁll rate, but reorder

points and average inventory values are also highest, as can be seen in Tables 5.26 and 5.27.

The average inventory value during the simulated period is shown in Table 5.26, where the

total inventory value of the KKSL result is indexed at 100 and all other values are set in

relation to this. Again, we see that it is optimal to have more stock at the central level

when using the NB or the AXS wait time approximations, while the BF approximation

understocks dramatically. Similar results are seen for the value of the reorder points in

Table 5.27.

Overall, a clear recommendation as to which wait time approximation is most suitable in this environment cannot be given; this requires further analysis and simulations.

Table 5.26: Average inventory value during the simulated period (KKSL total indexed at 100)

         KKSL    NB      BF     AXS
Local    86.22   76.47   62.90  77.83
Central  13.78   25.71   6.57   27.45
Total    100.00  102.17  69.47  104.78

Table 5.27: Value of the reorder points (indexed)

KKSL    NB      BF     AXS
107.78  111.75  73.80  113.02

A common approach in industry is to prescribe a central ﬁll rate and then calculate central

and local reorder points to satisfy the prescribed central ﬁll rate and the local ﬁll rate targets.

In order to gauge the beneﬁt of our optimization algorithm, we also prescribed different

central ﬁll rates, calculated the corresponding reorder points, and then simulated the result-

ing network with historical demand data. Finally, we compared the results obtained by this

computation and simulation with those obtained by our optimization method. The central

reorder point is computed using a binary search on eq. (2.13) and the lead time demand is

calculated using eqs. (3.4) and (3.5). This implies that the central reorder point for a given

central ﬁll rate is equal across all wait time approximations for each part. Table 5.28 shows

the deviation of the average daily inventory value from the optimal value in the real-world

scenario. We ﬁnd that prescribing any central ﬁll rate leads to signiﬁcant overstocking com-

pared to the optimal solution. In particular, prescribing ﬁll rates above 90%, which is very

common in industry, leads to a substantially higher stock. By optimizing the network based on the individual parts' demand statistics, significant savings are possible in practice.
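The binary search for the central reorder point relies only on the fill rate being non-decreasing in the reorder point. A generic sketch; the `fill_rate` function below is a simple monotone stand-in for illustration, not the document's eq. (2.13), and the search bound `r_max` is an assumption:

```python
def smallest_reorder_point(fill_rate, target, r_max=10_000):
    """Binary search for the smallest reorder point R with
    fill_rate(R) >= target; assumes fill_rate is non-decreasing in R."""
    lo, hi = 0, r_max
    while lo < hi:
        mid = (lo + hi) // 2
        if fill_rate(mid) >= target:
            hi = mid          # mid satisfies the target, try smaller R
        else:
            lo = mid + 1      # mid falls short, R must be larger
    return lo

# Stand-in fill rate: a simple saturating curve, monotone in R
demo_fill_rate = lambda r: r / (r + 20.0)
r_star = smallest_reorder_point(demo_fill_rate, 0.95)  # -> 380
```

Because the fill rate is monotone, the result is the cheapest reorder point meeting the prescribed target; this is the building block used for every prescribed central fill rate in the comparison above.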

Table 5.28: Deviation of the average daily inventory value from the optimal solution given a pre-

scribed central ﬁll rate

central ﬁll

rate

20% 7.77% 71% 31.60% 30.59%

40% 7.65% 51% 34.50% 23.72%

70% 8.40% 25% 44.38% 14.41%

90% 17.03% 19% 66.32% 14.54%

95% 23.21% 23% 78.90% 20.99%


In this section we analyze the accuracy of the different wait time approximations given the

optimal solutions as an addition to Section 5.2. The numbers presented in the following

are a by-product of the computations described above. Note that the optimal solutions vary depending on the approximation used, so the results presented here are not directly comparable to the ones in Section 5.2. Nevertheless, the results presented below give some

additional indication of the quality of the different approximations and in principle conﬁrm

our previous ﬁndings.

Table 5.29 shows the mean and the standard deviation of the wait time we observed in the

simulation in comparison to those obtained by the corresponding wait time approximation

in the a-priori calculation, which has been used in the optimization. The numbers shown

are averages over all parts and warehouses. Table A.8 in the Appendix A.3.3 shows the

same results for the retrospective scenario. The mean is dramatically underestimated by

the BF approximation, while all other methods overestimate it. The standard deviation is

only slightly underestimated by the BF approximation. All other methods overestimate the

standard deviation, especially the NB approximation.

Note that the BF approximation assumes a constant central lead time, which is not the case

in our example. This, however, cannot fully explain the large deviation of the mean values.

We believe that the main reason for the large deviation is the assumption of a very smooth,

nearly steady demand at the central warehouse. Berling and Farvid state that this assumption

is valid only for small values of Qi. In the numerical analysis presented in their paper, increas-

ing Qi causes an increased deviation. In our data, many parts have a sporadic demand and

the ratio Qi /μi is much larger than what Berling and Farvid considered. Moreover, our

central lead times are much longer, so the system is not reset as frequently and the error of

the linear approach supposedly adds up. Also note that the aggregation over all warehouses and parts done in Table 5.29 implicitly favors the AXS approximation. Axsäter's method is based on Little's law and therefore only computes average and standard deviation combined for all warehouses.

Table 5.29: Approximation and simulation results for the mean and standard deviation of the wait time, real-world scenario

        Mean                       Standard deviation
Method  Simulation  Approximation  Simulation  Approximation
KKSL    11.83       8.06           8.28        9.27
NB      4.81        3.59           4.04        10.15
BF      14.77       6.48           9.82        8.79
AXS     7.89        4.09           5.93        8.27
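The Little's-law argument can be made concrete: the combined average wait equals the expected number of backorders at the central warehouse divided by the total demand rate of all local warehouses, so only one aggregate value results. A sketch with hypothetical numbers:

```python
def combined_wait_time(expected_backorders, local_demand_rates):
    """Average wait via Little's law: E[W] = E[B] / total demand rate.
    Only one aggregate value results; waits at individual local
    warehouses are not distinguished."""
    total_rate = sum(local_demand_rates)
    return expected_backorders / total_rate

# Hypothetical: 12 units on backorder on average, two local warehouses
# with demand rates 4 and 2 per day -> combined wait of 2.0 days
w = combined_wait_time(12.0, [4.0, 2.0])
```

This is why a per-warehouse comparison, as in Table 5.30, implicitly disadvantages the AXS approximation: it can only ever report the combined value.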

In order to compare how well the different wait time approximations perform in a hetero-

geneous setting, we have chosen 55 out of the 440 parts and two out of the eight warehouses

for further analysis. These parts and warehouses are chosen in such a way that demand sizes


and order quantities differ substantially between the two warehouses, which are in the fol-

lowing referred to as large and small warehouses. For all chosen parts, the order quantities

differ by more than a factor of 5 between the two warehouses.

Table 5.30 shows the means and the standard deviations of the wait times observed in the

simulations and computed by the corresponding wait time models for the two chosen ware-

houses individually and as a combined value. Analogous results for the retrospective scen-

ario are reported in Table A.9.

Table 5.30: Approximation and simulation results for the wait time for big differences in replenishment order quantity, real-world scenario

                   Mean                       Standard deviation
Method  Warehouse  Simulation  Approximation  Simulation  Approximation
KKSL    Large      6.55        5.73           6.94        7.59
        Small      3.29        4.53           4.50        9.23
        Combined   5.12        5.20           5.87        8.31
NB      Large      1.75        1.04           2.60        6.34
        Small      0.69        0.37           1.22        11.46
        Combined   1.28        0.74           1.99        8.59
BF      Large      12.21       3.53           11.94       9.05
        Small      8.44        1.56           7.41        6.30
        Combined   10.56       2.67           9.95        7.84
AXS     Large      3.80        2.15           5.29        6.25
        Small      2.21        2.10           2.77        6.07
        Combined   3.10        2.13           4.18        6.17

The AXS approximation underestimates the actual wait time of the warehouse with the

large demand and replenishment order quantities. Vice versa, the AXS approximation over-

estimates the value for the warehouse with the smaller demands. Note that the values are not identical for the large and the small warehouse, as they should be for the AXS approximation,

because not all parts have demand and are stocked in the small warehouse.

The KKSL wait time model performs better for the individual warehouses. The KKSL

approximation is rather accurate for the large warehouse, while it overestimates mean and

standard deviation for the small warehouse. For the small warehouse, the standard deviation

is overestimated by all models except the BF approximation. This indicates that the other methods perform better when order quantities are large. The order quantity for most parts

at the small warehouse is only 1.

To our knowledge, our algorithm is the ﬁrst one to efﬁciently compute optimal reorder

points based on a generic wait time approximation. No exact solution method for this prob-

lem existed before. For the KKSL approximation and the BF approximation, no optimization method at all has been published until now. For the AXS approximation and similar

METRIC-type wait time approximations, only heuristics (for different objective functions

and different problem assumptions) have been proposed so far.

The optimization and the complementary simulation software, which is used to evaluate and verify the solutions of the optimization in a stochastic simulation, have been implemented in Java using the Stochastic Simulation in Java library [LMV02].

An enumerative solution approach or an a-priori discretization of the ﬁll rate constraints is

computationally prohibitive for realistic problem sizes. The number of calls of the binary search that are needed to compute the reorder points for given fill rate targets quickly gets too large, and computation times get too long. For many sample problems we tested, our

algorithm was able to compute optimal reorder points within minutes while enumeration

would take hours.

We tested our algorithm on a laptop computer equipped with an Intel Core i5-4300 1.9 GHz

processor with 8 GB of RAM for a sample set of 445 instances based on real-world data.

The observed runtime was less than one second for almost 70% of the instances. For 95.68%

of the instances, runtime was less than 50 seconds. The runtime gets long if the expected

demand and the variance of the demand are very high. Only 19 instances needed more than 100 seconds, and only one instance needed more than 10,000 seconds (almost 14,000 seconds).

These runtimes refer to the NB approximation, the greatest delta approach, and an optimal-

ity gap of 0%. For the other wait time approximations, runtimes are comparable. Note that

the required optimality gap has a signiﬁcant impact on the runtime. If we allow a gap of 1%,

i.e., if we terminate our algorithm when the over- and underestimating ILPs deviate by less than 1%,

the runtimes reduce by 30%, while the objective values of the computed solutions increase

by only 0.15% over all instances.
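Such a relative-gap termination check between the under- and overestimating ILP bounds can be sketched as follows (the bound values are hypothetical):

```python
def within_gap(lower_bound, upper_bound, gap=0.01):
    """Terminate when the objectives of the under- and overestimating
    ILPs deviate by at most `gap`, relative to the lower bound."""
    return (upper_bound - lower_bound) / lower_bound <= gap

# Hypothetical bound values
stop = within_gap(1000.0, 1008.0)        # 0.8% gap -> terminate
keep_going = within_gap(1000.0, 1025.0)  # 2.5% gap -> continue refining
```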

We recommend adding new breakpoints with the naive approach if local warehouses face

similar demand and use similar order quantities. The more order quantities and demands

at local warehouses differ, the better the greatest delta approach performs. The ﬁll rate

violation approach was always slowest in our experiments, but using it could be beneﬁcial

if the ﬁll rate targets between the local warehouses differ a lot.

In this subsection we analyze the heuristics introduced in Section 4.1.6 and focus on two

main factors: Firstly, what is the difference in runtime compared to the original algorithm¹

and secondly, what is the optimality loss, i.e., in how many cases did we not end up in the

optimal solution and what is the associated deviation from optimality in these cases? In

Section 4.1.6 we introduced two different heuristics. One possibility is to add additional

constraints based on the step size (cp. Algorithm 24), while the second one is to modify the

slope of the original constraints with the help of the step size (cp. Algorithm 25). We will refer to the first one as STEP-ADD and the latter one as STEP-SLOPE.

¹ Experiments were run on updated hardware compared to Section 4.1.6. The processor now is an Intel Core i5-6300U 2.4 GHz.

If we compare the runtime and the objective values for the same set of instances as in Section 5.3.3 to Algorithm 21, which is the original algorithm that guarantees optimality, we get a mixed picture. Table 5.31 summarizes the results. For STEP-ADD the average runtime increases by almost 28% (second column), while for STEP-SLOPE it decreases by about a third. The average deviation in objective value is small for both heuristics, but it can be large for individual instances. Recall that the original optimization algorithm was already very fast for the large majority of the instances (less than a second for almost 70% of the instances). Here, we only get deviations within milliseconds. The overall average is heavily influenced by a few instances with a long runtime. In fact, the longer runtime for STEP-ADD is caused by only one instance where it ran significantly longer. If we exclude this instance from the analysis, runtime is about 15% faster compared to the original algorithm.

For STEP-SLOPE we are still optimal in 74.16% of the cases despite the huge performance gains in runtime. On average, the objective value is only 0.63% higher than the optimal value. Nevertheless, there are cases with a deviation of up to 12.5% from the optimal value. Interestingly enough, if we consider the sum of the objective values of all instances, we only get a deviation of 0.51% between the results of STEP-SLOPE and the optimum (value not shown in the table). As expected, the STEP-ADD heuristic is optimal more often and is in general closer to the optimal values, as the risk of a violation of the assumption of underestimation is lower.

Table 5.31: Comparison of runtime, objective values, and share of optimal results of heuristics and original optimization algorithm

Heuristic   Avg. runtime dev.  Avg. objective dev.  Max. objective dev.  Optimal
STEP-ADD    27.70%             0.23%                6.25%                92.13%
STEP-SLOPE  -33.22%            0.63%                12.5%                74.16%

The analysis above shows that there is room for potential runtime improvements with ob-

jective values that differ only little from the optimal values. However, for the majority of

the instances we can ﬁnd the optimal solution very fast with our original optimization al-

gorithm, guaranteeing optimality. For all instances that can be solved within milliseconds,

there is hardly a need for heuristics. Furthermore, the above analysis is heavily inﬂuenced

by a few instances with long runtime.

To verify the results, we do an additional comparison for instances where we expect our

algorithms to require a long runtime. We therefore have selected 89 additional instances

with high demand in the network. Because of the high demand, we expect the potential solution space to be large and these instances to be challenging for our algorithms.

Indeed, with our original optimization algorithm the fastest of those instances took 40

seconds to solve, while the slowest took more than 10 hours. On average, the algorithm needed about 35 minutes to solve to optimality. Table 5.32 summarizes the results for these difficult instances. The STEP-SLOPE heuristic is almost 60% faster than the original algorithm and reduces total runtime from about 52 to 21 hours. The STEP-ADD heuristic only decreases runtime by about 7%.

Comparing the objective values, STEP-ADD is optimal more often than STEP-SLOPE but, surprisingly, on average deviates more from the optimal value. STEP-SLOPE is only optimal in about 9% of the cases, but the average deviation from the optimal objective value is less than 1%. If we again consider the sum of the objective values of all 89 instances, the deviation of STEP-SLOPE from the sum of optimal objective values is also less than 1%. Overall, STEP-SLOPE offers vast runtime improvements while deviating only little from the optimal objective. The results for the STEP-ADD heuristic are not as good.

Table 5.32: Comparison of runtime, objective values, and share of optimal results of heuristics and original optimization algorithm for high demand instances with long runtime

Heuristic   Avg. runtime dev.  Avg. objective dev.  Max. objective dev.  Optimal
STEP-ADD    -6.87%             1.90%                14.26%               24.72%
STEP-SLOPE  -59.19%            0.91%                10.15%               8.99%

During the analysis of Tables 5.31 and 5.32, we have made an interesting observation that can potentially be used to decrease runtime even further. In cases where STEP-ADD performs worse than the original algorithm, the optimal central reorder point R0 often was 0. If the optimal central reorder point is 0, the original algorithm often only needs one or two iterations and we observe the following pattern: It solves the initial ILPs, obtains 0 as a solution and then, if necessary, refines the breakpoints at 0. After resolving (second iteration) it again obtains 0 as a solution and terminates with the optimal solution. It is easy to adjust the STEP-ADD heuristic for this special case: Before determining the step size and adding additional or modified constraints to the underestimating ILP, we solve the ILP once and check if the solution of R0 was 0. If the breakpoints at 0 are not based on a prematurely finished binary search, R0 = 0 is the optimal solution. If they are based on a prematurely finished binary search, we refine and resolve. If R0 = 0 again is the solution of the underestimating ILP, we terminate with this optimal solution. If not, we determine the step sizes and run the heuristics as originally intended. The cost of this procedure can be approximated by the cost of solving up to two additional ILPs and a binary search. Solving one ILP is usually done within seconds for most instances. However, even if optimality is not proven for R0 = 0 after two iterations, the results obtained can be reused in the following iterations to supply good starting solutions and strengthen the models. The potential runtime savings are high. The problem is by design not relevant for the STEP-SLOPE heuristic.
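The special-case handling described above can be written down compactly; all callables below are placeholders for the document's components (solving the underestimating ILP, testing whether the breakpoints at 0 stem from a completed binary search, refining them, and running the heuristic as originally intended):

```python
def solve_with_zero_check(solve_ilp, breakpoints_at_zero_exact,
                          refine_at_zero, run_heuristic):
    """Check for the R0 = 0 special case before running STEP-ADD.
    All four arguments are placeholder callables."""
    r0 = solve_ilp()
    if r0 == 0:
        if breakpoints_at_zero_exact():
            return 0                 # R0 = 0 is proven optimal, skip the heuristic
        refine_at_zero()             # refine breakpoints at 0 and re-solve once
        if solve_ilp() == 0:
            return 0
    return run_heuristic()           # otherwise run STEP-ADD as intended

# Usage with stand-in callables:
direct = solve_with_zero_check(lambda: 0, lambda: True, lambda: None, lambda: 99)
fallback = solve_with_zero_check(lambda: 5, lambda: True, lambda: None, lambda: 99)
```

In the first call R0 = 0 with exact breakpoints, so 0 is returned without the heuristic; in the second the check fails and the (stand-in) heuristic result is returned.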

If we exclude the two instances with optimal R0 = 0 from the analysis in Table 5.32, the runtime of the STEP-ADD heuristic is about 17% faster than the original optimization algorithm. This is an indication of how much runtime savings can be obtained if this special case is handled as described above; still, the STEP-ADD heuristic is much slower than the STEP-SLOPE heuristic, while its average deviation from the optimal value is larger.

In total, we recommend using the STEP-SLOPE heuristic if runtime is an issue with the original algorithm. The heuristic obtains results close to the optimal solution while being dramatically faster. Especially if reorder points for large networks with many instances have to be obtained, this may be the only feasible approach.

5.4 n-echelon results

Our network with three echelons, including the respective transportation times, is shown in

Figure 5.11. We have two warehouses on the second echelon and on the third echelon 4 and

8 local warehouses, respectively. Recall that the index of a warehouse is denoted as (i, j),

where i is the echelon and j an enumeration of the warehouses on the respective echelon.

We refer to the left sub-network in Figure 5.11 with 8 local warehouses as sub-network 1

and to the right sub-network with 4 local warehouses as sub-network 2. This differentiation

is solely for referencing purposes to simplify the analysis of the results. In the optimiza-

tion, no division into sub-networks was done but the network was optimized in its entirety.

The expected lead time from the supplier to the master warehouse does vary substantially

from part to part based on the required production process at the supplier and the distance

from supplier to master warehouse. We have one intermediate warehouse which can be sup-

plied in relatively short time and one intermediate warehouse which has long transportation

times. The transportation time to local warehouses varies between local warehouses. For

the 4 local warehouses in sub-network 2, the transportation time is about 2 days for 3 out

of the 4 warehouses and 11.5 days for the remaining warehouse. For sub-network 1, the transportation time varies between 8 and 23 days depending on the location of the warehouse.

We have calculated reorder points for 662 different parts in this network using the algorithm

described in Section 4.2. We stopped optimization when the optimality gap was less than

0.01 and therefore did not let the optimization algorithm necessarily run until optimality

was guaranteed. One immediate outcome of the optimization is that the runtime of our

optimization algorithm is a challenge for a large network with large demand like this. There

is a need for faster methods for large networks and if reorder points have to be calculated

for many parts. The 662 parts are a mere subset of more than 100,000 parts that are stored in the real-world network.

The given price parameter pi, j is constant within the two sub-networks for each part, but it varies between the master warehouse and the two sub-networks as well as between the two sub-networks. pi, j is on average significantly lower at the master warehouse. For most parts, pi, j in sub-network 1 is higher than pi, j in sub-network 2. pi, j and the ratio between the master warehouse and the sub-networks vary from part to part.

Figure 5.11: Real-world 3-echelon distribution network with expected transportation times (supplier to master warehouse (1, 1): 1–190 days; master warehouse to intermediate warehouses (2, 1) and (2, 2); intermediate warehouses to local warehouses (3, 1)–(3, 12), e.g., 8–23 days in sub-network 1)

The wait time approximation used for the n-level optimization is the KKSL approximation

described in Section 3.2.2.

We start by having a general look at the overall distribution of stock. Table 5.33 shows the

share of reorder points and the share of reorder points and order quantities on the respective

echelons and within the sub-networks. The share of reorder points at a warehouse is the sum

of all reorder points of all parts at that warehouse divided by the sum of all reorder points of

all parts in the network. It is evident that only very little stock is kept at the second echelon

in this setting, while the majority of the stock is at the master warehouse and in the local

warehouses. This is in line with results from MRP literature with regards to placing buffers

[LA93]. For the intermediate network in the ﬁrst sub-network, the share of reorder points

on the second echelon, i.e., in warehouse (2, 1), is only 0.79% while 41.29% are kept at a

local level. The share for the intermediate warehouse of the second sub-network is similar.

The reorder points at the master warehouse are 23.34% of the sum of total reorder points.

Recall that we have a significantly lower price at the master warehouse compared to the

intermediate warehouses, whereas the intermediate warehouses have the same price as the

local warehouses.
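The share of reorder points defined above is a straightforward ratio; a small sketch with hypothetical reorder points for two parts at three warehouses:

```python
def reorder_point_shares(reorder_points):
    """reorder_points: dict warehouse -> list of reorder points (one per part).
    Returns each warehouse's share of the network-wide sum of reorder points."""
    totals = {w: sum(points) for w, points in reorder_points.items()}
    grand_total = sum(totals.values())
    return {w: t / grand_total for w, t in totals.items()}

# Hypothetical data: two parts at a master, an intermediate, and a local warehouse
shares = reorder_point_shares({
    "(1,1)": [10, 20],
    "(2,1)": [0, 5],
    "(3,1)": [30, 35],
})
```

The shares sum to 1 by construction; the share of reorder points plus order quantities in Table 5.33 is computed analogously with Ri,j + Qi,j per part.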

Table 5.33 could indicate that the intermediate echelon may not be necessary in this case.

However, if we look in more detail at the results for individual parts, we notice large differences between parts and the need for a more nuanced analysis.

Table 5.33: Share of reorder points and of reorder points and order quantities per echelon and sub-network

Warehouse(s)       Reorder points  Reorder points and order quantities
(1, 1)             23.34%          36.10%
(2, 1)             0.79%           4.70%
∑ j=1..8 (3, j)    41.29%          34.15%
(2, 2)             0.89%           1.82%
∑ j=9..12 (3, j)   33.69%          23.23%

In Table 5.34, it is shown

that the solution for many parts was to select the lowest reorder point possible, which is

0 for the intermediate warehouses. The same general tendency can be seen for the mas-

ter warehouse. The last row in Table 5.34 shows the combined results, i.e., the number of parts where Ri, j > 0 for all (i, j) ∈ {(1, 1), (2, 1), (2, 2)}, or Ri, j = 0 for all of them, respectively.

Table 5.34: Number of parts with positive and zero reorder points at the non-local warehouses

Warehouse                    Ri, j > 0   Ri, j = 0
(1, 1)                       165         464
(2, 1)                       83          546
(2, 2)                       95          534
At all non-local warehouses  48          165

If we only look at the parts where all Ri, j > 0, for all (i, j) ∈ {(1, 1), (2, 1), (2, 2)}, we nat-

urally see a higher stock at the second echelon. But even within this subset we see large

variations of how stock is distributed in the network. Figure 5.12 shows a boxplot of the

results. For the first and second echelon, it shows the share of the reorder points and of the sum of reorder points and order quantities, respectively, that is kept on each echelon for all results of the non-zero reorder points subset.

Similarly to the overall results, we see that the share on the ﬁrst echelon is consistently

higher than on the second echelon. The (combined) share of the two reorder points on the

second echelon is below 10% for the majority of the parts, even for the subset where we

only consider results with reorder points larger than 0.

The results shown in Figure 5.12 are decoupled for the individual parts. It could be the case

that a high stock on the ﬁrst echelon leads to a low stock on the second echelon and vice

versa. A simple scatter plot of the share of stock on the two echelons is shown in Figure 5.13.

There, we see at least some indication of a linear relationship, but also a signiﬁcant number

of parts where the share of stock on the second echelon is very low.

The leading question in the following therefore is whether we can detect a pattern for which parts

it is beneﬁcial to keep stock at an intermediate (and the master) warehouse and for which it

is not beneﬁcial.


Figure 5.12: Boxplot of the share of stock at different echelons for results with non-zero non-local

reorder points

An obvious measure for determining if more stock should be kept at the ﬁrst or second ech-

elon would have been the ratio of the pi, j at the echelons. Surprisingly enough, we see only

a coefﬁcient of correlation of −0.10 (−0.12, respectively) between p1,1 /p2,1 (p1,1 /p2,2 )

and R2,1 (R2,2 ). Therefore we cannot conclude that there is a linear relationship between the

share of stock at the second echelon and the ratio of prices.
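The Pearson coefficient used in this and the following correlation analyses can be computed from the raw samples; a self-contained sketch (the sample values are hypothetical):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear hypothetical data -> coefficient of 1.0
r_perfect = pearson([1, 2, 3, 4], [2, 4, 6, 8])
```

A value near 0, as observed for the price ratios above, only rules out a linear relationship; a nonlinear dependence could still exist, which is why scatter plots are inspected as well.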

Figure 5.13: Scatter plot of the share of reorder points on ﬁrst and second echelon for results with

non-zero non-local reorder points

Our initial intuition was that the lead time of a respective warehouse itself plays a significant role in determining how high the share of stock at a certain echelon should be. We expected warehouse (2, 1) to have a higher share of reorder points than warehouse (2, 2) because of its significantly longer lead time. Our analysis in Table 5.33 shows little


difference in the share of the reorder points on the respective warehouse but some difference

if we additionally consider the order quantities. This difference gets smaller if we only

consider parts where the respective Ri, j > 0. In Table 5.34, it is shown that the number

of parts with Ri, j > 0 is even larger for warehouse (2, 2) but again the difference is small.

Therefore, we do not have sufﬁcient rationale to conﬁrm a signiﬁcant role of the difference

in lead time between the two warehouses.

A number of other factors possibly inﬂuence how high the share of stock at the second

echelon should be: the lead time of the master warehouse and the structure and size of (local) customer demand.

We start by looking at the inﬂuence of the transportation time of the master warehouse on

the share of reorder points kept at the second echelon. In the data, the standard deviation of

transportation time is some constant fraction of the expected transportation time. Therefore,

all results in the following are presented for the expected transportation time of the master

warehouse but, due to the given parametrization, apply equally to the standard deviation of

transportation time. If we look at the correlation between expected transportation time and

share of reorder points at the warehouses on the second echelon, we see signiﬁcantly higher

values compared to the correlation with the transportation time of the warehouse itself, as

shown in Table 5.35. Nonetheless, correlation coefﬁcients of 0.37 and 0.32 are too low to

indicate a clear linear relationship.

Table 5.35: Pearson correlation coefficients of different values with the share of reorder points kept at respective warehouses

Warehouse (l, m)   E[T_{1,1}]   ∑_{j∈C_{l,m}} μ_{3,j}   ∑_{j∈C_{l,m}} σ²_{3,j}   cv_{l,m}   p_{l,m}/p_{1,1}
(2, 1)             0.37         0.10                    0.04                     -0.03      -0.10
(2, 2)             0.32         -0.04                   -0.03                    0.00       -0.11

Next, we look at the mean and variance of local demand. In Table 5.35, ∑_{j∈C_{l,m}} μ_{3,j} is the sum of the expected demand per time unit over all child nodes of warehouse (l, m), i.e., all succeeding warehouses that order from warehouse (l, m). The next column shows the same summation for the variance of demand per time unit. For both warehouses on the second echelon, the correlation coefficients are close to 0, indicating no linear relationship. We repeated the analysis for the mean and variance of demand during lead time with very similar outcomes. The correlation coefficient with the coefficient of variation cv_{l,m} = (∑_{j∈C_{l,m}} σ_{3,j}) / (∑_{j∈C_{l,m}} μ_{3,j}) is also close to 0, and scatter plots indicate no pattern either. Therefore, we have no indication of a relationship with the sum of expected local demand, the sum of the variance of local demand, or the variability of local demand. In addition to the results shown here, we also analyzed whether the distribution of stock changes

for cases where demand is highly concentrated in one local warehouse only in each sub-

network and all other local warehouses face only little demand. For this, we also could not

ﬁnd any pattern.
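The screening described here amounts to computing Pearson correlation coefficients between a candidate input factor and the per-part share of reorder points. A stdlib-only sketch (the sample data below is made up for illustration) could look like this:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-part data: expected transportation time and the share
# of reorder points kept at a second-echelon warehouse.
exp_transport_time = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0]
share_reorder_pts  = [0.05, 0.10, 0.00, 0.12, 0.02, 0.20]

r = pearson(exp_transport_time, share_reorder_pts)
```

A coefficient with |r| in the range of 0.3 to 0.4, as in Table 5.35, would then not support a clear linear relationship.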

As a last measure, we had a closer look at the input data of the parts with the highest share of

reorder points on the second echelon and tried to detect a common pattern but were unable

to do so.

Overall, we can conclude that it is optimal to keep the majority of stock at the local warehouses, which clearly confirms our 2-level results (cp. Section 5.3). Additionally, for the vast majority of the parts the results indicate that it is within 1% of optimality to keep little stock at the second echelon. Only for a limited number of parts were the reorder points on the second echelon larger than 0, and we were unable to detect a pattern for when this is the case. We, however, only analyzed whether there is a linear relationship between an input factor and the distribution of stock in the network. Further research is needed, but computations for a realistically sized network like the one considered here are demanding.

One could argue that, because we allow an optimality gap of 1%, the difference in the value of the objective function is relatively small for a number of different possible solutions. We would then have picked the non-zero reorder points by chance in those cases, simply because the result was within our optimality gap. However, as our results are consistent over a large number of parts, this is unlikely. Nevertheless, we have solved some of the relevant parts to optimality. The results did not change substantially and we could not observe a shift away from non-zero reorder points at the second echelon.

6 Real-world application

In this section, we present an approach for how the optimization algorithms of Chapter 4 and the analysis of the numerical experiments in Chapter 5 can be used to implement a multi-echelon optimization in a real-world application. Much has already been written on the implementation of inventory planning methods in general, for example in the books of Silver et al. [SPT16] and Axsäter [Axs15]. Here, we focus solely on the optimization in multi-echelon networks with many warehouses, where an optimization has to be done for many instances.

The industrial partner who supplied the real-world data used in Chapter 5 has a supply chain network of about 100 warehouses and several hundred thousand spare parts. The size alone is a challenge if the network is to be optimized in its entirety. While we have developed an optimization model for the general n-level case, more work in this area is needed: additional insights into the structure and further simulations to determine the impact of an optimization in an n-level network would be of great value, and the runtime needs to be improved (cp. Sections 4.2 and 5.4).

Therefore, our recommendation for practical applications, and the focus of this section, is on 2-level networks only. A more elaborate network could be decoupled and optimized, for example, by regions with only 2-level networks. In many organizations, different regions are usually managed separately, so this would at least fit the processes within companies. The stocking levels of the master warehouses above those regions then have to be determined separately. This could, for example, be done by a simulation of different options in which the consequences for the stock in the entire network are evaluated for a representative sample of parts.

The following part of this section is based on the assumption that we are tasked with optimizing reorder points in a 2-level network of about 10 warehouses and for 50,000 to 100,000 different spare parts. The optimization then has to be done in a nightly batch or on the weekend, which sets a limit on the available runtime.

We first give guidelines on how suitable wait time approximations can be selected and then on which algorithm should be used for which subset of the assortment. As is evident from Section 5.2, the selection of a suitable approximation of the wait time has decisive consequences for the optimal reorder points and for the fill rate.

The first check should be whether lead times and order quantities are really low, as only in this case is the BF approximation suitable. If the demand structure and order quantities are similar for each part in all local warehouses, the AXS approximation is worth considering.

C. Grob, Inventory Management in Multi-Echelon Networks,

AutoUni – Schriftenreihe 128, https://doi.org/10.1007/978-3-658-23375-4_6


If neither condition holds, these two approximations can be excluded from further analysis. Our analysis indicates that a promising approach then is to use the NB approximation if Q_i/μ_i < 25 for all local warehouses i and the KKSL approximation in all other cases. Nevertheless, we recommend performing at least some simulations as described in Section 5.1 to verify the findings reported here and to analyze whether they apply to the network in question. A simulation with historic demand data is preferable if this data is available. Otherwise, randomly generated data can be used.
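The selection rules above can be turned into a small decision routine. The sketch below is one possible reading of these guidelines: only the Q_i/μ_i < 25 rule is taken from the text, while the thresholds for "really low" lead times and order quantities and for "similar" order quantities and demand (low_lead, low_q, sim_tol) are illustrative assumptions.

```python
def select_approximation(local_whs, low_lead=2.0, low_q=5, sim_tol=0.2):
    """Select a wait time approximation for one part. local_whs is a list
    of dicts per local warehouse with keys 'L' (lead time), 'Q' (order
    quantity) and 'mu' (mean demand per time unit)."""
    Ls = [w['L'] for w in local_whs]
    Qs = [w['Q'] for w in local_whs]
    mus = [w['mu'] for w in local_whs]
    # BF is suitable only if lead times and order quantities are really low.
    if max(Ls) <= low_lead and max(Qs) <= low_q:
        return 'BF'
    # AXS is worth considering if order quantities and demand are similar
    # across all local warehouses.
    if (max(Qs) - min(Qs) <= sim_tol * max(Qs)
            and max(mus) - min(mus) <= sim_tol * max(mus)):
        return 'AXS'
    # Otherwise: NB if Q_i / mu_i < 25 for all local warehouses, else KKSL.
    if all(q / mu < 25 for q, mu in zip(Qs, mus)):
        return 'NB'
    return 'KKSL'
```

As recommended in the text, such a rule should still be validated by simulation for the network in question.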

A frequently used classification method is the multi-criteria ABC analysis (cp. for example [FW87]). It ranks the importance of parts in inventory using certain criteria. For example, it ranks the parts which are responsible for 80% of the sales as most important A-parts, the parts which are responsible for the next 15% of the sales as B-parts, and the parts which are responsible for the remaining 5% of the sales as C-parts. Typically, only 20% of the parts are responsible for 80% of the sales, while the majority of the parts are responsible for only a small share of the sales and are therefore C-parts [SPT16].
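As an illustration, a single-criterion ABC split by cumulative share of sales can be sketched as follows; the break points (80%/95%) mirror the example above and are a design choice, not a fixed rule:

```python
def abc_classes(sales, breaks=(0.80, 0.95)):
    """Classify parts into A/B/C by cumulative share of sales: A up to
    the first break, B up to the second, C for the rest."""
    total = sum(sales.values())
    ranked = sorted(sales, key=sales.get, reverse=True)
    classes, cum = {}, 0.0
    for part in ranked:
        cum += sales[part] / total
        if cum <= breaks[0]:
            classes[part] = 'A'
        elif cum <= breaks[1]:
            classes[part] = 'B'
        else:
            classes[part] = 'C'
    return classes
```

For the multi-criteria variant used below, the same split is applied once per criterion (e.g., price and demand) and the class labels are combined.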

In order to select a suitable algorithm for each part and keep runtime low, we suggest classifying the data along the two dimensions of demand and price. Then, for each of the 9 classes, either the original optimization algorithm, the STEP-SLOPE heuristic, or a simple enumeration is used. We do not consider the STEP-ADD heuristic, as the results of the STEP-SLOPE heuristic were superior in our numerical experiments (cp. Section 5.3.4).

We suggest using a simple enumeration for items with low demand. The demand for many items is usually very low, such that only very low reorder points are sensible for these items. It is then not worth setting up the optimization algorithm. Instead, the solutions for a central reorder point in [0, k], with k being a small number, can simply be evaluated and compared.
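Such an enumeration can be sketched generically; the cost function and the fill rate feasibility check are caller-supplied placeholders here, standing in for the evaluation routines of this work:

```python
def enumerate_central_reorder_point(k, cost, feasible):
    """Evaluate every central reorder point R0 in [0, k] and return the
    cheapest feasible one together with its cost. 'cost' and 'feasible'
    are caller-supplied functions of R0."""
    best = (None, float('inf'))
    for r0 in range(k + 1):
        if feasible(r0):
            c = cost(r0)
            if c < best[1]:
                best = (r0, c)
    return best
```

For low-demand items, k stays small, so the full scan is cheap compared to setting up the optimization algorithm.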

In Sections 5.3.3 and 5.3.4, we report on the runtime of the original optimization algorithm

and the heuristics and evaluate how much worse the objective values of the heuristics are.

The objective is to obtain a solution for the whole assortment which is as good as possible,

while keeping runtime low.

In the following, we illustrate the proposed approach with an example. We consider a real-world assortment of spare parts. This assortment has a very typical distribution, which we have seen quite often in different supply chain networks of our industrial partner. The assortment was clustered in an ABC analysis with the categories price (from A - expensive to C - cheap) and demand (from A - high to C - low). In Table 6.1, we show what percentage of parts lies within each class and how high the revenue share of the respective classes is. While, for example, only 0.6% of the parts are in the high demand and cheap price class (AC), 38.09% of the revenue was generated by this class. Only about 3% of the parts generate roughly 75% of the revenue, i.e., the parts in the classes AC, AA and BC.

  Share of            Demand A               Demand B               Demand C
  Price            No. Parts  Revenue     No. Parts  Revenue     No. Parts  Revenue
  A                1.31%      11.67%      2.25%      4.30%       3.20%      20.27%
  B                19.55%     7.47%       23.12%     1.43%       43.76%     0.49%
  C                0.60%      38.09%      1.09%      25.95%      5.12%      2.00%

Table 6.1: Percentage share of number of parts and revenue in the classes of the multi-criteria ABC analysis

Now, we select a suitable algorithm for each class to balance the two goals of optimality and runtime. Table 6.2 shows what such a system may look like for the example at hand.

            Demand A                 Demand B                 Demand C
  Price A   Original Optimization    STEP-SLOPE               Enumeration
  Price B   STEP-SLOPE               STEP-SLOPE               Enumeration
  Price C   Original Optimization    Original Optimization    Enumeration

Table 6.2: Multi-criteria ABC analysis and selection of suitable algorithms

For the small number of parts which are responsible for the majority of the revenue, i.e., the parts in AC, AA and BC, we use the original optimization algorithm to obtain optimal results. For the roughly 45% of the parts in the classes AB, BA and BB we use the STEP-SLOPE heuristic. For all items with low demand, the simple enumeration described above is used. We want to emphasize that these decisions are rules of thumb and always reflect the trade-off between runtime and quality of results. Some allocations of the algorithms in Table 6.2 could have been done differently if the focus were more on runtime or on quality of results.
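The assignment of Table 6.2 then reduces to a simple lookup. In the sketch below, the solver functions are placeholder stubs standing in for the algorithms of Chapter 4, and classes are written as (demand, price) pairs as in the text (e.g., 'AC' = high demand, cheap price):

```python
# Hypothetical solver stubs; the real implementations are the original
# optimization algorithm, the STEP-SLOPE heuristic, and the enumeration.
def original_optimization(part): return ('optimal', part)
def step_slope(part):            return ('heuristic', part)
def enumeration(part):           return ('enumerated', part)

ALGORITHM = {
    'AC': original_optimization, 'AA': original_optimization,
    'BC': original_optimization,
    'AB': step_slope, 'BA': step_slope, 'BB': step_slope,
    'CA': enumeration, 'CB': enumeration, 'CC': enumeration,
}

def solve(part, cls):
    """Dispatch a part to the algorithm assigned to its ABC class."""
    return ALGORITHM[cls](part)
```

Reassigning a class to a different algorithm is then a one-line change, which matches the rule-of-thumb character of the allocation.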

6.3 Key takeaways for implementation in real world applications

To implement the findings of this work in a real-world application, several steps have to be considered; we now summarize the key takeaways in this section.

First, suitable (sub-)networks should be selected if the considered supply chain has more than two echelons. A strategy should be determined for how much stock is kept at the upper echelons and especially at the master warehouses.

Then, one or several suitable wait time approximations should be determined for the 2-level network. Selecting a suitable approximation is essential to determine reasonable reorder points and to ensure that the fill rate targets are met.


Finally, the assortment should be clustered with a multi-criteria ABC analysis and then

appropriate algorithms should be assigned to each class.

7 Conclusion

In this work, we took a comprehensive look at the modeling and planning of inventory in multi-echelon distribution networks in which all warehouses use (R,Q)-order policies. Our contributions can be divided into three main areas: the approximation of the wait time, the optimization of reorder points in 2-echelon networks, and the generalization to n-level networks.

Wait time is the central mechanism that connects the different echelons in the network. The modeling of the wait time is complex and several approximations exist. Besides introducing our own novel wait time approximation (the NB approximation), we compared it with three well-known approximations from the literature in an extensive numerical study. For this, we used random data as well as real-world demand data. There is no universally "best" approximation; which approximation has the highest accuracy depends on the characteristics of the demand and the network. Nevertheless, we were able to establish a few guidelines that describe which approximation is likely to perform well in which circumstances.

The core of our work is the joint optimization of reorder points. We have developed dedicated algorithms for the 2-level case and the n-level case. The algorithms simultaneously optimize all reorder points in distribution networks. Our algorithms are the first that are able to determine optimal reorder points for inventory distribution systems using complex wait time approximations.

The 2-level optimization algorithm is applicable to generic wait time approximations and relies only on very mild assumptions. It was tested with real-world data and performs fast enough to be used effectively in practical applications. To cope with the non-linear fill rate constraints, it over- and underestimates the original non-linear constraints by piecewise linear functions. Using a truncated binary search to compute initial over- and underestimates for the non-linear functions, it then iteratively refines and resolves over- and underestimating integer linear programs until a globally optimal solution is found. As the algorithm produces both lower and upper bounds, it can also be terminated when a desired optimality gap is reached. Compared to the commonly used approach of prescribing fixed central fill rates, stock can be reduced drastically using our algorithm. Our experimental results indicate a possible decrease in the range of 20% in many real-world cases. Our results also confirm previous studies, which suggest that in many situations the optimal central fill rate is much lower than what is commonly used in practice. The precise values of the optimal central fill rates, however, depend heavily on the wait time approximation used.

Additionally, we have introduced two heuristics which are especially suitable for large networks with large demand. We determine a potentially strong lower bound on the effect of an increase of the central reorder point on each local reorder point. We use this information to either introduce additional constraints or strengthen the constraints of the original model.



In our numerical experiments, those algorithms ran up to 60% faster than the original optimization algorithm. We can no longer guarantee optimality with those algorithms, as our lower bound is only an estimate, but our experiments show that in many cases they still end up in the optimum or at least close to it.

The complexity rises significantly in the n-level case. Structurally, we no longer have a direct relationship between the reorder point of the central warehouse and the reorder point of each local warehouse. Instead, we have a relationship between the reorder point of every local warehouse and all its predecessors. Worse, the relationship between the reorder points of a local warehouse and a preceding warehouse additionally depends on the reorder points of all other warehouses that precede the local warehouse. Only the approximation by Kiesmüller et al. is readily applicable for the general n-level case. By assuming that each increase of the reorder point in a non-local warehouse imposes a maximal decrease of the wait time for all succeeding local warehouses, and by isolating this relationship from the rest of the network, we constructed an underestimating piecewise-linear step function that relates the reorder points of each local and non-local warehouse. The underestimating integer linear program is then gradually refined in two distinct phases and can even be solved to optimality. Due to the larger solution space, solving problems to optimality is computationally difficult. Our algorithm supplies bounds in every iteration of the second refinement phase and can therefore also be used to compute solutions that are within a given range of optimality. We have also sketched possible heuristics based on step sizes, which we expect to be able to solve even larger instances. We computed results for real-world problem instances of a 3-echelon network. To our surprise, almost no stock should be kept at the intermediate echelon in the majority of cases.

Finally, we gave advice on how to apply our findings in real-world applications and presented the key takeaways for an implementation. While our research enhances the possibilities to find optimal reorder points in divergent multi-echelon systems, it also makes clear how many research opportunities still exist.

Optimization algorithms that can handle more general objective functions than our model or that simultaneously optimize reorder points and order quantities would be of great theoretical and practical interest. In the n-level case, more analytical insight into the problem structure as well as into the structure of an optimal solution is needed. Here, even heuristics that can quickly supply good solutions for realistically sized networks and demand structures would be of great value. While we sketched what those heuristics could look like, they still have to be tested in numerical experiments and additional theoretical groundwork is needed. Furthermore, the sometimes large errors of the wait time approximations clearly show room for improved models and the need for further research in this area. Rather than validating approximations by "artificial" simulations using random demand generators, we want to emphasize the importance of using real-world demand data and realistic settings to test approximations.

Bibliography

multiechelon inventory control. Production and Operations Management,

7(4):370–386, 1998.

[AJ96] Sven Axsäter and Lars Juntti. Comparison of echelon stock and installation

stock policies for two-level inventory systems. International Journal of Pro-

duction Economics, 45(1):303–310, 1996.

[AJ97] Sven Axsäter and Lars Juntti. Comparison of echelon stock and installation

stock policies with policy adjusted order quantities. International Journal of

Production Economics, 48(1):1–6, 1997.

[AM00] Jonas Andersson and Johan Marklund. Decentralized inventory control in

a two-level distribution system. European Journal of Operational Research,

127(3):483–506, 2000.

[AR93] Sven Axsäter and Kaj Rosling. Notes: Installation vs. echelon stock policies

for multilevel inventory control. Management Science, 39(10):1274–1280,

1993.

[Axs93] Sven Axsäter. Continuous review policies for multi-level inventory systems

with stochastic demand. Handbooks in operations research and management

science, 4:175–197, 1993.

[Axs00] Sven Axsäter. Exact analysis of continuous review (r, q) policies in two-

echelon inventory systems with compound poisson demand. Operations Re-

search, 48(5):686–696, 2000.

[Axs03] Sven Axsäter. Approximate optimization of a two-level distribution inventory

system. International Journal of Production Economics, 81:545–553, 2003.

[Axs07] Sven Axsäter. On the ﬁrst come-ﬁrst served rule in multi-echelon inventory

control. Naval Research Logistics (NRL), 54(5):485–491, 2007.

[Axs15] Sven Axsäter. Inventory Control. Springer Intern. Publ, 3 edition, 2015.

[BF14] Peter Berling and Mojtaba Farvid. Lead-time investigation and estimation

in divergent supply chains. International Journal of Production Economics,

157:177–189, 2014.

[BKM11] Oded Berman, Dmitry Krass, and M. Mahdi Tajbakhsh. On the beneﬁts of

risk pooling in inventory management. Production and Operations Manage-

ment, 20(1):57–71, 2011.



inventory systems using induced backorder costs. Production and Operations

Management, 15(2):294–310, 2006.

[BM13] Peter Berling and Johan Marklund. A model for heuristic coordination of real

life distribution inventory systems with lumpy demand. European Journal of

Operational Research, 230(3):515–526, 2013.

[Bur75] T. A. Burgin. The gamma distribution and inventory control. Operational

Research Quarterly (1970-1977), 26(3):507–525, 1975.

[CJS11] Kyle D. Cattani, F. Robert Jacobs, and Jan Schoenfelder. Common invent-

ory modeling assumptions that fall short: Arborescent networks, poisson de-

mand, and single-echelon approximations. Journal of Operations Manage-

ment, 29(5):488–499, 2011.

[CS60] Andrew J. Clark and Herbert Scarf. Optimal policies for a multi-echelon

inventory problem. Management Science, 6(4):475–490, 1960.

[CS62] Andrew J. Clark and Herbert Scarf. Approximate solutions to a simple multi-

echelon inventory problem. Studies in applied probability and management

science, pages 88–110, 1962.

[CZ97] Fangruo Chen and Yu-Sheng Zheng. One-warehouse multiretailer systems

with centralized stock information. Operations Research, 45(2):275–287,

1997.

[DdK98] Erik Bas Diks and Ton de Kok. Optimal control of a divergent multi-echelon

inventory system. European Journal of Operational Research, 111(1):75–97,

1998.

[DdKvH09] Mustafa K. Doğru, Ton de Kok, and Geert-Jan van Houtum. A numer-

ical study on the effect of the balance assumption in one-warehouse multi-

retailer inventory systems. Flexible Services and Manufacturing Journal,

21(3-4):114–147, 2009.

[dKGL+ 18] Ton de Kok, Christopher Grob, Marco Laumanns, Stefan Minner, Jörg Ram-

bau, and Konrad Schade. A typology and literature review on stochastic multi-

echelon inventory models. European Journal of Operational Research, 2018.

[DS81] Bryan L. Deuermeyer and Leroy B. Schwarz. A model for the analysis of

system service level in warehouse-retailer distribution systems: The identical

retailer case. In Leroy B. Schwarz, editor, Multi-level production/inventory

control systems: theory and practice, volume 16, pages 163–193. North Hol-

land, 1981.

[ES81] Gary Eppen and Linus Schrage. Centralized ordering policies in a multi-

warehouse system with lead times and random demand. Studies in the Man-

agement Sciences, 16:51–67, 1981.


inventory systems under uncertainty. In S.C Graves, A.H.G. Rinnooy Kan

and P.H. Zipkin, editors, Logistics of Production and Inventory, volume 4

of Handbooks in Operations Research and Management Science, pages 133–

173. Elsevier, 1993.

[FR14] Mojtaba Farvid and Kaj Rosling. Customer waiting times in continuous re-

view (nq,r) inventory systems with compound poisson demand: Working pa-

per. Working paper, 2014.

[FW87] Benito E. Flores and D. Clay Whybark. Implementing multiple criteria abc

analysis. Journal of Operations Management, 7(1):79–85, 1987.

[GB18a] Christopher Grob and Andreas Bley. Comparison of wait time approxima-

tions in distribution networks using (r,q)-order policies. ArXiv e-prints, 2018.

[GB18b] Christopher Grob and Andreas Bley. On the optimality of reorder points -

a solution procedure for joint optimization in 2-level distribution networks

using (r,q) order policies: Working paper. Working paper, 2018.

[GBS18] Christopher Grob, Andreas Bley, and Konrad Schade. Joint optimization

of reorder points in n-level distribution networks using (r,q)-order policies:

Working paper. Working paper, 2018.

[GG07] Alev Taskin Gümüs and Ali Fuat Güneri. Multi-echelon inventory manage-

ment in supply chains with uncertain demand and lead times: Literature re-

view from an operational research perspective. Proceedings of the Institu-

tion of Mechanical Engineers, Part B: Journal of Engineering Manufacture,

221(10):1553–1570, 2007.

[GKR07] Guillermo Gallego, Kaan Katircioglu, and Bala Ramachandran. Inventory

management under highly uncertain demand. Operations Research Letters,

35(3):281–289, 2007.

[Gra85] Stephen C. Graves. A multi-echelon inventory model for a repairable item

with one-for-one replenishment. Management Science, 31(10):1247–1256,

1985.

[GW03] Stephen C. Graves and Sean P. Willems. Supply chain design: safety stock

placement and supply chain conﬁguration. Handbooks in operations research

and management science, 11:95–132, 2003.

[Har13] Ford W. Harris. How many parts to make at once. Factory, The Magazine of

Management, 10(2):135–136, 152, 1913.

[KdKSvL04] Gudrun P. Kiesmüller, Ton de Kok, Sanne R. Smits, and Peter J.M. van Laar-

hoven. Evaluation of divergent n-echelon (s,nq)-policies under compound

renewal demand. OR-Spektrum, 26(4):547–577, 2004.


[KM10] Steffen Klosterhalfen and Stefan Minner. Safety stock optimisation in distri-

bution systems: a comparison of two competing approaches. International

Journal of Logistics: Research and Applications, 13(2):99–120, 2010.

[LA93] A. G. Lagodimos and E. J. Anderson. Optimal positioning of safety stocks in MRP. International Journal of Production Research, 31(8):1797–1813, 1993.

[LL03] Hon-Shiang Lau and Amy Hing-Ling Lau. Nonrobustness of the normal ap-

proximation of lead-time demand in a (q, r) system. Naval Research Logistics

(NRL), 50(2):149–166, 2003.

[LM87a] Hau L. Lee and Kamran Moinzadeh. Operating characteristics of a two-

echelon inventory system for repairable and consumable items under batch

ordering and shipment policy. Naval Research Logistics (NRL), 34(3):365–

380, 1987.

[LM87b] Hau L. Lee and Kamran Moinzadeh. Two-parameter approximations for

multi-echelon repairable inventory models with batch ordering policy. IIE

Transactions, 19(2):140–149, 1987.

[LMV02] P. L’Ecuyer, L. Meliani, and J. Vaucher. SSJ: A framework for stochastic simulation in Java. In E. Yücesan, C.-H. Chen, J. L. Snowdon, and J. M. Charnes, editors, Proceedings of the 2002 Winter Simulation Conference, pages 234–242. IEEE Press, 2002.

[LT08] Christian Larsen and Anders Thorstenson. A comparison between the order

and the volume ﬁll rate for a base-stock inventory control system under a

compound renewal demand process. Journal of the Operational Research

Society, 59(6):798, 2008.

[Muc73] John A. Muckstadt. A model for a multi-item, multi-echelon, multi-indenture

inventory system. Management Science, 20(4):472–481, 1973.

[MY08] Chumpol Monthatipkul and Pisal Yenradee. Inventory/distribution control

system in a one-warehouse/multi-retailer supply chain. International Journal

of Production Economics, 114(1):119–133, 2008.

[Ros14] Sheldon M. Ross. Introduction to Probability Models. Academic Press, 2014.

[RÜ11] Manuel D Rossetti and Yasin Ünlü. Evaluating the robustness of lead time de-

mand models. International Journal of Production Economics, 134(1):159–

176, 2011.

[SDB85] Leroy B. Schwarz, Bryan L. Deuermeyer, and Ralph D. Badinelli. Fill-rate

optimization in a one-warehouse n-identical retailer distribution system. Man-

agement Science, 31(4):488–498, 1985.

[She68] Craig C. Sherbrooke. Metric: A multi-echelon technique for recoverable item

control. Operations Research, 16(1):122–141, 1968.


indenture, multi-echelon availability models. Operations Research,

34(2):311–319, 1986.

[She13] Craig C. Sherbrooke. Optimal Inventory Modeling of Systems: Multi-Echelon

Techniques, volume 72 of International Series in Operations Research &

Management Science. Springer, 2 edition, 2013.

[Sim58] Kenneth F. Simpson. In-process inventories. Operations Research, 6(6):863–

873, 1958.

[SLZ12] David Simchi-Levi and Yao Zhao. Performance evaluation of stochastic

multi-echelon inventory systems: A survey. Advances in Operations Re-

search, 2012, 2012.

[SPT16] E. A. Silver, D. F. Pyke, and D. J. Thomas. Inventory and Production Man-

agement in Supply Chains. Taylor & Francis, 4 edition, 2016.

[SZ88] Antony Svoronos and Paul Zipkin. Estimating the performance of multi-level

inventory systems. Operations Research, 36(1):57–72, 1988.

[Tem99] Horst Tempelmeier. Material-Logistik. Springer, 4 edition, 1999.

[TH08] Ruud H. Teunter and Willem K. Klein Haneveld. Dynamic inventory ra-

tioning strategies for inventory systems with two demand classes, poisson

demand and backordering. European Journal of Operational Research,

190(1):156–178, 2008.

[TO97] John E. Tyworth and Liam O’Neill. Robustness of the normal approxima-

tion of lead-time demand in a distribution setting. Naval Research Logistics,

44(2):165–186, 1997.

[VAN10] Juan Pablo Vielma, Shabbir Ahmed, and George Nemhauser. Mixed-integer

models for nonseparable piecewise-linear optimization: Unifying framework

and extensions. Operations Research, 58(2):303–315, 2010.

[VKN08] Juan Pablo Vielma, Ahmet B. Keha, and George L. Nemhauser. Nonconvex,

lower semicontinuous piecewise linear optimization. Discrete Optimization,

5(2):467–488, 2008.

[Wag02] Harvey M. Wagner. And then there were none. Operations Research,

50(1):217–226, 2002.

[Wah99] Christoph Wahl. Bestandesmanagement in Distributionssystemen mit dezent-

raler Disposition. PhD thesis, Difo-Druck GmbH, 1999.

[Whi82] Ward Whitt. Approximating a point process by a renewal process, I: Two basic methods. Operations Research, 30(1):125–147, 1982.

Appendix

A.1 Proofs

A.1.1 Proof of Lemma 19 for (order) ﬁll rate targets and compound Poisson demand

In this section, we prove Lemma 19 for the special case of order fill rate targets.

First, we show that the lemma holds true for local reorder points. Assume that we use eq. (2.13) to calculate the fill rate. Let β^{R_i} be the fill rate for reorder point R_i and β^{R_i+1} be the fill rate for R_i + 1, all else equal. We want to derive the difference δ_i^β := β^{R_i+1} − β^{R_i}.

β_i(R_i + 1) = (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) ∑_{l=R_i+2}^{R_i+Q_i+1} Pr(D_i(L_i^{eff}) = l − j)

= (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) [ ∑_{l=R_i+2}^{R_i+Q_i} pdf_{D_i}(l − j) + pdf_{D_i}(R_i + Q_i + 1 − j) ]

= (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) [ ∑_{l=R_i+1}^{R_i+Q_i} pdf_{D_i}(l − j) + pdf_{D_i}(R_i + Q_i + 1 − j) − pdf_{D_i}(R_i + 1 − j) ]

= (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) ∑_{l=R_i+1}^{R_i+Q_i} pdf_{D_i}(l − j)
  + (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) (pdf_{D_i}(R_i + Q_i + 1 − j) − pdf_{D_i}(R_i + 1 − j))    (A.1)

In eq. (A.1), the terms with k = R_i + Q_i + 1 (and hence j = R_i + Q_i + 1) of the first double sum are 0, because l − j is always negative in this case and the probability of negative demand is 0 by definition. Therefore,

β_i(R_i + 1) = (1/Q_i) ∑_{k=1}^{R_i+Q_i} ∑_{j=k}^{R_i+Q_i} pdf_{K_i}(k) ∑_{l=R_i+1}^{R_i+Q_i} pdf_{D_i}(l − j)
  + (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) (pdf_{D_i}(R_i + Q_i + 1 − j) − pdf_{D_i}(R_i + 1 − j))

= β^{R_i} + (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) (pdf_{D_i}(R_i + Q_i + 1 − j) − pdf_{D_i}(R_i + 1 − j)),

which provides us with the desired δ_i^β.



Corollary 31. The increase in fill rate if we increase a local reorder point R_i by 1 is

δ_i^β(R_i) = (1/Q_i) ∑_{k=1}^{R_i+Q_i+1} ∑_{j=k}^{R_i+Q_i+1} pdf_{K_i}(k) (pdf_{D_i}(R_i + Q_i + 1 − j) − pdf_{D_i}(R_i + 1 − j)).

Corollary 32. δ_i^β from Corollary 31 is strictly greater than 0 and therefore β_i(R_i) is increasing in R_i.

And now we consider non-local reorder points. The idea of this proof relates back to the relation shown in Figure 3.2. Consider a decrease of mean and variance of lead time demand at a local warehouse i (which is caused by an increase of a non-local reorder point). This decrease affects D_i in eq. (2.13):

β_i(R_i) := β_i^{ord}(R_i) = ∑_{k=1}^{R_i+Q_i} ∑_{j=k}^{R_i+Q_i} pdf_{K_i}(k) · Pr(I_i^{lev} = j)
= (1/Q_i) ∑_{k=1}^{R_i+Q_i} ∑_{j=k}^{R_i+Q_i} pdf_{K_i}(k) ∑_{l=R_i+1}^{R_i+Q_i} Pr(D_i(L_i^{eff}) = l − j)    (A.2)

To consider the influence, recall the idea of eq. (A.2): An order of size k arrives at a point in time t, and we sum up the probabilities of all possible states in which this order can be fulfilled, weighted by the probability of an order of size k to occur. The order can be fulfilled if the inventory level at time t is greater than or equal to k. The inventory level can be expressed as I_i^{lev}(t) = I_i^{pos}(t) − D(L_i) and the probability subsequently as Pr(I_i^{lev} ≥ k) = (1/Q_i) ∑_{l=R_i+1}^{R_i+Q_i} Pr(D(L_i) ≤ l − k). This probability increases if mean and variance of demand decrease, and therefore the probability that an order of any size k can be fulfilled is higher. The fill rate is increasing in non-local reorder points and the proof is completed.
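The monotonicity argument can be checked numerically. The sketch below evaluates an order fill rate in the spirit of eq. (A.2), with the inventory position uniform on [R+1, R+Q] and Poisson lead time demand (an assumption chosen here for simplicity), and confirms that the fill rate increases in the reorder point, as stated in Corollary 32:

```python
from math import exp, factorial

def pois_pmf(lam, x):
    """Poisson pmf; returns 0 for negative x (no negative demand)."""
    if x < 0:
        return 0.0
    return exp(-lam) * lam ** x / factorial(x)

def order_fill_rate(R, Q, lam, k_pmf):
    """Order fill rate: sum over order sizes k (distribution k_pmf) of
    the probability that the inventory level IP - D is at least k,
    with IP uniform on [R+1, R+Q] and D ~ Poisson(lam)."""
    beta = 0.0
    for k, pk in k_pmf.items():
        for j in range(k, R + Q + 1):          # inventory levels >= k
            beta += pk / Q * sum(pois_pmf(lam, l - j)
                                 for l in range(R + 1, R + Q + 1))
    return beta

rates = [order_fill_rate(R, 4, 2.0, {1: 0.7, 2: 0.3}) for R in range(6)]
assert all(a < b for a, b in zip(rates, rates[1:]))   # increasing in R
```

With Q = 1 and unit order sizes, the expression collapses to Pr(D ≤ R), which is a convenient hand check.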

The construction is somewhat different if we start with the assumption of negative binomial lead time demand. The demand process can then be modeled as compound Poisson with logarithmic order size distribution. In this case, mean and variance of the order size distribution K are also changed by a change in lead time demand. This is counter-intuitive and an artifact of the demand modeling in this case. The parameter θ of the logarithmic distribution (Appendix A.2.3) is computed as θ = 1 − p, with p being a parameter of the negative binomial distribution (Appendix A.2.4). The relation is then no longer as obvious and depends on the ratio of the decrease of mean and variance of lead time demand. We did not observe any situation where the fill rate was not increasing in non-local reorder points.
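The stated representation can be checked numerically. The sketch below compares the negative binomial pmf of eq. (A.8) with the pmf of a compound Poisson sum of logarithmic order sizes with θ = 1 − p; the compounding rate λ = −n ln p is the standard choice for this representation and is an assumption here, not taken from the thesis.

```python
import math

def nb_pmf(n, p, x):
    # eq. (A.8): Gamma(n+x) / (Gamma(n) x!) * p**n * (1-p)**x
    return math.exp(math.lgamma(n + x) - math.lgamma(n) - math.lgamma(x + 1)
                    + n * math.log(p) + x * math.log(1.0 - p))

def compound_poisson_logarithmic_pmf(n, p, xmax):
    """pmf of Y = X_1 + ... + X_N with N ~ Poisson(-n ln p) and X_i
    logarithmic with theta = 1 - p, via explicit convolution."""
    theta = 1.0 - p
    lam = -n * math.log(p)                 # assumed compounding rate
    log_pmf = [0.0] + [-theta**k / (k * math.log(1.0 - theta))
                       for k in range(1, xmax + 1)]
    pmf = [math.exp(-lam)] + [0.0] * xmax  # N = 0 contributes mass at 0
    conv = [1.0] + [0.0] * xmax            # 0-fold convolution of X: point mass at 0
    pois = math.exp(-lam)                  # Pr(N = 0)
    for j in range(1, xmax + 1):           # terms with N = j orders; since each X >= 1,
        conv = [sum(conv[x - k] * log_pmf[k] for k in range(1, x + 1))
                for x in range(xmax + 1)]  # N > xmax cannot contribute below xmax
        pois *= lam / j                    # Pr(N = j)
        pmf = [pmf[x] + pois * conv[x] for x in range(xmax + 1)]
    return pmf

pmf = compound_poisson_logarithmic_pmf(n=3, p=0.5, xmax=15)
err = max(abs(pmf[x] - nb_pmf(3, 0.5, x)) for x in range(16))
```

The two pmfs agree up to floating-point error, confirming that a change in lead time demand indeed changes the implied order size distribution through θ = 1 − p.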

A.2 Distributions

In this section we define the required distributions in alphabetical order, as there are sometimes ambiguities in the definitions. We refer to the random variable as X, to the probability mass function as $\mathrm{pdf}_X(x)$, and to the distribution function as $\mathrm{cdf}_X(x)$.


A.2.1 Compound Poisson distribution

Let N be a Poisson distributed random variable and X1, X2, X3, ... i.i.d. random variables that are also independent of N. Then

\[
Y = \sum_{i=1}^{N} X_i \tag{A.3}
\]

follows a compound Poisson distribution.
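A compound Poisson variable as in eq. (A.3) is straightforward to sample, which is useful for simulation checks. This is a minimal sketch; the order size distribution (uniform on {1, 2, 3}) and the rate λ = 5 are hypothetical choices.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw N ~ Poisson(lam) with Knuth's multiplication method."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def compound_poisson_sample(lam, draw_x, rng):
    """One draw of Y = X_1 + ... + X_N from eq. (A.3)."""
    return sum(draw_x(rng) for _ in range(poisson_sample(lam, rng)))

rng = random.Random(42)
# hypothetical order sizes: X_i uniform on {1, 2, 3}, so E[Y] = lam * E[X] = 5 * 2
samples = [compound_poisson_sample(5.0, lambda r: r.randint(1, 3), rng)
           for _ in range(20000)]
mean_y = sum(samples) / len(samples)
```

The sample mean should be close to λ·E[X] = 10, since the mean of a compound Poisson sum factors into rate times jump mean.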

A.2.2 Gamma distribution

The gamma distribution has the shape parameter α > 0 and the rate parameter λ > 0. The density function is

\[
\mathrm{pdf}_X(x) = \frac{\lambda^{\alpha} x^{\alpha-1} e^{-\lambda x}}{\Gamma(\alpha)}, \quad \text{for } x > 0 \tag{A.4}
\]

where $\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1} e^{-t}\,dt$ is the gamma function.

The cumulative distribution function is the regularized gamma function

\[
\mathrm{cdf}_X(x) = \frac{\gamma(\alpha, \lambda x)}{\Gamma(\alpha)}, \quad \text{for } x > 0 \tag{A.5}
\]

where $\gamma(\alpha, \lambda x) = \int_0^{\lambda x} t^{\alpha-1} e^{-t}\,dt$ is the lower incomplete gamma function.
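Since the Python standard library provides math.lgamma but no incomplete gamma function, eq. (A.5) can be evaluated with the standard power series γ(α, y) = y^α e^{−y} Σ_{k≥0} y^k / (α(α+1)⋯(α+k)). A sketch, with the truncation length chosen ad hoc:

```python
import math

def gamma_cdf(alpha, lam, x, terms=200):
    """cdf of the gamma distribution per eq. (A.5): the regularized lower
    incomplete gamma function gamma(alpha, lam*x) / Gamma(alpha), via the
    power series gamma(a, y) = y**a * exp(-y) * sum_k y**k / (a(a+1)...(a+k))."""
    if x <= 0.0:
        return 0.0
    y = lam * x
    term = 1.0 / alpha              # k = 0 term of the series
    total = term
    for k in range(1, terms):
        term *= y / (alpha + k)     # ratio of consecutive series terms
        total += term
    # multiply by y**alpha * exp(-y) / Gamma(alpha), computed in log space
    return total * math.exp(alpha * math.log(y) - y - math.lgamma(alpha))

# for alpha = 1 the gamma distribution is Exponential(lam): cdf(x) = 1 - exp(-lam*x)
check = gamma_cdf(1.0, 2.0, 1.5)
```

The α = 1 special case gives a convenient sanity check against the exponential distribution.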

A.2.3 Logarithmic distribution

Let θ ∈ (0, 1) be the shape parameter of the logarithmic distribution. The probability mass function is

\[
\mathrm{pdf}_X(x) = \frac{-\theta^{x}}{x \ln(1-\theta)}, \quad \text{for } x = 1, 2, 3, \ldots \tag{A.6}
\]

and the distribution function is

\[
\mathrm{cdf}_X(x) = \frac{-1}{\ln(1-\theta)} \sum_{i=1}^{x} \frac{\theta^{i}}{i}, \quad \text{for } x = 1, 2, 3, \ldots \tag{A.7}
\]
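A quick numerical check of eq. (A.6): the pmf sums to one over the support, and the sketch below also verifies the known closed form of the mean, −θ/((1−θ) ln(1−θ)). The value θ = 0.6 and the truncation point are arbitrary choices.

```python
import math

def log_pmf(theta, x):
    # eq. (A.6): pdf_X(x) = -theta**x / (x * ln(1 - theta)), x = 1, 2, 3, ...
    return -theta**x / (x * math.log(1.0 - theta))

theta = 0.6
support = range(1, 300)   # theta**300 is negligible, so truncation is harmless
total = sum(log_pmf(theta, x) for x in support)
mean = sum(x * log_pmf(theta, x) for x in support)
```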


A.2.4 Negative binomial distribution

The density function of the negative binomial distribution with parameters n > 0 and 0 < p < 1 is

\[
\mathrm{pdf}_X(x) = \frac{\Gamma(n+x)}{\Gamma(n)\, x!}\, p^{n} (1-p)^{x}, \quad \text{for } x = 0, 1, 2, \ldots \tag{A.8}
\]

where Γ is the gamma function as defined in Appendix A.2.2. The probability distribution function is

\[
\mathrm{cdf}_X(x) = \sum_{j=0}^{x} \frac{\Gamma(n+j)}{\Gamma(n)\, j!}\, p^{n} (1-p)^{j}, \quad \text{for } x = 0, 1, 2, \ldots \tag{A.9}
\]
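With this parameterization the mean is n(1−p)/p and the variance n(1−p)/p², which can be verified directly from eq. (A.8); the parameter values below are arbitrary.

```python
import math

def nb_pmf(n, p, x):
    # eq. (A.8): Gamma(n+x) / (Gamma(n) x!) * p**n * (1-p)**x, in log space
    return math.exp(math.lgamma(n + x) - math.lgamma(n) - math.lgamma(x + 1)
                    + n * math.log(p) + x * math.log(1.0 - p))

n, p = 2.5, 0.4                    # arbitrary parameters for the check
support = range(0, 400)            # (1-p)**400 is negligible
mean = sum(x * nb_pmf(n, p, x) for x in support)
var = sum(x * x * nb_pmf(n, p, x) for x in support) - mean ** 2
```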

A.2.5 Poisson distribution

Let λ > 0 be the mean of the Poisson distribution. The probability mass function is

\[
\mathrm{pdf}_X(x) = \frac{e^{-\lambda} \lambda^{x}}{x!}, \quad \text{for } x = 0, 1, 2, \ldots \tag{A.10}
\]

and the probability distribution function is

\[
\mathrm{cdf}_X(x) = e^{-\lambda} \sum_{j=0}^{x} \frac{\lambda^{j}}{j!}, \quad \text{for } x = 0, 1, 2, \ldots \tag{A.11}
\]
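Eqs. (A.10) and (A.11) are conveniently evaluated with the recursion pdf_X(x) = pdf_X(x−1) · λ/x, which avoids large factorials; a minimal sketch with an arbitrary λ:

```python
import math

def poisson_pmf_table(lam, xmax):
    """pmf values of eq. (A.10) for x = 0..xmax via the stable
    recursion pdf(x) = pdf(x-1) * lam / x."""
    pmf = [math.exp(-lam)]
    for x in range(1, xmax + 1):
        pmf.append(pmf[-1] * lam / x)
    return pmf

pmf = poisson_pmf_table(3.0, 60)
cdf5 = sum(pmf[:6])   # cdf_X(5) as the partial sum of eq. (A.11)
```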

A.3 Additional data

A.3.1 Heuristics

Table A.1: Data that was used to create the example in the 2-level heuristics section. pi is equal for

all warehouses.

0 1410 44.69 55.8

1 0.99 140 5 8 2.25 2.21 31.18

2 0.99 210 9 10.88 2.56 3.4 65.57

3 0.99 70 6 8.12 2.4 0.99 11.07

4 0.99 310 14 5 0.64 5.48 599.77

5 0.99 200 6 12 3.61 3.29 60.64

6 0.99 340 7 16 6.76 5.45 76.25

7 0.99 40 7 18.29 6.25 0.54 12.17

8 0.99 130 6 7.5 1.51 2.02 143.68


In this section, details on the determination of the number of instances needed and supplementary tables are given.

We determined how many instances of each variation need to be run in the simulation to obtain dependable results. For this, we ran a number of tests for all variations of one scenario. The results, which are summarized in Table A.2 for the mean simulated fill rate as an example, are quite stable. Even if we look at mean simulated fill rates for an individual instance, we get only small deviations after 100 instances. We therefore decided that 100 instances are sufficient. We repeated the analysis with the mean and the standard deviation of the wait time; regarding these performance measures as well, 100 instances were sufficient to obtain reliable results.

Table A.2: Mean simulated fill rates over all variations and instances for local warehouses

#Instances   Mean simulated fill rate
10           87.534%
25           87.421%
50           87.299%
100          87.393%
200          87.350%
500          87.414%
1000         87.391%

Supplementary tables

The tables in this section supplement and support the results presented in Section 5.2 and

are referenced there.

Table A.8 and Table A.9 summarize the results of the retrospective scenario, which is described in Section 5.3.2. Table A.8 compares mean and standard deviation as approximated by the different methods with the values observed in the simulation. Table A.9 shows the same comparison for a subset in which differences in demand and order quantity between two warehouses were large.


Table A.3: Absolute error of the standard deviation for test cases with different ratios of order

quantity and mean daily demand, Analysis of H3

Standard deviation

0 to 10 30 1.40 1.30 2.82 1.27 2.23

10 to 20 156 3.63 3.16 2.89 4.77 3.38

20 to 30 258 4.48 4.38 2.45 5.21 3.49

30 to 40 348 4.80 7.00 3.32 5.53 4.22

40 to 50 390 6.14 10.83 3.50 5.37 4.41

50 to 60 804 7.27 15.06 4.43 5.91 4.96

60 to 70 4416 7.26 22.26 4.49 6.65 5.37

70 to 80 2772 7.59 23.94 4.87 7.74 5.68

80 to 90 1902 6.83 24.85 5.17 8.60 5.94

90 to 100 306 7.40 23.24 4.65 7.98 5.63

100 to 200 2580 5.98 27.55 6.12 10.62 6.55

200 to 300 528 3.66 26.77 8.98 15.40 9.37

300 to 400 312 2.25 27.03 11.34 19.05 10.96

400 to 500 210 2.72 26.49 11.87 19.55 11.05

500 to 600 132 0.80 24.98 12.98 22.61 11.86

600 to 700 126 1.53 28.15 13.34 21.71 12.35

700 to 800 54 0.58 21.39 14.81 29.23 11.53

800 to 900 90 0.65 25.57 12.95 23.90 9.95

900 to 1000 30 2.79 19.61 14.87 30.54 8.92

>1000 312 0.41 20.44 16.11 31.26 9.23

Table A.4: Relative accuracy of the AXS approximation regarding the mean, standard deviation (sd) and combined (comb.) for different central fill rates (fr) and test cases with Qi/μi ≤ 25

Mean sd Comb. Mean sd Comb. Mean sd Comb.

Best 38.24% 19.61% 17.32% 35.29% 20.59% 18.14% 44.12% 17.65% 15.69%

2nd or better 43.46% 30.39% 28.43% 42.65% 31.86% 29.41% 45.10% 27.45% 26.47%

Worst 48.37% 51.63% 39.54% 47.55% 51.47% 36.76% 50.00% 51.96% 45.10%

Table A.5: Average simulated values and absolute errors of mean and standard deviation (sd) of the

approximations for different scenarios, for test cases with Qi /μi ≤ 25

All scenarios   Medium to high central fill rate   Low central fill rate

Mean sd Mean sd Mean sd

Simulation 5.01 3.80 3.23 2.87 8.57 5.66

KKSL 3.18 2.59 2.49 2.32 4.56 3.14

NB 3.04 3.55 2.41 2.09 4.31 6.46

AXS 2.51 4.73 1.81 3.85 3.90 6.49

BF 4.70 3.31 3.13 3.44 7.84 3.03


Table A.6: Mean of simulation and absolute error of the approximations for the standard deviation

for test cases with different number of local warehouses

#Local warehouses 8 ≤3

Simulation 5.59 8.61

KKSL 5.00 5.76

NB 21.67 21.73

AXS 7.46 11.78

BF 5.61 5.48

Table A.7: Average simulated values and absolute error of mean and standard deviation (sd) of the approximations, for test cases with different values of σ²/μ

σ²/μ < 1   σ²/μ < 5   σ²/μ ≥ 1   σ²/μ ≥ 5

Mean sd Mean sd Mean sd Mean sd

Simulation 7.58 5.81 8.78 6.18 9.20 6.35 10.47 6.93

KKSL 5.24 5.83 5.78 5.80 5.97 5.65 6.57 4.94

NB 5.77 24.83 5.85 23.44 5.90 22.37 6.09 18.21

AXS 5.94 9.78 6.14 9.55 6.23 9.23 6.51 7.88

BF 6.40 6.33 6.82 6.22 7.06 6.11 7.93 5.67

Table A.8: Approximation and simulation results for the mean and standard deviation of the wait

time, retrospective scenario

Method   Mean (Simulation, Approximation)   sd (Simulation, Approximation)

KKSL 10.60 13.79 9.14 11.95

NB 3.01 3.66 2.78 9.50

BF 14.34 5.55 12.06 9.11

AXS 4.61 6.18 4.71 9.16


Table A.9: Approximation and simulation results for the wait time for large differences in replenishment order quantities, retrospective scenario

Method   Warehouse   Mean (Simulation, Approximation)   sd (Simulation, Approximation)

KKSL Large 11.22 11.06 11.20 10.55

Small 3.67 6.97 5.39 11.31

Combined 7.45 9.01 8.29 10.93

NB Large 1.79 1.22 3.49 6.52

Small 0.46 0.36 1.14 11.54

Combined 1.12 0.79 2.31 9.03

BF Large 16.24 2.13 14.80 6.91

Small 5.36 0.60 8.59 3.58

Combined 10.80 1.36 11.69 5.25

AXS Large 4.22 2.84 7.25 7.16

Small 0.83 2.84 2.02 7.16

Combined 2.53 2.84 4.63 7.16

Figures on impact of different parameters

