
Dynamic Multi-Linked Negotiations in Multi-Echelon Production Scheduling Networks

Conference Paper · January 2006
DOI: 10.1109/IAT.2006.56 · Source: DBLP
Conference: Proceedings of the 2006 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, Hong Kong, China, 18-22 December 2006

Hoong Chuin Lau (Singapore Management University), Guan Li Soh, Wee Chong Wan

In this paper, we are concerned with scheduling resources in a multi-tier production/logistics system for multi-indenture goods. Unlike classical production scheduling problems, the problem we study is concerned with local utilities which are private. We present an agent model and investigate an efficient scheme for handling multi-linked agent negotiations. With this scheme we attempt to overcome the drawbacks of sequential negotiations and negotiation parameter settings. Our approach is based on embedding a credit-based negotiation protocol within a local search scheduling algorithm. We demonstrate the computational efficiency and effectiveness of the approach in solving a real-life dynamic production scheduling problem which balances between global production cost and local utilities within the facilities.

VEAM IFIP Working Group 7.6 Workshop on Virtual Environments for Advanced Modelling
4th US-European Workshop on Logistics and Supply Chain Management – An International Research Perspective
June 6-9, 2006; Hamburg, Germany
Stefan Voß
Imke Sassen
Institute of Information Systems, University of Hamburg
In partnership and supported by Deutsche Post World Net
All rights reserved. No part of this book may be reproduced, stored in a
retrieval system, or transmitted, in any form or by any means, without the
prior written permission of the publisher.
The publisher is not responsible for the use which might be made of the
information contained in this book.
Published by:
Institute of Information Systems
University of Hamburg
Von-Melle-Park 5
20146 Hamburg, Germany
Printed in Germany
“Hamburg — Das Tor zur Welt.”
This volume includes contributions submitted to two workshops held back-to-back on June 6-9, 2006 in Hamburg, Germany. The workshops are:
VEAM IFIP Working Group 7.6 Workshop on Virtual Environments for
Advanced Modelling
4th US-European Workshop on Logistics and Supply Chain Management
– An International Research Perspective
In this volume we have put together the accepted contributions for those two
workshops. They constitute work of researchers and practitioners from more
than a dozen countries from all over the world. To ensure fruitful collaboration and to keep the workshop character with open discussion, we have not striven to unify the contributions with regard to, e.g., style or length. They are provided in alphabetical order of the authors.
We greatly appreciate the financial support from Deutsche Post World Net
which especially helped us to organize the 4th US-European Workshop on
Logistics and Supply Chain Management, and to have it in Germany for the
first time. We would also like to thank the Hamburger Hafen und Logistik AG (HHLA), who made it possible to visit one of their container terminals during the week. Moreover, we would like to acknowledge the assistance of everybody
at the Institute of Information Systems (IWI) at the University of Hamburg
for their valuable support.
We wish everybody successful and enjoyable days in Hamburg.
Stefan Voß
Imke Sassen
Hamburg, June 2006
Part I Contributions VEAM Workshop
Events in Modeling of Complex Systems
Janusz Granat …………………………………………… 3
SimConT: A Tool for Quick Layout and Equipment Portfolio
Evaluation and Simulation of Hinterland Container Terminal
Manfred Gronalt, Thouraya Benna, Martin Posset ………………. 7
Dynamic Multi-Linked Negotiations in Multi-Echelon
Production Scheduling Networks
Hoong Chuin Lau, Guan Li Soh, Wee Chong Wan ………………. 10
Modelling Classification Analysis for Competitive Events
with Applications to Sports Betting
Stefan Lessmann, Johnnie Johnson, Ming-Chien Sung ……………. 14
Structured Modeling Technology: Recent Developments and
Open Challenges
Marek Makowski …………………………………………. 16
Production Planning with Load Dependent Lead Times and
Julia Pahl, Stefan Voß, David L. Woodruff …………………….. 19
Guided Online Decision Making
Jörn Schönberger, Herbert Kopfer ……………………………. 22
Scalability of Three Parallel Direct Search Methods in
Simulation-Based Optimization
Frank Thilo, Manfred Grauer ……………………………….. 25
X Contents
Nanatsudaki Model of Knowledge Creation Processes
Andrzej P. Wierzbicki, Yoshiteru Nakamori ……………………. 31
The Use of Reference Profiles and Multiple Criteria
Evaluation in Knowledge Acquisition from Large Databases
Andrzej P. Wierzbicki, Jing Tian, Hongtao Ren ………………… 34
Convex Envelope for Medical Modeling
Fadi Yaacoub, Yskandar Hamam, and Charbel Fares ……………… 36
Applying Data Mining for Early Warning and Proactive
Control in Food Supply Networks
Li Yuan, Mark R. Kramer, Adrie J.M. Beulens …………………. 38
Part II Contributions Logistics and SCM Workshop
Optimizing Inventory Decisions in a Multi-Stage Supply
Chain Under Stochastic Demands
Ab Rahman Ahmad, M. E. Seliaman …………………………. 45
Impact of E-Commerce on an Integrated Distribution
Daniela Ambrosino, Anna Sciomachen ………………………… 46
An Interval Pivoting Heuristic for Finding Quality Solutions
to Uniform-Bound Interval-Flow Transportation Problem
Aruna Apte, Richard S. Barr ……………………………….. 49
Managing the Service Supply Chain in the US Department of
Defense: Opportunities and Challenges
Uday Apte, Geraldo Ferrer, Ira Lewis, Rene Rendon ……………… 50
Analysis of Heuristic Search Methods for Scheduling
Automated Guided Vehicles
Thomas Bednarczyk, Andreas Fink …………………………… 52
Exact and Approximate Algorithms for a Class of Steiner
Tree Problems Arising in Network Design and Lot Sizing
Alysson M. Costa, Jean-François Cordeau, Gilbert Laporte ………… 54
Supply Chain Management in Archeological Surveys,
Excavations and Scientific Use
Joachim R. Daduna, Veit Stürmer …………………………… 55
Real-World Agent-Based Transport Optimization
Klaus Dorer …………………………………………….. 56
Scheduling of Automated Double Rail-Mounted Gantry
René Eisenberg ………………………………………….. 57
Solving Real-World Vehicle Scheduling and Routing Problems
Jens Gottlieb ……………………………………………. 59
Exact and Heuristic Solution of the Global Supply Chain
Problem with Transfer Pricing and Transportation Cost
Pierre Hansen, Sébastien Le Digabel, Nenad Mladenović, Sylvain Perron 60
Planning Problems for Combined Pick-up Point Allocation,
Transportation, and Production Processes with Time-Varying
Processing Capacities
Christoph Hempsch……………………………………….. 62
Paradigm Shift in the Supply Chain – Is it Really Happening?
Britta Kesper, Yuriy Kapys ………………………………… 64
Support of Bid-Price Generation for International Large-Scale
Plant Projects
Dirk Mattfeld, Jiayi Yang ………………………………….. 65
Bid Querying Policies in Combinatorial Auctions for
Collaborative Transportation Planning
Giselher Pankratz ………………………………………… 68
Application of HotFrame on Tabu Search for the Multiple
Freight Consolidation Problem
Filip Rychnavský …………………………………………. 71
Simulation Metamodeling of a Perishable Supply Chain
M.E. Seliaman, Ab Rahman Ahmad ………………………….. 74
Non-Cooperative Games in Liner Shipping Strategic Alliances
Xiaoning Shi, Stefan Voß ………………………………….. 75
Container Terminal Operation and Operations Research
Dirk Steenken, Stefan Voß, Robert Stahlbock ……………………. 77
Mixed Integer Models for Optimized Production Planning
Under Uncertainty
David L. Woodruff ……………………………………….. 78
Part III Contributions Not Presented
Simulation Optimization of the Cross Dock Door Assignment
Uwe Aickelin, Adrian Adewunmi …………………………….. 81
Heuristics for the Multi-Layer Design of MPLS/SDH/WDM
Holger Höller, Stefan Voß ………………………………….. 84
Part I
Contributions VEAM Workshop
Events in Modeling of Complex Systems
Janusz Granat
National Institute of Telecommunications, Szachowa 1, 04-894 Warsaw and
Institute of Control and Computation Engineering, Warsaw University of
Technology, 00-665 Warsaw, Poland, [email protected]
Management and modeling of complex systems is a challenging area of research, and various approaches exist for modeling such systems. One of them is event-driven modeling and management of complex systems. The concept of applying events to systems modeling is not new; it has been used for modeling discrete systems, stochastic systems, etc. However, most existing modeling approaches use only the type of an event and the time when it occurs, whereas information systems store much richer information about events. This information may be structured, stored in databases in the form of tables, or unstructured, stored in various forms of text. Using this richer information about events can advance event-driven modeling approaches.
Figure 1 shows the basic components of the event driven modeling frame-
work: the system that is influenced by external as well as internal events, data
and textual information about the system as well as about the events, models,
algorithms, event detection algorithms, knowledge representation, description
of decision maker behavior and actions.
Fig. 1. Basic components of the modeling framework
In order to build models or algorithms we have to store the data about
the system and the events. The existence and the proper quality of data
are crucial to any further steps. We can distinguish primary data that are
stored in relational databases and preprocessed data that are prepared for
specific modeling tasks. The data can be stored in one central database or
can be stored in distributed databases. Moreover, designers increasingly apply event-based system design, which leads to well-structured databases that contain information about events. Textual information about events is also of growing importance, and recently video sequences have become an important source of data for event discovery.
The models use mathematical formulas to describe the behavior of the system. In the case of the presented framework, the models describe dependencies between events and observable variables. Various model types can be considered, such as stochastic models, temporal relationships, and temporal sequence associations. The algorithms in Figure 1 are understood as algorithms that work with analytical models as well as algorithms for event mining or event processing. A key to understanding events is knowing what might have caused them, and having that knowledge at the time the events happen. Event mining is one of the key approaches. It can be defined as the process of finding frequent events, rare events, unknown events (whose occurrence can be deduced from observation of the system), correlations between events, the consequences of an event, and what caused an event. There is a special class
of algorithms for event detection. We distinguish two classes of algorithms,
event detection based on numerical and categorical data analysis and event
detection by analysis of textual information. The results of the algorithms, together with the data and textual information, feed the block called Knowledge representation, which provides a unifying representation of the results. However, these results are only a very simple form of knowledge; there is room for introducing contextual knowledge and more advanced algorithms that support knowledge creation and management. Knowledge about the consequences of events is also represented. The ability to track event causality
and consequences is an essential step toward on-line decision support and an
important challenge for new algorithms for event mining. The models and al-
gorithms as well as data provide the decision maker with important knowledge
about the system. Then the decision maker can specify various actions that
will be applied in the system and reduce the influence of events on the system.
The information about actions should be stored in computerized form. That
will help later in the evaluation of consequences of the chosen actions. In some
cases the results of the algorithms can be directly applied to the system (for
example the event based control algorithms).
Recently, the focus has shifted to real-time decision support, which requires new classes of data processing, analytical algorithms, and modeling approaches. Actions have to be taken immediately after an event occurs; any delay may cause system failure or significant losses. It should be stressed that we can distinguish a broad spectrum of various types of events. It
will often require dedicated algorithms and approaches. However, the frame-
work will help in the generalization of the specified methods and algorithms.
Moreover, this framework may help in the integration of achievements in event
based modeling in different scientific disciplines. At this time there are sepa-
rate developments in temporal data mining, stochastic systems, event based
control etc. The combination of these approaches might significantly improve
the results of new algorithms.
The presented approach has various applications in business monitoring,
network management, intrusion detection, fault detection etc. In this section
we will present selected examples of event driven modeling: events monitoring,
event processing networks, events in environmental scanning, event based con-
trol, temporal sequence associations for rare events, event mining and events
in alerting systems. There is research on event monitoring in a given environment, where sensor networks are applied. Sensor networks are systems of many sensing elements endowed with computation, communication and motion that can work together to provide information about events in an environment; in this case we have information about the type, time, and location of events. Control algorithms are used for positioning mobile sensors in response to a series of events. Many monitoring problems can also be stated as the problem of detecting a change in the parameters of a system, which is called event detection. Another important concept is Event Processing Networks (EPN). Such networks consist of Event Processing Agents
called event sources, event processors, and event viewers. EPNs have been applied to computer network monitoring, where the event sources were middleware sniffers; the aggregated information about events was displayed by viewers and additionally used for event mining. This concept has also been applied to business problems. Organizations are working to improve the analysis of the external environment and of its influence on their performance. Environmental scanning is a relatively new term for the acquisition and use of information about events, trends, and relationships in an external environment; in this case, methods for dealing with unstructured information about events are
especially important. In event-based control, sampling is event-triggered instead of time-triggered. An event-based PID controller can be built; such an approach reduces CPU utilization, although the event-triggered PID controller is a nonlinear system of hybrid nature. In many cases we have to monitor and analyze rare events such as credit card fraud or network faults. However, when data about the system are stored in a database, it is very difficult to identify rare events; here the events are characterized only by their type and time of occurrence. Temporal sequence associations for rare events can be applied to solve this problem.
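As an illustrative sketch of temporal sequence associations for rare events, one can count which event types frequently occur within a time window before each rare event. The window length, event log, and event-type names below are invented for illustration, not taken from the text.

```python
# Sketch of temporal sequence association mining for rare events: count
# which event types occur within a fixed time window before each rare
# event. Window length and the toy log are illustrative assumptions.
from collections import Counter

def antecedents(events, rare_type, window=5.0):
    """events: list of (timestamp, type). Return counts of event types
    seen within `window` time units before each rare event."""
    counts = Counter()
    rare_times = [t for t, typ in events if typ == rare_type]
    for rt in rare_times:
        seen = {typ for t, typ in events
                if rt - window <= t < rt and typ != rare_type}
        counts.update(seen)  # count each type once per rare occurrence
    return counts

log = [(1, "login"), (2, "retry"), (3, "retry"), (4, "fraud"),
       (10, "login"), (12, "retry"), (13, "fraud"), (20, "login")]
print(antecedents(log, "fraud"))  # both "retry" and "login" precede each fraud event
```

Event types that co-occur with every rare event would then be candidate antecedents for an association rule.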
There are new opportunities that come from the large amount of data
that is stored in various databases. Event mining becomes a challenging area
of research. In this subsection we will focus on formulating the event mining
tasks that consider observations of the system as well as internal and external
Fig. 2. Events and observations
events. Figure 2 shows the interrelations between events, observations of the system given in the form of time series, and alarms. Sometimes it is impossible to observe events directly; in such cases the data are stored in databases as time series representing observations of the system at selected points. The observations are analyzed by the system, and alarms are generated in case of abrupt changes in their values. In the next step, another algorithm finds the events that caused the changes in the system.
The following algorithms can be considered:
- for a significant change of an observation, find the events that caused this change;
- predict future events by analyzing the changes of observations;
- predict the changes of observations after an event occurs.
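The detection step in this loop, raising alarms on abrupt changes in the observed time series, can be sketched with a simple threshold on the deviation from a rolling mean. The window size, threshold, and data below are illustrative assumptions, not values from the text.

```python
# Sketch of alarm generation on a time series: flag observations that
# deviate sharply from the recent rolling mean. Window size and
# threshold are illustrative assumptions.

def detect_alarms(series, window=5, threshold=3.0):
    """Return indices where the value jumps away from the rolling mean."""
    alarms = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mean = sum(recent) / window
        var = sum((x - mean) ** 2 for x in recent) / window
        std = var ** 0.5 or 1e-9  # avoid division by zero on flat data
        if abs(series[i] - mean) / std > threshold:
            alarms.append(i)
    return alarms

observations = [10, 10, 11, 10, 10, 10, 30, 10, 10, 10]
print(detect_alarms(observations))  # the abrupt jump at index 6 is flagged
```

A subsequent diagnosis step would then search the event database for events occurring shortly before each alarm.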
There are various applications of event-based modeling, using various methodologies. The presented modeling framework might help in developing future event-driven modeling approaches. We have stressed the new research direction called event mining.
SimConT: A Tool for Quick Layout and
Equipment Portfolio Evaluation and Simulation
of Hinterland Container Terminal Operations
Manfred Gronalt, Thouraya Benna, and Martin Posset
University of Natural Resources and Applied Sciences Vienna, Department of
Economic and Social Science, Institute of Production and Logistics,
Feistmantelstraße 4, A-1180 Vienna, Austria,
Hinterland container terminals (HCTs) are important hubs in modern logistics networks that ensure efficient and frictionless intermodal (rail, truck, ship)
container turnover which has to be planned and coordinated. The increasing
number of HCTs along the Danube and within vital industrial regions shows
their significant and leading role in the development of European hinterland
container traffic. In contrast to open sea container terminals, hinterland con-
tainer terminals face other challenging optimization issues. Open sea container
terminals typically handle mainly two types of containers (20 feet and 40 feet).
Different container types can be stored in separated storage blocks which can
furthermore be separated into import and export blocks. Although HCTs are usually constrained in their storage capacity, they face a greater diversity of container types, hardly any predictable delivery and pickup time windows, and smaller turnover. Consequently, containers have to be stored within mixed
yard blocks. In addition, most of the operation activities are triggered by rail-
way processes.
Keeping these characteristics in mind, the planning of extensions and rebuilding of HCTs has to be done very carefully. Therefore, dynamic
analyses of the maximum storing positions as well as modelling of the in-
bound and outbound flows are necessary in order to determine the resulting
capacity requirements and the infrastructure needed (railways and road in-
frastructure). Numerous restrictions must be met and kept in mind during
the planning of new and extended inland terminals to avoid costly problems
in daily terminal operation. Hence a comprehensive analysis of equipment uti-
lization and detailed terminal infrastructure planning is becoming necessary
to ensure efficient HCT operation. Simulating the functions of container terminals, based on modern simulation techniques, is an approach to efficient resource planning and effective capacity analysis of HCTs. By means of simulation, different material handling technologies, shift patterns, resource
scheduling and infrastructure capacity are analysed. Further, optimization is
used in order to find optimal configuration parameters.
The goal of our research is to minimize the risk of bad investments and stranded costs when planning and (re)building the infrastructure and capacity of HCTs. The SimConT simulation environment is based on a modular concept that supplies a potential user with timely and reliable results and reports for efficient planning of capacity and infrastructure for inbound
and outbound flows of a hinterland container terminal. The integration of
inbound and outbound flows, which enables the evaluation of infrastructure
requirements for train, trucks and vessels is a further essential facet of our
research. SimConT was developed in a modular design including terminal-
configuration, simulation and report generator. The modules of the simulation
environment have to be passed through in a sequential way and end up in a
clear and comprehensive reporting.
Fig. 1. Sequential components of SimConT
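The sequential, modular structure (terminal configuration, data generator, simulation, report generator) can be roughly sketched as a pipeline. All function names, parameters, and the toy throughput metric below are assumptions for illustration and are not part of SimConT itself.

```python
# Minimal sketch of SimConT's sequential structure: a terminal
# configuration feeds a data generator, whose container lists drive a
# simulation whose results are summarized by a report generator.
# Names and the toy metric are illustrative assumptions.
import random

def configure_terminal():
    # system + order configuration: layout and order parameters
    return {"storage_slots": 600, "cranes": 2, "orders_per_day": 120}

def generate_container_lists(config, seed=42):
    # data generator: produce inbound container lists for the simulation
    rng = random.Random(seed)
    return [{"id": i, "dwell_days": rng.randint(1, 5)}
            for i in range(config["orders_per_day"])]

def simulate(config, inbound):
    # toy capacity model: cranes limit the number of daily lifts
    lifts_per_day = config["cranes"] * 50
    days = -(-len(inbound) // lifts_per_day)  # ceiling division
    return {"containers": len(inbound), "days_needed": days}

def report(results):
    # report generator: summarize the main performance indicator
    return (f"{results['containers']} containers handled "
            f"in {results['days_needed']} day(s)")

cfg = configure_terminal()
print(report(simulate(cfg, generate_container_lists(cfg))))
```

The point of the sketch is the strict module sequence: each stage consumes only the previous stage's output, mirroring the tool's configuration-to-report flow.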
The terminal configuration enables the creation of an artificial image of
a potential terminal layout by determining all parameters and consists of
three modules, including system configuration, order configuration and data
generator. All interfaces for parameter entry are designed in a self-explanatory
way to avoid extensive training measures and additional consulting. System
and order configuration modules include the configuration of all necessary
layout, operation and order related data and offer intelligent input data error
avoiding interfaces. Hereafter the defined parameters are transmitted to the
data generator, where lists of inbound and outbound containers (which will
be used as input data for the simulation) are produced and edited.
The subsequent simulation works with the exact number and identification of
containers and records all container movements, equipment allocations and
storage positions exactly. The scheduling of the terminal equipment is done
according to shortest path, predefined priority rules and the availability of con-
tainer related information. The goal of the simulation is to provide guidelines
for improving HCT layout and equipment configuration, with consideration of transportation lead-time reduction and the number of container lifts. Finally,
the simulation results are processed by the report generator to offer a clear
and comprehensive overview of the main performance indicators of HCTs. All
results can be accessed within a cockpit in an intuitive way with extensions
of graphical support and logical aggregations.
Our work is done in close cooperation with an Austrian HCT operating company and an Austrian rail infrastructure operator, which ensures the integration of practice-based data and know-how. Typically, HCTs provide capacity in a range of 600 to 1,500 TEU with a typical turnover of 100,000 to 150,000 TEU per year. They serve two to six tracks, and the operation is predominantly done by transtainers and reach stackers.
The results are preliminary work toward a prototype of a dedicated HCT simulation environment. Within our research, distinctive functions of HCTs are investigated and set in a modular relation that can be calibrated and scaled, creating the ability to simulate HCTs of many possible shapes and sizes.
Keywords: Intermodal terminals, hinterland terminals, container terminal
management, simulation, optimization
Dynamic Multi-Linked Negotiations in Multi-Echelon Production Scheduling Networks
Hoong Chuin Lau1, Guan Li Soh2, and Wee Chong Wan2
1School of Information Systems, Singapore Management University, Singapore,
2The Logistics Institute – Asia Pacific, National University of Singapore,
Singapore, [tlisgl, tliwwc]
In a multi-agent system, agents negotiate to find a mutually acceptable so-
lution to a problem. More often than not, agent negotiations are performed
independently of one another. In this paper, we apply the concept of multi-
linked negotiations [1] (where negotiations exert influences on one another)
in a dynamic system that needs to generate solutions quickly to satisfy de-
mand across a multi-echelon production scheduling network. Multi-linked ne-
gotiations occur in situations where a task requires further sub-tasks to be
completed, and also when the existence of many such tasks results in compe-
tition for a common resource. An example of this is in a manufacturing supply
chain network that consists of entity nodes and linkages defining contractor-
contractee relationships. Very often such relationships do not extend beyond
a direct relationship. Suppliers upstream usually have no information about
their customer’s customer, and the converse is true. Agent implementations
usually simulate a single tier commodity market without such multi-tier rela-
tionships. Applying single-tier agent negotiation strategies to multi-tier sys-
tems brings up the questions of negotiation ordering and the parameters of
the negotiations.
We consider a dynamic (as opposed to anticipatory) multi-echelon pro-
duction scheduling network involving the production, assembly and trans-
portation of multi-indenture goods that arrive dynamically. A finished good
undergoes component production and different levels of assembly at different
facilities. Different facilities have different capabilities and capacities in pro-
viding the various operations required. A request is composed of a number
of goods at a number of locations by a certain time. It is handled by a man-
agement (contractor) agent who generates a schedule that optimizes the total
production cost. The facilities and transportation services are represented by
contractee agents who negotiate for the available jobs according to their lo-
cal utilities. Contractor and contractee agents have no visibility over each
other’s agenda. The problem is to generate a schedule that maximizes the
contractee agents’ local utilities while minimizing the total production cost.
Requests are processed one after another as they enter the system, and no
decommitment is allowed. Due to the directly and indirectly linked relation-
ships [1] brought about by a multi-echelon network and multi-indenture goods,
approaches based on single independent negotiations will not work well. To
overcome this shortcoming, we apply the concept of negotiation ordering and
the feature assignment as described in [1]. Our work differs from [1] in the
problem solved; [1] outputs a negotiation ordering and corresponding feature
assignments for a subsequent negotiation phase. In our work, we assume that
the actual time of negotiations is negligible, hence enabling us to embed the
negotiations within the scheduling algorithm. This integrated approach allows
us to do away with the uncertainties of negotiation outcomes and generate a
final production schedule efficiently.
Problem Formulation and Modeling
We define our problem in a military context. We view a request as a mission
order (MO) and a production schedule is needed to fulfill the MO. A request
comprises a list of finished goods required at specified locations by a stipulated
time. This list of finished goods requires different levels of assembling from
components at different locations, and also their transportation between these
locations. An MO can hence be defined as a hierarchy of mjobs and sub-jobs,
both assembly and transportation, that need to be completed to fulfill the
request. A production schedule that fulfills the MO is defined by (a) the
facility where each job is executed; (b) the start and finish times of the jobs;
(c) transport assignment at pickup and drop locations. For simplicity in this
paper, we will ignore production and transportation, and focus on assembly.
Our model comprises two different types of agents: the management agent (ma) and n facility agents (fa). The ma receives the MO and seeks a schedule that minimizes the sum of production costs P_i over all jobs i = 1 to m. Each fa represents a facility that maintains its own local schedule of assigned jobs. Each fa_j wishes to maximize its local utility function U_j that models internal preferences (which may conflict with the ma's objective). In our problem, U_j models the facility's foreknowledge and experience in handling future assignments (bearing in mind that mission orders arrive dynamically but are fulfilled one after another). The goal in this paper is to find a schedule χ that minimizes the objective function Z = Σ_i P_i(χ) − Σ_j U_j(χ). We propose a
negotiation scheme where a series of negotiations will proceed sequentially in
a given order (to be explained below), each involving the ma with a partic-
ular fa on a particular sub-job, and the outcome of an early negotiation will
become a constraint for later negotiations. All fas are assumed to be truth-
telling and non-collusive. The challenge is to find a negotiation ordering such
that negotiations proceeding along that ordering will produce a schedule that
minimizes Z.
Three-Phase Solution Approach
Our algorithm is a nested search comprising three sub-phases: (a) find a facility assignment, (b) find a negotiation ordering, and (c) generate the best schedule, i.e., one that minimizes Z. The initial facility assignment is generated greedily based on capacity, utilization, and distance, while the initial negotiation ordering is generated in a bottom-up lexicographical manner over the job hierarchy.
Facility Assignment ϕ
This phase is concerned with allocating a fa to each job. Here we apply a
heuristic local search algorithm to generate different facility assignments.
Negotiation Ordering φ
Given a fixed facility assignment ϕ, this phase finds the best negotiation order-
ing. Using simulated annealing, successive orderings are generated. For each
φ, a project scheduling algorithm that considers only the total production cost is then used to determine an optimal schedule χ_P, which consists of the start time and deadline for each job in the MO. Each (ϕ, φ, χ_P) triplet is the input for the next phase.
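The annealing loop over negotiation orderings can be sketched as follows. The paper only states that successive orderings are generated by simulated annealing and scored via a scheduling run, so the cost function, cooling schedule, and parameters here are placeholder assumptions.

```python
# Sketch of the negotiation-ordering phase: simulated annealing over
# permutations of jobs, scoring each ordering with a placeholder cost
# standing in for the production-cost schedule computed per ordering.
# Cost function and cooling schedule are illustrative assumptions.
import math
import random

def anneal_ordering(jobs, cost, temp=1.0, cooling=0.95, steps=200, seed=0):
    rng = random.Random(seed)
    order = list(jobs)
    best = list(order)
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]      # propose a swap
        delta = cost(order) - cost(best)
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            if cost(order) < cost(best):
                best = list(order)                   # keep improvement
        else:
            order[i], order[j] = order[j], order[i]  # undo the swap
        temp *= cooling
    return best

# toy cost: prefer jobs placed at their own index (a stand-in for the
# total production cost of the schedule induced by the ordering)
cost = lambda order: sum(abs(pos - job) for pos, job in enumerate(order))
print(anneal_ordering([3, 1, 0, 2], cost))
```

In the actual algorithm, evaluating an ordering means running the project scheduling algorithm to obtain χ_P, and the resulting triplet feeds the negotiation phase.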
Agent Negotiation
For each triplet (ϕ, φ, χ_P), negotiations between the ma and the respective fa's in ϕ proceed in the order defined in φ. The timings in χ_P are used as a proposal from the ma to the fa. We propose a reward scheme in which units of credit flow among fa's through the ma; initially, the ma possesses all credits. Based on its utility function U_j, fa_j generates an internal local schedule χ_C (i.e., its own set of timings for the assigned job i). The local schedule χ_C and its current credit standing are used to negotiate with the ma as follows:
i. fa accepts χ_P unconditionally, if U_j(χ_P) ≥ U_j(χ_C)
ii. fa counter-proposes χ_C and gives up U_j(χ_C) − U_j(χ_P) credit units, if U_j(χ_P) < U_j(χ_C) and fa has sufficient credits
iii. fa accepts χ_P with an increase of U_j(χ_C) − U_j(χ_P) units credited, if U_j(χ_P) < U_j(χ_C) and fa has insufficient credits
In case ii, the ma will give a rough commitment if, by accommodating χ_C, it is able to generate a feasible schedule, followed by a full commitment if the total production cost is increased by an amount no more than the net
number of credits received from all fas at the end of all negotiations. In case
of iii, the ma accedes if it has enough credits. Otherwise, the negotiation
is unsuccessful. The result of a successful negotiation will in turn become a
constraint for the subsequent negotiations in the ordering. A new schedule
is formed when all negotiations are successful. The fitness of this schedule χ
is measured with the objective function Z. Our algorithm seeks to find the
triplet (ϕ,φ,χ) that minimizes Z.
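The accept/counter-propose rules i.–iii. can be sketched in code. The following is a minimal sketch, assuming numeric utilities and plain-number credit balances; the function and its names are illustrative, not the authors' implementation.

```python
def negotiate(ma_credits, fa_credits, u_proposed, u_counter):
    """One negotiation step between the ma and a fa, following rules i-iii.

    u_proposed = Uj(chi_P), the fa's utility for the ma's proposed timing;
    u_counter  = Uj(chi_C), the fa's utility for its own local schedule.
    Returns (outcome, ma_credits, fa_credits), where outcome is one of
    'accept', 'counter', or 'fail'.
    """
    if u_proposed >= u_counter:
        # i. fa accepts chi_P unconditionally.
        return 'accept', ma_credits, fa_credits
    delta = u_counter - u_proposed  # fa's utility loss under chi_P
    if fa_credits >= delta:
        # ii. fa counter-proposes chi_C and gives up delta credit units.
        return 'counter', ma_credits + delta, fa_credits - delta
    if ma_credits >= delta:
        # iii. fa accepts chi_P with delta units credited by the ma.
        return 'accept', ma_credits - delta, fa_credits + delta
    # The ma cannot cover the gap either: the negotiation fails.
    return 'fail', ma_credits, fa_credits
```

A successful outcome then constrains the remaining negotiations in the ordering, as described above.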
Experimental Results
This will be provided in the full-length version of the paper.
This work is supported by the Singapore Ministry of Defense.
Modelling Classification Analysis for
Competitive Events with Applications to
Sports Betting
Stefan Lessmann1, Johnnie Johnson2, and Ming-Chien Sung2
1University of Hamburg, Institute of Information Systems, Von-Melle-Park 5,
20146 Hamburg, Germany, [email protected]
2Centre for Risk Research, School of Management, University of Southampton,
Classification analysis involves inferring a functional relationship between
independent variables and a discrete target variable from a set of example
patterns. Subsequently, the captured relationship facilitates predicting the
value of target variables when only the values of the independent variables are
known. While multivariate statistical methods like logistic regression or dis-
criminant analysis are well established, several empirical benchmarks give rise
to the suspicion that novel machine learning techniques like artificial neural
networks or support vector machines are capable of providing more accurate
predictions. Hence, such techniques are predominantly used in contemporary
application of classification analysis; e.g., the support of managerial decision
making, medical diagnosis, speech and image recognition or text mining.
Competitive events differ from ordinary classification analysis in the sense
that certain patterns compete against each other for a specific target value in
a certain context. That is, the functional relationship the classifier or learn-
ing machine has to derive from the set of example patterns does not only
depend on the independent variables of one pattern but as well on those of
some interlinked patterns. We refer to this linking as contextual information.
Consider the case of horseracing as an example. A large number of indepen-
dent variables can be used to build a prediction model facilitating winner
versus non-winner classification, e.g., measurements of the horses’ and jockeys’
past performance. However, the likelihood of a particular horse winning a race
does not only depend on its individual skill and past performance but also on
those of its competitors in a given race. In fact, this information seems crucial
and omitting it can be expected to be highly detrimental to predictive per-
formance. As a result, the literature on modelling competitive events focuses
on statistical methods that are based on maximum likelihood estimation and
capable of taking this contextual information into account.
We strive to adapt machine learning methods to competitive settings by
finding an appropriate representation of the data. The idea of accounting for
competition purely by data modelling is appealing since it requires no algo-
rithmic modification of the classifier, thereby facilitating the application of
several standard learning techniques for comparison purposes. Therefore, we
develop three modelling techniques, namely difference-to-best coding, pairwise
matching, and race-to-example modelling, and compare them with a stan-
dard classification setting. Preliminary results are derived for a horseracing
data set using the support vector machine classifier. The horseracing domain
is selected due to the large body of literature within this field and the fact that
the varying number of competitors within a race imposes additional constraints
on the applicability of traditional classification analysis. The selection of the
support vector machine is motivated by its excellent empirical performance
in several benchmark studies and its solid mathematical underpinnings.
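One plausible reading of difference-to-best coding can be sketched as follows: each runner's features are re-expressed relative to the best value of that feature within the same race, so the encoded pattern carries the contextual information discussed above. The function name and the element-wise encoding are illustrative assumptions, not the authors' exact scheme.

```python
def difference_to_best(race):
    """Encode each runner's feature vector as its element-wise difference
    to the best (maximum) value of that feature within the same race.

    `race` is a list of feature lists, one per runner; the winner-candidate
    with the best value of a feature gets 0 for that feature.
    """
    best = [max(col) for col in zip(*race)]
    return [[x - b for x, b in zip(runner, best)] for runner in race]
```

The encoded patterns can then be fed to any standard classifier, such as a support vector machine, without algorithmic modification.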
Keywords: Classification, competitive events, horseracing, support vector machines
Structured Modeling Technology: Recent
Developments and Open Challenges
Marek Makowski
International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria,
[email protected],
Mathematical modeling of a complex problem is actually a network of ac-
tivities involving interdisciplinary teams collaborating closely with experts in
modeling methods and tools; often, however, new methods and/or software
need to be developed, especially in the case of:
Models with a complex structure using large amounts of diversified data,
possibly from different sources.
The need for robust strategies to account for a proper treatment of spatial
and temporal distributional aspects, vulnerabilities, inherent uncertainty
and endogenous risks affecting large communities and territories.
Demand for integrated model analysis, which should combine different
methods of model analysis for supporting a comprehensive examination of
the underlying problem and its alternative solutions.
Stronger requirements for the whole modeling process, including quality
assurance, replicability of results of diversified analyses, and automatic
documentation of modeling activities.
Requirement of a controlled access through the Internet to modeling re-
sources (composed of model specifications, data, documented results of
model analysis, and modeling tools).
Demand for large computing resources (e.g., large numbers of computa-
tional tasks, large-scale optimization problems, or large amounts of data).
Use of established modeling methods and general-purpose modeling tools
cannot adequately meet requirements of such modeling activities. Thus we
need to advance modeling methodology to address these requirements.
Geoffrion presented in [2] a detailed specification of a modeling cycle. Here,
we discuss the modeling cycle composed of more aggregated elements which
correspond to the elements of the Structured Modeling Technology (SMT)
outlined below:
Analysis of the problem, including the role of a model in the corresponding
decision-making process; and the development of the corresponding model
Collection and verification of the data to be used for the calculation of the
model parameters.
Definition of various model instances (composed of a model specification,
and a selection of data defining its parameters).
Diversified analyses of the instances.
Efficient use of computational grids for large volume of computations.
Documentation of the whole modeling process.
SMT is a Web-based application supporting the whole modeling process.
Users do not use any modeling language; model specification is done through
several simple forms composed of choice lists and text fields (only for spec-
ification of relations basic knowledge of LaTeX is required). All persistent
elements of the whole modeling process are stored in an automatically gener-
ated data warehouse. This approach supports modeling work by teams located
in distant locations.
While the basic functionality of SMT has been developed, and is used for
large and complex models (having more than a million variables and rather
complex indexing structure), there are several open challenging problems, solu-
tions of which are needed for extending the functionality of SMT. These problems include:
an efficient implementation of handling of complex measurement units (a
key attribute of each SMT entity),
methods for effective numerical experiments (analysis of a large number of
solutions in order to automatically generate new, possibly also large, sets
of computations).
The current state of SMT development, and of ongoing research activ-
ities related to the open problems, will be discussed. More information about
SMT can be found in [4], and at URL
Several ideas exploited in the SMT have resulted from many discussions
and joint activities of the author with A. Beulens, A. Geoffrion, J. Granat,
H. Scholten, H-J. Sebastian and A.P. Wierzbicki. The user and DBMS in-
terfaces of SMT have been designed and implemented by colleagues from
the National Institute of Telecommunications, Warsaw, Poland: M. Majdan,
C. Chudzian, B. Kozlowski.
1. Geoffrion, A.: An introduction to structured modeling. Management Science
33(5), 547–588 (1987).
2. Geoffrion, A.: Integrated modeling systems. Computer Science in Economics
and Management 2, 3–15 (1989).
3. Geoffrion, A.: Indexing in modeling languages for mathematical programming.
Management Science 38(3), 325–344 (1992).
4. Makowski, M.: Structured modeling technology. European Journal of Opera-
tional Research 166(3), 615–648 (2005).
5. Makowski, M., Wierzbicki, A.: Modeling knowledge: Model-based decision sup-
port and soft computations. In: Yu, X., Kacprzyk, J. (eds.) Applied Decision
Support with Soft Computing, Vol. 124 of Series: Studies in Fuzziness and
Soft Computing, Springer, Berlin, 3–60 (2003). Draft version available from
6. Wierzbicki, A., Makowski, M., Wessels, J. (eds.): Model-Based Decision Support
Methodology with Environmental Applications. Series: Mathematical Modeling
and Applications, Kluwer, Dordrecht (2000).
Production Planning with Load Dependent
Lead Times and Deterioration
Julia Pahl1, Stefan Voß1, and David L. Woodruff2
1University of Hamburg, Institute of Information Systems, Von-Melle-Park 5,
20146 Hamburg, Germany, [email protected],
2Graduate School of Management, UC Davis, Davis CA 95616, USA
Summary. As organizations move from creating plans for individual production
lines to entire supply chains, it is increasingly important to recognize that decisions
concerning utilization of production resources impact the lead times that will be
experienced. In this paper we give some insights into why this is the case by looking
at queuing that results in delays. We use these insights to briefly survey and sug-
gest optimization models that take into account load dependent lead times. Related
“complications” consider the relationship and influence between deterioration or perishable items and load dependent lead times in the framework of tactical production planning.
Keywords: Supply chain management, lead times, tactical planning, deteri-
orating items, perishability, rework
Increased globalization forces companies to compete on an expanding set
of criteria. One key criterion is the lead time which is defined as the time
between the release of an order to the shop floor or to a supplier and the
receipt of the items. Lead time considerations are essential with respect to
the global competitiveness of supply chains, because long lead times impose
high costs due to rising work in process (WIP), inventory levels as well as
larger safety stocks caused by increased uncertainty about demand. Shorter
lead times permit the increase of efficiency by, e.g., enabling companies to
quote faster deliveries to customers and reducing the uncertainty of demand
forecasting. However, in intermittent production systems manufacturing lead
times tend to be long and variable with only a fraction of time being due to
value added processing times and the rest of the time being a result of wait-
ing in the system. Large planning models typically treat lead times as static
input data, but in most situations, the output of a planning model implies
capacity utilizations which, in turn, imply lead times. Despite this, considerations about lead times dependent upon resource utilization (load dependent
lead times: LDLT) are rare in the literature. The same is valid for models
linking order releases, planning and capacity decisions to lead times, and tak-
ing into account factors influencing lead times such as the system workload,
batching, sequencing decisions, WIP levels or rework due to deterioration or
perishability of products. Furthermore, in Supply Chain Management (SCM)
and production planning models nonlinear dependencies, e.g., between lead
times and the workload of a production system or a production resource, are
usually omitted. This happens even though there is empirical evidence that
lead times increase nonlinearly long before resource utilization reaches 100%,
which may lead to significant differences in planned and realized lead times.
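The nonlinear growth of lead time in utilization can be illustrated with the textbook M/M/1 waiting-time formula; this is a standard queueing illustration of the empirical behavior mentioned above, not a model proposed in the paper.

```python
def mm1_lead_time(utilization, service_time=1.0):
    """Expected time in an M/M/1 system as a function of utilization
    rho = lambda/mu: W = service_time / (1 - rho).

    Lead time grows nonlinearly and explodes long before utilization
    reaches 100%, which is why treating lead times as static input data
    in planning models is problematic.
    """
    if not 0 <= utilization < 1:
        raise ValueError("utilization must lie in [0, 1)")
    return service_time / (1.0 - utilization)
```

At 50% utilization the expected lead time is already twice the service time; at 90% utilization it is ten times the service time.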
Certain characteristics of production materials, e.g., deterioration or perishability, can necessitate rework: once items pass a specific maximum lifetime, they have to be replaced or rerouted, thus consuming capacity and augmenting utilization. The perishability or deterioration of
goods is regarded as the process of decay, damage or spoilage of items in such
a way that they cannot be used for their original purpose anymore, viz. they
go through a change in storage and lose their utility partially or completely.
This could be a continuous process so that such items have a stochastic life-
time in contrast to perishable goods which are considered as items with a
fixed, maximum lifetime. The latter is true for products which become ob-
solete at some fixed point in time, because of various reasons, e.g., change
in style or technological developments. The higher the grade of deterioration,
the more rework time (if possible) and rework costs are necessary in order to
recover the item and to bring it back to a good quality state [1].
Rework can be economically attractive if rework times are much smaller
than the initial production times and if the value of the reworkable item is
substantial due to, e.g., expensive input materials. Additionally, producers can
be obliged by legislation and disposal bans to rework their defective products.
Other producers may want to take environmental responsibility and conse-
quently rework their defective, perished and/or returned products. According
to this, the integration of deterioration and perishability into models with
LDLT is of particular interest, especially with regard to lead time behavior. There-
fore, in this paper we extend our previous work on load dependent lead times
(see [2, 3, 4]) by considering them in the context of “complications” investigat-
ing the relationship and influence between deteriorating or perishable items
and those lead times in the framework of tactical production planning.
1. Inderfurth, K., Lindner, G., Rachaniotis, N.P.: Lot Sizing in a Production Sys-
tem with Rework and Production Deterioration. International Journal of Pro-
duction Research, 43, 1355–1374 (2005).
2. Voß, S., Woodruff, D.L.: A Model for Multi-Stage Production Planning with
Load Dependent Lead Times. In: Sprague, R.H. (ed.) Proceedings of the 37th
Annual Hawaii International Conference on System Science, IEEE Piscataway,
DTVEA03 1–9 (2004).
3. Pahl, J., Voß, S., Woodruff, D.L.: Load dependent lead times – From empir-
ical evidence to mathematical modeling. In: Kotzab, H., Seuring, S., Müller,
M., Reiner, R. (eds.) Research Methodologies in Supply Chain Management,
Physica, Heidelberg, 539–554 (2005).
4. Pahl, J., Voß, S., Woodruff, D.L.: Production Planning with Load Dependent
Lead Times. 4OR: A Quarterly Journal of Operations Research, 3, 257–302
Guided Online Decision Making
Jörn Schönberger and Herbert Kopfer
Chair of Logistics, Faculty of Business Studies and Economics
Wilhelm-Herbst-Straße 5, 28359 Bremen, Germany, [sberger,
The derivation and the formulation of formal decision models in terms of
mathematical expressions are necessary prerequisites for the successful ap-
plication of automatic algorithms to support decision making to manage the
flows of goods and resources through a logistical network. Numerous generic
and special models, often of optimization type, have been developed and inves-
tigated in this context as well as powerful algorithms to solve them. During
the last five decades, the quality as well as the quantity and scope of the
models have been successively extended by considering more and more rele-
vant problem parameters and possible decisions. However, in the last decade, a
new stream of interest has been established: the consideration of dynamic and
non-predictable future data. Within this contribution, we will apply state-of-
the-art methods as well as new ideas to model such a decision problem arising
from the reactive routing of a fleet of service teams which respond quickly
after technical failures or machine breakdowns have been reported.
We consider the deployment problem of the dispatching unit of a fleet
of service teams visiting customer sites after these customers have reported
technical failures that require resolution by a technician who is able to rem-
edy the malfunctions or breakdowns. Each technician is equipped with a van
that carries all tools necessary to solve technical failures immediately at the
corresponding customer site. As soon as additional customers call for tech-
nical support, the associated customer site visits have to be integrated into
the existing schedule in order to meet service time windows at the customer
sites agreed between the dispatching unit and the customers (online decision
making). As long as the number of additionally incoming requests stays below
the maximal possible workload of the service fleet, nearly all requests can
be served without significant loss of punctuality and at reasonable costs. If
the number of additionally received demands for customer site visits
exceeds the maximal possible workload of the service fleet (a demand peak),
the quality of service decreases: the percentage of in-time customer site
visits declines and the amount of penalty payments for late arrivals increases.
Selected requests are allowed to be subcontracted to other service partners
that are more expensive but ensure an in-time task completion.
Initially, we investigate the application of a pure cost-based myopic deci-
sion strategy for routing the service teams. Every time a new request arises,
the so far followed schedule is replaced by a new one. This new schedule is
the solution of a scheduling problem in which the costs for the required new
schedule are minimized. This means only the costs for fulfilling the current
set of open and uncompleted customer site visits are considered. Since the
penalty amount payable for an arrival after the closure of a customer site
time window is relatively low compared to the costs for the incorporation of
an external service partner, out-of-time-window visits will be preferred espe-
cially in situations in which a spontaneous and unpredictable demand peak
occurs. Within some numerical simulations we show that punctuality de-
clines dramatically, and for too long a time, if such a demand peak arises.
The decline in punctuality represents a loss of service quality and has nega-
tive impacts on the viability of the service company. In order to ensure
a continuous and high service performance, the management of the service
company has defined and published a policy which specifies that at least 90%
of the customer sites must be visited within the agreed time windows on av-
erage. Such a policy can be understood as a guideline to which the repeated
schedule derivation has to adapt: the generated schedules must fulfil the
properties specified in the policy, and if the properties are violated, their
fulfilment must be re-established as quickly as possible.
In the pure myopic dispatching strategy mentioned above the policy re-
quirements are not fulfilled at all. To remedy this problem, we propose to
decompose the dynamic decision problem into two separate but interacting
decision problems. One of these two problems is dedicated to ensure a high
punctuality even in situations with very high system load and is handled as
the superior problem. Solving the superior problem means modifying the
scheduling rules with respect to the current system punctuality. The resulting
scheduling rules are then used to solve the second, inferior, decision problem
in which the minimization of schedule costs is requested.
The adaptation of the scheduling rules is equivalent to the modification of
the corresponding short-term scheduling problem and therefore results in the
variation of a mathematical optimization problem of the current dispatching
instance consisting of an objective function, a set of constraints and a col-
lection of domains for the considered decision variables. The scheduling rule
adaptation can be realized by re-defining the objective function, one or more
constraints or by shrinking and relaxing the decision variable domains. We
report on our experiments and results with the modification of the search
direction and the decision variable domains.
We have investigated the adaptation of the search direction by modifying
the objective function of the mathematical schedule optimization problem. It
is intended to bias the search so that it becomes more promising to externalize
requests if the punctuality is low and to prevent expensive externalization if
the punctuality is already high. We have introduced one coefficient for the
part of externalization costs and one coefficient for the part of travel costs in
the objective function. By adjusting the coefficients, we can shift the relative
weight from externalization to self-entry or vice versa and can therefore guide
the search in a particular direction. We report on numerical experiments
with this kind of scheduling rule adaptation. The observed results show that
the adaptation is possible but hardly controllable.
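The coefficient-based adaptation of the objective function can be sketched as a weighted sum of the two cost parts; the coefficient values and the switching rule below are hypothetical, chosen only to illustrate how the relative weight shifts between externalization and self-entry.

```python
def biased_schedule_cost(travel_cost, external_cost, punctuality,
                         target=0.9, gain=2.0):
    """Illustrative weighted objective for one dispatching instance.

    When observed punctuality falls below the target, the coefficient on
    externalization costs is lowered so that subcontracting becomes
    relatively cheaper for the search; when punctuality is high,
    externalization is penalized instead.
    """
    if punctuality < target:
        w_ext, w_travel = 1.0 / gain, 1.0
    else:
        w_ext, w_travel = gain, 1.0
    return w_ext * external_cost + w_travel * travel_cost
```

The minimizer of this biased cost is thus steered toward or away from externalization without touching the constraints of the scheduling problem.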
In order to keep as much control as possible over the adaptation of the
optimization model, we have decided to turn our attention to the adaptation
of the domains of the decision variables. In a preliminary step, we reformulate
the short-term decision model and introduce a binary decision variable for each
request that indicates whether this request has to be externalized or not. The
adaptation of the optimization model works as follows: If the average
punctuality of the most recently completed and scheduled requests declines
below the value indicated in the policy, the binary indicator decision variables
for all additionally released requests are set to “1” so that all additionally
released requests are externalized. As soon as the punctuality re-increases,
the binary indicator variables for the new requests are set to “0” so that all
recently received requests are allowed to be sourced out as well as to be served
by an own team. Within several numerical experiments we demonstrate the general
applicability and controllability of this approach to adapt the myopic decision model.
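The domain-fixing rule for the binary externalization variables can be sketched as follows; the function name and the dictionary representation of variable domains are assumptions made for illustration.

```python
def fix_externalization_domains(recent_punctuality, new_request_ids,
                                policy_target=0.9):
    """Return the admissible domain of the binary externalization
    variable for each newly released request.

    Below the policy target, the variable is fixed to {1}, i.e. all
    additionally released requests are forced to be subcontracted; once
    punctuality recovers, both values are admissible again and a request
    may be served by an own team or sourced out.
    """
    if recent_punctuality < policy_target:
        return {r: {1} for r in new_request_ids}
    return {r: {0, 1} for r in new_request_ids}
```

Shrinking decision-variable domains in this way changes the feasible region of the short-term model without altering its objective function.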
We conclude our contribution with the report of some numerical experi-
ments in which the costs for the application of the scheduling rule adaptation
are estimated.
Scalability of Three Parallel Direct Search
Methods in Simulation-Based Optimization
Frank Thilo and Manfred Grauer
Information Systems Institute, University of Siegen, Hölderlinstr. 3, D-57068
Optimization tasks in multidisciplinary optimization include design problems
in manufacturing in the aircraft or automotive industry, alloy casting processes
and metal-sheet forming. Typically, solving these problems involves running
many complex simulations which are implemented in commercial software
packages. In general, the optimization algorithm has to treat the simulation
system as a black box. To solve such a simulation-based optimization prob-
lem, many hundreds or thousands of simulations are necessary, each of which
is computationally expensive. This results in extremely high computational
demands which can only be met by distributing the computation over many computers.
Basically, there are two different ways in which parallel or distributed
computing can be used in this scenario: First, the time needed for a single
simulation run can be reduced by parallelizing the simulation software itself,
e.g., by partitioning a FEM mesh and assigning each partition to a different
CPU. Second, the optimization algorithm can request the evaluation of mul-
tiple scenarios at the same time, so that many (sequential) simulations are
executed simultaneously. It is also possible to combine the two approaches.
The simulation and optimization of complex products in the area of virtual
prototyping aims at reducing the time to market for new innovative products
and creates an ever-increasing demand for computational power which can
only be met by utilizing larger numbers of CPUs. This requires scalable hard-
ware architectures and algorithms. This paper focuses on algorithmic scala-
bility; in particular, three scalable optimization algorithms are presented. To
allow a meaningful scalability analysis, a suitable performance and efficiency
metric for heuristic, parallel optimization is developed. This is then used to
analyze the characteristics of the algorithms for solving a benchmark problem
as well as real-life industrial problems from metal sheet forming [1] and cast-
ing processes [9, 4] on a compute cluster of up to 300 CPUs. Furthermore, the
scalability of parallel versions of the simulation packages, which are used for
the optimization problems, is examined for different networking technologies.
The Scalability Concept
The term scalability is used in different contexts to express that a computer
system or an algorithm is able to solve a given problem faster or can cope with
an increased workload when resources are added. Here, we are concerned with
adding additional nodes to a compute cluster or a computational grid. Scal-
ability analysis can be divided into algorithmic and architectural scalability.
While the first focuses on attributes of an algorithm, i.e., the algorithm’s se-
quential portion, its inherent concurrency limits and synchronization costs, the
latter examines hardware-related aspects such as processing capacity, information
capacity and connectivity. To predict real runtimes of a parallel algorithm
on a given hardware architecture, both kinds of analyses must be taken into account.
The main metric to quantify scalability behavior is speedup, which com-
pares the computation times of a parallel algorithm for different numbers of
CPUs [7]. In the case of heuristic, parallel optimization algorithms, there is no
sharply defined goal for which each algorithm’s elapsed time can be compared.
Instead, both the time needed and the quality of the solution must be con-
sidered, e.g., by examining the progress of the achieved solution quality over
time. However, a target solution quality can be defined and the time needed
to reach this level be used as the basis to calculate speedup and efficiency values.
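This time-to-target notion of speedup and efficiency can be written down directly; the sketch below mirrors the description above, with the single-CPU configuration as the baseline.

```python
def speedup_and_efficiency(time_to_target_1cpu, time_to_target_p, p):
    """Speedup and efficiency based on the wall-clock time each
    configuration needs to reach a fixed target solution quality.

    speedup    = T(1) / T(p)
    efficiency = speedup / p  (1.0 means perfectly linear scaling)
    """
    speedup = time_to_target_1cpu / time_to_target_p
    return speedup, speedup / p
```

For example, reaching the target in 125 s on 10 CPUs instead of 1000 s on one CPU gives a speedup of 8 and an efficiency of 0.8.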
Distributed Simulation-Based Optimization
Optimization problems which arise in the field of computational engineering
usually cannot be formulated analytically because of their complexity. Instead,
a model of the real problem is created, typically in the form of a data set for
a simulation software package. For optimization, the model is parameterized
by some variables which can be chosen within lower and upper bounds. The
goal is to find the best set of n decision variables as defined by some objective
function which has to be minimized. Furthermore, the set of possible solutions
can be limited by arbitrary constraints.
During the course of the optimization, thousands of solution candidates
must be evaluated, requiring a costly simulation run each time. While a sin-
gle simulation requires several minutes up to several hours or even days of
processing time, the amount of computation in the optimization algorithm
itself is several orders of magnitude lower. Also, the amount of data that is
exchanged between the search method and the simulations is very low. Thus,
the algorithms’ scalability is mainly limited by their inherent concurrency
limits, synchronisation points and drop in effectiveness when increasing the
degrees of parallelism and not by communication costs. Experiments show
that for this type of simulation-based optimization problems the relation of
local computation time to communication time is at least 1000 to 1.
The simulation-based nature of the problem means that in general no
derivative information is available and no assumptions can be made about
the nature of the problem space. This prohibits the use of linear program-
ming techniques or gradient-based optimization algorithms. One class of al-
gorithms which can be applied are so-called direct search methods [6]. There
is no exact definition of direct search, but important characteristics are that
these methods do not explicitly use derivative information nor build a model
of the objective function. Instead, the basic operation relies on direct com-
parison of objective function values. To utilize parallel computing resources,
parallel direct search methods are needed, which can evaluate several solu-
tion candidates simultaneously. Three such search methods are compared:
The Distributed Polytope Search (DPS), a parallel implementation (PSS) of
the meta-heuristic scatter search [8], and asynchronous parallel pattern search
(APPS) [3].
DPS belongs to the class of simplex-based search methods. It generates
new solution candidates by applying geometrical operations to a set of pre-
viously calculated solutions. The initial set of 2nfeasible solutions is gener-
ated by a parallel random search. During the main exploration phase, new
points are generated by reflecting or contracting existing solutions relative to
the weighted center of gravity. The number of operations in each iteration
depends on parameters which can be adjusted to the number of CPUs. In-
feasible points (i.e., points which violate the constraints) are modified by a
binary search repair strategy. The algorithm terminates when the standard
deviation of the objective values drops below a threshold.
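The geometric operations used by DPS can be illustrated by a single reflection/contraction step relative to the center of gravity; the exact weighting DPS applies to the centroid is not specified here, so an unweighted centroid coordinate is assumed.

```python
def reflect(point, centroid, alpha=1.0):
    """One simplex-style geometric operation (sketch): move an existing
    solution through the center of gravity of the current point set.

    alpha = 1 gives a plain reflection; 0 < alpha < 1 yields a point
    between the centroid and the full reflection, i.e. a contraction.
    With alpha = 0 the centroid itself is returned.
    """
    return [c + alpha * (c - x) for x, c in zip(point, centroid)]
```

Infeasible points produced this way would still need the binary-search repair strategy mentioned above before evaluation.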
PSS can be viewed as an evolutionary approach. At the heart of the al-
gorithm is the reference set which is initialized by a diversification algorithm.
The size of the set is fixed at between 10 and 20. In each iteration, all pairs
and 3-tuples containing a new solution are combined to create new candidate
solutions. The combination is a linear combination with a different random
factor for each decision variable. This typically results in several hundred new
points which are then evaluated simultaneously. The new reference set is built
by choosing both the best and most diverse solutions while moving infeasi-
ble solutions towards a known solution in a path relinking step. When the
standard deviation of the objective function values reaches a threshold, new
random solutions are created to increase diversity. After a fixed number of
these steps, the algorithm terminates.
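The PSS combination step can be sketched as a component-wise random linear combination of two reference solutions; restricting the random factor to [0, 1) (i.e. to the segment between the parents) is an assumption for illustration.

```python
import random

def combine(parent_a, parent_b, rng=random):
    """PSS-style combination (sketch): a linear combination of two
    reference solutions with a different random factor for each
    decision variable, producing one new candidate solution."""
    return [a + rng.random() * (b - a) for a, b in zip(parent_a, parent_b)]
```

Applied to all pairs and 3-tuples containing a new solution, such combinations yield the several hundred candidate points that are then evaluated simultaneously.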
The third algorithm, APPS, belongs to the group of pattern search op-
timization algorithms. By default, APPS uses two search directions for each
decision variable. Thus, 2n new points are created and evaluated in parallel.
Infeasible points are discarded. If there is a best new point, it is chosen as
the new starting point for the next iteration. Otherwise the step length is de-
creased. The algorithm terminates when this length drops below a threshold.
APPS does not work iteratively, but asynchronously, i.e., it does not wait for
all points to be evaluated before it creates new candidate solutions, but can
continue as soon as one evaluation has finished and has resulted in a new best
solution. In this respect, it differs from the synchronous approaches of both
DPS and PSS.
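The 2n coordinate probes of one pattern-search iteration can be sketched as follows; a uniform step length across all variables is assumed for simplicity.

```python
def pattern_points(x, step):
    """Trial points of a coordinate pattern search (sketch): for each of
    the n decision variables, probe in the positive and the negative
    coordinate direction, giving 2n candidate points around x."""
    points = []
    for i in range(len(x)):
        for sign in (+1.0, -1.0):
            trial = list(x)
            trial[i] += sign * step
            points.append(trial)
    return points
```

In the asynchronous variant, a new best point among these candidates immediately becomes the next starting point, without waiting for the remaining evaluations to finish.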
Computational Results
In the following, first results from solving multidisciplinary optimization prob-
lems are presented. The computations have been performed on the Rubens
SLES Linux cluster of the University of Siegen on up to 300 CPUs.
To allow an extensive scalability analysis over a wide range of problem di-
mensions and numbers of CPUs, a mathematical test problem is defined based
on the well-known Rosenbrock function. To mimic the temporal behaviour of
a real, simulation-based problem, an event-based simulation is used which
keeps track of virtual wall clock time where each evaluation of the function
is assigned a given amount of virtual time. To validate the results, a subset is compared with those of a real distributed optimization on the compute cluster.
Figures 1 and 2 show the relative speedup of DPS and PSS for different
problem dimensions and numbers of CPUs. Figures 3 and 4 depict the algo-
rithms’ average solution quality over time for 10 and 100 CPUs, respectively.
The results indicate that each algorithm exhibits different scalability charac-
teristics. Scatter search has the highest efficiency for large numbers of CPUs,
in particular for problems with few decision variables. However, it takes more
absolute time to reach a given solution quality than the other two algorithms
when using only a small number of CPUs.
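For reference, the Rosenbrock-based test problem and the relative speedup measure can be sketched as follows (the standard generalized form of the function is assumed here):

```python
def rosenbrock(x):
    """Generalized Rosenbrock function (standard form assumed);
    its minimum value 0 is attained at x = (1, ..., 1)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))

def relative_speedup(t_serial, t_parallel):
    """Relative speedup: time on one CPU divided by time on p CPUs."""
    return t_serial / t_parallel

print(rosenbrock([1.0] * 10))         # 0.0
print(relative_speedup(100.0, 25.0))  # 4.0
```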
The algorithms have been used to solve several real-world optimization
problems. They have been integrated into the OpTiX optimization environ-
ment which presents an abstract interface of the problem to the algorithms
and handles the distribution and scheduling of the distributed simulation runs.
It is currently being transformed into a service-oriented architecture [2]. Past applications include the design of an aircraft wing and several design and hybrid control problems in groundwater and pollution management. We
plan to present scalability results for two new problem domains and compare
them to the findings of the Rosenbrock test problem: The first optimization
problem is a multi-stage problem in metal-sheet forming where an initial steel
blank repeatedly undergoes a deep drawing process with different tools and
forces in each stage. Possible decision variables are the geometrical parameters
(e.g., diameters, radii) of the tools or the blank, or other process parameters,
e.g., blank holder forces. In addition to these continuous variables, the algo-
rithm must also find the optimal number of stages. Thus, the task can be
classified as a mixed-integer, non-linear optimization problem. The INDEED
[5] software package is used to simulate the forming process and the CATIA
CAD software is interfaced for geometry generation.
Scalability of Three Parallel Direct Search Methods 29
Figure 1: Relative speedup of DPS for solving 10-
to 100-dimensional Rosenbrock problems on 1 to
200 CPUs (average of 200 runs)
Figure 2: Relative speedup of PSS for solving 10-
to 100-dimensional Rosenbrock problems on 1 to
200 CPUs (average of 200 runs)
Figure 3: Comparison of the solution quality over
virtual time; 10-dimensional Rosenbrock problem,
solved using 10 CPUs (average of 200 runs)
Figure 4: Comparison of the solution quality over
virtual time; 10-dimensional Rosenbrock problem,
solved using 100 CPUs (average of 200 runs)
The second problem is one in the domain of alloy casting processes. A hot,
liquid alloy is cast into a molding shell which defines the final shape of the
desired object. This is a multi-scale problem: on the large scale it covers the complex motion of the liquid alloy as it flows into the shell and its thermodynamic behavior, and on the very small scale the effects of the solidification process on the material's evolving microstructure. Possible goals of the optimization are to enhance the quality of the solidified material and to decrease the time needed for
the whole process. Decision variables include temperatures, the filling speed
and geometrical parameters. The software packages CASTS [4] and MICRESS
are used to simulate the casting process and the microstructure attributes.
For both the casting and the metal-sheet forming problem, it is possible
to utilize parallelism for a single simulation. For the parallel INDEED vari-
ant FETI-INDEED as well as for CASTS, their scalability behavior is being
analyzed when utilizing different networking technologies from Fast Ethernet
to Myrinet. Furthermore, the combination of parallelization on the simulation
and optimization level is being examined. Based on the scalability analysis of
each, an optimal allocation of resources can be determined by estimating the
combined speedup for a given number of CPUs and allocation strategy.
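The resource-allocation idea in the last sentence can be sketched as follows; the Amdahl-style speedup curves are purely hypothetical stand-ins for the measured scalability data:

```python
def best_allocation(total_cpus, sim_speedup, opt_speedup):
    """Split the CPUs between the two levels of parallelism: give each
    simulation run `per_sim` CPUs and run `total_cpus // per_sim` simulations
    concurrently at the optimization level; the estimated combined speedup
    is the product of the two levels' speedups."""
    best = None
    for per_sim in range(1, total_cpus + 1):
        concurrent = total_cpus // per_sim
        combined = sim_speedup(per_sim) * opt_speedup(concurrent)
        if best is None or combined > best[2]:
            best = (per_sim, concurrent, combined)
    return best

# Hypothetical Amdahl-style speedup curves with different serial fractions,
# standing in for the measured scalability behavior of each level.
def amdahl(serial_fraction):
    return lambda p: 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

per_sim, concurrent, speedup = best_allocation(16, amdahl(0.2), amdahl(0.05))
```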
1. Grauer, M., Barth, T.: About Distributed Simulation-based Optimization of Forming Processes Using a Grid Architecture. In: Ghosh, S., Castro, J.M., Lee, J.K. (eds.) Materials Processing and Design: Modeling, Simulation and Applications (NUMIFORM), Springer, 2097–2102 (2004).
2. Foster, I.: Service-Oriented Science. Science 308, 814–817 (2005).
3. Hough, P., Kolda, T.G., Torczon, V.: Asynchronous Parallel Pattern Search for Nonlinear Optimization. SIAM Journal on Scientific Computing 23(1), 134–156 (2001).
4. Jakumeit, J., Barth, T., Grauer, M., Reichwald, J.: Grid Computing for Casting
Simulations. In: Proc. Modeling of Casting, Welding and Advanced Solidifica-
tion Processes XI MCWASP, to be published (2006).
5. Kessler, L., Weiher, J., Roux, F.-X., Diemer, J.: Forming simulation of high-
and ultrahigh-strength steel using INDEED with the FETI method on a work-
station cluster. In: Mori, K.-I. (ed.) Simulation of Materials Processing, Proc.
of NUMIFORM 2001. Balkema Publishers, 399–404 (2001).
6. Kolda, T.G., Lewis, M.R., Torczon, V.: Optimization by Direct Search: New
Perspectives on Some Classical and Modern Methods. SIAM Review 45(3),
385–482 (2003).
7. Kumar, V., Grama, A., Gupta, A., Karypis, G.: Introduction to Parallel Computing. Benjamin Cummings (1994).
8. Marti, R., Laguna, M., Glover, F.: Principles of Scatter Search. European Jour-
nal of Operational Research 169(2), 359–372 (2006).
9. Stefanescu, D.M.: Computer simulation of shrinkage related defects in metal
castings – a review. International Cast Metals Research 18, 129–143 (2005).
Nanatsudaki Model of Knowledge Creation
Andrzej P. Wierzbicki1,2 and Yoshiteru Nakamori1
1 Center for Strategic Development of Science and Technology, Japan Advanced Institute of Science and Technology, Asahidai 1-1, Nomi, Ishikawa 923-1292, Japan
2 National Institute of Telecommunications, Szachowa 11, 04-894 Warsaw, Poland
In the book Creative Space [1], we have shown that there are many spirals of
knowledge creation, some of them of organizational character, typical for mar-
ket innovations and practice-oriented organizations, some of normal academic
character, typical for research organizations.
Normal academic research actually combines three spirals: hermeneutics (gathering scientific information and knowledge from literature, the web and other sources and reflecting on these materials), which we call the EAIR (Enlightenment-Analysis-Immersion-Reflection) Spiral; debate (discussing research under way in a group), which we call the EDIS (Enlightenment-Debate-Immersion-Selection) Spiral; and experiment (testing ideas and hypotheses by experimental research), which we call the EEIS (Enlightenment-Experiment-Interpretation-Selection) Spiral. Since all of these spirals begin with having an idea, called the Enlightenment (illumination, aha, eureka) effect, they can be combined into a Triple Helix of normal knowledge creation, typical for academic work.
These three spirals contained in the Triple Helix do not exhaustively describe all that happens in academic knowledge creation, but they describe
most essential elements of academic research: gathering and interpreting in-
formation and knowledge, debating and experimenting. However, these spirals
are individually oriented, even if a university and a laboratory should support
them; e.g., the motivation for and the actual research on preparing a doctoral
thesis is mostly individual. Moreover, the Triple Helix only describes what researchers actually do; it is thus a descriptive model. Obviously, the model helps in a better understanding of some intuitive transitions in these spirals and makes it possible to test which parts of these spirals are well supported in academic practice and which require more support; but it does not give clear conclusions on how to organize research.
However, there are also several other creative spirals described and analyzed in the book Creative Space. One is the ARME Spiral of revolutionary knowledge creation; however, revolutionary knowledge creation occurs
rarely and in unexpected places. But three others are important for prac-
tical knowledge creation, for innovations, particularly in industry and other
purpose-oriented organizations. These are the organizational creative spirals,
motivated by purposes of a group and aimed at using the creative power
of the group, while an individual plays here the role of a member of the
group, not of an individual researcher. One of them is the widely known SECI (Socialization-Externalization-Combination-Internalization) Spiral; another, actually older but formulated as a spiral only recently, is the brainstorming DCCV (Divergence-Convergence-Crystallization-Verification) Spiral; still another, the Occidental counterpart of the SECI Spiral (which is of Oriental origin), is the objective-setting OPEC (Objectives-Process-Expansion-Closure) Spiral.
Each of these spirals has a different role and can be applied for different
purposes, but all have their strengths. Unfortunately, they cannot be easily
combined into a multiple helix like the Triple Helix, because they do not share
the same elements. However, the main challenge is not only to combine these spirals among themselves, but also with the spirals of academic knowledge
creation. This general challenge is difficult, but such a combination would be
important for several reasons:
Combining these spirals might strengthen academic knowledge creation, because it would increase the role of the group in supporting individual research;
Combining these spirals might also strengthen industrial innovation and knowledge creation, because these always contain some individual elements that should be explicitly accounted for;
Combining these spirals might help in the cooperation of industry with
academic institutions in producing innovations, because it could bridge
the gap between the different ways of conducting research in academia
and in industry.
With these purposes in mind, we present in this paper the JAIST Nanatsudaki Model – an exemplar (serving as an example to follow, a normative model) of a process of knowledge and technology creation. It consists of seven creative spirals, and each of these spirals might be as beautiful and unpredictable in its creativity as the water whirls in the seven waterfalls (nanatsudaki) on Asahidai close to JAIST. The seven spirals include the three academic and the three organizational spirals mentioned above, supplemented by a planning roadmapping spiral based on the I-System (the pentagram of Nakamori). The model is built on the assumption that its applications will concern technology or materials science development; thus the application phase consists of experimental work.
Although the model could start with any constitutive spiral, we assume
that it starts with objective setting (thus using part or all of the OPEC Spiral) and ends with the applications, i.e., experimental work, here represented
by the EEIS Spiral.
There can be two interpretations of the JAIST Nanatsudaki Model. One is that each constitutive spiral of this septagram should be completed, i.e., at least one cycle of the spiral should be realized. This is, however, a rather constraining interpretation, since creative spirals can start and end at any of their elements, without a prescribed number of cycles. Thus, we describe the model using a different interpretation: any number of the elements (transitions) of the spirals may be used, as necessary, sometimes without completing even one cycle, sometimes repeating more than one cycle.
Besides the detailed description of the model, the paper presents its intended applications and comments on the comparative importance of all its constitutive spirals, based on a survey of opinions conducted at JAIST.
Fig. 1. Diagram of JAIST Nanatsudaki Model
1. Wierzbicki, A., Nakamori, Y.: Creative Space, Springer, Berlin (2006).
The Use of Reference Profiles and Multiple
Criteria Evaluation in Knowledge Acquisition
from Large Databases
Andrzej P. Wierzbicki1,2, Jing Tian1, and Hongtao Ren1
1 School of Knowledge Science, Japan Advanced Institute of Science and Technology (JAIST), Asahidai 1-1, Nomi, Ishikawa 923-1292, Japan
2 National Institute of Telecommunications, Szachowa 11, 04-894 Warsaw, Poland
When analyzing complex data sets in a large database, the problem of knowl-
edge acquisition can be posed as finding such data sets that either correspond
best to the expectations of a user (client, decision maker, etc.) or, contrari-
wise, correspond worst to such expectations. We suggest in this paper that
such expectations should be described by a set of criteria and by a reference
profile of the desired values of such criteria. The reference point method can then be applied to find the data sets that correspond either best or worst to the expectations.
The paper describes such an approach first in an abstract way and sug-
gests that this approach can be applied to diverse aims, such as finding critical
aspects of a complex logistics and supply chain management problem. How-
ever, the actual motivation of this approach was the interpretation of data
of a survey of conditions and problems of scientific creativity at JAIST. This
original application is presented in more detail in the paper.
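As an illustrative sketch of the reference point idea: the achievement function below is one standard order-consistent form, and the reference profile and data records are hypothetical:

```python
def achievement(record, reference, eps=1e-3):
    """Order-consistent achievement function for criteria to be maximized:
    the worst deviation from the aspiration levels, plus a small regularizing
    sum so that ties are broken by overall performance."""
    diffs = [v - r for v, r in zip(record, reference)]
    return min(diffs) + eps * sum(diffs)

reference_profile = [4.0, 4.0, 3.0]   # hypothetical aspiration levels
records = {
    "A": [5.0, 4.5, 3.5],
    "B": [5.0, 2.0, 4.0],
    "C": [1.0, 1.0, 0.0],
}
best = max(records, key=lambda k: achievement(records[k], reference_profile))
worst = min(records, key=lambda k: achievement(records[k], reference_profile))
print(best, worst)  # A C
```

Ranking by this value surfaces the records that correspond best to the expectations; ranking in reverse surfaces the most critical ones.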
The purpose of the survey was to find what aspects of knowledge creation
processes are evaluated by graduate students (preparing for a master or doc-
toral degree) as either most critical or most important. A long questionnaire was prepared and answered by over 120 students; the questions were of three types. The first type assessed the importance of a given subject; the most important questions might be considered those that correspond best to a reference profile. The second type assessed the situation among students and at the university; the most critical questions might be selected as those that correspond worst to a reference profile. The third type tested the answers to the first two types by indirect questioning revealing student
It was found that the most critical questions of the second type (those evaluated worst by students) are related to insufficiently good situations concerning:
1. Critical feedback, questions and suggestions in group discussions;
2. Organizing and planning research activities;
3. Preparing presentations for seminars and conferences;
4. Designing and planning experiments;
5. Generating new ideas and research concepts.
These are actually elements of four spirals of knowledge creation: Inter-
subjective EDIS (Enlightenment-Debate-Immersion-Selection) Spiral – items
1) and 3); Experimental EEIS (Enlightenment-Experiment-Interpretation-
Selection) Spiral – item 4); Hermeneutic EAIR (Enlightenment-Analysis-
Immersion-Reflection) Spiral – item 5); and Roadmapping (I-System) Spiral
of planning knowledge creation processes – item 2). The importance of these
spirals is also stressed by the positive evaluation of the importance of other
elements of these spirals in response to questions of the first type:
1. Learning and training how to do experiments;
2. Help and guidance from the supervisor and colleagues;
3. Frequent communication of the group.
The analysis has also shown that language barriers are considered most critical for good research, which is an expected result, but it also indicated some unexpected results, such as that research competition and personal shyness do not essentially prevent an exchange of ideas.
A general conclusion is that the use of a multiple criteria formulation and
reference profiles for knowledge acquisition from complex data sets gives very
promising results and should be applied more broadly.
Convex Envelope for Medical Modeling
Fadi Yaacoub, Yskandar Hamam, and Charbel Fares
ESIEE, Lab. A2SI, Cité Descartes, BP 99, 93162 Noisy-Le-Grand, France, [f.yaacoub, y.hamam, c.fares]
Simulation of environments is widely used nowadays. Training in many fields such as medicine and architecture heavily depends on virtual reality techniques. Since objects in real life do not have a deterministic shape, it is not possible to model them with a single geometric equation. Convex envelopes are therefore essential in simulation; the need for such envelopes arises from the intention of having realistic scenes with collision detection between objects. In this paper three methods for generating the convex envelope are compared. Then a combination of those methods is shown in order to reduce the execution time, yielding a hybrid method for convex envelope generation.
The convex hull or convex envelope of a finite set S of n points in the Euclidean space R^d of dimension d, denoted CH(S), is defined as the smallest convex set containing all the points, or simply the intersection of all half-spaces containing the set S. The convex hull in R^d is the set of solutions to a finite system of linear inequalities in d variables:

CH(S) = { x ∈ R^d : Ax ≤ b }   (1)

where A ∈ R^{n×d} and b ∈ R^n.
In this paper three methods for computing the convex hull are shown:
Brute Force [1], Gift Wrapping (Jarvis March) [2] and QuickHull [3]. Finally
a hybrid technique that combines those algorithms is shown.
The Brute Force algorithm begins by taking a point p_i and considers three other distinct points as a facet (p_j, p_k, p_l). It checks whether the point p_i is counterclockwise with respect to this facet, then continues with another facet (p_j, p_k, p_{l+1}), and so on, checking all facets made by all points other than p_i. If p_i is counterclockwise with respect to all facets, it is on the convex hull. Otherwise, if the point is clockwise with respect to one of the facets, it is not on the convex hull.
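The counterclockwise test at the core of this check can be sketched as a signed-volume (triple-product) predicate:

```python
def orientation(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d): positive when d lies
    on the counterclockwise side of facet (a, b, c), negative on the
    clockwise side, zero when the four points are coplanar."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    cross = [ac[1] * ad[2] - ac[2] * ad[1],     # ac x ad
             ac[2] * ad[0] - ac[0] * ad[2],
             ac[0] * ad[1] - ac[1] * ad[0]]
    return sum(ab[i] * cross[i] for i in range(3))  # ab . (ac x ad)

print(orientation((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))   # 1
print(orientation((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, -1)))  # -1
```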
The Gift Wrapping algorithm acts as follows: first it finds a starting edge (a, b) by using the 2D algorithm on the projection of the points onto the XY plane; it then pivots a plane around this edge of the hull, finding the point c for which the plane containing the edge (a, b) and the point makes the smallest angle, and forms a triangular face (a, b, c). All points now lie to the left of the triangle (a, b, c). Finally the algorithm repeats the same process recursively for the edges (a, c) and (b, c), finding the other triangles adjacent to those edges.
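The 2D gift wrapping used to find the starting edge can be sketched as follows (an illustrative sketch, not the paper's implementation):

```python
def jarvis_march(points):
    """2D gift wrapping: starting from the lowest-leftmost point, repeatedly
    pick the point q such that all remaining points lie to the left of the
    directed edge p -> q, wrapping counterclockwise around the hull."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    start = min(points)                 # lowest-leftmost point is on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r != p and cross(p, q, r) < 0:   # r is clockwise of p -> q,
                q = r                           # so q was not extreme; take r
        p = q
        if p == start:
            return hull

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(jarvis_march(square))  # interior point excluded
```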
QuickHull finds the convex envelope by recursively partitioning the given set of points. It begins by dividing the set of points into two subsets with respect to a plane formed by three points: the vertices corresponding to the minimum (x_min) and maximum (x_max) abscissa, and the vertex at the maximum distance from the line joining x_min and x_max. From this initial plane, QuickHull creates a cone of new facets (called visible facets) by finding the point that has the maximum distance from the plane. QuickHull then builds new sets of points from the outside sets of the visible facets. If a point is above multiple new facets, one of the new facets is selected. If it is below all the new facets, the point is inside the convex hull and can consequently be discarded. Partitioning also records the furthest point of each outside set. Table 1 shows the calculation times for the algorithms described above when applied to three different wrist bones.
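A 2D analogue of QuickHull's recursive partitioning can be sketched as follows (illustrative only; the paper works in 3D):

```python
def quickhull_2d(points):
    """2D QuickHull sketch: split by the extreme points in x, then recursively
    take the point farthest from the dividing line and keep only the points
    outside the triangle it forms."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def side(a, b, pts):
        left = [p for p in pts if cross(a, b, p) > 0]   # outside set of a -> b
        if not left:
            return [a]
        far = max(left, key=lambda p: cross(a, b, p))   # farthest from the line
        return side(a, far, left) + side(far, b, left)
    a, b = min(points), max(points)
    return side(a, b, points) + side(b, a, points)

print(quickhull_2d([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```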
Since the running time of each algorithm depends on the number of points in the object, our objective is to reduce the number of iterations in order to speed up the computation. Therefore, a hybrid algorithm based on QuickHull and Gift Wrapping is proposed: it first applies QuickHull and then applies Gift Wrapping to each subset of points obtained. In the full paper the algorithms will be discussed in detail and numerical simulations will be given.
3D Model   Original Model      Convex Hull         Brute Force  Gift Wrap   QuickHull
           #vertices/#facets   #vertices/#facets   time (sec)   time (sec)  time (sec)
3rd Meta-  675/1272            150/296             2330.7       0.26        0.21
Hamat      2812/5620           394/784             19231.2      0.89        0.62
Ulna       977/1864            312/620             12153.0      0.41        0.37
Table 1. Execution times for computing the 3D convex hull with Brute Force, Gift Wrapping and QuickHull
Keywords: Convex envelope, medical modeling, computational geometry,
bounding volumes, collision detection, virtual reality
1. Breg, M., Schwarzkopf, O., Kreveld, M., Overmars, M.: Computational Geome-
try: Algorithms and Applications, 2nd ed., Springer (2000).
2. O’Rourke, J.: Computational Geometry in C, Cambridge University Press, New
York (1994).
3. Barber, C., Dobkin, D., Huhdanpaa, H.: The QuickHull Algorithm for Convex
Hulls. ACM Transactions on Mathematical Software 22(4), 469–483 (1996).
Applying Data Mining for Early Warning and
Proactive Control in Food Supply Networks
Li Yuan, Mark R. Kramer, and Adrie J.M. Beulens
Information Technology Group, Wageningen University, Dreijenplein 2, 6703 HB Wageningen, The Netherlands, [Yuan.Li, Mark.Kramer, Adrie.Beulens]
European consumers are highly conscious of food quality and safety. This
concern has been strengthened by a series of food safety crises in the recent
past such as Bovine Spongiform Encephalopathy (BSE), dioxin contamina-
tion, Foot and Mouth Disease (FMD), Nitrofen, etc. [1]. Recall announce-
ments can be found in newspapers almost weekly as a reaction to deficiencies in food products. In response, food supply networks further implement systems to improve the quality of food products and to guarantee food safety. In order
to prevent problems in food quality and improve efficiency and effectiveness of
operations, early warning and proactive control systems are required in food
supply networks.
Early Warning and Proactive Control
Early warning systems are well known in natural sciences. These systems,
based on historical monitoring, local observation or computer modelling, pre-
dict and help to prevent or reduce the impact of natural disasters. They are
typically used to monitor potential disasters relating to meteorology, geology
(e.g., earthquakes and volcanoes) [3] or technology (e.g., nuclear safety). Early
warning is being extended to other application areas as well. For example, [2]
presented a prototype sensor system for the early detection of microbially
linked spoilage in stored wheat grain. The early warning system we intend
to build should not only predict potential food quality problems, but also
help identify relations between determinant factors and quality attributes of
food products. Ultimately, the knowledge about these relations and the de-
cision varieties associated with these factors will enable proactive control to
prevent those problems. A proactive control system can adjust corresponding
determinant factors to prevent quality problems.
Data Mining
The application of early warning and proactive control requires predictive
models of the object system (i.e., the food supply network being controlled).
However, to construct such a model, we would normally require detailed in-
sight into processes involved. Processes that determine quality of food prod-
ucts are not completely understood and there are many unknown interactions
between quality attributes. However, in current food supply networks, large
amounts of data about business operations and transactions are recorded ev-
ery day. So an alternative approach is to use data mining to infer a model
from available data. Data mining (DM) is the process of extracting valid,
previously unknown, comprehensible and actionable information from large
databases and using it to make crucial business decisions [4]. Application
of data mining in food supply networks is cheap and flexible when domain
knowledge is scarce [5].
Framework for Early Warning System in Food Supply Networks
Based on the objectives of early warning and proactive control we designed a
framework for early warning system in food supply networks. An important
component of this framework is the knowledge base. This knowledge base
contains information needed to implement early warning and proactive control
in food supply networks. For each type of process that we intend to control,
the following information is stored: variables involved, control limits for all
variables, data availability and the time required to gather data, decision
variety and influence of decisions on subsequent processes.
Fig. 1. Framework for early warning system in food supply networks
The knowledge base also serves to extend the early warning and proactive
control systems. As new relations are discovered, these relations and the variables involved will be recorded in the knowledge base. It is therefore necessary to accumulate the knowledge discovered along the way and to organize it with a systematic, ontological approach. Our knowledge base will contain
– types of determinant factors,
– types of deviations,
– types of relations between determinant factors and performance,
– suitable data mining techniques for discovering these types of relations,
– and instantiations of those types of relations in food supply networks.
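One possible shape for such a knowledge-base entry is sketched below; all field names and values are assumptions, since the text only lists the kinds of information stored:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessEntry:
    """One process type in the early-warning knowledge base (illustrative
    field names; the paper specifies only the kinds of information stored)."""
    process_type: str
    variables: list
    control_limits: dict            # variable -> (lower, upper) control limits
    data_gathering_time: float      # time required to gather the data
    decision_variety: list          # decisions that can influence the process
    downstream_influence: dict = field(default_factory=dict)

    def violations(self, measurement):
        """Variables whose measured value lies outside its control limits."""
        return [v for v, x in measurement.items()
                if not (self.control_limits[v][0] <= x <= self.control_limits[v][1])]

entry = ProcessEntry("transport", ["temperature"], {"temperature": (2.0, 7.0)},
                     0.5, ["adjust cooling"])
print(entry.violations({"temperature": 9.3}))  # ['temperature']
```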
Managers can benefit from this knowledge base either by directly applying similar relations to their cases or by employing the suggested data mining techniques to solve their problems. The template approach will provide managers with a guidebook to help classify their problems into appropriate types and to select proper data mining techniques for relation discovery.
Case Study
Our case study in a chicken supply network has already shown the advantage of using data mining in food supply networks. In order to identify relations between Death On Arrival (DOA) and its determinant factors, we applied three different types of data mining techniques (decision trees, neural networks, and nearest-neighbour methods) to the large volume of data stored at various stages of the supply network. Results of this research have already been confirmed by domain experts in food supply networks.
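As an illustrative sketch of the simplest of these techniques, a minimal nearest-neighbour prediction is shown below; the feature names and records are invented for illustration and are not the case-study data:

```python
def nearest_neighbour_predict(train, x):
    """1-NN prediction: return the label of the training record whose
    determinant factors are closest (Euclidean distance) to x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    features, label = min(train, key=lambda rec: dist(rec[0], x))
    return label

# Invented records: (transport temperature, transport duration) -> DOA level.
train = [((30.0, 4.0), "high"), ((18.0, 1.0), "low"),
         ((28.0, 3.5), "high"), ((19.0, 1.5), "low")]
print(nearest_neighbour_predict(train, (29.0, 3.0)))  # high
```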
Concluding Remarks
The combination of serious effects of food safety problems, the abundance of
recorded data and potential benefits of preventing food quality problems in
food supply networks provided the motivation for this research effort. Our next step in this ongoing research project is to construct a knowledge base with the associated ontology using the knowledge obtained in the case studies. Further, we will also build an early warning system on top of the knowledge base and apply it to novel cases in order to verify its usability and validity for predicting food quality problems.
1. Beulens, A.J.M.: Transparency Requirements in Supply Chains and Networks:
Yet another challenge for the Business and ICT Community. Herausforderungen
der Wirtschaftsinformatik in der Informationsgesellschaft. Wissenschaftsverlag
Edition am Gutenbergplatz, Leipzig (2003).
2. De Lacy Costello, B.P.J., Ewen, R.J., Gunson, H., Ratcliffe, N.M., Sivanand,
P.S., Spencer-Phillips, P.T.N.: A prototype sensor system for the early detection
of microbially linked spoilage in stored wheat grain. Measurement Science &
Technology 14(4), 397–409 (2003).
3. Grijsen, J.G.S., Snoeker, X.C., Vermeulen, C.J.M.: An information system for
flood early warning. Pres. at the 3rd International Conference on Floods and
Flood Management, Florence, Italy, 24–26 November 1992 (1993).
4. Simoudis, E.: Reality Check for Data Mining. IEEE Expert 11(5), 26–33 (1996).
5. Verdenius, F., Hunter, L.: The power and pitfalls of inductive modelling. In:
Tijskens, L.M.M., Hertog, M.L.A.T.M., Nicolai, B.M. (eds.) Food Process Mod-
elling. Woodhead Publishing Limited, 105–136 (2000).
Part II
Contributions Logistics and SCM Workshop
Optimizing Inventory Decisions in a
Multi-Stage Supply Chain Under Stochastic Demand
Ab Rahman Ahmad1 and M. E. Seliaman2
1UTM, Johor, Malaysia, [email protected]
2King Fahd University of Petroleum and Minerals, Dhahran 31261, KSA,
Supply chain management can be defined as a set of approaches utilized to
efficiently integrate suppliers, manufacturers, warehouses, and stores, so that
merchandise is produced and distributed at the right quantities, to the right
locations, and at the right time, in order to minimize system-wide cost while
satisfying service-level requirements. Recently, numerous articles on supply chain modeling have been written in response to global competition. However, most supply chain inventory models deal with two-stage supply chains.
Even when multi-stage supply chains are considered, most of the developed
models are based on restrictive assumptions. Therefore, there is a need to
analyze models that relax the usual assumptions to allow for a more realistic
analysis of the supply chain.
In this paper we consider the case of a three-stage supply chain in which a firm can supply many customers. This supply chain system involves suppliers, manu-
facturers, and retailers. Production and inventory decisions are made at the
suppliers and manufacturers levels. The production rates for the suppliers
and manufacturers are assumed finite. In addition the demand for each firm
is assumed to be stochastic. The problem is to coordinate production and
inventory decisions across the supply chain so that the total cost of the sys-
tem is minimized. For this purpose, we develop a model to deal with different
inventory coordination mechanisms between the chain members. Numerical
examples will be presented and simulation experiments will be used to vali-
date the model.
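A single-stage Monte Carlo sketch of such a simulation experiment is shown below; the paper's model coordinates three stages, and all parameters and the demand distribution here are hypothetical:

```python
import random

def average_cost(base_stock, mean_demand, periods, holding, shortage, rng):
    """Estimate the average per-period cost of an order-up-to policy under
    exponentially distributed stochastic demand (single stage only)."""
    inventory, cost = base_stock, 0.0
    for _ in range(periods):
        demand = rng.expovariate(1.0 / mean_demand)
        inventory -= demand
        cost += holding * max(inventory, 0.0) + shortage * max(-inventory, 0.0)
        inventory = base_stock          # replenish up to the base-stock level
    return cost / periods

# Compare two base-stock levels under identical simulated demand streams.
low = average_cost(0.0, 10.0, 2000, 1.0, 5.0, random.Random(0))
mid = average_cost(15.0, 10.0, 2000, 1.0, 5.0, random.Random(0))
```

Repeating such runs for different coordination parameters is one way simulation can be used to validate an analytical cost model.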
Impact of E-Commerce on an Integrated
Distribution Network
Daniela Ambrosino and Anna Sciomachen
DIEM – Università di Genova, Via Vivaldi 5, 16126 Genova, Italy, [ambrosin,
E-Commerce (EC) provides new channels for the distribution of goods; it
represents an opportunity to improve the flows in the supply chain, and con-
sequently, to reduce the inventory level in the whole network [4]. On the other hand, EC requires fast and accurate information and communication systems; in these new systems collaboration and coordination become critical issues and integration represents the only way to survive [5].
Motivated by the above considerations, in this work we devote our attention to integrated inventory management in a multi-echelon, multi-channel distribution system; in particular, we analyse the management of inventories of final goods in a distribution system where products are available through different supply channels: a traditional channel in which the products are distributed through depots, a direct channel and an Internet-enabled direct channel.
The distribution network we are involved with is made up of three levels:
central depots (CD), peripheral depots (D) and customers (clients C, big
clients BC and e shopping clients e C). The channels for supplying goods
are the following: a traditional channel where peripheral depots (supplied by
central ones) serve clients (C); a direct channel for serving big clients (BC),
i.e., clients characterized by a large demand are served directly by CD; the
Internet-enabled channel where depots at the top echelon (CD) serve e clients
(e C). The CD represent the supply points of the network and play a dual
role: they supply peripheral depots and serve customers. Customers served by
CD are big-clients and e-clients of the direct and Internet-enabled channel,
respectively. The assignment of peripheral depots, big clients, and e-clients
to central depots is known, as is the assignment of clients C to
the peripheral depots. In order to better understand the flows of goods in
the network under investigation, we report in Figure 1 a simple distribution
system with two central and three peripheral depots, four big clients and a
set of customers and e-clients.
Fig. 1. A simple case of the distribution network under investigation
We assume that the demand of customers is known and expressed in terms
of units of a single representative commodity. Inventories can be stocked both
at CD and D.
Note that in the Internet-enabled channel the manufacturer receives orders
directly from e-clients (via Internet) and ships the product directly to them.
This supply chain strategy has the same structure as a single echelon tradi-
tional supply chain; however, the problems arising in this new channel are
completely new and in part connected with the increasing customer service
expectations [3].
Given the network described above and a time horizon T split into periods,
the problem we deal with is to determine the optimal inventory level for central
and peripheral depots within each time period T in order to minimise ordering,
inventory, stock out, e-commerce and transportation costs, whilst satisfying
capacity and requirements constraints and guaranteeing a certain customer service
level. The inventory policy is based on a periodic review policy (e.g., every
day) in which goods are ordered when inventories are under a given level
called ordering point; the quantity to order is defined for restoring inventories
while minimising the logistic costs; it depends on both the existing stock in
the whole system and the inventory strategy.
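As an illustration of this periodic review logic, here is a minimal sketch; the function signature and the fixed order-up-to rule are our simplifying assumptions (in the paper the order quantity is instead chosen to minimise the logistic costs and depends on the stock in the whole system).

```python
def review_stock_point(on_hand, in_transit, order_point, order_up_to):
    """One periodic review (e.g., daily) of a single stock point (CD or D).

    An order is placed only when the inventory position (existing stock
    plus outstanding orders) has fallen below the ordering point.  The
    order-up-to target used here is an illustrative assumption; in the
    paper the quantity minimises the logistic costs over the whole system.
    """
    inventory_position = on_hand + in_transit
    if inventory_position < order_point:
        return order_up_to - inventory_position  # quantity to order
    return 0  # above the ordering point: no order this period

# Stock has fallen below the ordering point of 100 units: order 90.
print(review_stock_point(on_hand=40, in_transit=20, order_point=100, order_up_to=150))  # 90
```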
We present a three phase algorithm: the pre-processing phase is devoted
to the definition of the order point for each stock point of the network (i.e.,
CD and D), the first phase determines the optimal inventory policy by solving
a Mixed Integer Linear Programming model (MIP); finally, a second phase,
denoted “integration” phase, defines the “current stock situation” in the whole
network and identifies accordingly the best transferring policy for managing
the flow of goods in the network and improving the customer service level.
Note that a particular stock situation in one part of the network may require
modifying the optimal inventory allocation in the whole distribution system;
the aim of this integration phase is to avoid unbalanced inventory levels
in the different echelons when the network, or a part of it, is suffering
or risks suffering a shortage (a stock out). To do this we introduce
some controls on local and global stock out and define the best transferring
strategy for guaranteeing inventory balance in accordance with the rationing
strategy [2] and the base stock policy modification [1].
The proposed three phase solution approach is used for evaluating new
distribution strategies for an Italian food company. We simulate different sce-
narios by assuming different initial stock situations, different customers' demands
and different percentages of demand devoted to e-commerce (20, 40, 60
and 80% of the demand of C).
We evaluate the impact of EC on both the distribution costs and the
inventory levels in the network. Preliminary results show that if a share of the
clients C, usually served by peripheral depots, chooses the Internet channel,
the inventory level at the peripheral echelon of the network decreases. Remembering
the dual role of central depots, we can note that EC has a positive effect
also on inventories taken at the central depots for supplying the depots at
the lower level of the network. In our case the Internet-enabled direct channel
enables the company to obtain an average cost reduction of 15%.1
1. Chen, F.: Optimal policies for multi-echelon inventory problems with batch
ordering. Operations Research 48(3), 376–389 (2000).
2. Diks, E.B., De Kok, A.G.: Optimal control of a divergent multi-echelon inven-
tory system. European Journal of Operational Research 111, 75–97 (1998).
3. Disney, S.M., Naim, M.M., Potter, A.: Assessing the impact of e-business on
supply chain dynamics. International Journal of Production Economics 89,
109–118 (2004).
4. Gunasekaran, A., Marri, H.B., McGaughey, R.E., Nebhwani, M.D.: E-commerce
and its impact on operations management. International Journal of Production
Economics 75, 185–197 (2002).
5. Manthou, V., Vlachopoulou, M., Folinas, D.: Virtual e-Chain (VeC) model for
supply chain collaboration. International Journal of Production Economics 87,
241–250 (2004).
1 Partially supported by the MIUR PRIN 2005012452-003 project.
An Interval Pivoting Heuristic for Finding
Quality Solutions to Uniform-Bound
Interval-Flow Transportation Problem
Aruna Apte1 and Richard S. Barr2
1Graduate School of Business and Public Policy, Naval Postgraduate School,
Monterey CA 93943, USA, [email protected]
2Department of Engineering Management, Information, and Systems, Southern
Methodist University, Dallas, TX 75275, USA, [email protected]
We present interval-flow networks, network flow models in which the flow on
an arc may be required to be either zero or within a specified range. The ad-
dition of such conditional lower bounds creates a mixed-integer program that
captures such well-known restrictions as time windows and minimum load
sizes. This paper describes the mathematical properties of interval-flow net-
works as the basis for an efficient new heuristic approach that incorporates the
conditional bounds into the simplex pivoting process and exploits the efficient,
specialized pure-network simplex technologies. The algorithm was applied to
interval-flow transportation problems with a uniform conditional lower bound
and tested on problems with up to 5000 nodes and 10000 arcs. Empirical
comparisons with CPLEX demonstrate the effectiveness of this methodology,
both in terms of solution quality and processing time.
Managing the Service Supply Chain in the US
Department of Defense: Opportunities and
Uday Apte, Geraldo Ferrer, Ira Lewis, and Rene Rendon
Graduate School of Business and Public Policy, Naval Postgraduate School, 555
Dyer Street, Monterey, CA 93943, USA, [email protected]
The services acquisition volume in the US Department of Defense (DoD) has
continued to increase in scope and dollars over the past decade. Between FY 1999
and FY 2003, the DoD's spending on services increased by 66%, and in FY 2003, the
DoD spent over $118 billion, or approximately 57% of the DoD's total procurement
dollars on services. In recent years, DoD has spent more on services than on
supplies, equipment and goods, even considering the high value of weapon
systems and large military items. These services belong to a very broad range
of activities ranging from grounds maintenance to space launch operations.
The major categories include professional, administrative, and management
support; construction, repair, and maintenance of facilities and equipment;
information technology; research and development, and medical care.
As DoD’s services acquisition volume continues to increase in scope and
dollars, the agency must pay greater attention to proper acquisition planning,
adequate requirements definition, sufficient price evaluation, and proper
contractor oversight. In many ways, these are the same issues affecting the
acquisition of physical supplies and weapon systems. However, the unique
characteristics of services and the increasing importance of services acquisi-
tion offer a significant opportunity for conducting research in the management
of the service supply chain in the Department of Defense.
The objectives of the exploratory research presented in the paper are to
1. analyze the size, structure and trends in DoD's service supply chain,
2. understand the challenges faced by contracting officers, program managers
and end users in services acquisition,
3. develop a conceptual framework for understanding and analyzing the sup-
ply chain in services, and
4. provide policy recommendations that can lead to more effective and efficient
management of DoD's spending on services.
In addition to the analysis of service acquisition related data and theory de-
velopment, this research also includes empirical work in terms of site visits
and interviews at Navy, Army and Air Force bases. Addressing issues related
to both theory and practice, this paper makes a modest contribution towards
more effective and efficient management of service acquisition in the Depart-
ment of Defense.
Keywords: Service supply chain, outsourcing, contract management
Analysis of Heuristic Search Methods for
Scheduling Automated Guided Vehicles
Thomas Bednarczyk and Andreas Fink
Chair of Information Systems, Department of Economics,
Helmut-Schmidt-University / Universität der Bundeswehr Hamburg,
Holstenhofweg 85, 22043 Hamburg, Germany, [thomas.bednarczyk,
We consider the problem of scheduling automated guided vehicles (AGVs) for
processing elementary transportation jobs. This problem, which is a special
case of the general pickup and delivery problem [1, 5], arises, e.g., on seaport
container terminals, where AGVs may be employed to transport containers
from quay cranes that load and unload ships to storage locations on the termi-
nal yard and vice versa [3, 6, 7]. That is, containers are to be moved between
the ship area and the yard using a fleet of vehicles, each of which can carry
one container at a time.
We look at the problem of dispatching AGVs to the transportation jobs
in order to minimize the total time it takes to serve a given set of jobs (e.g.,
minimizing the total time it takes to load and unload a given set of containers
onto and from a ship, respectively). Essentially, given a set of
resources (AGVs) and jobs (transportation requests) with processing times
for the jobs and sequence-dependent setup times for subsequently processed
jobs (driving times between the destination location of a job and the origin
point of the subsequent job), we aim to determine a sorted assignment of
jobs to resources such that the maximum completion time for the resources is
minimized. This problem is NP-hard, which motivates the use and analysis
of heuristics.
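The objective can be made concrete with a small sketch; the data layout (dictionaries keyed by vehicle and job) is an illustrative assumption, and initial empty moves from a depot are ignored.

```python
def makespan(assignment, proc_time, setup_time):
    """Maximum completion time over all AGVs for a sorted assignment.

    `assignment` maps each vehicle to its ordered list of jobs,
    `proc_time[j]` is the processing time of job j, and
    `setup_time[(i, j)]` is the empty-driving time from the destination
    of job i to the origin of the subsequent job j.
    """
    completion_times = []
    for jobs in assignment.values():
        t = 0
        for k, job in enumerate(jobs):
            if k > 0:  # sequence-dependent setup between consecutive jobs
                t += setup_time[(jobs[k - 1], job)]
            t += proc_time[job]
        completion_times.append(t)
    return max(completion_times)

# Two AGVs, three jobs: minimizing this value is the stated objective.
proc = {"a": 5, "b": 3, "c": 4}
setup = {("a", "c"): 2}
print(makespan({"agv1": ["a", "c"], "agv2": ["b"]}, proc, setup))  # 11
```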
Since problems from practice, such as the AGV scheduling problem described
above, usually exhibit distinctive characteristics, applying heuristics
may imply the costly development of specialized algorithms, which hinders the
application of such methods in the real world. On the one hand, this problem
might be partly solved by applying metaheuristics, which are generic with re-
gard to the type of problem and the respective solution space. In practice one
would like to apply metaheuristics by reusing suitable software components
which have to be adapted to the specific problem at hand in some well-defined
manner. HotFrame [2] provides such metaheuristics software components. On
the other hand, there is the question of which metaheuristic and which
configuration of a selected metaheuristic may provide the best results for specific
problem instances for some considered problem scenarios. Therefore, we are
interested in analyzing connections between search landscape characteristics
and the performance of heuristic search methods.
We focus on metaheuristics that are based on the local search paradigm:
A greedy local search strategy such as steepest descent means selecting and
performing in each iteration of a search process a best move (i.e., an apparently
most promising change of the current solution); the search stops at a locally
optimal solution with no better neighboring solution. As the solution quality
of such local optima may be unsatisfactory we consider an iterated steepest
descent approach where the local search process restarts after a local optimum
has been obtained by means of some randomized perturbation scheme that
generates a new initial solution. Moreover, we use simulated annealing and
different tabu search approaches which employ more intelligent concepts to
overcome local optimality.
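The iterated steepest descent described above can be sketched as follows; the neighbourhood, cost function, and perturbation are placeholders to be supplied by the problem at hand, and the restart budget is an assumption of ours.

```python
import random

def steepest_descent(solution, neighbors, cost):
    """Greedy local search: repeatedly perform a best (most improving) move."""
    while True:
        best = min(neighbors(solution), key=cost)
        if cost(best) >= cost(solution):
            return solution  # local optimum: no better neighbouring solution
        solution = best

def iterated_steepest_descent(initial, neighbors, cost, perturb, restarts=20):
    """Restart the descent from randomized perturbations of the incumbent."""
    best = steepest_descent(initial, neighbors, cost)
    for _ in range(restarts):
        candidate = steepest_descent(perturb(best), neighbors, cost)
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy landscape: integers with cost (x - 7)^2 and moves x -> x +/- 1.
cost = lambda x: (x - 7) ** 2
neighbors = lambda x: [x - 1, x + 1]
perturb = lambda x: x + random.randint(-5, 5)
print(iterated_steepest_descent(0, neighbors, cost, perturb))  # 7
```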
By means of search landscape analysis, in particular considering fitness
distance correlations [4], we examine the relationship between the solution
quality and the distance between solutions within a given search landscape.
This provides information about the difficulty of problem instances, e.g., in
connection with the valley structure of the underlying search landscape, which
can be used to select and configure elements of search methods.
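Fitness-distance correlation itself is a Pearson correlation between solution quality and distance to the (nearest known) optimum over a sample of solutions; a plain-Python sketch, with the sampling strategy (typically local optima) left open:

```python
from statistics import mean

def fitness_distance_correlation(fitnesses, distances):
    """Pearson correlation between fitness values and distances to the
    nearest known optimum, computed over paired samples.

    For a minimisation problem, an FDC close to +1 indicates a 'big
    valley' landscape: the worse a solution, the farther it tends to be
    from the optimum.  Plain lists are an illustrative input format.
    """
    f_bar, d_bar = mean(fitnesses), mean(distances)
    cov = sum((f - f_bar) * (d - d_bar) for f, d in zip(fitnesses, distances))
    sd_f = sum((f - f_bar) ** 2 for f in fitnesses) ** 0.5
    sd_d = sum((d - d_bar) ** 2 for d in distances) ** 0.5
    return cov / (sd_f * sd_d)

# A perfectly correlated sample gives an FDC close to 1.0.
print(fitness_distance_correlation([10, 20, 30], [1, 2, 3]))
```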
1. Cordeau, J.-F., Laporte, G., Potvin, J.-Y., Savelsbergh, M.W.P.: Transporta-
tion on demand. Working Paper, CRT-2004-25 (2004).
2. Fink, A., Voß, S.: HotFrame: A heuristic optimization framework. In: Voß, S.,
Woodruff, D.L. (eds.) Optimization Software Class Libraries. Kluwer, Boston,
81–154 (2002).
3. Grunow, M., G¨unther, H.-O., Lehmann, M.: Dispatching multi-load AGVs
in highly automated seaport container terminals. OR Spectrum 26, 211–235 (2004).
4. Hoos, H.H., Stützle, T.: Stochastic Local Search: Foundations and Applications.
Morgan Kaufmann, San Francisco (2005).
5. Savelsbergh, M.W.P., Sol, M.: The general pickup and delivery problem. Trans-
portation Science 29, 17–29 (1995).
6. Steenken, D., Voß, S., Stahlbock, R.: Container terminal operation and operations
research – a classification and literature review. OR Spectrum 26, 3–49 (2004).
7. Vis, I.F.A., Harika, I.: Comparison of vehicle types at an automated container
terminal. OR Spectrum 26, 117–143 (2004).
Exact and Approximate Algorithms for a Class
of Steiner Tree Problems Arising in Network
Design and Lot Sizing
Alysson M. Costa, Jean-François Cordeau, and Gilbert Laporte
Centre for Research in Transportation and Canada Research Chair in Distribution
Management, HEC Montréal, 3000 chemin de la Côte-Sainte-Catherine, Montréal,
Canada H3T 2A7, [alysson, cordeau, gilbert]
Several network design problems can be modeled as Steiner tree problems
with additional constraints, including budget constraints (imposing an upper
bound on the total network cost), and hop constraints (imposing that the path
from the root to any vertex in the solution has a maximum of h hops). Budget
constraints are frequently encountered in the design of distribution or telecom-
munication networks where the goal is to obtain a minimum-cost network
connecting certain vertices. Hop constraints are also encountered in telecom-
munications, where they are used to model the network reliability or impose
limits on the transmission delays. Moreover, for certain classes of lot-sizing
problems modeled as Steiner tree problems, the addition of hop constraints
enables the consideration of time-capacity constraints. In this work, we deal
with a variation of the Steiner tree problem where, besides costs associated
with the arcs, one also has revenues associated with the vertices. The goal is
to maximize the sum of the collected revenues while respecting both hop and
budget constraints. We propose several mathematical formulations for this
problem and use them to develop branch-and-cut algorithms which are tested
on medium-sized instances. Computational results show that the choice of the
best formulation/algorithm strongly depends on the number of allowed hops.
We also propose a destroy-and-repair heuristic capable of obtaining very good
approximations within short computational times.
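In the notation suggested above (revenues on vertices, costs on arcs), the problem can be sketched as the following program, where $y_v$ marks a collected vertex, $x_a$ a used arc, $B$ the budget and $h$ the hop limit; these symbols are our own shorthand, not the authors' formulations:

```latex
\max \sum_{v \in V} r_v \, y_v
\quad \text{s.t.} \quad
\sum_{a \in A} c_a \, x_a \le B,
\qquad \mathrm{hops}(\mathrm{root} \to v) \le h \quad \forall\, v : y_v = 1,
```

together with the requirement that the arcs with $x_a = 1$ form a tree rooted at the root vertex containing every vertex with $y_v = 1$.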
Keywords: Prize collecting, network design, Steiner tree problem, budget,
branch-and-cut, hop constraints, lot-sizing, time-capacity constraints.
Supply Chain Management in Archeological
Surveys, Excavations and Scientific Use
Joachim R. Daduna1 and Veit Stürmer2
1Berlin University of Applied Business Administration, Badensche Straße 50–51,
D-10715 Berlin, Germany, [email protected]
2Winckelmann Institut für Klassische Archäologie, Humboldt-Universität, Unter
den Linden 6, D-10099 Berlin, Germany
A fundamental problem in archaeology is the recording and management of
large numbers of objects, which represent the basis for the study, evaluation
and reconstruction of excavation results as well as their scientific presentation.
Here the use of Supply Chain Management (SCM) methods when planning
archaeological processes can offer a solution, in particular with regard to
the mandatory management of information. The archaeological processes of
achievement, excavation (procurement), evaluation and reconstruction (production),
as well as provision for preservation and/or (public) presentation
(distribution), constitute the first part in this complex. The second part,
management of evaluation (in the sense of 'after-sales achievements'), encompasses
essentially the administration, conservation, restoration and presentation of
excavated objects. The goal here is to install efficient processes in the
evaluation of material and storage through the use of logistic concepts and
techniques in the physical organisation of processes as well as in the development
of a comprehensive management of information. Beginning with a description
of the present structure of processes, a SCM-based concept is presented that
should reveal new possibilities in the area of archaeology.
Real-World Agent-Based Transport
Klaus Dorer
Senior Researcher, Whitestein Technologies GmbH (previously Living Systems
As with many industries and markets, the logistics sector faces extensive and
fundamental challenges associated with globalization. With shrinking margins
and, in many cases, just barely coping with immense cost pressures, companies
are being driven to substantially revise their product and service offerings,
business processes, and levels of operational excellence. It is widely recognized
that the optimal utilization of available capacity is the single most important
critical success factor for logistics operations.
Whitestein Technologies offers, with its Living Systems® Adaptive Transportation
Networks (LS/ATN) software, a sophisticated IT solution that addresses the
needs of logistics companies operating in a dynamic and unpredictable
business world. LS/ATN focuses on the management and dispatching
of transportation orders and the optimization, execution and monitoring of
capacities (e.g., trucks).
In this talk we present the agent architecture on which the LS/ATN
bottom-up optimization is based. Agents interact to solve subproblems of
transporting orders that, when consolidated, result in an optimized solution
to the overall problem. Similar to human decision-making, solutions to prob-
lems arise from the interaction of individual decision makers (represented by
software agents), each with their own local knowledge. We present results
obtained by running LS/ATN on real-world data from large logistics companies.
Finally we present LS/ATN in a live demo.
Scheduling of Automated Double
Rail-Mounted Gantry Cranes
René Eisenberg
University of Hamburg, Institute of Information Systems, Von-Melle-Park 5, 20146
Hamburg, Germany, [email protected]
Double rail-mounted gantry cranes (DRMG) represent one of the latest developments
in container terminal handling equipment and were put into
operation at the Container Terminal Altenwerder in the port of Hamburg in
2002. In a setup like this, two rail-mounted gantry cranes of different sizes can
serve any stack of a single container block since the super-sized crane is able
to pass the standard crane even when loaded. Containers have to be relocated
to and from vehicles in transfer areas on both sides of a block and, if required,
within the block to free containers whose retrieval is blocked by others
stacked above them. Hence, five job types are distinguished. Especially for the
latter type, ping-pong or cyclic restacking has to be avoided.
In order to synchronize the cranes with adjacent transportation equipment,
for each container move a transfer date is given and has to be met in order to
avoid delay in horizontal transportation. These transfer dates are determined
by a superordinate planning component covering the whole terminal. Hence,
productivity maximization is limited by the defined transfer dates. In the
offline case, i.e., when all information is available in advance, the DRMG scheduling
problem may be seen as a multiple travelling salesman problem with time
windows and is thus NP-hard, whether minimizing the idle movements or
maximizing productivity.
In practice, not all information is present or correct in the first place. Also,
in the course of operations, especially on the land side of a container block,
where cranes are remotely controlled manually by fewer crane operators than
cranes, transfer times are delayed stochastically. Because both cranes may
serve the whole stacking block, inter-crane interferences may lead to longer
crane movements. These conditions cause uncertainty and make this an online
optimization problem whose data may change at any time.
In this paper we present simple priority rules and metaheuristics, but
also a simple branch-and-bound approach considering reduced problem sizes.
The limited computation time available in real time and the online problem
characteristics make algorithm design challenging. Algorithms are tested against a stochastic
discrete-event simulation model of a single block with realistic container load
which was implemented by the Hamburger Hafen und Logistik AG (HHLA).
The performance of the algorithms is measured in comparison to a simple
FIFO heuristic with respect to minimum delay of the horizontal transport
and minimum idle crane movements.
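The FIFO benchmark can be sketched as follows; job fields, the delay measure, and the omission of crane interference and restacking moves are illustrative assumptions of ours, not the simulation model used in the paper.

```python
import heapq

def fifo_dispatch(jobs, num_cranes):
    """Baseline FIFO heuristic: serve container moves in order of their
    transfer date, each on the crane that becomes free first.

    `jobs` is a list of (transfer_date, duration) pairs.  Returns the
    total delay, i.e. the summed lateness of each move against its
    transfer date.
    """
    free_at = [0.0] * num_cranes        # time at which each crane is idle
    heapq.heapify(free_at)
    total_delay = 0.0
    for transfer_date, duration in sorted(jobs):   # first in, first out
        start = heapq.heappop(free_at)  # earliest available crane
        finish = start + duration
        total_delay += max(0.0, finish - transfer_date)
        heapq.heappush(free_at, finish)
    return total_delay

# Two moves, one crane: the second finishes at t=6, five past its date.
print(fifo_dispatch([(1, 3), (1, 3)], num_cranes=1))  # 7.0
```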
Keywords: Container transport, online optimization, travelling salesman
problem with time windows, discrete event simulation
Solving Real-World Vehicle Scheduling and
Routing Problems
Jens Gottlieb
SAP AG, Walldorf, Germany, [email protected]
This talk introduces the vehicle scheduling and routing problem (VSRP), for
which an optimization algorithm is offered in SAP’s supply chain management
solution. The algorithm for the VSRP is sketched, followed by a discussion of
its application to typical real-world scenarios from SAP’s customer base.
Exact and Heuristic Solution of the Global
Supply Chain Problem with Transfer Pricing
and Transportation Cost Allocation
Pierre Hansen1, Sébastien Le Digabel2, Nenad Mladenović3, and Sylvain
1GERAD and HEC Montréal, [email protected]
2École Polytechnique de Montréal, [email protected]
3Brunel University and GERAD, [email protected]
4HEC Montréal, [email protected]
In this paper, we consider one of the most important issues for multination-
als, i.e., the determination of transfer prices (prices that a buying subsidiary
of a firm has to pay to a selling subsidiary of the same firm). More specifi-
cally, we consider a multinational corporation that attempts to maximize its
global after tax profits by determining the flow of goods, the transfer prices,
and the transportation allocation between each of its subsidiaries. Vidal and
Goetschalckx [4] have formulated this problem as a Bilinear Program (BLP)
where each bilinear term corresponds to the product of two decision variables
representing the flow of goods and the transfer price between two subsidiaries,
respectively. These authors have proposed solving the BLP with an alternate
heuristic algorithm where an initial solution is obtained by linearization. This
local method consists in successively fixing one set of variables and solving
the remaining LP for the other set. The process can be terminated when the
change in the objective function is negligible. Under given conditions, the
solution obtained by the alternate heuristic corresponds to a local optimum.
In this paper, we propose an efficient new heuristic based on
Variable Neighbourhood Search (VNS) [3]. This algorithm has already been
used for the Pooling Problem [1], which may also be formulated as a BLP.
VNS consists of successively repeating two steps: (i) perturbing the current
solution within a neighbourhood of length k (initially set to 1); and (ii) from
this perturbed point, finding a new point with a local search. If this new local
optimum is better, it becomes the new current point, and the parameter k is
reset to 1. If this new local optimum is not better, the original current
point is kept, and the parameter k is increased for a bigger perturbation in
step (i). In our implementation of VNS, the local search is performed by the
alternate heuristic method proposed by Vidal and Goetschalckx. The pertur-
bation procedure is performed by moving to a feasible extreme point in the
neighbourhood of the current solution.
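The two-step loop described above can be sketched as a generic skeleton; `local_search` stands in for the alternate heuristic of Vidal and Goetschalckx, while the parameter names, the cap `k_max`, and the fixed iteration budget are illustrative assumptions.

```python
def vns(initial, local_search, perturb, cost, k_max=5, iterations=100):
    """Basic VNS skeleton following the two steps described in the text.

    `perturb(point, k)` should return a feasible point in the k-th
    neighbourhood of the current solution; `local_search` maps any
    point to a local optimum.
    """
    current = local_search(initial)
    k = 1
    for _ in range(iterations):
        candidate = local_search(perturb(current, k))  # steps (i) and (ii)
        if cost(candidate) < cost(current):
            current, k = candidate, 1   # improvement: reset k to 1
        else:
            k = min(k + 1, k_max)       # no improvement: bigger perturbation
    return current
```

For the transfer-pricing BLP, `perturb` would move to a feasible extreme point near the current solution, as the authors describe.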
Since BLP is a particular case of a nonconvex quadratic program with
nonconvex constraints (QP), an exact solution method designed for QP may
be applied to solve BLP. We therefore propose to use the branch and cut
algorithm of Audet et al. [2] for that purpose. This algorithm provides a
globally optimal solution (within given feasibility and optimality tolerances) in
finite time. The basic idea of this algorithm is to estimate all quadratic terms
by successive linearizations (outer approximations) within an enumeration
tree using Reformulation-Linearization Techniques (RLT). For the BLP case,
using RLT means replacing each bilinear term by a linear variable and adding
linear constraints to force the linear variable to approximate the bilinear term.
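Concretely, for a single bilinear term $w = x\,y$ with bounds $x \in [x^L, x^U]$ and $y \in [y^L, y^U]$, the added linear constraints take the standard McCormick form (a special case of the RLT constraints; the notation is ours):

```latex
w \ge x^{L} y + y^{L} x - x^{L} y^{L}, \qquad
w \ge x^{U} y + y^{U} x - x^{U} y^{U}, \\
w \le x^{L} y + y^{U} x - x^{L} y^{U}, \qquad
w \le x^{U} y + y^{L} x - x^{U} y^{L}.
```

Branching in the enumeration tree tightens the bounds, so these outer approximations converge to the bilinear terms.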
The three solution methods (Alternate, VNS, and branch and cut) are
tested on random instances.
1. Audet, C., Brimberg, J., Hansen, P., Le Digabel, S., Mladenović, N.: Pooling
Problem: Alternate Formulations and Solution Methods. Management Science
50(6), 761–776 (2004).
2. Audet, C., Hansen, P., Jaumard, B., Savard, G.: A branch and cut algorithm
for nonconvex quadratically constrained quadratic programming. Mathematical
Programming 87(1, Ser. A), 131–152 (2000).
3. Hansen, P., Mladenović, N.: Variable neighborhood search. In: Glover, F.,
Kochenberger, G.A. (eds.) Handbook of Metaheuristics, Kluwer, Boston, 145–
184 (2003).
4. Vidal, C.J., Goetschalckx, M.: A global supply chain model with transfer pricing
and transportation cost allocation. European Journal of Operational Research
129(1), 134–158 (2001).
Planning Problems for Combined Pick-up
Point Allocation, Transportation, and
Production Processes with Time-Varying
Processing Capacities
Christoph Hempsch
Deutsche Post Endowed Chair of Optimization of Distribution Networks, RWTH
Aachen University, Templergraben 64, D-52062 Aachen, Germany,
Some providers of postal or parcel services promise high levels of service to
their customers, e.g., next day delivery, resulting in tight lead times. To meet
these high service levels, complex logistics networks need to be planned and
operated for collection and delivery of mailings. Within such a supply chain
the interaction of processing at and transportation between different kinds of
facilities plays a vital role. Therefore, postal and parcel services are a good
example of an application where supply chain planning needs to integrate a
wide range of heterogeneous decisions.
The talk starts with a discussion of strategic and operational decisions
within an example postal supply chain. Strategic planning decisions include
the location of distribution centers and the allocation of customers to them
while operational planning tasks include transportation of mailings or routing
of vehicles. Customer allocation may as well be planned within an operational
planning horizon. Also in postal or parcel logistics networks, complex sorting
processes at the distribution centers need to be considered. The number of
heterogeneous decisions within this virtual supply chain reflects the complexity
of problems operations research is facing nowadays. To reduce this complexity,
in the literature problems tend to be decomposed into well-known standard problems
such as facility location problems, vehicle routing problems and production
planning problems. Yet, models incorporating heterogeneous decisions are
increasingly found in the literature. An example is the inventory routing problem,
which combines decisions on vehicle routing and required inventory at remote
customer facilities.
Motivated by the described supply chain and the discussion above, a model
covering a comprehensive part of the supply chain is introduced. The model
combines decisions in pick-up point allocation and transportation from pick-
up points to production facilities. Within the use case of a postal service
provider, the pick-up points can represent letter boxes or corporate clients,
while the production facilities may represent sorting centers. Pick-up points
contain information on quantities as well as time windows. The model also
includes production processes at the sorting centers as time-varying processing
capacities. The processing capacities induce input requirements. To model the
processing capacities or input requirements, respectively, the planning horizon
is divided into a discrete set of time intervals.
The resulting model incorporates decisions from different logistics func-
tions within a supply chain. Solutions of the model are interesting for decision
support in strategic planning of supply chains like those of postal or parcel
service providers.
The model is formulated as a mixed-integer program. The problem formulation
as well as the underlying assumptions are presented. A software prototype
was implemented to solve test instances. Small and medium-sized instances
are solved with the standard solver ILOG CPLEX. Allocations of customers
to production facilities as well as input distributions at the facilities
are visualized by the software for validation and analysis of results. The talk
concludes with examples of research perspectives.
Paradigm Shift in the Supply Chain – Is it
Really Happening?
Britta Kesper and Yuriy Kapys
DHL Solutions GmbH, Godesberger Allee 83-91, 53175 Bonn, Germany
The logistics landscape and its requirements have changed drastically over
the last ten years. Manufacturing companies compete more and more on core
competencies, which are mainly development, product design, production and
marketing. Meanwhile, companies are starting to cooperate in the field of supply
chain management. Such developments increase the industry's demand for
knowledge-based, adaptive, flexible and collaborative business models. The
LLP/4PL business model is one of them.
As supply chains become more complex, solutions need to be more ad-
vanced. One of the services that is part of the LLP/4PL business model is
Supply Chain Consulting. Its objective is to support customers during their
decision making process at strategic and tactical levels to identify the best
supply chain structure.
DHL Supply Chain Consulting focuses on three areas to create advanced
logistics solutions: network design, carrier selection and transport
optimization. This workshop will use a reference case where the
benefits of network design and transport optimization are presented. Three
customers are competing in their product segments and geographical markets
but cooperate in the field of logistics with the support of DHL Exel Supply
Support of Bid-Price Generation for
International Large-Scale Plant Projects
Dirk Mattfeld and Jiayi Yang
Technische Universität Braunschweig, Abt-Jerusalem-Str. 4, 38106 Braunschweig,
Summary. We propose a mathematical model for the support of sourcing and
scheduling decisions for large-scale industrial plant projects. Local content require-
ments and limited production capacity constrain the problem of determining a lower
bound on the bid-price for an industrial plant project. The talk will describe the
problem domain paying particular attention to the rapid development of the East-
Asian market. Challenges for Western industrial engineering vendors are discussed.
According to the special interest group on large-scale plant engineering, a di-
vision of VDMA (Verband Deutscher Maschinen- und Anlagenbau), 80% of
large-scale industrial orders placed in Germany in 2004 and 2005 came from
foreign countries. As this trend continues, competition in the interna-
tional markets for industrial plants will intensify, causing additional pressure
on prices, an increase of local content requirements and a decrease of project
lead times.
Local content requirements (LCR) are set up for multiple reasons, such
as supporting domestic industry, developing domestic technological capacity
and ensuring protection for the domestic workforce. In order to satisfy a given
LCR, the German vendor has to decide on the part of the plant to be produced
in the buyers country. This may cause an outflow of engineering know-how
and may also increase costs. Furthermore, producing abroad will conflict with
a short expected project leadtime.
Most of the articles published on local content rules are more or less the-
oretical treatments in the economics literature, e.g., Grossman [7], Hollander
[8] and Richardson [15]. These studies focus on macroeconomic production
and welfare effects of local content policies. Only few papers look at this topic
from a business management point of view, e.g., by Munson and Rosenblatt
[11]. An overview of LCR for large-scale plant projects is given by Petersen [13].
Most literature focusing on large-scale plant projects originates from en-
gineering disciplines. Exceptions are Backhaus [1] considering marketing as-
pects, Reiner [14] evaluating price management and Schiller [16] treating competence management. These studies focus on qualitative approaches whereas
quantitative approaches on project scheduling typically refrain from an appli-
cation viewpoint, e.g., Kolisch [10], Klein [9], Neumann et al. [12], Zimmer-
mann et al. [18].
Strategic network planning in the context of supply chain management
has a strong link to large-scale plant project management, as decisions on
international facility location are to be taken; see, e.g., Goetschalckx and
Fleischmann [5], Vidal and Goetschalckx [17], Geoffrion and Powers [4], Cohen
and Lee [3]. These approaches support the strategic planning for sustainable
production within one period.
In conclusion, a deficiency on the strategic and tactical planning level
is seen for operations in large-scale industrial plant projects. In particular,
quantitative decision support approaches, addressing the interdependencies
between local content requirement, international facility location decision and
project scheduling are desirable for the phase of bid placement. To file a tender,
a lower bound on the project’s bid price has to be determined. The bid price
largely depends on sourcing decisions for the project components involved.
Decisions concerning the production location of project components are to
be taken such that total costs of production and transport are minimized.
These decisions are constrained by the LCR. On the other hand, activities
associated with the production of project components are to be scheduled
under limited resource capacities so that a predetermined project due date
is met. The duration of activities largely depends on the location decisions
taken. On the basis of this interrelation a mathematical optimization decision
support model is proposed. This model combines the international facility
location problem and the resource-constrained project scheduling problem.
The optimal solution obtained considers constraints such as the local content
requirement, resource capacities and the expected project lead time.
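On a toy instance, the interdependence of sourcing decisions and the LCR can be made concrete by plain enumeration (components, values and costs are invented for illustration; the proposed model is a mathematical optimization model combining facility location and project scheduling, not an enumeration):

```python
from itertools import product

# Illustrative data (hypothetical): share of contract value and
# (production + transport) cost per component and location.
components = ["boiler", "turbine", "piping"]
value = {"boiler": 40, "turbine": 50, "piping": 10}
cost = {
    ("boiler", "home"): 30, ("boiler", "abroad"): 34,
    ("turbine", "home"): 35, ("turbine", "abroad"): 44,
    ("piping", "home"): 9,  ("piping", "abroad"): 7,
}
LCR = 0.30  # at least 30% of contract value must be produced locally (abroad)

def lower_bound_bid():
    """Enumerate all sourcing plans; return the cheapest LCR-feasible one."""
    total_value = sum(value.values())
    best = None
    for locs in product(["home", "abroad"], repeat=len(components)):
        plan = dict(zip(components, locs))
        local_share = sum(value[c] for c in components
                          if plan[c] == "abroad") / total_value
        if local_share < LCR:
            continue  # violates the local content requirement
        c_total = sum(cost[(c, plan[c])] for c in components)
        if best is None or c_total < best[0]:
            best = (c_total, plan)
    return best

cost_lb, plan = lower_bound_bid()
```

Note how the LCR forces production abroad even where home production would be cheaper; the resulting minimum cost is exactly the kind of lower bound on the bid price discussed above.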
1. Backhaus, K.: Industriegütermarketing, 7th ed., München (2003).
2. Burghardt, M.: Projektmanagement: Leitfaden für die Planung, Überwachung
und Steuerung von Entwicklungsprojekten, 6th ed., Erlangen (2002).
3. Cohen, M.A., Lee, H.L.: Resource Deployment Analysis of Global Manufactur-
ing and Distribution Networks. European Journal of Manufacturing and Oper-
ations Management 2(2), 81–104 (1989).
4. Geoffrion, A.M., Powers, R.F.: Twenty years of strategic distribution system
design: An evolutionary perspective. Interfaces 25(5), 105–128 (1995).
5. Goetschalckx, M., Fleischmann, B.: Strategic Network Planning. In: Stadtler,
H., Kilger, C. (eds.) Supply Chain Management and Advanced Planning: Con-
cepts, Models, Software and Case Studies. Berlin (2005).
6. Gottwald, K., Stroh, V., Waldmann, T.: Rekord im Ausland – Investitionsschwäche im Inland, Lagebericht der Arbeitsgemeinschaft Großanlagenbau. Arbeitsgemeinschaft Großanlagenbau, Lyoner Straße 18, 60528 Frankfurt am Main
7. Grossman, G.M.: The Theory of Domestic Content Protection and Content
Preference. The Quarterly Journal of Economics 96(4), 583–603 (1981).
8. Hollander, A.: Content Protection and Transnational Monopoly. Journal of
International Economics 23(3/4), 283–297 (1987).
9. Klein, R.: Scheduling of Resource-Constrained Projects, Boston (1999).
10. Kolisch, R.: Project Scheduling under Resource Constraints, Heidelberg (1995).
11. Munson, C.L., Rosenblatt, M.J.: The Impact of Local Content Rules on Global
Sourcing Decisions. Production and Operations Management 6(3), 277–290 (1997).
12. Neumann, K., Schwindt, C., Zimmermann, J.: Project Scheduling with Time
Windows and Scarce Resources, 2nd ed., Berlin (2003).
13. Petersen, J.: Local Content-Auflagen, betriebswirtschaftliche Relevanz und
Handhabung am Beispiel des internationalen Großanlagenbaus, Wiesbaden
14. Reiner, N.: Preismanagement im Anlagengeschäft: Ein entscheidungsorientierter Ansatz zur Angebotspreisbestimmung, Wiesbaden (2002).
15. Richardson, M.: The Effects of a Content Requirement on a Foreign Duopsonist.
Journal of International Economics 31(1/2), 143–155 (1991).
16. Schiller, T.: Kompetenz-Management für den Anlagenbau, Ansatz, Empirie und
Aufgaben, Wiesbaden (2000).
17. Vidal, C., Goetschalckx, M.: Strategic production-distribution models: A criti-
cal review with emphasis on global supply chain models. European Journal of
Operational Research 98(1), 1–18 (1997).
18. Zimmermann, J., Stark, C., Rieck, J.: Projektplanung: Modelle, Methoden,
Management, Berlin (2005).
Bid Querying Policies in Combinatorial
Auctions for Collaborative Transportation
Giselher Pankratz
Faculty of Economics and Business Administration, FernUniversität – University of
Hagen, Profilstraße 8, 58084 Hagen, Germany,
Combinatorial Auctions for Collaborative
Transportation Planning
Due to their autonomy-preserving properties, auctions are considered to be
suitable coordination mechanisms in loosely-coupled collaborative systems. In
particular, combinatorial reverse auctions have been proposed in the literature
as an appropriate means for task reallocation in the field of collaborative
transportation planning (see, e.g., [4, 3]). This is because combinatorial reverse
auctions offer the bidders the possibility to express valuation dependencies
among transportation requests, thus allowing a more economically efficient
allocation of the requests.
However, combinatorial reverse auctions impose several problems which
up to now impede their dissemination in practice. On the one hand, the auc-
tioneer faces an NP-hard optimization problem when searching for an optimal
allocation of the requests. This problem has been introduced as the winner
determination problem [5] and has been well studied in the literature (see,
e.g., [6]). On the other hand, the exponential bid space places a heavy
computational burden on the bidders: given m requests to be allocated, there are
2^m combinations of requests a bidder may have to submit bids for. In the
transportation domain, this would require the bidder to solve an individual
NP-hard optimization problem for each and every bundle in order to provide
all valuations.
Recently, preference elicitation has been proposed as an approach for tak-
ing some of the strain off the bidders [1]. Generally speaking, preference elicita-
tion aims at significantly reducing the number of valuations explicitly revealed
by the bidders through an intelligent process of stepwise querying conducted
by the auctioneer who systematically exploits implicit information contained
in previously revealed bids and strictly focuses on relevant information.
Proposed Bid Querying Policies
Most of the work on preference elicitation presented in the literature deals
with fairly generalized combinatorial auction scenarios. In our contribution,
we propose rather specialized elicitation policies which are more tailored to
collaborative planning situations in transportation. In particular, we make
use of several characteristic properties of the transportation domain in order
to make the bid querying process as efficient as possible. Such properties are,
among others:
1. Valuation dependencies in the transportation domain involve both sub-
additivity and super-additivity. Sub-additivity means that a bidder's cost
of a given combination of transportation requests is less than the sum
of the costs of the individual requests, due to complementarities between
the requests. Super-additivity, which means that the cost of a bundle
exceeds the sum of the costs of its individual requests, occurs, e.g., when two requests
cannot be transported on the same truck due to mutually exclusive time
requirements, thus giving rise to additional costs for an extra vehicle.
2. The free disposal assumption holds in the transportation domain. In the
context of transportation reverse auctions, free disposal basically means
that for any transportation request, the cost of a bundle including this
request is never below the cost of the same bundle without that request,
which appears quite reasonable in the transportation domain.
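A toy cost function makes the two effects concrete (bundles and numbers are invented for illustration):

```python
# Toy cost function over bundles (frozensets of request ids); numbers invented.
cost = {
    frozenset(): 0,
    frozenset({"a"}): 10, frozenset({"b"}): 12, frozenset({"c"}): 8,
    frozenset({"a", "b"}): 15,   # sub-additive: a and b share a route
    frozenset({"a", "c"}): 25,   # super-additive: a and c need an extra vehicle
}

def additivity(bundle_x, bundle_y):
    """Classify the joint cost of two disjoint bundles."""
    joint = cost[bundle_x | bundle_y]
    separate = cost[bundle_x] + cost[bundle_y]
    if joint < separate:
        return "sub-additive"
    if joint > separate:
        return "super-additive"
    return "additive"
```

Here serving a and b together costs 15 rather than 10 + 12, while serving a and c together costs 25 rather than 10 + 8.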
Based on these and other observations, we have developed two different bid
querying approaches:
1. The first approach takes up and extends research presented by the au-
thors in [3] and [2]. A two-phase querying policy is established: in the
initial bidding phase, the bidders are required to place bids on all sub-
additive bundles, i.e., bundles for which synergies can be realized between
the transportation requests contained. During the second phase, the auc-
tioneer systematically constructs promising allocations from the set of all
submissions. If an allocation contains two or more bids of the same bidder,
the bidder is requested to place a supplementary bid for the set union. As
per the definition of the initial bidding phase, the bidder's evaluation of such
supplementary bids must be super-additive. This approach rests strongly
upon the observation that, in practice, often only a small fraction of all
possible bundles exhibits complementarities. On the other hand, by com-
mitting the bidders to submit all sub-additive bids during the first phase,
the auctioneer's search process can be organized very efficiently.
2. The second approach takes a further step towards relieving the bidders
of unnecessary computational burden by abandoning the requirement to
identify and evaluate all sub-additive bundles in advance. Unlike the first
approach, during the first phase the auctioneer only requests bids on
rather small bundles (e.g., containing up to three transportation requests)
which can be easily evaluated by the bidders. Similarly to the first ap-
proach, if the auctioneer identifies a promising allocation in the second
phase which contains one or more bundles for which the exact evaluation
is unknown, the auctioneer requests supplementary bids on these bundles.
In order to keep track of the auctioneer's cumulative knowledge about the
bidders' valuations, a constraint network is used [1]. Exploiting the free
disposal property, lower and upper bounds on a bidder's true value of a
given bundle can be easily determined based on the corresponding values
known for its subbundles and superbundles, respectively [1]. If the valu-
ation of a bundle is updated, e.g., on receiving a supplementary bid, this
information is propagated through the network, thus tightening the lower
and upper bounds involved.
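The bound derivation from free disposal can be sketched as follows (a drastic simplification of the constraint network of [1]; bundles and costs are invented):

```python
# Known bids: bundle (frozenset of request ids) -> cost. Illustrative numbers.
known = {
    frozenset({"a"}): 10,
    frozenset({"b"}): 12,
    frozenset({"a", "b", "c"}): 30,
}

def bounds(bundle):
    """Free disposal (S subset of T implies cost(S) <= cost(T)) bounds an
    unknown bundle's cost by its known sub- and superbundles."""
    lo = max((c for s, c in known.items() if s <= bundle), default=0)
    hi = min((c for s, c in known.items() if s >= bundle), default=float("inf"))
    return lo, hi

lo, hi = bounds(frozenset({"a", "b"}))  # subbundles {a},{b}; superbundle {a,b,c}
```

Receiving a new bid simply adds an entry to `known`, which (in the full constraint network) would then tighten the bounds of all related bundles.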
Both approaches have been implemented in a .NET-based simulation environment.
At the moment, the approaches are subject to intensive tests using different
sets of randomly generated problem instances. Preliminary results have shown
good performance of both approaches. The approaches and their computational
results will be presented in detail at the conference.
1. Hudson, B., Sandholm, T.: Effectiveness of Query Types and Policies for Prefer-
ence Elicitation in Combinatorial Auctions. In: Proceedings of the International
Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS),
New York, 386–393 (2004).
2. Kopfer, H., Pankratz, G., Gehring, H.: Combinatorial Auctions for a Gener-
alized Cooperative Transportation Problem. Extended abstract of a scientific
talk given at the ODYSSEUS 2000 conference, Chania, Greece (2000).
3. Pankratz, G.: Analyse kombinatorischer Auktionen für ein Multi-
Agentensystem zur Lösung des Groupage-Problems kooperierender Spedi-
tionen. In: Inderfurth, K., Schwödiauer, G., Domschke, W., Juhnke, F.,
Kleinschmidt, P., Wäscher, G. (eds.): Operations Research Proceedings 1999,
Springer, Berlin, 443–448 (2000).
4. Sandholm, T.: An Implementation of the Contract Net Protocol Based on
Marginal Cost Calculations. In: Proceedings of the 11th Nat. Conf. on Arti-
ficial Intelligence (AAAI-93), Washington D.C., 256–263 (1993).
5. Sandholm, T.: Algorithm for Optimal Winner Determination in Combinatorial
Auctions. Artificial Intelligence 135, 1–54 (2002).
6. Sandholm, T., Suri, S., Gilpin, A., Levine, D.: CABOB: A Fast Optimal Al-
gorithm for Winner Determination in Combinatorial Auctions. Management
Science 51, 374–390 (2005).
Application of HotFrame on Tabu Search for
the Multiple Freight Consolidation Problem
Filip Rychnavský
University of Bremen, Schwachhauser Ring 60, 28209 Bremen, Germany,
The Multiple Freight Consolidation Problem
There are situations in which forwarders do not carry out transports on their own
but use the services of partners [1]. Reasons can be, e.g., a temporary insufficiency
of capacity or a disadvantageous scatter of orders. A forwarder can save a return
trip to the depot, and the scatter of orders can be more suitable for a partner.
A freight fee is to be paid for the partner's services. It is a result of market
negotiations and can depend on the actual situation; it does not depend
directly on the costs incurred by the executing carrier. Factors can be of natural
origin, like distance, weight or time. Nonlinearity of fees in relation to the
scale of orders is expected [2]. A possible objective function is the minimization
of the sum of freight fees over all shipments. This could result in a lower freight
payment for a bundle of orders than for sending these orders separately. The
costs incurred on the side of the executing carrier do not affect the price
(freight fee) established for the service; his costs are not known to the other
members of the market.
It is supposed that each vehicle has a given weight capacity. The total
sum of the weights of all orders is higher than the capacity of one vehicle.
It means that additional vehicles of subcontractors have to be hired. Orders
are to be allocated (bundled) to the vehicles of subcontractors. The bundle is
shipped to the unload node of some order and the node’s order is unloaded.
The remaining bundle is shipped to the next unload node, or it can be split
into two or more subbundles. Such a split represents the subcontractor adding
one or more additional vehicles.
The real (physical) fulfillment of the supply is in the hands of the executing
carrier. The task is to propose the cheapest flow in which all orders reach the
customers and the transported weight on each arc does not exceed the capacity
of a vehicle. The multiple freight consolidation problem can be interpreted as a
flow-oriented problem with capacity restrictions and a nonlinear objective function.
The Initial Solution
A starting algorithm was developed that yields a feasible solution in a non-
combinatorial way. The main point is that we cannot compute freight fees before
we know the structure of a flow (the arcs used and the weights transported on them).
These weights are not known before they have been set on an arc's ancestors.
The central idea is to bundle nodes (customers) that are near to each other
and whose orders are of significant weights. It can be described as building
compact clusters. The sum of weights of nodes in one cluster may not exceed
vehicle capacity. A modified gravitation index is computed for each pair of
nodes. Based on the mutual gravity, member nodes of a cluster are selected.
When members of a cluster are set, spanning trees in the clusters can be
constructed. Arcs from the depot to a core node of a cluster are established
first. Free nodes are added to the used ones according to their gravitation
power. Nodes with light unload weights are added at the end to prevent the
transport of huge weights over small distances.
The last part of the opening procedure is setting weights on used arcs.
The path of each order from the depot to an unload node is known now. So
the unload weight of the order is added to each arc on its path.
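This weight-setting step can be sketched directly (node names, paths and weights are invented):

```python
from collections import defaultdict

# Each order: (unload weight, path from depot 'D' to its unload node).
orders = {
    "o1": (4.0, ["D", "n1", "n2"]),
    "o2": (2.5, ["D", "n1"]),
    "o3": (1.0, ["D", "n3"]),
}

def arc_weights(orders):
    """Add each order's unload weight to every arc on its path."""
    w = defaultdict(float)
    for weight, path in orders.values():
        for tail, head in zip(path, path[1:]):
            w[(tail, head)] += weight
    return dict(w)

w = arc_weights(orders)
```

The arc (D, n1) carries both o1 and o2, so its weight is the sum of their unload weights.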
Tabu Search
The tabu search was chosen because of the possibility to integrate a lot of
problem-specific knowledge. The HotFrame framework is a template-based
framework for local search methods. The user has to implement its components
and set parameters. The framework queries attributes of solutions and neighbors
and decides about further steps. In our case, a solution is represented as a binary
matrix: rows symbolise starting nodes and columns goal nodes. This matrix
represents a flow; weights and freight fees can be derived from it.
Two kinds of neighborhoods have been implemented. Both need a special
structure created by the start solution, in which the flows consist mostly of
nodes connected one after another. The proposed neighbors are based on a
modified 2-opt move, in which used arcs are deleted and other arcs are set.
Tail swap move
Each node can have some followers. Let us call the spanning tree of followers
a tail. If a node exchanges its antecessor with another node, the antecessors
receive new tails. This kind of move can cause large changes in the whole
structure of the solution.
Two nodes swap move
An arc from the antecessor of a node can be connected directly to the
followers of this node. This node is added to another node, where the
same procedure has been applied. Thus, two nodes in a solution are reallocated.
The structure of the solution does not change very much.
Special modifications are needed if the swap is to be performed with two nodes
of which one belongs to the antecessor line of the other.
To reduce the number of iterations, a selection of pairs of nodes is done.
The idea is to change important nodes that are close to each other. A modified
gravitation index is used. Not the unload weight of a node, but the weight of
incoming orders is considered.
HotFrame [3] requires the definition of the problem-specific components.
Attributes are represented by deleted and built arcs. The tabu status check
tests whether the arcs deleted by the current neighbor are the same as the
newly built arcs of a move in the tabu list. A tabu threshold is involved in the
decision about the tabu status. Some parts of the original HotFrame had to be
changed, because it is not possible to decide about the feasibility of a move
before constructing the solution based on this move. For the selection of the
best admissible neighbor, only a restricted set of neighbours is inspected.
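The attribute-based tabu check described here might be sketched as follows (a plain-Python simplification; HotFrame itself is a C++ template framework, and the class and names below are invented):

```python
from collections import deque

class ArcTabuList:
    """Recency-based tabu list over arcs: an arc deleted by a recent move
    may not be rebuilt while it is still on the list."""
    def __init__(self, tenure=7):
        self.tenure = tenure
        self.recent = deque()  # sets of arcs deleted by the last `tenure` moves

    def record(self, deleted_arcs):
        self.recent.append(set(deleted_arcs))
        if len(self.recent) > self.tenure:
            self.recent.popleft()  # oldest move's attributes expire

    def is_tabu(self, built_arcs):
        """A candidate move is tabu if any arc it builds was recently deleted."""
        recent = set().union(*self.recent) if self.recent else set()
        return any(arc in recent for arc in built_arcs)

tl = ArcTabuList(tenure=2)
tl.record([("n1", "n2")])  # a move deleted arc n1 -> n2
```

Rebuilding (n1, n2) is now tabu; after two further recorded moves the attribute expires and the arc may be rebuilt.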
A construction algorithm has been programmed and tabu search components
specific to this problem have been proposed. The algorithm has been imple-
mented in HotFrame. Test problems have been generated because of the lack
of real data. Initial solutions have been improved in the range of 5% to 15%.
The full version of this paper will include technical details of the implementation.
1. Pankratz, G.: Speditionelle Transportdisposition: Modell- und Verfahrensent-
wicklung unter Ber¨ucksichtigung von Dynamik und Fremdvergabe, PhD Thesis,
University of Hagen (2002).
2. Kopfer, H., Rychnavsk´y, F.: Freight optimization problem with approximated
fee function. Presented at the 22nd International Conference of Mathematical
Methods in Economics, Brno (2004).
3. Fink, A., Voß, S.: HotFrame: A Heuristic Optimization Framework. In: Voß, S.,
Woodruff, D.L. (eds.) Optimization Software Class Libraries, Kluwer, Boston,
81–154 (2002).
Simulation Metamodeling of a Perishable
Supply Chain
M.E. Seliaman1 and Ab Rahman Ahmad2
1King Fahd University of Petroleum and Minerals, Dhahran 31261, KSA,
2UTM, Johor, Malaysia, [email protected]
Perishable goods are those goods which have a fixed or specified lifetime, after
which they are considered unusable, i.e., they cannot be used to meet demand.
The planning and control of supply chains for perishable goods is important
because, in real life, products like milk, blood, drugs, food, vegetables
and some chemicals do have fixed lifetimes after which they perish. The
presence of these kinds of products after their lifetime not only occupies
storage space but also affects the lifetime of (damages) the neighboring
items. For perishable goods that consume electricity during storage, the loss
is even greater. The determination of ordering and replenishment policies to
meet the demand for these types of goods across the supply chain hence
becomes crucial. The problem becomes difficult when demands and lead
times are stochastic.
This paper proposes the use of regression metamodels in simulation to
support transportation-inventory decisions within a supply chain of perish-
able products. The supply chain consists of a single production facility and
multiple retailers; a daily replenishment policy is followed. We consider a per-
ishable product which has a common deterministic lifetime; units of the
same age perish together if they are not taken by demand. We assume that
demand is a random variable. We further assume that demands arrive in
batches, with random batch sizes and inter-demand times, and that unmet
demand is back-ordered.
The objective is to determine the optimum ordering plan that minimizes the
expected total cost across the supply chain. The total cost includes the or-
dering costs, inventory holding costs, transportation costs, shortage cost and
the cost due to outdated inventories. The developed simulation model rep-
resents the described supply chain. After careful verification and validation,
post-simulation regression analysis is used to determine the optimum oper-
ating conditions for this perishable supply chain system. Data from a local
distribution supply chain will be used to demonstrate the model.
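The regression-metamodel step might look as follows in miniature (a single decision factor q, the order-up-to level, and invented simulated costs; the actual study fits metamodels to multi-factor simulation output):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    s = [sum(x ** k for x in xs) for k in range(5)]  # power sums of x
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Gauss-Jordan elimination on the 3x3 normal-equation system.
    for i in range(3):
        p = A[i][i]
        A[i] = [v / p for v in A[i]]
        b[i] /= p
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                b[j] -= f * b[i]
    return b  # [b0, b1, b2]

# Simulated expected total cost at a few order-up-to levels (invented data).
qs = [10, 20, 30, 40, 50]
costs = [520, 340, 260, 280, 400]
b0, b1, b2 = fit_quadratic(qs, costs)
q_star = -b1 / (2 * b2)  # minimizer of the fitted second-order metamodel
```

The fitted metamodel replaces further expensive simulation runs: its analytic minimizer q* is the candidate operating condition.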
Non-Cooperative Games in Liner Shipping
Strategic Alliances
Xiaoning Shi1 and Stefan Voß2
1Department of International Shipping Management, Shanghai Jiao Tong
University, 1954 Hua Shan Road, Shanghai 200030, P. R. China,
2University of Hamburg, Institute of Information Systems, Von-Melle-Park 5,
20146 Hamburg, Germany, [email protected]
Nowadays, there is a trend to establish new business linkages and alliances
within the shipping industry together with customers, suppliers, competitors,
consultants, and other companies. A number of studies have attempted to ex-
plain this phenomenon occurring in the liner shipping industry using a variety
of conceptual and theoretical frameworks. This paper focuses on liner ship-
ping's strategic alliances and their establishment and transformation within
the framework of non-cooperative game theory. The concepts developed and
improved by Nash, Selten and Harsanyi should be considered effective and
capable tools to analyse motivations, competitive structures, strategies and
potential pay-offs in the turbulent liner shipping industry.
Not only can a liner shipping company be regarded as a player in shipping
alliances, but a liner shipping strategic alliance itself can also be viewed as
a player when it competes with other alliances. However, in this paper we
pay more attention to the former model, assuming that the liner companies are
unable to make enforceable contracts through outside parties. The aims of
this paper are to
• indicate the motivations of short-run cooperation among several liner carriers;
• analyse the pros and cons of being a member in liner shipping strategic alliances;
• explain the departure of a player when it faces turbulence and unpre-
dictable shipping circumstances;
• advise ways to maintain long-run alliance stability by increasing benefits
while decreasing drawbacks.
Among these four main points, the differences between short-term cooperation
and long-term alliances lie in the number of sub-games and the potential pay-offs
in the future. Consequently, we set up specific models based on non-cooperative
games and repeated games to give those differences clear explanations. The
outcome of this paper shall be helpful for those liner shipping carriers who
attempt to succeed in the shipping industry through greater efficiency, better
customer service and lower costs.
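The difference between one-shot and repeated interaction mentioned above can be illustrated with a standard symmetric stage game (the payoff numbers are invented, not taken from the paper):

```python
# Stage game for two carriers: cooperate within the alliance (C) or
# deviate (D). Payoffs are illustrative; (row, col) -> row player's payoff.
payoff = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(opponent_action):
    """In the one-shot game, deviating is the dominant action."""
    return max(("C", "D"), key=lambda a: payoff[(a, opponent_action)])

def cooperation_sustainable(delta):
    """Grim trigger in the repeated game: cooperating forever, worth
    3/(1-delta), must beat a one-shot deviation (5) followed by permanent
    punishment worth delta*1/(1-delta)."""
    return (payoff[("C", "C")] / (1 - delta)
            >= payoff[("D", "C")] + delta * payoff[("D", "D")] / (1 - delta))
```

In the one-shot game each carrier deviates, but with a discount factor of at least 0.5 (here) the repeated game sustains the alliance, mirroring the short-run versus long-run distinction above.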
Keywords: Game theory, non-cooperative, shipping, strategic alliance
Container Terminal Operation and Operations Research
Dirk Steenken1, Stefan Voß2, and Robert Stahlbock2
1Former: HHLA, IS – Information Systems/Equipment Control, 20457 Hamburg,
2University of Hamburg, Institute of Information Systems, Von-Melle-Park 5,
20146 Hamburg, Germany, [email protected]
Containers came into the market for international conveyance of sea freight al-
most five decades ago. The breakthrough was achieved with large investments
in specially designed ships, adapted seaport terminals with suitable equip-
ment, and availability of containers. Today over 60 % of the world’s deep-sea
general cargo is transported in containers, and some routes are even con-
tainerized up to 100 %. International containerization market analyses still
forecast high growth rates for container freight transportation in the future.
This leads to higher demands on seaport container terminals, container logis-
tics and management as well as on technical equipment, resulting in an in-
creased competition between seaports. The seaports mainly compete for ocean
carrier patronage and short sea operators as well as for the land-based truck
and railroad services. The competitiveness of a container seaport is marked
by different success factors, particularly the time in port for ships, combined
with low rates for loading and discharging. Therefore, a crucial competitive
advantage is the rapid turnover of the containers, which corresponds to a re-
duction of a ship's time in port and of the costs of the transshipment process.
The objective of this paper is to provide a survey and a classification of
container terminal operations. Moreover, examples for applications of opera-
tions research models – including exact methods, heuristic methods as well as
simulation based approaches – are mentioned. For a detailed description and
a comprehensive list of references see [1].
1. Steenken, D., Voß, S., Stahlbock, R.: Container terminal operation and opera-
tions research – a classification and literature review. OR Spectrum 26, 3–49 (2004).
Mixed Integer Models for Optimized
Production Planning Under Uncertainty
David L. Woodruff
Graduate School of Management, UC Davis, Davis CA 95616, USA,
We concern ourselves with the process of making optimized production plan-
ning and inventory decisions in the face of low frequency, high impact uncer-
tainty, which takes the form of a small number of discrete scenarios. In this talk
we will describe general formulations as well as the general solution method,
progressive hedging. Computational results for a particular real-world, mixed
integer, inventory problem that is very large will be described.
Part III
Contributions Not Presented
Simulation Optimization of the Cross Dock
Door Assignment Problem
Uwe Aickelin and Adrian Adewunmi
University of Nottingham, Jubilee Campus, Wollaton Road, Nottingham, NG8
1BB, UK, [uxa,aqa]
Summary. We present the Cross Dock Door Assignment Problem. This involves
assigning destinations to the outbound dock doors of Cross Dock centres such that
the total cost incurred by material handling equipment is minimized. We propose a
twofold solution: simulation, and optimization of the simulation model, i.e.,
simulation optimization.
The novel aspect of our approach is that we intend to use discrete event simulation
to simulate the arrangement and assignment of destinations to dock doors. We will
include some random variability in the discrete simulation model, i.e., variation in
freight flow within the Cross Dock centre. The purpose of applying discrete event
simulation to the Cross Dock assignment problem is to derive a more realistic ob-
jective function. Furthermore, we intend to minimise the realistic objective function
derived by the discrete event simulation. This will be achieved using Memetic algo-
rithms. The advantage of using Memetic algorithms is that it combines Local Search
with Genetic Algorithms. The Cross Dock Door Assignment Problem is a new ap-
plication domain to Memetic Algorithms and as such will prove to be challenging
and interesting research.
Keywords: Cross dock door assignment problem, discrete event simulation,
optimization, genetic algorithms
Traditionally, warehousing companies have had the following functions: receiv-
ing, storage, order picking and shipping. They have found storage and order
picking cost intensive; in order to abate cost, a strategy of Cross Docking was
implemented. The goal of Cross Docking is to sort, consolidate and transfer
incoming freight onto outgoing trailers for delivery to pre-determined desti-
nations [2]. Presently, each incoming trailer is assigned an available inbound
door as soon as it arrives and each outbound trailer is assigned a specific
single outbound door. The efficiency of the Cross Dock centre is dependent
on factors which include, e.g., an optimal scheduling of dock doors, the reduc-
tion of Cross Dock congestion and a minimum travelling distance for material
handling equipment. We are interested in investigating the Cross Dock Door
Assignment Problem.
Cross Dock Door Assignment Problem
The Cross Dock Door Assignment Problem is related to the Dock Door As-
signment Problem, first formulated by [5]. The Cross Dock Door Assignment
Problem objective is to find the optimal arrangement of a Cross Dock centre’s
inbound and outbound doors and the most efficient assignment of destina-
tions to outbound doors, such that the distance travelled by material han-
dling equipment is minimized. It is assumed that there are I inbound doors,
J outbound doors, M origins and N destinations for the Cross Dock centre,
with I ≤ M and J ≤ N. Let x_mi = 1 if origin m is assigned to inbound door
i, and x_mi = 0 otherwise. Let y_nj = 1 if destination n is assigned to outbound
door j, and y_nj = 0 otherwise. Let d_ij represent the distance between inbound
door i and outbound door j. Let w_mn represent the number of trips required
by the material handling equipment to move items originating from m to the
Cross Dock door where freight destined for n is being consolidated. A math-
ematical formulation for the Cross Dock Door Assignment Problem based on
work by [5] is presented below:

min Σ_m Σ_n Σ_i Σ_j d_ij w_mn x_mi y_nj
+ constraints
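For a tiny illustrative instance, this objective can be evaluated and minimized by brute force (all data invented; real instances are far too large for enumeration, which is why the heuristic approach below is proposed):

```python
from itertools import permutations

# Illustrative instance: 2 inbound doors, 2 outbound doors,
# 2 origins and 2 destinations.
d = {(0, 0): 1, (0, 1): 3, (1, 0): 3, (1, 1): 1}   # door-to-door distances
w = {(0, 0): 10, (0, 1): 2, (1, 0): 2, (1, 1): 10}  # trips origin m -> dest n

def total_distance(origin_to_in, dest_to_out):
    """Sum of d[i,j] * w[m,n] over all origin/destination pairs."""
    return sum(d[(origin_to_in[m], dest_to_out[n])] * w[(m, n)]
               for m in range(2) for n in range(2))

best = min(
    (total_distance(dict(enumerate(p_in)), dict(enumerate(p_out))), p_in, p_out)
    for p_in in permutations(range(2))
    for p_out in permutations(range(2))
)
```

The optimum pairs each origin with the inbound door closest to the outbound door of its dominant destination flow.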
Proposed Plan of Work
Discrete Event Simulation
In order to find solutions to problems, representation by mathematical models
has been a reasonable approach. These mathematical relationships (i.e., equa-
tions, inequalities, etc.) mirror relationships that exist within the problems.
However, mathematical models take a standard static form, which can make
modelling certain aspects of a problem difficult. [1] consider objective functions
as "Black Boxes", because there are peculiar problems, like inventory
management, that need simulation runs to obtain a globally optimal design.
By using discrete event simulation to simulate the dock door assignment, we
will assess the performance of input parameters in
dock door assignment, we will assess the performance of input parameters in
these relationships and gain a better understanding of the inherent relation-
ships that exist in the Cross Dock Door Assignment Problem. These relationships
in the objective function, which are not visible from a simple mathematical
formulation, will become clearer, and we will thus derive a more realistic
objective function. Amongst others, we will simulate the flow of freight between
inbound and outbound doors, as well as the arrangement and destination
assignment of the Cross Dock doors.
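As an illustration of this idea (our own sketch, not the authors' simulation model), a toy discrete event simulation that assigns each arriving trailer to the earliest-free inbound door, with exponentially distributed unloading times supplying the stochastic noise:

```python
import random

def simulate_dock(arrivals, mean_unload, n_doors, seed=0):
    """Toy discrete event simulation of inbound door assignment.

    Each arriving trailer is assigned to the door that becomes free
    earliest; unloading times are drawn from an exponential distribution,
    so repeated runs (different seeds) expose the stochastic noise in
    the objective. Returns the completion time of each trailer.
    """
    rng = random.Random(seed)
    free_at = [0.0] * n_doors              # next free time of each door
    finish = []
    for t in sorted(arrivals):
        door = min(range(n_doors), key=free_at.__getitem__)
        start = max(t, free_at[door])      # wait if every door is busy
        free_at[door] = start + rng.expovariate(1.0 / mean_unload)
        finish.append(free_at[door])
    return finish
```

Statistics collected over many replications of such a model (waiting times, congestion, travel distances) would then feed the more realistic objective function described above.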
Simulation Optimization of the Cross Dock Door Assignment Problem 83
Simulation Optimisation
As well as simulating the different possible Cross Dock Door Assignments,
we intend to find the optimal door to freight and trailer to door assignment
using Memetic Algorithms. This can be achieved by optimizing the door as-
signment from the discrete event simulation models that performed the best
against predetermined criteria. In essence, the results of the discrete event
simulation, with the model's inherent stochastic noise removed, will be used as
the fitness function for the Memetic Algorithm optimiser. [4] present two
simulated annealing algorithms that are designed to handle noisy objective
functions; we are interested in comparing the performance of Memetic Algorithms
with other popular heuristics on noisy objective functions [3]. The objective
is to demonstrate the ability of Memetic Algorithms to produce high quality
solutions with minimal computational expense.
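To make the approach concrete, the following is a minimal Python sketch of a memetic algorithm (our own illustration under simplified assumptions): a small population of door-assignment permutations, a simplified order-style crossover, a pairwise-swap local search as the refinement step, and a fitness averaged over several replications to damp simulation noise.

```python
import random

def memetic(fitness, n, pop_size=20, gens=50, reps=5, seed=0):
    """Minimal memetic algorithm for a door-assignment permutation.

    fitness(perm) may be a noisy simulation output; `reps` replications
    are averaged before comparing solutions. Returns (best_perm, score).
    """
    rng = random.Random(seed)

    def score(p):
        return sum(fitness(p) for _ in range(reps)) / reps

    def local_search(p):
        # first-improvement pairwise swaps: the "meme" refining each child
        best, best_s = list(p), score(p)
        improved = True
        while improved:
            improved = False
            for i in range(n - 1):
                for j in range(i + 1, n):
                    cand = list(best)
                    cand[i], cand[j] = cand[j], cand[i]
                    s = score(cand)
                    if s < best_s:
                        best, best_s, improved = cand, s, True
        return best, best_s

    pop = [local_search(rng.sample(range(n), n)) for _ in range(pop_size)]
    for _ in range(gens):
        (p1, _), (p2, _) = rng.sample(pop, 2)
        cut = rng.randrange(1, n)          # simplified order crossover
        child = p1[:cut] + [g for g in p2 if g not in p1[:cut]]
        child, s = local_search(child)
        worst = max(range(pop_size), key=lambda k: pop[k][1])
        if s < pop[worst][1]:              # steady-state replacement
            pop[worst] = (child, s)
    return min(pop, key=lambda t: t[1])
```

With a deterministic toy fitness the sketch converges quickly; in the intended setting, fitness(p) would be a call into the discrete event simulation.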
To reiterate, the purpose of this research is to present a novel search method
for solving the Cross Dock Door Assignment Problem. It emphasises the use of
discrete event simulation to simulate various dock door destination assignments,
taking into consideration the stochastic nature of the problem, in order
to obtain a more realistic objective function. Furthermore, the pursuit of an
optimal solution will combine a global search for promising solutions
within the whole feasible region with local searches that exploit them for
optimal solutions to the Cross Dock Door Assignment Problem.
1. Baumert, S., Smith, L.R.: Pure Random Search for Noisy Objective Functions.
Technical Report, The University of Michigan (2002).
2. Li, Y., Lim, A., Rodrigues, B.: Cross Docking – JIT scheduling with time win-
dows. Journal of the Operational Research Society 55, 1342–1351 (2004).
3. Merz, P., Freisleben, B.: A comparison of memetic algorithms, tabu search,
and ant colonies for the quadratic assignment problem. Proceedings of the 1999
Congress on Evolutionary Computation, 2063–2070 (1999).
4. Prudius, A.A., Andradóttir, S.: Two simulated annealing algorithms for noisy
objective functions. In: Kuhl, M.E., Steiger, N.M., Armstrong, F.B., Joines,
J.A. (eds.) Proceedings of the 2005 Winter Simulation Conference (2005).
5. Tsui, Y.L., Chang, C.-H.: A microcomputer based decision support tool for
assigning dock doors in freight yards. Computers & Industrial Engineering 19
(1–4), 309–312 (1990).
Heuristics for the Multi-Layer Design of MPLS/SDH/WDM Networks
Holger Höller and Stefan Voß
University of Hamburg, Institute of Information Systems, Von-Melle-Park 5, 20146
Hamburg, Germany
Current high-speed networks are mainly based on Synchronous Digital Hi-
erarchy (SDH) or its American equivalent Synchronous Optical Network
(SONET), Wavelength Division Multiplex (WDM) and Internet Protocol /
Multi Protocol Label Switching (IP/MPLS). The multi-layer network design
problem, as treated, e.g., in [3] is to decide which combination of equipment
and routing will be able to carry the given (protected) demands with the lowest
investment in new equipment. The models presented here rely on a specific set
of common network components: switches, cross-connects, multiplexers, port-
cards and so on. The different layers considered are the fiber-layer, WDM,
SDH and IP/MPLS. Some of these layers might also have different line speeds,
e.g., 2.5Gbit/s and 10Gbit/s. However, we do not consider native packet pro-
cessing but restrict ourselves to IP/MPLS traffic engineering scenarios with
dedicated (though unidirectional) label switched paths (LSPs).
To solve such multi-layer network design problems we have designed a
network optimizer that works according to the following general outline based
on [2]. Starting from a feasible solution, e.g., a shortest path routing, demands
are consecutively rerouted until no further reduction in the overall investment
in network infrastructure can be achieved.
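The outline above can be sketched as follows. This is our own minimal illustration under strong assumptions (a single hypothetical module type with fixed capacity and price, undirected edges, no protection), not the actual optimizer:

```python
import heapq, math

def shortest_path(adj, src, dst):
    """Plain Dijkstra over an adjacency dict {u: {v: length}}."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, length in adj[u].items():
            nd = d + length
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

def reroute_until_stable(adj, demands, capacity, price):
    """Start from shortest-path routing, then reroute single demands as
    long as the total invest (line modules * price) decreases.
    demands: {name: (src, dst, volume)}; edges are undirected."""
    routes = {k: shortest_path(adj, s, t) for k, (s, t, v) in demands.items()}

    def edge_loads(skip=None):
        load = {}
        for k, (s, t, vol) in demands.items():
            if k == skip:
                continue
            for u, w in zip(routes[k], routes[k][1:]):
                e = frozenset((u, w))
                load[e] = load.get(e, 0) + vol
        return load

    def invest(load):
        return sum(math.ceil(l / capacity) for l in load.values() if l > 0) * price

    improved = True
    while improved:
        improved = False
        for k, (s, t, vol) in demands.items():
            residual = edge_loads(skip=k)   # network without demand k

            def marginal(u, w):             # extra modules if vol is added on (u, w)
                have = residual.get(frozenset((u, w)), 0)
                return (math.ceil((have + vol) / capacity)
                        - math.ceil(have / capacity)) * price + 1e-6

            marg = {u: {w: marginal(u, w) for w in adj[u]} for u in adj}

            def cost_with(route):
                load = dict(residual)
                for u, w in zip(route, route[1:]):
                    e = frozenset((u, w))
                    load[e] = load.get(e, 0) + vol
                return invest(load)

            candidate = shortest_path(marg, s, t)
            if cost_with(candidate) < cost_with(routes[k]):
                routes[k], improved = candidate, True
    return routes, invest(edge_loads())
```

Rerouting a demand over partially filled modules is free at the margin, which is exactly what drives the consolidation behaviour such an improvement loop exhibits.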
Over the last years, ideas from several metaheuristic concepts have contributed
to the current state of our network optimizer, and more are envisaged for the
future. These range from simple random multi-start, through the Greedy
Randomized Adaptive Search Procedure (GRASP) and aspects similar to Variable
Neighborhood Search (VNS), up to the Pilot Method. Some of these concepts have
become an integral part, while others are modules that can be used optionally.
The random components serve primarily as a means for diversification,
while the GRASP ideas are used to intensify the search in promising parts
of the solution space. Similar to VNS, the neighborhoods used during the search
can be changed. We do not limit these changes to the size and position of the
neighborhood; we also change its inner structure. This might be a change in
granularity, e.g., from bundle rerouting to single demand rerouting, or some
more fundamental change. The Pilot Method might also be used to evaluate which
neighborhood will be the new choice.
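The GRASP-style intensification mentioned above rests on a greedy randomized construction; a generic sketch (our own illustration, independent of the network model):

```python
import random

def grasp_construct(candidates, cost, alpha, rng):
    """Greedy randomized construction: repeatedly pick uniformly from the
    restricted candidate list (RCL) of elements whose greedy cost lies
    within alpha of the range between the best and worst candidate."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: cost(solution, c) for c in remaining}
        lo, hi = min(costs.values()), max(costs.values())
        rcl = [c for c in remaining if costs[c] <= lo + alpha * (hi - lo)]
        pick = rng.choice(rcl)
        solution.append(pick)
        remaining.remove(pick)
    return solution
```

With alpha = 0 this degenerates to a pure greedy construction; larger alpha values widen the candidate list and so diversify the restarts that the subsequent local search intensifies.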
While we have incorporated ideas from many well-known metaheuristics,
we do not always use the respective concepts exactly in their original sense.
Instead, we try to combine and modify ideas in maybe new ways inspired
by the specific needs of our problem and our past experience in the field of
network planning. References to the underlying original metaheuristics can be
found, e.g., in the following publications. A general introduction to GRASP
is given in [4] and a bibliography can be found in [5]. Details with regard to
VNS can be found, e.g., in [1], while the Pilot Method is described in [6].
A mixed integer programming formulation solved by CPLEX serves as a
benchmark for the quality of the heuristics. However, due to the complexity of
the problem, it is only feasible for small to medium-sized problem instances,
depending strongly on the details of the equipment modeling and the freedom of
choice for the layers at intermediate nodes.
Keywords: Network design, SDH, WDM, GRASP, VNS, pilot method
1. Hansen, P., Mladenović, N.: Variable neighborhood search. In: Glover, F.,
Kochenberger, G.A. (eds.) Handbook of Metaheuristics, Kluwer, Boston, 145–184
(2003).
2. Höller, H., Voß, S.: A heuristic approach for combined equipment-planning and
routing in multi-layer SDH/WDM networks. European Journal of Operational
Research 171 (3), 787–796 (2006).
3. Melián, B., Laguna, M., Moreno-Pérez, J.A.: Capacity expansion of fiber optic
networks with WDM systems: Problem formulation and comparative analysis.
Computers & Operations Research 31 (3), 461–472 (2004).
4. Pitsoulis, L.S., Resende, M.G.C.: Greedy randomized adaptive search
procedures (2001).
5. Festa, P., Resende, M.G.C.: An updated bibliography of GRASP (2003),
accessed 31.05.2006.
6. Voß, S., Fink, A., Duin, C.: Looking ahead with the pilot method. Annals of
Operations Research 136, 285–302 (2005).