ISBN-13 (Hard Copy): 9783869556086
Apart from the introduction, this book consists of the chapters
- DA Stochastic Dynamic Programming with Random Disturbances,
- The Problem of Stochastic Dynamic Distance Optimal Partitioning, and
- Partitions-Requirements-Matrices (PRMs).
DA ("decision after") stochastic dynamic programming with random disturbances
is characterized by the fact that the random disturbances are observed
before the decision is made at each stage. In the past, problems with this
characteristic have received only moderate attention.
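The distinction can be sketched in standard dynamic-programming notation (the symbols used here are generic, not taken from the book): with state x, decision a, disturbance ξ, stage cost c, and transition function f, the classical "decision before" recursion takes the expectation over ξ outside the minimization, whereas in the DA setting the disturbance is already known when the decision is made, so the minimization moves inside the expectation:

```latex
% Decision before observing the disturbance (classical DB form):
V_n(x) \;=\; \min_{a}\; \mathbb{E}_{\xi}\bigl[\, c(x,a,\xi) + V_{n+1}\bigl(f(x,a,\xi)\bigr) \,\bigr]

% Decision after observing the disturbance (DA form):
V_n(x) \;=\; \mathbb{E}_{\xi}\Bigl[\, \min_{a}\; \bigl( c(x,a,\xi) + V_{n+1}\bigl(f(x,a,\xi)\bigr) \bigr) \Bigr]
```

Since the expectation of a minimum never exceeds the minimum of the expectation, the DA value is at least as good as the DB value for the same data; the extra information cannot hurt.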
In Chapter 2, specific properties of DA stochastic dynamic programming
problems are worked out, both for the theoretical characterization of such
problems and for more efficient solution strategies.
The (DA) Stochastic Dynamic Distance Optimal Partitioning problem
(SDDP problem) is an extremely complex operations research problem. It
shows several connections with other problems of operations research and
informatics, such as stochastic dynamic transportation and facility location
problems, metric task systems, and the more specific k-server problems.
The partitions of integers that serve as states of SDDP problems require an
enormous amount of storage space in the corresponding computer programs.
Investigating the inherent characteristic structures of SDDP problems is
therefore also important as a basis for heuristics.
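The storage issue can be illustrated with a short Python sketch (not code from the book): enumerating the integer partitions that serve as states shows how quickly the partition count p(n) grows, and with it the size of any matrix indexed by pairs of such states.

```python
def partitions(n, max_part=None):
    """Yield all partitions of n as non-increasing tuples, e.g. (3, 1, 1)."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    # Choose the largest part k, then partition the remainder with parts <= k.
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# p(n) grows rapidly (p(5) = 7, p(10) = 42, p(20) = 627), so transition
# matrices indexed by partition states quickly become very large.
for n in (5, 10, 20):
    print(n, sum(1 for _ in partitions(n)))
```

A transition-probability matrix over these states has p(n)² entries, which is why compact representations and structural results matter for SDDP problems.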
Partitions-requirements-matrices (PRMs, Chapter 4) are matrices of transition
probabilities of SDDP problems formulated as Markov decision processes.
PRMs "in the strict meaning" include the optimal decisions of certain
reduced SDDP problems, as is shown (in many cases) toward the end of the book.
PRMs (in the strict meaning) themselves represent interesting (almost
self-evident) combinatorial structures that are not otherwise found in the
literature.
In order to follow the investigations in this book, previous knowledge of
stochastic dynamic programming and Markov decision processes is useful but
not absolutely necessary, since the models concerned are developed from
scratch.