Handbook of Markov Decision Processes

Authors: Unknown
ISBN: 9780792374596
Published: 09 / 01 / 2020
Binding: Hardcover
Pages: 565
Dimensions: 155 x 235mm

The theory of Markov Decision Processes - also known by several other names, including sequential stochastic optimization, discrete-time stochastic control, and stochastic dynamic programming - studies the sequential optimization of discrete-time stochastic systems. Fundamentally, it is a methodology for analyzing a discrete-time stochastic system whose transition mechanism can be controlled over time. Each control policy defines a stochastic process and the values of the objective functions associated with that process; the goal is to select a "good" control policy. In real life, the decisions that humans and computers make at all levels usually have two types of impact: (i) they cost or save time, money, or other resources, or they bring revenues; and (ii) they affect the future by influencing the system's dynamics. In many situations, the decision with the largest immediate profit may not be good in view of future events. Markov Decision Processes (MDPs) model this paradigm and provide results on the structure and existence of good policies and on methods for their calculation.
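To make the tradeoff between immediate and future rewards concrete, the sketch below solves a toy two-state MDP by value iteration. It is only an illustration of the paradigm described above, not material from the handbook; the states, actions, rewards, transition probabilities, and discount factor are all made up for this example.

# A minimal value-iteration sketch for a hypothetical two-state MDP (illustrative only;
# every name and number below is an assumption made up for this example).
GAMMA = 0.9  # discount factor weighting future rewards against immediate ones

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "start": {
        "greedy":  [(1.0, "start", 5.0)],   # largest immediate reward, but stays in "start"
        "patient": [(1.0, "good",  0.0)],   # no immediate reward, but reaches the better state
    },
    "good": {
        "stay": [(1.0, "good", 10.0)],      # the better state keeps paying more
    },
}

def value_iteration(transitions, gamma, tol=1e-8):
    """Return optimal state values and the corresponding optimal policy."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for state, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * values[nxt]) for p, nxt, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[state]))
            values[state] = best
        if delta < tol:
            break
    policy = {
        state: max(
            actions,
            key=lambda a: sum(p * (r + gamma * values[nxt]) for p, nxt, r in actions[a]),
        )
        for state, actions in transitions.items()
    }
    return values, policy

values, policy = value_iteration(transitions, GAMMA)
print(values)  # "good" is worth about 100, "start" about 90
print(policy)  # optimal action in "start" is "patient", despite its zero immediate reward

Here the action with the largest immediate reward ("greedy") is suboptimal: forgoing the immediate payoff to reach the better state yields a higher discounted total, which is exactly the structure MDP theory is designed to expose.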
MDPs are attractive to many researchers because they are important from both the practical and the intellectual points of view. MDPs provide tools for the solution of important real-life problems; in particular, many business and engineering applications use MDP models. The analysis of various problems arising in MDPs leads to a large variety of interesting mathematical and computational questions. Accordingly, the "Handbook of Markov Decision Processes" is split into three parts: Part I deals with models with finite state and action spaces, Part II deals with infinite-state problems, and Part III examines specific applications. Individual chapters are written by leading experts on the subject.
