Regular price £82.99 GBP, Sale price £67.59 GBP
Free UK Shipping

Freshly Printed - allow 8 days lead time

Partially Observed Markov Decision Processes
From Filtering to Controlled Sensing

This book covers formulation, algorithms, and structural results of partially observed Markov decision processes, linking theory to real-world applications in controlled sensing.

Vikram Krishnamurthy (Author)

9781107134607, Cambridge University Press

Hardback, published 21 March 2016

488 pages, 47 b/w illustrations, 5 tables
25.4 x 18 x 2.5 cm, 1.1 kg

Covering formulation, algorithms, and structural results, and linking theory to real-world applications in controlled sensing (including social learning, adaptive radars and sequential detection), this book focuses on the conceptual foundations of partially observed Markov decision processes (POMDPs). It emphasizes structural results in stochastic dynamic programming, enabling graduate students and researchers in engineering, operations research, and economics to understand the underlying unifying themes without getting weighed down by mathematical technicalities. Bringing together research from across the literature, the book provides an introduction to nonlinear filtering followed by a systematic development of stochastic dynamic programming, lattice programming and reinforcement learning for POMDPs. Questions addressed in the book include: when does a POMDP have a threshold optimal policy? When are myopic policies optimal? How do local and global decision makers interact in adaptive decision making in multi-agent social learning where there is herding and data incest? And how can sophisticated radars and sensors adapt their sensing in real time?
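For readers new to the belief-state viewpoint that runs through the book, the short Python sketch below illustrates the basic HMM filter update that converts a POMDP into a fully observed problem on the belief space: predict with the transition matrix, weight by the observation likelihoods, and normalise. This is a minimal illustration only, not code from the book; the function name and the two-state model matrices are made-up assumptions.

import numpy as np

def belief_update(belief, P, B, y):
    # Hypothetical HMM filter step: predict with transition matrix P,
    # correct with observation likelihoods B[:, y], then normalise.
    predicted = P.T @ belief            # prior over the next state
    unnormalised = B[:, y] * predicted  # weight by Pr(observation y | state)
    return unnormalised / unnormalised.sum()

# Illustrative two-state model (all numbers are invented for the example).
P = np.array([[0.9, 0.1],   # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # B[i, y] = Pr(observe y | state i)
              [0.4, 0.6]])
belief = np.array([0.5, 0.5])
for y in [0, 1, 1]:
    belief = belief_update(belief, P, B, y)
print(belief)               # posterior belief after three observations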

Preface
1. Introduction
Part I. Stochastic Models and Bayesian Filtering: 2. Stochastic state-space models
3. Optimal filtering
4. Algorithms for maximum likelihood parameter estimation
5. Multi-agent sensing: social learning and data incest
Part II. Partially Observed Markov Decision Processes: Models and Algorithms: 6. Fully observed Markov decision processes
7. Partially observed Markov decision processes (POMDPs)
8. POMDPs in controlled sensing and sensor scheduling
Part III. Partially Observed Markov Decision Processes: Structural Results: 9. Structural results for Markov decision processes
10. Structural results for optimal filters
11. Monotonicity of value function for POMDPs
12. Structural results for stopping time POMDPs
13. Stopping time POMDPs for quickest change detection
14. Myopic policy bounds for POMDPs and sensitivity to model parameters
Part IV. Stochastic Approximation and Reinforcement Learning: 15. Stochastic optimization and gradient estimation
16. Reinforcement learning
17. Stochastic approximation algorithms: examples
18. Summary of algorithms for solving POMDPs
Appendix A. Short primer on stochastic simulation
Appendix B. Continuous-time HMM filters
Appendix C. Markov processes
Appendix D. Some limit theorems
Bibliography
Index.

Subject Areas: Signal processing [UYS], Communications engineering / telecommunications [TJK], Electronics engineering [TJF], Applied mathematics [PBW], Probability & statistics [PBT]
