Regular price £49.99 GBP · Sale price £44.29 GBP
Free UK Shipping

Freshly printed - allow 3 days' lead time

Control Systems and Reinforcement Learning

A how-to guide and scientific tutorial covering the universe of reinforcement learning and control theory for online decision making.

Sean Meyn (Author)

9781316511961, Cambridge University Press

Hardback, published 9 June 2022

450 pages
26 x 18 x 2.6 cm, 1.04 kg

'Reinforcement learning, now the de facto workhorse powering most AI-based algorithms, has deep connections with optimal control and dynamic programming. Meyn explores these connections in a marvelous manner and uses them to develop fast, reliable iterative algorithms for solving RL problems. This excellent, timely book from a leading expert on stochastic optimal control and approximation theory is a must-read for all practitioners in this active research area.' Panagiotis Tsiotras, David and Andrew Lewis Chair and Professor, Guggenheim School of Aerospace Engineering, Georgia Institute of Technology

A high school student can create deep Q-learning code to control her robot, without any understanding of the meaning of 'deep' or 'Q', or why the code sometimes fails. This book is designed to explain the science behind reinforcement learning and optimal control in a way that is accessible to students with a background in calculus and matrix algebra. A unique focus is algorithm design to obtain the fastest possible speed of convergence for learning algorithms, along with insight into why reinforcement learning sometimes fails. Advanced stochastic process theory is avoided at the start by substituting random exploration with more intuitive deterministic probing for learning. Once these ideas are understood, it is not difficult to master techniques rooted in stochastic control. These topics are covered in the second part of the book, starting with Markov chain theory and ending with a fresh look at actor-critic methods for reinforcement learning.
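To make the Q-learning the description refers to concrete, here is a minimal tabular Q-learning sketch on a toy one-dimensional chain MDP. The environment, step size, exploration rate, and episode count are illustrative assumptions for this sketch only; they are not taken from the book, which develops the theory behind such recursions in far greater depth.

```python
import numpy as np

# Toy chain MDP: states 0..4; action 0 = move left, 1 = move right.
# Reaching the right end (the goal) yields reward 1 and ends the episode.
n_states, n_actions = 5, 2
GOAL = n_states - 1

def step(s, a):
    """Deterministic chain dynamics, clipped to [0, GOAL]."""
    s_next = max(0, min(GOAL, s + (1 if a == 1 else -1)))
    reward = 1.0 if s_next == GOAL else 0.0
    done = s_next == GOAL
    return s_next, reward, done

rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
gamma, alpha, eps = 0.9, 0.5, 0.2  # discount, step size, exploration rate

for _ in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy next-state value
        target = r + (0.0 if done else gamma * np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

# The learned greedy policy should move right from every non-goal state
print([int(np.argmax(Q[s])) for s in range(GOAL)])
```

The "deep" variant the description alludes to replaces the table `Q` with a neural network; the convergence questions that substitution raises are exactly the kind of issue the book's chapters on ODE methods and stochastic approximation address.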

1. Introduction
Part I. Fundamentals Without Noise: 2. Control crash course
3. Optimal control
4. ODE methods for algorithm design
5. Value function approximations
Part II. Reinforcement Learning and Stochastic Control: 6. Markov chains
7. Stochastic control
8. Stochastic approximation
9. Temporal difference methods
10. Setting the stage, return of the actors
A. Mathematical background
B. Markov decision processes
C. Partial observations and belief states
References
Glossary of Symbols and Acronyms
Index.

Subject Areas: Machine learning [UYQM], Algorithms & data structures [UMB], Stochastics [PBWL], Mathematical modelling [PBWH], Probability & statistics [PBT], Econometrics [KCH]
