Regular price £84.99 GBP, sale price £69.59 GBP
Free UK Shipping

Freshly printed - allow 4 days' lead time

Inference and Learning from Data: Volume 1
Foundations

Discover core topics in inference and learning with the first volume of this extraordinary three-volume set.

Ali H. Sayed (Author)

9781009218122, Cambridge University Press

Hardback, published 22 December 2022

1010 pages
25.5 x 18 x 4 cm, 1.79 kg

'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt

This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. This first volume, Foundations, introduces core topics in inference and learning, such as matrix theory, linear algebra, random variables, convex optimization and stochastic optimization, and prepares students for studying their practical application in later volumes. A consistent structure and pedagogy are employed throughout this volume to reinforce student understanding, with over 600 end-of-chapter problems (including solutions for instructors), 100 figures, 180 solved examples, datasets and downloadable MATLAB code. Supported by sister volumes Inference and Learning, and unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science and inference.

Contents
Preface
Notation
1. Matrix theory
2. Vector differentiation
3. Random variables
4. Gaussian distribution
5. Exponential distributions
6. Entropy and divergence
7. Random processes
8. Convex functions
9. Convex optimization
10. Lipschitz conditions
11. Proximal operator
12. Gradient descent method
13. Conjugate gradient method
14. Subgradient method
15. Proximal and mirror descent methods
16. Stochastic optimization
17. Adaptive gradient methods
18. Gradient noise
19. Convergence analysis I: Stochastic gradient algorithms
20. Convergence analysis II: Stochastic subgradient algorithms
21. Convergence analysis III: Stochastic proximal algorithms
22. Variance-reduced methods I: Uniform sampling
23. Variance-reduced methods II: Random reshuffling
24. Nonconvex optimization
25. Decentralized optimization I: Primal methods
26. Decentralized optimization II: Primal-dual methods
Author index
Subject index.

Subject Areas: Signal processing [UYS], Pattern recognition [UYQP], Machine learning [UYQM], Communications engineering / telecommunications [TJK], Information theory [GPF]
