Inference and Learning from Data
Discover core topics in inference and learning with this extraordinary three-volume set.
Ali H. Sayed (Author)
ISBN 9781009218108, Cambridge University Press
Multiple-component retail product, published 22 December 2022
3370 pages
25.5 × 18 × 12 cm, 5.42 kg
'The book series is timely and indispensable. It is a unique companion for graduate students and early-career researchers. The three volumes provide an extraordinary breadth and depth of techniques and tools, and encapsulate the experience and expertise of a world-class expert in the field. The pedagogically crafted text is written lucidly, yet never compromises rigor. Theoretical concepts are enhanced with illustrative figures, well-thought problems, intuitive examples, datasets, and MATLAB codes that reinforce readers' learning.' Abdelhak Zoubir, TU Darmstadt
This extraordinary three-volume work, written in an engaging and rigorous style by a world authority in the field, provides an accessible, comprehensive introduction to the full spectrum of mathematical and statistical techniques underpinning contemporary methods in data-driven learning and inference. The first volume, Foundations, establishes core topics in inference and learning, and prepares readers for studying their practical application. The second volume, Inference, introduces readers to cutting-edge techniques for inferring unknown variables and quantities. The final volume, Learning, provides a rigorous introduction to state-of-the-art learning methods. A consistent structure and pedagogy are employed throughout all three volumes to reinforce student understanding, with over 1280 end-of-chapter problems (including solutions for instructors), over 600 figures, over 470 solved examples, datasets, and downloadable MATLAB code. Unique in its scale and depth, this textbook sequence is ideal for early-career researchers and graduate students across many courses in signal processing, machine learning, statistical analysis, data science, and inference.
Volume I. Foundations: 1. Matrix theory
2. Vector differentiation
3. Random variables
4. Gaussian distribution
5. Exponential distributions
6. Entropy and divergence
7. Random processes
8. Convex functions
9. Convex optimization
10. Lipschitz conditions
11. Proximal operator
12. Gradient descent method
13. Conjugate gradient method
14. Subgradient method
15. Proximal and mirror descent methods
16. Stochastic optimization
17. Adaptive gradient methods
18. Gradient noise
19. Convergence analysis I: stochastic gradient algorithms
20. Convergence analysis II: stochastic subgradient algorithms
21. Convergence analysis III: stochastic proximal algorithms
22. Variance-reduced methods I: uniform sampling
23. Variance-reduced methods II: random reshuffling
24. Nonconvex optimization
25. Decentralized optimization I: primal methods
26. Decentralized optimization II: primal-dual methods
Author index
Subject index
Volume II. Inference: 27. Mean-square-error inference
28. Bayesian inference
29. Linear regression
30. Kalman filter
31. Maximum likelihood
32. Expectation maximization
33. Predictive modeling
34. Expectation propagation
35. Particle filters
36. Variational inference
37. Latent Dirichlet allocation
38. Hidden Markov models
39. Decoding HMMs
40. Independent component analysis
41. Bayesian networks
42. Inference over graphs
43. Undirected graphs
44. Markov decision processes
45. Value and policy iterations
46. Temporal difference learning
47. Q-learning
48. Value function approximation
49. Policy gradient methods
Author index
Subject index
Volume III. Learning: 50. Least-squares problems
51. Regularization
52. Nearest-neighbor rule
53. Self-organizing maps
54. Decision trees
55. Naive Bayes classifier
56. Linear discriminant analysis
57. Principal component analysis
58. Dictionary learning
59. Logistic regression
60. Perceptron
61. Support vector machines
62. Bagging and boosting
63. Kernel methods
64. Generalization theory
65. Feedforward neural networks
66. Deep belief networks
67. Convolutional networks
68. Generative networks
69. Recurrent networks
70. Explainable learning
71. Adversarial attacks
72. Meta learning
Author index
Subject index.
Subject Areas: Signal processing [UYS], Pattern recognition [UYQP], Machine learning [UYQM], Communications engineering / telecommunications [TJK], Information theory [GPF]