Adversarial Machine Learning
This study allows readers to get to grips with the conceptual tools and practical techniques for building robust machine learning in the face of adversaries.
Anthony D. Joseph (Author), Blaine Nelson (Author), Benjamin I. P. Rubinstein (Author), J. D. Tygar (Author)
ISBN 9781107043466, Cambridge University Press
Hardback, published 21 February 2019
338 pages, 37 b/w illustrations, 8 tables
25.4 x 17.8 x 1.9 cm, 0.84 kg
'This is a timely book. Right time and right book, written with an authoritative but inclusive style. Machine learning is becoming ubiquitous. But for people to trust it, they first need to understand how reliable it is.' Fabio Roli, University of Cagliari, Italy
Written by leading researchers, this complete introduction brings together the theory and tools needed for building robust machine learning systems in adversarial environments. Discover how machine learning systems can adapt when an adversary actively poisons data to manipulate statistical inference, learn the latest practical techniques for investigating system security and performing robust data analysis, and gain insight into new approaches for designing effective countermeasures against the latest wave of cyber-attacks. Privacy-preserving mechanisms and the near-optimal evasion of classifiers are discussed in detail, and in-depth case studies on email spam and network security highlight successful attacks on traditional machine learning algorithms. Providing a thorough overview of the current state of the art and of possible future directions, this groundbreaking work is essential reading for researchers, practitioners and students in computer security and machine learning, and for anyone wanting to understand the next stage of the cybersecurity arms race.
Part I. Overview of Adversarial Machine Learning: 1. Introduction
2. Background and notation
3. A framework for secure learning
Part II. Causative Attacks on Machine Learning: 4. Attacking a hypersphere learner
5. Availability attack case study: SpamBayes
6. Integrity attack case study: PCA detector
Part III. Exploratory Attacks on Machine Learning: 7. Privacy-preserving mechanisms for SVM learning
8. Near-optimal evasion of classifiers
Part IV. Future Directions in Adversarial Machine Learning: 9. Adversarial machine learning challenges.
Subject Areas: Signal processing [UYS], Machine learning [UYQM], Artificial intelligence [UYQ], Computer security [UR], Information theory [GPF]