A comprehensive and self-contained introduction to Gaussian processes, which provide a principled, practical, probabilistic approach to learning in kernel machines.
Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics. The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes.
Carl Edward Rasmussen is a Lecturer at the Department of Engineering, University of Cambridge, and Adjunct Research Scientist at the Max Planck Institute for Biological Cybernetics, Tübingen.
I found this book rather theoretical and inaccessible; it is written in a style where the equations are largely expected to speak for themselves without assistance from words. Do not read it unless you are comfortable reading pages of equations with no useful expository text and deriving value from them.
It was a meh book. Gaussian processes essentially come down to understanding the conditional and marginal densities of a multivariate normal distribution. I felt it was badly written.
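The reviewer's summary is essentially right: GP regression is just conditioning a joint Gaussian. A minimal sketch of that idea (my own illustration, assuming a squared-exponential kernel and toy data, not code from the book):

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    # Squared-exponential covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Toy training data: noisy samples of sin(x).
rng = np.random.default_rng(0)
X = np.linspace(0, 5, 8)
y = np.sin(X) + 0.05 * rng.standard_normal(8)
Xs = np.linspace(0, 5, 50)   # test inputs
noise = 0.05 ** 2

# Blocks of the joint Gaussian over (training, test) function values.
K = rbf_kernel(X, X) + noise * np.eye(len(X))
Ks = rbf_kernel(Xs, X)
Kss = rbf_kernel(Xs, Xs)

# Condition the joint Gaussian on the observed y:
# mean = K(X*,X) [K(X,X) + sigma^2 I]^{-1} y
mean = Ks @ np.linalg.solve(K, y)
# cov = K(X*,X*) - K(X*,X) [K(X,X) + sigma^2 I]^{-1} K(X,X*)
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)

print("max |posterior mean - sin(x)|:", np.max(np.abs(mean - np.sin(Xs))))
```

The entire "learning" step is the standard formula for the conditional of a partitioned multivariate normal, which is the sense in which the reviewer's remark holds.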
A must-read for anyone interested in Gaussian processes. The first chapter could perhaps be written in a more accessible way for beginners. The book is a bit outdated, however, and does not reflect the recent research progress in this important field of machine learning.
Still the best reference on GPs out there today, even after 14 years. I like how RW shove the boring calculations to the back of the book. Overall, it is quite accessible to an ML practitioner with a probability background.
I consider this book to be the bridge between classical statistics and modern machine learning. However, it is quite mathematical, and I am not a fan of abrupt formulas presented without even a glimpse of a proof; the book is intuition-oriented instead, but it sometimes fails to explain its more complex formulas.
The book is such a lifesaver for people who know basic probability but are unfamiliar with deeper concepts such as statistical machine learning.
The book guides you through many concepts, not just technically but also historically: why and how the Gaussian process method arises naturally in machine learning. The mathematical derivations are clearly presented, though several less important steps are omitted.
I learnt a lot from reading this particular book and highly recommend it to any CS student who wishes to dive into the theoretical aspects of Machine Learning. This book is but the beginning!
There is not much to be said about this book other than that it is the definitive, obvious reference on Gaussian processes. Clear, well-written, and concise. One can get pretty far from the introductory material alone, but there are satisfying dives into grimy theoretical details and some extensions as well.