
Interpretable Machine Learning

This book is about making machine learning models and their decisions interpretable.

After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. Later chapters focus on general model-agnostic methods for interpreting black box models, like feature importance and accumulated local effects, and on explaining individual predictions with Shapley values and LIME.

All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project.

Unknown Binding

Published December 3, 2017


About the author

Christoph Molnar

10 books · 25 followers

Ratings & Reviews



Community Reviews

5 stars: 39 (39%)
4 stars: 41 (41%)
3 stars: 14 (14%)
2 stars: 3 (3%)
1 star: 1 (1%)
Displaying 1 - 22 of 22 reviews
A. · May 17, 2019
Great book, well written and entertaining. An interesting field that is just getting started.
molly · November 8, 2022
There is such a major gap in statistical writing, in my opinion. As someone who literally stumbled into statistics via required classes during my master's degree, gradually became fascinated by the topic, and now even teach it, I think we too often start with the boring methodological minutiae. WHY do we need statistics? What exactly is it doing under the hood, aside from spitting out p-values that tell me whether I can publish my findings? (Calm down, this is a joke...!)

What I love about this book is that it starts with the big picture instead of diving immediately into the nitty-gritty of the methods (although all of that is there, too). I don't think I've encountered this anywhere else in stats/ML writing, but he starts with some anecdotes about situations where interpretability is a key element of machine learning. These anecdotes come in very handy when I debate my computer-science-trained brother and other people outside academia about interpretability, so already I'm a fan. (This is another topic, but I really think scientists need to work on this type of applied communication. It is so much more useful than yet another dry academic paper.)

As someone who, again, uses stats all the time but doesn't have a stats background (which I would wager is true of the vast majority of those actually using statistics), the other major pro is how accessibly the methods are written up. Arcane academic language is entirely avoided, and there is minimal discussion of anything related to deep mathiness like multiplying matrices, etc. (my personal least favorite aspect of statistics, sorry). Christoph also wrote the iml package and keeps the code for the book on GitHub, and I was able to very easily use both to quickly create impressive ICE and PDP plots for a publication.

Highly recommend for anyone dipping their toes into the constantly expanding world of machine learning. Also recommend following Christoph on Twitter for thought provoking conversations on the future of ML methods and beyond.
Petr · December 24, 2018
This book is a worthy endeavour to map interpretability in machine learning. I read it while it was still a work in progress, and one can notice that some parts still need a lot of fine-tuning (if I have some spare time, I will try to help by submitting my suggestions). Even as it stands now, I would consider it a slightly wordy but extensive insight into interpretable machine learning. Its writing style is as if a friendly fellow academic or developer had created a thorough guide to the topic for you. It does not shy away from technical terms, referencing some deep but not central ideas just by name, while also offering personal suggestions, opinions and small jokes. I hope the book will grow and fill the missing spot on the tech-literature landscape, although I would suggest thinking about breaking it into focused chapters or even separate books for each topic, plus a general overview or introduction.
Isaac Lambert · March 12, 2021
A good framework for thinking through models; I'll continue to reference it in my future work. Written in a casual, friendly style, it enables as much digging as the reader requires.

tl;dr: interpretability good! Black box bad. But with a black box, you can start to think about how to proceed and investigate wtf is going on.
Olatomiwa Bifarin · January 25, 2021
Perhaps it is fitting to start with a confession: I have not read many machine learning texts as well written as Christoph's book. It is technical and, at the same time, not technical (Schrödinger's book, say :) ). I am a biochemist who works in a field that uses morbidly old IML methods, and reading the book felt like taking in large quantities of very cold water after a hectic soccer game on a mercilessly hot summer day. It feels like I have just read ~30 recent papers in IML, papers that I wouldn't have had the time to read. The book starts with a friendly introduction to the field, followed by intrinsically interpretable models, model-agnostic methods, neural network interpretation, and some prophecy on the future of IML.
November 21, 2022
The book provides good explanations of PDP/ICE/ALE plots, LIME, Shapley values, and other methods for interpreting black-box models.

What I have learned:
• Drawing the data distribution alongside a Partial Dependence Plot helps identify regions where the model is less reliable
• H-statistic and Variable Interaction Networks (VIN) can detect feature interaction
• Feature importance on training and test data differ, and both can be useful. On the test set it tells us how useful a feature is for prediction, while on the training set it tells us which features the model relies on. The latter can be used for model diagnosis when the model overfits and performs badly on test data.
• Checking instance influence directly helps identify the instances that should be checked for errors.
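The feature-importance point above can be made concrete with permutation importance: shuffle one feature column and measure how much the model's error grows. Below is a minimal numpy-only sketch; the toy dataset and the least-squares "model" are invented for illustration and are not from the book's code. Computing the same quantity on training versus test data answers the two different questions described in the bullet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on feature 0 only; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

# A fitted "model": ordinary least-squares linear regression.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y):
    return float(np.mean((X @ coef - y) ** 2))

def permutation_importance(X, y, feature, n_repeats=10):
    """Average increase in MSE when one feature column is shuffled."""
    base = mse(X, y)
    increases = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        increases.append(mse(Xp, y) - base)
    return float(np.mean(increases))

imp0 = permutation_importance(X, y, feature=0)
imp1 = permutation_importance(X, y, feature=1)
# Feature 0 carries the signal, so shuffling it hurts far more than
# shuffling the noise feature.
```

In practice you would pass held-out data to `permutation_importance` to measure predictive usefulness, or the training data to see what the model actually leans on.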

Limitations of the book:
• Some terms, such as "unbiased", are used without definitions.
• Some formulas are not well annotated.

Since this book is under open publication, I hope the author keeps updating it, especially as new packages are published.
May 22, 2023
I started reading this book because my supervisor at my previous internship recommended it to me. In all honesty, I was not expecting to enjoy it as much as I did!

I am relatively new to machine learning, and my work had a lot of components based on ML. This book guided me through the subject so well as an undergraduate student at the beginning of her AI/ML journey! It broke concepts down and explained them in a way that made them easy to understand while also piquing my curiosity. It was easy to follow along and very well written overall.

I am so glad that I came across this book and had the chance to read it! I am looking forward to reading other books by the author!
Danny D. Leybzon · April 26, 2020
Well-researched and thought-through, this book provides a deep yet accessible explanation of the state of the art of machine learning interpretability. I was doing research on interpretability for a presentation that I'm planning, expecting to have to compile information from lots of different primary sources. Instead, I found that Christoph had done a lot of my work for me, providing great explanations of both the methods that I'm familiar with and those that I wanted to learn more about. In this rapidly evolving field, "Interpretable Machine Learning" stands out as a great compilation of the current research.
Daniel Fernandez · January 1, 2024
IML techniques hold great promise for making breakthroughs in science and beyond by mining ever larger data sets to detect the rarest signals. At the same time, these IML discoveries should be interpreted with caution and require careful validation and uncertainty quantification. Solving this grand challenge is critical for promoting replicable and reliable (data) science as well as trustworthy machine learning; it also opens exciting research opportunities at the intersection of statistics and machine learning.
Gautam · September 13, 2019
The book serves as a good introduction. With its technical terms and advice, it helps readers get up to date in the field. As interpretable machine learning is expanding, it's challenging to keep track of it all. It's great to see other contributing authors adding newer topics. Best of all, the book has been made available online for free.
Mehdi · June 29, 2020
This book gives a good overview of approaches to explaining machine learning models. The methods are categorized clearly, always with the advantages and disadvantages of each. The book is also up to date with the most recent techniques and libraries! I hope the author will keep updating it in the future :)
September 20, 2018
Explainable or interpretable AI is a very new topic. Recently there has been a surge of interest in the field, and the book can serve as a good intro. Sadly, it does not mention gradient-based methods such as DeepLIFT, which, like LIME and Shapley values, seem to be very promising techniques.
Joe Born · May 30, 2021
If you are looking for a great summary of the state of interpretable AI, this is the book for you. It's like skimming dozens of research papers: very clear, well-articulated explanations, pros and cons, and an engaging style (especially given how technical the subject is).
ریچارد · August 28, 2022
This is a very good book on machine learning. However, the chapter on SHAP is not as thorough as it could be; in particular, it does not explain much about how the values are actually calculated in practice.

A complementary notebook with code examples would also be a wonderful addition.
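For readers with the same question, the exact calculation is small enough to sketch. The classic Shapley formula averages a feature's marginal contribution over all coalitions of the other features; the three-feature model, instance, and zero baseline below are all invented for illustration, and libraries such as shap approximate this sum rather than enumerate it for real models.

```python
from itertools import combinations
from math import factorial

# Hypothetical toy model over three features, invented for illustration.
def model(x1, x2, x3):
    return 2 * x1 + 3 * x2 + x1 * x3

features = ["x1", "x2", "x3"]
instance = {"x1": 1, "x2": 1, "x3": 1}
baseline = {"x1": 0, "x2": 0, "x3": 0}  # "absent" features fall back to 0

def value(coalition):
    """Model output when only the coalition's features take the instance's values."""
    args = {f: (instance[f] if f in coalition else baseline[f]) for f in features}
    return model(**args)

def shapley(feature):
    """Exact Shapley value: weighted marginal contributions over all coalitions."""
    n = len(features)
    others = [f for f in features if f != feature]
    total = 0.0
    for k in range(n):
        for S in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(S) | {feature}) - value(set(S)))
    return total

phi = {f: shapley(f) for f in features}
# Efficiency property: the values sum to model(instance) - model(baseline).
```

Note how the interaction term `x1 * x3` is split evenly between the two participating features, which is exactly the fairness property that motivates Shapley values.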
August 19, 2023
I read this book during my Ph.D. research. Molnar deftly bridges the gap between technicality and understandability, making the subject approachable for both experts and novices. A must-read for anyone wishing to delve into (or refresh their knowledge on) topics concerning interpretable ML.
Lucille Nguyen · September 9, 2025
A readable introduction to machine learning interpretability. It provides a good starter for many of the concepts and reviews many of the state-of-the-art papers (for the time). A good starting point for such a broad area, though definitely not the last word, and some areas could use more focus.
Ferhat Culfaz · August 2, 2019
A nice overview of techniques for interpreting black-box models and for seeing the local effects of specific drivers on the overall prediction.
Lisa Falco · January 11, 2022
Interesting topic, and well written. I especially enjoyed the part about human interpretability, which introduced some concepts that were new to me.
