Interpretable Machine Learning

Free eBook to learn about Interpretable Machine Learning

Interested in learning more about machine learning interpretability? Learn about the fundamentals, simple interpretable models, and techniques for understanding more complex black box models in this free eBook.

Interpretable machine learning is a concern for stakeholders across the field. The value of interpretable machine learning and AI has become apparent to more and more people in recent years, for a variety of reasons. It is no longer an esoteric concern, or merely a “nice to have,” for practitioners.

All of this raises the question: where does one go to find high-quality reading material on such an important topic? Enter Christoph Molnar's free eBook, Interpretable Machine Learning.

First and foremost, what is the book’s motivation? The following is taken straight from the book:

Machine learning has great potential for improving products, processes, and research. But computers usually do not explain their predictions, which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

In the book’s preface, Molnar continues:

I assumed that, given the popularity of machine learning and the importance of interpretability, there would be a plethora of books and tutorials on the topic. However, I found only related research papers and a few blog posts scattered around the internet, and nothing that provided a comprehensive overview. No books, no guides, no overview papers. This gap prompted me to begin writing this book.

The following is a synopsis of how the book progresses:

After exploring the concepts of interpretability, you’ll learn about simple, interpretable models such as decision trees, decision rules, and linear regression. Later chapters focus on model-agnostic methods for understanding black box models, such as feature importance and accumulated local effects, as well as Shapley values and LIME for explaining individual predictions.
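
To give a concrete sense of the model-agnostic methods the book covers, here is a minimal sketch (not taken from the book) of one such technique, permutation feature importance, applied to tabular data. It uses scikit-learn's permutation_importance; the dataset, model, and parameter choices are arbitrary and purely illustrative.

    # Illustrative sketch of permutation feature importance (not from the book).
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Tabular regression data and an arbitrary "black box" model.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature on held-out data and measure the drop in score;
    # a large drop means the model relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

    ranked = sorted(
        zip(X.columns, result.importances_mean, result.importances_std),
        key=lambda t: t[1],
        reverse=True,
    )
    for name, mean, std in ranked:
        print(f"{name:>6}: {mean:.3f} +/- {std:.3f}")

The same recipe works with any fitted model, since the method only needs predictions, which is exactly what makes it model-agnostic.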

Remember that Molnar is a data scientist and PhD candidate in interpretable machine learning, so you can be certain that this won’t be a series of out-of-date or fringe ideas on the topic. Instead, expect the distilled knowledge of someone who is obviously involved in and passionate about the topic, as well as someone who has done extensive research on it.

The following is the table of contents for the book:

  1. Introduction
  2. Interpretability
  3. Datasets
  4. Interpretable Models
  5. Model-Agnostic Methods
  6. Example-Based Explanations
  7. Neural Network Interpretation
  8. A Look Into The Crystal Ball

If you don’t have the time or interest to read the book from beginning to end, Molnar recommends:

You can jump back and forth between chapters, focusing on the techniques that interest you most. I only suggest that you begin with the introduction and the chapter on interpretability. Most chapters follow a similar structure, and each focuses on one interpretation method.

And what kind of data is being modelled in this book?

The book concentrates on machine learning models for tabular data (also known as relational or structured data) rather than computer vision and natural language processing. Machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable can read the book.

Molnar invites others to contribute to the book, which is constantly updated (“similar to how software is developed”). Suggestions for corrections and updates can be sent as pull requests on the book’s GitHub page.

For those who are interested, the book is also available in PDF and eBook format on leanpub.com, as well as a print edition on lulu.com.

