Course 2A: Machine Learning Interpretability in Driverless AI

Description:

In this hands-on training session, we guide you through a deep dive into Driverless AI’s Machine Learning Interpretability (MLI) functionality and features. We discuss the use of surrogate models such as LIME (Local Interpretable Model-agnostic Explanations) and surrogate decision trees, as well as LOCO (leave-one-covariate-out) in conjunction with random forests. We investigate the game-theoretic roots of Shapley values and explain how they can be used, on both original and engineered features, to understand the impact of variables on the final predictions. We demonstrate practical uses of these techniques, such as creating reason codes, understanding impact at the individual-row level through ICE (individual conditional expectation), disparate impact analysis, and sensitivity analysis. We conclude by showing how this functionality can be deployed to score new data in real time or at scale.
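As a taste of the surrogate-model idea discussed above, the short sketch below fits a shallow decision tree to the predictions of a more complex model and checks how faithfully it reproduces them. The dataset, models, and fidelity check are illustrative assumptions made for this sketch using open-source scikit-learn; they are not the surrogate pipeline Driverless AI builds internally.

# Illustrative sketch of a global surrogate decision tree (scikit-learn stand-in;
# Driverless AI constructs its own surrogates as part of MLI).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeRegressor, export_text
from sklearn.metrics import r2_score

# A "black-box" model standing in for a Driverless AI pipeline.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a shallow tree to the black-box model's predicted probabilities,
# not to the original labels: the surrogate explains the model, not the data.
p_hat = black_box.predict_proba(X)[:, 1]
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, p_hat)

# Fidelity: how closely the simple tree reproduces the black-box predictions.
print("Surrogate fidelity (R^2):", r2_score(p_hat, surrogate.predict(X)))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))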

Learning Outcomes:

By the end of this training, attendees will be able to:

  • Create feature importance and partial dependence plots at the global (dataset) level
  • Create feature importance and ICE plots for individual rows of data (see the sketch after this list)
  • Create LIME models and surrogate decision trees
  • Create Shapley values for original and engineered features
  • Download reason codes based on Shapley or LIME for all rows of data
  • Investigate disparate impact of the model across variables in the original data
  • Run sensitivity analysis on the model
  • Create an MLI scoring pipeline
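For the outcomes involving partial dependence and ICE, the sketch below shows the general shape of those computations using scikit-learn’s inspection utilities. Within the course these plots are generated directly in the Driverless AI MLI interface, so the model, data, and plotting call here are assumptions made purely for illustration.

# Illustrative PDP/ICE sketch with scikit-learn (not the Driverless AI MLI UI).
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay

# Stand-in model; in the course, the model under inspection is the
# Driverless AI experiment itself.
X, y = make_regression(n_samples=1000, n_features=5, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# kind="both" overlays the global partial dependence curve (average effect of
# feature 0) on the per-row ICE curves (one curve per individual row).
PartialDependenceDisplay.from_estimator(model, X, features=[0], kind="both")
plt.show()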

Prerequisites:

This course assumes:

  • Completion of “Introduction to Driverless AI” or equivalent experience.
  • Familiarity with statistical or machine learning modeling.

Are you interested in this course? If so, we will email you once this course is available.

