
Module 6 - Responsible AI with H2O-3 and Driverless AI
In this hands-on session we will lead you through practical applications of Machine Learning Interpretability methods to explain a model's predictions. We will discuss building surrogate models and using interpretability techniques such as K-LIME and variable/feature importance for a machine learning model. We will also demonstrate explanation techniques such as partial dependence plots and Shapley values, which provide exact per-feature contributions to a prediction. Additionally, we will examine fairness in a model through disparate impact analysis and use sensitivity analysis to debug the model and probe it for security and fairness.
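To make the local-surrogate idea concrete, below is a minimal K-LIME-style sketch in Python: cluster the inputs, then fit one linear model per cluster against the black-box model's predictions. This is illustrative only, not Driverless AI's actual K-LIME implementation; the model object, feature matrix X, cluster count, and Ridge regularizer are all assumptions.

from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def klime_style_surrogates(model, X, n_clusters=5, seed=42):
    # The black-box model's predictions are the surrogate's target.
    y_hat = model.predict_proba(X)[:, 1]
    # Partition the input space; each cluster gets its own linear model
    # whose coefficients serve as approximate local reason codes.
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(X)
    surrogates = {}
    for k in range(n_clusters):
        mask = km.labels_ == k
        surrogates[k] = Ridge(alpha=1.0).fit(X[mask], y_hat[mask])
    return km, surrogates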
Learning Outcomes
- Build an explainable surrogate model
- Apply & interpret the K-LIME method for an ML model
- Apply & interpret Variable/Feature Importance for an ML model
- Apply & interpret a Decision Tree Surrogate Model for an ML model
- Apply & interpret Partial Dependence & ICE Plots for an ML model
- Generate Shapley Values for an ML model
- Examine a model for bias using Disparate Impact Analysis
- Run Sensitivity/What-if Analysis for an ML model
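As a preview of the hands-on work, the sketch below exercises three of these outcomes in H2O-3 with a GBM: variable/feature importance, a partial dependence plot, and per-row Shapley contributions. The file name and target column are placeholders, not the course dataset.

import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()
df = h2o.import_file("credit_card.csv")          # placeholder path
y = "default"                                    # placeholder target column
df[y] = df[y].asfactor()                         # binary classification target
train, test = df.split_frame(ratios=[0.8], seed=1)
x = [c for c in df.columns if c != y]

gbm = H2OGradientBoostingEstimator(ntrees=50, seed=1)
gbm.train(x=x, y=y, training_frame=train)

print(gbm.varimp(use_pandas=True))               # variable/feature importance
gbm.partial_plot(test, cols=[x[0]], plot=True)   # partial dependence for one feature
shap = gbm.predict_contributions(test)           # per-row Shapley (TreeSHAP) values
print(shap.head())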
Session 1 (Slides & Replay): Introduction to Machine Learning Interpretability with Driverless AI
Click on View to access the replay and the slides
Slides and replay of our first ML Foundations Course session for Module 6
Session 1: Introduction to Machine Learning Interpretability with Driverless AI Hands-On Activity
Select the "Read" button to begin.
Select the "Read" button to begin.
Quiz 1: Introduction to Machine Learning Interpretability with Driverless AI
10 Questions | 3 attempts | 8/10 points to pass
Session 2 (Slides & Replay): Machine Learning Interpretability with H2O-3
Click on View to access the replay and the slides
Slides and replay of our second ML Foundations Course session for Module 6
Session 2: Machine Learning Interpretability with H2O-3 Assignment
Select the "Read" button to begin.
Select the "Read" button to begin.
Blocked on the Jupyter Notebook. I made some modifications to run it on Colab (on GCP) but am still blocked:
https://colab.research.google....
Running:
aml.explain(test);
I get this error:
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-6-29d11752b609> in <module>()
----> 1 aml.explain(test);

AttributeError: 'H2OAutoML' object has no attribute 'explain'

Stuck in the Tutorial 1C: Machine Learning Interpretability Tutorial question test; please review the wording of the questions: I checked EVERY single one and, after writing them down and checking them against the available documentation, I scored only:
You have 1 out of 2 allowed attempts remaining. Your previous attempt scored 18/25 and did not pass.
Please support... Thanks.
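A likely cause: the explain() method was only added to H2OAutoML in h2o-3 version 3.32.0, so an older h2o package preinstalled on the Colab runtime raises exactly this AttributeError. A minimal sketch of a fix, assuming an out-of-date package is the culprit (the version number and leader-model call come from the h2o-3 docs, not from this course's notebook):

# In a Colab cell, upgrade h2o and restart the runtime:
#   !pip install -U h2o
import h2o
print(h2o.__version__)    # should print 3.32.0 or later

# After re-running the AutoML training cells from the notebook:
aml.explain(test)         # explanations for the whole AutoML leaderboard
aml.leader.explain(test)  # or explain only the leader model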
Question typo:
8. Using the credit card example, in the Summary section for Sensitivity Analysis dashboard in Driverless AI, what does having a value below the CUTOFF metric mean?
Shouldn't it be "HAVE"?
Thank you for letting us know, Rino!
All,
The Hands-On activity and quiz for session 1 have been posted, and you can find them under the "Contents" tab.
Also, starting this week we will have only one study session, and it will take place on Saturday; there will no longer be a study session on Sunday.
Thanks