Tutorial 5A: Disparate Impact Analysis Tutorial

Not understanding the inner workings of an AI model can lead to problems, such as discovering only after deployment that it discriminates. AI practitioners must understand how their models make decisions, and Driverless AI provides tools to inspect a model's inner workings.

Objective
Select the "Read" button to begin.
Select the "Read" button to begin. Understand the objective of the tutorial and its relevance compared to other tutorials.
Prerequisites
Select the "Read" button to begin.
Select the "Read" button to begin. Understand the prerequisites for this tutorial.
Task 1: Launch Machine Learning Interpretability Experiment
Select the "Read" button to begin.
Select the "Read" button to begin. Launch a machine learning interpretability experiment and understand how to obtain a more interpretable model.
Task 2: Concepts
Select the "Read" button to begin.
Select the "Read" button to begin. Review the following concepts for our experiment: Fairness & Bias, Disparate Impact Analysis, and Sensitivity Analysis/What-If Analysis.
Task 3: Confusion Matrix
Select the "Read" button to begin.
Select the "Read" button to begin. Explore confusion matrices and how they are amply in Driverless AI disparate impact analysis page.
Task 4: Disparate Impact Analysis
Select the "Read" button to begin.
Select the "Read" button to begin. Review the model results and use the disparate impact analysis tool to check for bias.
Task 5: Sensitivity Analysis Part 1: Checking for Bias
Select the "Read" button to begin.
Select the "Read" button to begin. Start a new experiment with the same dataset as before and later use the sensitivity analysis tool to check for bias.
Task 6: Sensitivity Analysis Part 2: Checking for Bias
Select the "Read" button to begin.
Select the "Read" button to begin. Continue using the sensitivity analysis tool to check for bias in our Driverless AI model.
Next Steps
Select the "Read" button to begin.
Select the "Read" button to begin. Explore the next tutorial to use DAI to create and analyze a criminal risk scorer.
Quiz
25 Questions  |  2 attempts  |  20/25 points to pass
Badge
Badge available. After passing the quiz, you can get your badge by clicking "Badge Earned". Please check your email for instructions on how to view, manage, and share your new badge!
  • RS

    Task 5, point 5:

    "the DAI MODEL tab."

    Same mistake as below (DAI instead of DIA).

  • SP

    According to our Driverless AI interface, it is "DAI MODEL" and not "DIA MODEL".  

  • RS

    As below, in 

    Task 5: Sensitivity Analysis Part 1: Checking for Bias

    "

    • After, in the DAI Models tab you should click on the Sensitivity Analysis option

    "

  • SP

    According to our Driverless AI interface, it is "DAI MODEL" and not "DIA MODEL".  

  • RS

    Task 3; typo:

    After the model is interpreted, you will be taken to the "MLI: Regression and Classification Explanations" page. The DAI Model tab

    Should be DIA...

  • SP

    According to our Driverless AI interface, it is "DAI MODEL" and not "DIA MODEL".  

  • RS

    In

    3. Task 1: Launch Machine Learning Interpretability Experiment

    the following phrase can be misleading (point 10 - more comprehensible if a NOT is added ...):

    The consequence of creating group fairness metrics that appear to be reasonable is the illusion that individuals within that group may NOT be treated differently or unfairly. The local (individual) discrimination would likely not appear or be visible in that group metric.

  • SP

    Thank you for your feedback, and you are right; adding NOT can make the sentence clearer. The sentence has been rewritten according to your feedback. Have a nice day. 
