Debiasing AI Using Amazon SageMaker

English | MP4 | AVC 1280×720 | AAC 48 kHz 2ch | 1h 42m | 273 MB

Artificial intelligence (AI) can have deeply embedded bias. It’s the job of data scientists and developers to ensure their algorithms are fair, transparent, and explainable. This responsibility is critically important when building models that may determine policy—or shape the course of people’s lives. In this course, award-winning software engineer Kesha Williams explains how to debias AI with Amazon SageMaker. She shows how to use SageMaker to build a predictive-policing machine-learning model that integrates Amazon Rekognition and AWS DeepLens, so the resulting crime-fighting model can “see” what’s happening in a live scene. By following the development process, you can learn what goes into making a model that doesn’t suffer from cultural prejudices. Kesha also discusses how to remove bias from training data, test a model for fairness, and build trust in AI by making models that are explainable.
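
The fairness-testing step mentioned above can be illustrated with a simple disparate-impact check. This is only a sketch: the metric, the toy data, and the four-fifths (0.8) threshold are common illustrative conventions, not the course’s actual code.

```python
# Hypothetical fairness check: the disparate-impact ratio compares the
# positive-outcome rate of a protected group with that of a reference group.
# A ratio below ~0.8 (the "four-fifths rule") is a common red flag for bias.
# The data and group labels below are illustrative assumptions.

def disparate_impact(predictions, groups, protected, reference):
    """Ratio of positive-prediction rates: protected group vs. reference group."""
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(protected) / positive_rate(reference)

# Toy predictions (1 = flagged by the model) and group membership.
preds  = [1, 0, 1, 1, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

ratio = disparate_impact(preds, groups, protected="a", reference="b")
print(f"disparate impact: {ratio:.2f}")  # a value well below 0.8 would suggest bias
```

A ratio near 1.0 means both groups receive positive predictions at similar rates; production fairness audits typically check several such metrics, not just one.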

Topics include:

  • Reviewing the crime-fighting case study
  • Amazon SageMaker basics
  • Preparing the data
  • Training the model
  • Evaluating the model
  • Deploying a face-detection model to AWS DeepLens
  • Retrieving data for the model with Amazon Rekognition
  • Sending data points to a SageMaker hosted model
  • Retrieving predictions
  • Making your models explainable
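
The “sending data points” and “retrieving predictions” topics can be sketched as follows. SageMaker’s built-in algorithms typically accept CSV rows through the InvokeEndpoint API; the feature values, endpoint name, and response shape here are illustrative assumptions, not the course’s actual model.

```python
# Hypothetical sketch of sending data points to a SageMaker-hosted model
# and retrieving predictions. The payload helpers are pure Python; the
# actual network call (shown in a comment) would use boto3's
# sagemaker-runtime client. Feature values and response format are assumed.

import json

def to_csv_payload(data_points):
    """Serialize feature vectors as the text/csv body InvokeEndpoint expects."""
    return "\n".join(",".join(str(v) for v in row) for row in data_points)

def parse_predictions(response_body):
    """Parse the JSON prediction list a hosted model might return."""
    return [p["predicted_label"] for p in json.loads(response_body)["predictions"]]

# Two hypothetical data points (e.g. attributes extracted by Rekognition).
payload = to_csv_payload([[0.7, 1, 34], [0.2, 0, 51]])
# With boto3, this payload would be sent roughly like:
#   runtime = boto3.client("sagemaker-runtime")
#   runtime.invoke_endpoint(EndpointName="crime-model",
#                           ContentType="text/csv", Body=payload)
print(payload)

# Simulated endpoint response (shape assumed for illustration):
fake_body = '{"predictions": [{"predicted_label": 1}, {"predicted_label": 0}]}'
print(parse_predictions(fake_body))  # → [1, 0]
```

Keeping serialization and parsing in small helpers like these makes the endpoint call itself a one-liner and keeps the payload format easy to unit-test offline.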

Table of Contents

Introduction
1 Debiasing AI using Amazon SageMaker
2 What you should know

Crime-Fighting Case Study
3 Predictive policing
4 Overview
5 Architecture diagram
6 Tools, services, and costs
7 Terms and concepts
8 Demo of Amazon SageMaker

Building the Model via SageMaker
9 What is SageMaker
10 Machine learning process
11 Inspect and visualize data
12 Prepare the data
13 Train the model
14 Deploy the model

Deploying and Testing the Model via DeepLens
15 What is DeepLens
16 Deploy model to AWS DeepLens
17 Extend AWS DeepLens
18 Retrieve attributes via Amazon Rekognition
19 Invoke the crime model
20 Set up model alerts

Explaining the Model
21 What is explainable AI (XAI)
22 Trust and transparency issues
23 Making algorithms explainable

Conclusion
24 Next steps