You will critically analyze, implement, and discuss various technical approaches for improving human-AI interaction through a series of assignments hosted on our custom interactive platforms.
There will be a set of assignments announced as the course progresses.
Hands-on exercises, analysis, and reflection are a fun and effective way to learn.
We'll create an entry in KLMS for each assignment.
You'll lose 10% for each late day (for example, a submission two days late can receive at most 80%). Submissions will be accepted up to three days after the deadline; after that, you'll receive a 0 on that assignment.
Please answer the following questions after you complete the exploration and implementation on the platform above. Make sure to cite any external sources when you refer to examples, ideas, or quotes to support your arguments.
The following papers should be useful to get a sense of the background, methodology, and technical approaches we will be using.
You need to submit two things: (1) your implementation code and (2) answers to the discussion questions. You do not need to submit the code explicitly, as the server keeps track of your latest implementation. Answers to the discussion questions should be written up as a report in PDF. We highly encourage you to use material from your Jupyter Notebook, such as code, figures, and statistical results, to support your arguments in the discussion.
Explainable AI helps users understand and interpret predictions produced by models. The objective of this assignment is for you to try existing off-the-shelf explanation tools, think about their strengths and weaknesses, and design your own interactive user interface that provides user-centered explanations addressing those weaknesses.
You will work with methods for explaining model predictions in image classification tasks. Such explanations help users resolve questions about what's happening inside the model and why. As users explore these explanations, they may come up with additional questions about the model, which may require other kinds of explanations. You must use the KAIST VPN to access the interactive platform. The guide for setting up the VPN can be found here, and the account you need to turn it on can be found here. After setting up the VPN, you should be able to access the interactive platform.
In this assignment, you are asked to (1) explore Google’s What-If Tool, a platform that helps users understand the performance of models, (2) build and run a LIME-based algorithm that shows which parts of an image contribute to the prediction, for better interpretation of classification results, and (3) design a UI that further helps users interpret the results, especially when such an explanation is not enough. For each stage, you are asked to discuss what can be explained with these tools/methods and the limitations of such explanations. For (2), we will use our interactive platform, which provides an environment for implementing the algorithm and applying it to images; it lets you organize the explanation results easily, so you can focus on analyzing the limitations of the explanation algorithm without writing extra code just to experiment. A minimal sketch of the core LIME idea is given below.
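To make the idea concrete before you start implementing, here is a minimal from-scratch sketch of the LIME procedure for images. It is not the platform's API: `image` (an RGB array) and `predict_fn` (a function mapping a batch of images to class probabilities) are hypothetical placeholders, and the segmentation, proximity kernel, and surrogate model are all simplified relative to the original paper.

```python
# Minimal LIME-style sketch for images (assumes numpy, scikit-image, scikit-learn).
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_image(image, predict_fn, target_class, num_samples=500, num_segments=50):
    # 1. Split the image into superpixels: the interpretable components.
    segments = slic(image, n_segments=num_segments)
    segments = segments - segments.min()          # ensure labels start at 0
    n_superpixels = segments.max() + 1

    # 2. Randomly switch superpixels on/off to create perturbed copies.
    masks = np.random.randint(0, 2, size=(num_samples, n_superpixels))
    perturbed = []
    for mask in masks:
        img = image.copy()
        img[np.isin(segments, np.where(mask == 0)[0])] = 0   # black out "off" superpixels
        perturbed.append(img)

    # 3. Query the model for the class of interest on every perturbed image.
    probs = predict_fn(np.stack(perturbed))[:, target_class]

    # 4. Weight samples by closeness to the original image
    #    (here: fraction of superpixels kept, a crude proximity proxy).
    weights = np.exp(-((1 - masks.mean(axis=1)) ** 2) / 0.25)

    # 5. Fit a weighted linear surrogate; its coefficients score how much
    #    each superpixel pushes the prediction toward `target_class`.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, probs, sample_weight=weights)
    return segments, surrogate.coef_
```

Superpixels with the largest positive coefficients can then be highlighted on top of the image as the regions supporting the prediction, which is the kind of output you will analyze and critique in the discussion questions.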
The What-If Tool consists of three tabs: Datapoint editor, Performance & Fairness, and Features. Each tab presents a different aspect of the model and its results.
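The platform already hosts the What-If Tool for you, so no setup is required; purely for reference, here is a minimal sketch of how WIT is typically embedded in a Jupyter Notebook. `dataset_rows` (a list of feature dicts) and `predict_fn` (your model's prediction function) are hypothetical placeholders.

```python
# Reference-only sketch of embedding the What-If Tool in a notebook.
import tensorflow as tf
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

def to_tf_example(features):
    # Convert a dict of numeric features into a tf.Example proto.
    ex = tf.train.Example()
    for name, value in features.items():
        ex.features.feature[name].float_list.value.append(float(value))
    return ex

examples = [to_tf_example(row) for row in dataset_rows]   # dataset_rows: placeholder data
config_builder = (WitConfigBuilder(examples)
                  .set_custom_predict_fn(predict_fn))     # predict_fn: placeholder model
WitWidget(config_builder, height=720)  # renders the three tabs described above
```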
Datapoint editor tab
The following resources should be useful for getting a sense of the background, methodology, and technical approaches we will be using.
You only need to submit a .pdf file that answers the discussion questions. We highly encourage you to use resources such as code, figures, and statistical results to support your arguments in the discussion. Note that you do not need to explicitly submit your implementation, your description of the LIME algorithm's limitations, or your interactive UI prototype, as these are automatically stored on the server.