KAIST Spring 2021

CS492E: Human-AI Interaction

Humans and AI are interacting more closely than ever before, in all areas of our work, education, and life. As more intelligent machines enter our lives, their accuracy and performance are not the only factors that matter. As designers of such technology, we have to carefully consider the user experience of AI for it to be of practical value. In this course, we will explore various dimensions of human-AI interaction, including ethics, explainability, design processes involving AI, visualization, human-AI collaboration, recommender systems, and a few notable application areas.

A side goal of this course is to encourage all of us to bridge the gap between the two fields of HCI and AI. As a step toward this vision, we want to encourage students with HCI and AI backgrounds to mingle, interact, discuss, and collaborate through this course. We expect most students taking this course to have background knowledge in either HCI or AI through at least intro-level coursework. If you're unsure whether you meet this criterion, please contact the course staff immediately. Having a background in both is great, although not required.

This is a highly interactive class: You’ll be expected to actively participate in activities, projects, assignments, and discussions. There will be no lectures or exams. Major course activities include:

  • Reading Response: You'll read and discuss important papers and articles in the field. Each week, there will be 1-2 reading assignments, for which you'll write a short response.
  • Assignments: You'll design, implement, and analyze a few human-AI interaction scenarios.
  • In-class Activities: Each class will feature activities that will help you experience and practice the core concepts introduced in the course.

Course Staff

Instructors: Prof. Jean Young Song & Prof. Juho Kim
    Office Hours: by appointment

TA: Hyungyu Shin
    Office Hours: by appointment

Staff Mailing List: human-ai@kixlab.org

Time & Location

When: 2:30-3:45pm Tue/Thu
Where: Zoom live sessions (As active participation in in-class activities, discussions, and presentations is expected, attending live sessions is required.)

Links

Course Website: https://human-ai.kixlab.org/
Submission & Grading: KLMS
Discussion and Q&A: Campuswire
Reading Groups: Assignment Spreadsheet

Updates

Schedule

Week Date Instructor Topic Reading (response indicates a reading response is required for the material.) Due
1 3/2 Kim Introduction & Course Overview
1 3/4 Kim A Quick Tour of Human-AI Interaction (1) Licklider, J. C. R. "Man-computer symbiosis." IRE Transactions on Human Factors in Electronics 1 (1960): 4-11.
(2) Shyam Sankar. The Rise of Human Computer Cooperation. TED Talk Video, 2012 (12 mins).
2 3/9 Song Primer on AI (Part 1) (1) response Lubars, Brian, and Chenhao Tan. "Ask not what AI can do, but what AI should do: Towards a framework of task delegability." In Advances in Neural Information Processing Systems, pp. 57-67. 2019.
(2) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17). pp. 6000–6010. 2017.
RR by all
2 3/11 Song Primer on AI (Part 2) | Tutorial (1) response Xu, Anbang, Zhe Liu, Yufan Guo, Vibha Sinha, and Rama Akkiraju. "A new chatbot for customer service on social media." In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 3506-3510. 2017.
(2) Nityesh Agarwal. "Getting started with reading Deep Learning Research papers: The Why and the How", a blog post at Towards Data Science (2018).
RR by A
3 3/16 Kim Primer on HCI (Part 1) | Tutorial (1) response Amershi, Saleema, et al. "Guidelines for human-AI interaction." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 2019.
(2) Google PAIR. People + AI Guidebook. Published May 8, 2019.
RR by B
Assignment #1 announced
3 3/18 Kim Primer on HCI (Part 2) | Tutorial (1) response Shneiderman, B., "Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy." International Journal of Human-Computer Interaction 36, 6, 495-504. 2020.
(2) Henriette Cramer and Juho Kim. "Confronting the tensions where UX meets AI." interactions 26.6 (2019): 69-71.
RR by A
4 3/23 Kim Ethics and FAccT of AI (Part 1) (1) response Davidson, Thomas, Debasmita Bhattacharya, and Ingmar Weber. "Racial bias in hate speech and abusive language detection datasets." arXiv preprint arXiv:1905.12516 (2019).
(2) Bolukbasi, Tolga, et al. "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings." Advances in Neural Information Processing Systems. 2016.
RR by B
4 3/25 Kim Ethics and FAccT of AI (Part 2) (1) response Timnit Gebru. "Computer vision in practice: who is benefiting and who is being harmed?" (video, 51 mins) Slides
(2) Kate Crawford and Trevor Paglen, “Excavating AI: The Politics of Training Sets for Machine Learning" (September 19, 2019)
RR by A
5 3/30 Song Historical Perspectives on Human-AI Interaction (1) response Horvitz, Eric. "Principles of mixed-initiative user interfaces." In Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pp. 159-166. 1999.
(2) Ben Shneiderman and Pattie Maes. "Direct Manipulation vs. Interface Agents". Interactions 1997.
RR by B
Assignment #1 DUE
5 4/1 Song Metrics to Measure Human-AI Performance (1) response Gagan Bansal, Besmira Nushi, Ece Kamar, et al. "Beyond accuracy: The role of mental models in human-AI team performance." In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 2019.
(2) Matthew Kay, Shwetak N. Patel, and Julie A. Kientz. "How good is 85%? A survey tool to connect classifier evaluation to acceptability of accuracy." In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. 2015.
RR by A
6 4/6 Song Interpretable and Explainable AI (Part 1) (1) response Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. ""Why should I trust you?" Explaining the predictions of any classifier." In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 2016.
(2) Zachary C. Lipton. "The mythos of model interpretability." 2018.
RR by B
6 4/8 Song Interpretable and Explainable AI (Part 2) (1) response Daniel S. Weld, and Gagan Bansal. "The challenge of crafting intelligible intelligence." Communications of the ACM. 2019.
(2) Alison Smith-Renner, Ron Fan, Melissa Birchfield, et al. "No Explainability without Accountability: An Empirical Study of Explanations and Feedback in Interactive ML." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020.
RR by A
7 4/13 Kim AI and Crowds (1) response Jennifer Wortman Vaughan. 2018. Making Better Use of the Crowd: How Crowdsourcing Can Advance Machine Learning Research. Journal of Machine Learning Research 18, 193: 1–46.
*** Instructor note: Sections 3 and 5 may be skimmed.
(2) Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, et al. The Future of Crowd Work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work (CSCW '13), 1301–1318. 2013.
RR by B
7 4/15 Both Project proposal feedback Assignment #2 announced
8 4/20 No class (Midterms week)
8 4/22 No class (Midterms week)
9 4/27 Kim AI Design Process (1) response Mitchell, Margaret, et al. "Model cards for model reporting." Proceedings of the conference on fairness, accountability, and transparency. 2019.
(2) Sculley, David, et al. "Hidden technical debt in machine learning systems." Advances in neural information processing systems. 2015.
RR by A
9 4/29 Kim Recommender Systems (1) response Olteanu, Alexandra, Fernando Diaz, and Gabriella Kazai. "When Are Search Completion Suggestions Problematic?." Proceedings of the ACM on Human-Computer Interaction 4.CSCW2 (2020): 1-25.
(2) Gomez-Uribe, Carlos A., and Neil Hunt. "The Netflix recommender system: Algorithms, business value, and innovation." ACM Transactions on Management Information Systems (TMIS) 6.4 (2015): 1-19.
RR by B
10 5/4 Song InfoViz and Data Visualization (1) response Kay, Matthew, Tara Kola, Jessica R. Hullman, and Sean A. Munson. "When (ish) is my bus? user-centered visualizations of uncertainty in everyday, mobile predictive systems." In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016.
(2) Amershi, Saleema, Max Chickering, Steven M. Drucker, Bongshin Lee, Patrice Simard, and Jina Suh. "Modeltracker: Redesigning performance analysis tools for machine learning." In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015.
RR by A
10 5/6 Song InfoViz and Data Visualization (1) response Cai, Carrie J., Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg et al. "Human-centered tools for coping with imperfect algorithms during medical decision-making." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019.
(2) Cheng, Hao-Fei, Ruotong Wang, Zheng Zhang, Fiona O'Connell, Terrance Gray, F. Maxwell Harper, and Haiyi Zhu. "Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 2019.
RR by B
Assignment #2 DUE
11 5/11 Both Project Pitches
11 5/13 No class (CHI week)
12 5/18 Kim Human-AI Collaboration (1) response Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, and Eric Horvitz. 2019. "Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff." Proceedings of the AAAI Conference on Artificial Intelligence 33, 01: 2429–2437.
(2) Hoffman, Guy, and Cynthia Breazeal. "Collaboration in human-robot teams." AIAA 1st Intelligent Systems Technical Conference. 2004.
RR by A
12 5/20 Kim Human-AI Collaboration (1) response Zhou, Sharon, Melissa Valentine, and Michael S. Bernstein. "In search of the dream team: temporally constrained multi-armed bandits for identifying effective team structures." Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 2018.
(2) Nguyen, An T., et al. "Believe it or not: Designing a human-AI partnership for mixed-initiative fact-checking." Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology. 2018.
RR by B
13 5/25 Both Project Feedback Meetings
13 5/27 Invited Talk: Gagan Bansal (Univ. of Washington)
14 6/1 Song Application Areas (Part 1) (1) response Subramonyam, Hariharan, Colleen Seifert, and Eytan Adar. "ProtoAI: Model-Informed Prototyping for AI-Powered Interfaces." In 26th International Conference on Intelligent User Interfaces, pp. 48-58. 2021.
(2) Amershi, Saleema, Andrew Begel, Christian Bird, Robert DeLine, Harald Gall, Ece Kamar, Nachiappan Nagappan, Besmira Nushi, and Thomas Zimmermann. "Software engineering for machine learning: A case study." In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), pp. 291-300. IEEE, 2019.
RR by A
14 6/3 Song Application Areas (Part 2) (1) response Hara, Kotaro, Jin Sun, Robert Moore, David Jacobs, and Jon Froehlich. "Tohme: detecting curb ramps in google street view using crowdsourcing, computer vision, and machine learning." In Proceedings of the 27th annual ACM symposium on User interface software and technology, pp. 189-204. 2014.
(2) Stangl, Abigale, Meredith Ringel Morris, and Danna Gurari. ""Person, Shoes, Tree. Is the Person Naked?" What People with Vision Impairments Want in Image Descriptions." In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pp. 1-13. 2020.
RR by B
15 6/8 Invited Talk: Hari Subramonyam (Univ. of Michigan)
15 6/10 Both Final Presentations & Course Wrap-up
16 6/15 No class (Finals week)
16 6/17 No class (Finals week)

Topics (tentative)

Major topics include: Ethics and FAccT in Machine Learning, Metrics to Measure HAI Performance, AI Design Process, Interpretable and Explainable AI, InfoViz and Data Visualization, Recommender Systems, and Human-AI Collaboration

Grading

  • Design project: 30%
  • Reading responses: 30%
  • Assignments: 30%
  • Class participation: 10%
Late policy: No late submissions are allowed for the reading responses. For assignments and project milestones, you'll lose 10% for each late day. Submissions will be accepted until three days after the deadline.
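For concreteness, here is a minimal sketch of how the late penalty works out (an illustration only; it assumes the 10% deduction applies to the earned score, partial days count as full late days, and nothing is accepted after three days):

    # Minimal sketch of the late policy above (assumptions: the 10% deduction
    # is taken off the earned score per late day, partial days round up, and
    # submissions more than three days late receive no credit).
    import math

    def late_adjusted_score(score: float, days_late: float) -> float:
        """Return the score after applying the late penalty."""
        if days_late <= 0:
            return score                  # on time: no penalty
        days = math.ceil(days_late)       # assumption: partial days count as full days
        if days > 3:
            return 0.0                    # not accepted after three days
        return score * (1 - 0.10 * days)  # lose 10% per late day

    # Example: a 90-point assignment submitted two days late -> 72.0 points
    print(late_adjusted_score(90, 2))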

Prerequisites

You need to have at least intro-level coursework in either HCI (e.g., CS374, CS473) or AI (e.g., CS470, CS376). If you're unsure whether you qualify, please contact the course staff.