
🎉 I'm on the Academic and (research-focused) Industry Job Market this year! Please reach out if you think there is a fit.

RESEARCH

My research exists in the translational space between HCI and AI, building on my backgrounds in philosophy and electrical engineering. I work with Dr. Mark Riedl at the Entertainment Intelligence & Human-centered AI Lab at Georgia Tech and am an affiliate at the Data & Society Research Institute. Pre-PhD, I had past lives in management consulting and building startups. Beyond academia, I regularly work with Fortune 500 companies as an expert consultant on AI-related projects and serve on the boards of startups.


My research agenda has expanded the epistemic canvas of Explainable AI (XAI) and Responsible AI (RAI). My personal "why" is to create a world where the next generation has opportunities that are unimaginable for my generation. This informs my research "why": to create a world where anyone, regardless of their background, can interact with AI systems in an explainable, accountable, and dignified manner.

[Diagram: explainability in AI spans both algorithmic transparency and social transparency]

In XAI, my work pioneered the field of Human-centered XAI (HCXAI), from coining the term to nurturing a vibrant research community. HCXAI expands the XAI discourse from an algorithm-centered perspective to a human-centered one. Specifically, HCXAI emphasizes that while "opening" the black box is important, who opens the box matters just as much, if not more. It elucidates how factors outside the AI’s black box can boost its explainability.

[Diagram: Human-centered XAI draws on four disciplines]

In RAI, my work demonstrated how algorithmic harms can persist long after an algorithm is destroyed. This allows us to "see" harms that would otherwise remain invisible, radically transforming how we might conduct Algorithmic Impact Assessments (AIAs). The concept is being used by the United Nations to draft policies around algorithmic reparations.

Throughout my PhD, I have done research at Microsoft Research (FATE), IBM Research, and Google, resulting in multiple top-tier publications. My work is generously supported by the NSF, DARPA, A2I, Microsoft, IBM, and the World Bank.

IMPACT. My work has been recognized inside and outside academia. My papers have received awards and honors from prominent venues like CHI, HCII, and ICCC. My work has won prestigious awards and fellowships like the Prime Minister’s Young Innovator Award (top 5 nationwide) and the Foley Scholarship (top honor at Georgia Tech Computing). Beyond numerous podcasts, it has been prominently covered in major media outlets like MIT Tech Review, AAAS Science, ACM Communications, Vice, and VentureBeat. It has informed RAI reports and policies at influential institutes like the Mozilla Foundation. Seven Fortune 100 companies have adopted the technology from my work, improving trust calibration and AI explainability for 3 million users.

SERVICE. I serve as the lead Associate Editor of the inaugural ACM journal issue on HCXAI. I spearheaded the creation of the flagship HCXAI workshop at CHI and have served as its lead organizer since 2020; over 300 participants from 18+ countries have joined us so far. I have served as an Area Chair at ACM DIS, CHI, and FAccT and on the Program Committees of IUI and AIES. Beyond academia, I am honored to serve on AI task forces for multiple Asian governments. Given my tenure in management consulting and startups, I also serve on the boards of startups. My service outside academia creates synergistic opportunities for research with real systems and users at scale.

PUBLICATIONS

My work has found academic homes in top-tier HCI and AI venues like ACM CHI, CSCW, IUI, FAccT, and AAAI AIES. Over the years, I've had the privilege of collaborating with 53 researchers spanning 30 institutions (academia, industry, non-profit) across 6 countries in North America, Asia, and Europe.

  1. Seamful XAI: Operationalizing Seamful Design in Explainable AI.
    Upol Ehsan, Q. Vera Liao, Samir Passi, Mark O. Riedl, and Hal Daumé III. (2024, in press). In Proceedings of the ACM on Human-Computer Interaction (CSCW’24). [PDF]

  2. The Sociotechnical Gap in Explainable AI (XAI): Charting the Social and Technical Dimensions.
    Upol Ehsan, Koustuv Saha, Munmun De Choudhury, and Mark O. Riedl. (2023). In Proceedings of the ACM on Human-Computer Interaction (CSCW’23). [PDF]

  3. Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems.
    Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, and Mark O. Riedl. (2023). In Proceedings of the 14th International Conference on Computational Creativity (ICCC’23). [PDF] Best Paper 🏆

  4. Human-Centered Explainable AI (HCXAI): Coming of Age.
    Upol Ehsan, Philipp Wintersberger, Elizabeth A. Watkins, Carina Manger, Gonzalo Ramos, Justin D. Weisz, Hal Daumé III, Andreas Riener, and Mark O. Riedl. (2023). In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI ’23). [PDF]

  5. The Algorithmic Imprint.
    Upol Ehsan, Ranjit Singh, Jacob Metcalf, and Mark O. Riedl. (2022). In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT’22). [PDF] [Video]

  6. Human-Centered Explainable AI (HCXAI): Beyond Opening the Black-box of AI.
    Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Elizabeth Anne Watkins, Carina Manger, Hal Daumé III, Andreas Riener, and Mark O. Riedl. (2022). In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI ’22). [PDF]

  7. Social Construction of XAI: Do We Need One Definition to Rule Them All?
    Upol Ehsan and Mark O. Riedl. (2021). In Proceedings of the Human-centered AI Workshop at the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS’21). [PDF]

  8. The Algorithmic Imprint: Critically Examining the Algorithmic Afterlife and Impact Assessments.
    Upol Ehsan. (2021). In The AI Parables in/from the Global South Workshop, Data & Society Research Institute.

  9. The Who in Explainable AI: How AI Background Shapes Perceptions of AI Explanations.
    Upol Ehsan, Samir Passi, Q. Vera Liao, Larry Chan, I.-Hsiang Lee, Michael Muller, and Mark O. Riedl. (2021). [PDF]

  10. Explainability Pitfalls: Beyond Dark Patterns in Explainable AI.
    Upol Ehsan and Mark O. Riedl. (2021). In Proceedings of the Human-centered AI Workshop at the Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS’21). [PDF]

  11. Expanding Explainability: Towards Social Transparency in AI systems.
    Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, and Justin D. Weisz. (2021). In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’21). [PDF] [Video] Best Paper Honorable Mention 🏆

  12. Operationalizing Human-Centered Perspectives in Explainable AI.
    Upol Ehsan, Philipp Wintersberger, Q. Vera Liao, Martina Mara, Marc Streit, Sandra Wachter, Andreas Riener, and Mark O. Riedl. (2021). In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI ’21). [PDF]

  13. LEx: A Framework for Operationalising Layers of Machine Learning Explanations.
    Ronal Singh, Upol Ehsan, Marc Cheong, Mark O. Riedl, and Tim Miller. (2021). In Proceedings of Human-centered Explainable AI Workshop at CHI’21. [PDF]

  14. Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach.
    Upol Ehsan and Mark O. Riedl. (2020). In Proceedings of HCI International 2020. [PDF] Best Paper 🏆

  15. Reflective Human-centered Explainable AI: Social Transparency, Trust, and Value Tensions in Radiology.
    Upol Ehsan, Judy Gichoya, and Mark O. Riedl. (2020). [PDF]

  16. Again, Together: Socially Reliving Virtual Reality Experiences When Separated.
    Cheng Wang, Mose Sakashita, Jingjin Li, Upol Ehsan, and Andrea Won. (2020). In Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’20).

  17. On Design and Evaluation of Human-centered Explainable AI systems.
    Upol Ehsan and Mark O. Riedl. (2019). In Proceedings of Emerging Perspectives in Human-Centered Machine Learning: A Workshop at the ACM CHI Conference on Human Factors in Computing Systems (CHI), Glasgow, UK. [PDF]

  18. Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions.
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark Riedl. (2019). In Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 263-274). [PDF] [Video]

  19. ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together.
    Cheng Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, and Andrea Won. (2019). In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). [PDF] Best Poster Honorable Mention 🏆

  20. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations.
    Upol Ehsan*, Brent Harrison*, Larry Chan, and Mark O. Riedl. (2018). In Proceedings of the AAAI Conference on Artificial Intelligence, Ethics, and Society. * equal contribution [PDF]

  21. Learning to Generate Natural Language Rationales for Game Playing Agents.
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark O. Riedl. (2018). In AAAI Workshop on Experimental AI in Games (EXAG), Edmonton, Canada. [PDF]

  22. Confronting Autism in Urban Bangladesh: Unpacking Infrastructural and Cultural Challenges.
    Upol Ehsan, Nazmus Sakib, Md Munirul Haque, Tanjir Soron, Devansh Saxena, Sheikh Ahamed, Amy Schwichtenberg, Golam Rabbani, Shaheen Akter, Faruq Alam, Azima Begum, and Syed Ishtiaque Ahmed. (2018). In EAI Endorsed Transactions on Pervasive Health and Technology, 4(14). doi:10.4108/eai.13-7-2018.155082 [PDF]

  23. Guiding Reinforcement Learning Exploration Using Natural Language.
    Brent Harrison, Upol Ehsan, and Mark O. Riedl. (2018). In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS’18). [PDF]

  24. Design Guidelines for Parent-School Technologies to Support the Ecology of Parental Engagement.
    Upol Ehsan*, Marisol Wong-Villacres*, Amber Solomon, Mercedes Pozo Buil, and Betsy DiSalvo. (2017). In Proceedings of the 16th International Conference on Interaction Design and Children (IDC’17). * equal contribution [PDF]

  25. Unpack that Tweet: A Traceable and Interpretable Cognitive Modeling System.
    Upol Ehsan, Christopher Purdy, Christina Kelley, Lalith Polepeddi, and Nicholas Davis. (2017). In Workshop Proceedings at the 2017 International Conference on Computational Creativity. [PDF]

SELECTED AWARDS

🎖️ | 2023 | Best Paper Award, ICCC 2023 (top 1%)

🎖️ | 2023 | Sigma Tau Phi, Oldest & Most Prestigious Philosophy Honor Society

🎖️ | 2022 | Runner Up, Pat Goldberg Award, IBM Research (most impactful/best paper)

🎖️ | 2021 | GVU Foley Scholar, Georgia Tech (top honor for PhD students)

🎖️ | 2021 | Best Paper Award, Honorable Mention, CHI 2021 (top 5%)

🎖️ | 2020-2024 | Prime Minister’s Young Innovator Award ($125,000 | 5 awardees nationwide)

🎖️ | 2020 | Best Paper Award, HCII 2020 (top 1%)

🎖️ | 2020 | CRIDC Showcase Runner Up, Georgia Tech

🎖️ | 2012 | Phi Beta Kappa, Washington & Lee University (top 1% of top 10% US colleges)

🎖️ | 2012 | Sigma Pi Sigma, National [US] Honor Society in Physics, Washington & Lee University

🎖️ | 2012 | Johnson Grant, Outstanding Leadership, Washington & Lee University ($10,000)

🎖️ | 2012 | Student Independent Research, Washington & Lee University ($9,000)

🎖️ | 2009 | Phi Eta Sigma (oldest US honor society for freshmen), Washington & Lee University (top 10%)

🎖️ | 2009-2013 | W&L Scholar Grant ($307,000), Washington & Lee University (top 1% worldwide)

🎖️ | 2008 | World’s Highest Score in GCE A-Level Physics (3 awardees globally)

KEYNOTES & TALKS

2023

How do we balance user needs and business needs in XAI?, Expert panel at the Workshop on Measuring the Quality of Explanations in Recommender Systems at RecSys’23 (Sep 11, 2023)

How to Leverage Your Unfair Advantage: Activating a Classical Liberal Arts Education in the Modern World, Keynote for Sigma Tau Phi, oldest Philosophy Honor Society (US) (Mar 14, 2023)

The Algorithmic Imprint: How Algorithmic Harms Persist even after the Algorithm’s Death, Invited Public Lecture at Washington & Lee University (Lexington, VA, Mar 13, 2023)

Understanding the relationships between Responsible AI & Performance, Invited Expert Talk at AI Risk and Vulnerability Alliance (Feb 17, 2023)

2022

Human-Centered Explainable AI: Why XAI cannot afford a technocentric view, Invited Talk at NEC Labs, Germany, (Nov 9, 2022)

Human-Centered Explainable AI, Invited Talk at the Tea with Interesting People Series, Harvard University (Oct 18, 2022)

The Algorithmic Imprint: How Algorithmic Harms Persist in the Algorithm’s Afterlife, Invited Talk at the University of Oxford’s Responsible Technology Institute, (Oct 13, 2022)

The Algorithmic Imprint, ACM FAccT (Fairness, Accountability, and Transparency) Conference 2022, (Jun 22, 2022)

Explainable to All: Co-designing AI Experiences, Panel Keynote at the Trust, Transparency and Control Labs (TTC) Summit, Meta, (May 4-5, 2022)

Human-centered Explainable AI, Foley Scholars Talk at Georgia Tech, (Atlanta, GA, USA Apr 7, 2022)

Who Opens the AI’s Black Box?, Invited talk at Meta (Facebook), (Mar 29, 2022)

2021

The Human Touch: The role of explainability in AI and medical decision making, Expert Panelist at the AI + Health Conference hosted by Stanford Medical School, (Dec 8, 2021) 

Human-centered Explainable AI: Charting the Landscape, Invited Lecture at University at Buffalo, (Nov 30, 2021 [virtual])

Ethics, Equity, and Explainability of AI systems, Keynote at the World Usability Day 2021 Forum, Puget Sound, (Nov 11, 2021 [virtual])

Making sense of it all: scoping XAI as a field and how to find your way around it, Invited Talk at EleutherAI, (Jul 7, 2021 [virtual])

Expanding Explainability: Towards Social Transparency in AI Systems, ACM CHI 2021 (May 14, 2021 [virtual])

Of Bicycles, Explainable AI, and Humans: Towards a Human-centered Paradigm, NYU Abu Dhabi Human-centered Data Science Lecture Series (Abu Dhabi, UAE, Apr 4, 2021 [virtual])

2020

Human-centered Explainable AI: highlighting and acting on our blind spots, IBM Research Explainability Guild (Yorktown Heights, NY, USA Aug 5, 2020 [virtual])

Expanding Explainability: Towards Better Human-AI assemblages, IBM Research Global Research Showcase (Yorktown Heights, NY, USA Aug 4, 2020 [virtual]), see poster

Human-centered Explainable AI: towards a Reflective Sociotechnical Approach, HCI International 2020 conference (Copenhagen, Denmark July 23, 2020 [virtual])


Towards Human-centered Explainable AI, Computing Research Association URMD Conference Showcase (Austin, TX, USA Mar 7, 2020)

Human Perceptions of Explainable AI systems for non-experts, Georgia Tech CRIDC Research Showcase (Atlanta, GA, USA Jan 27, 2020)

How should we start a design school in Bangladesh?, Brac University Design Panel (Dhaka, Bangladesh Jan 5, 2020)

2019

How to make an AI agent generate plausible rationales in plain English, Google PIRC (Sunnyvale, USA July 31, 2019), see poster

Rationale Generation – a human-centered paradigm in Explainable AI, Google Cloud (Seattle, USA June 28, 2019)

On Design and Evaluation of Human-centered Explainable AI systems, Emerging Perspectives in Human-Centered Machine Learning: CHI 2019 (Glasgow, UK May 4, 2019)

Automated Rationale Generation: A Technique for XAI and its Effects on Human Perceptions, Intelligent User Interfaces (IUI) (Los Angeles, USA Mar 16, 2019)

2018

Human Perceptions of Rationale Generating Agents, AAAI Workshop on Experimental AI in Games (Edmonton, Canada Nov 14, 2018)

AI Rationalization, AAAI Conference on AI, Ethics, and Society (New Orleans Feb 2, 2018), see poster

2017

AI Rationalization: Explainability for Everyone, Georgia Tech (GVU Research showcase Oct 18, 2017), see poster

Tech and STEM Entrepreneurs, Invited Expert Panelist at the Entrepreneurship Summit, Washington & Lee University (Sep 30, 2017)

The human side of AI: Explainability, Department of Philosophy, Washington & Lee University (Invited Talk Sep 29, 2017)

Philosophy’s Role in Machine Ethics, Washington & Lee University (Invited Talk Sep 28, 2017)

Unpack that Tweet: A Traceable and Interpretable Cognitive Modeling System, International Conference on Computational Creativity (ICCC) presentation (Jun 20, 2017)

Towards encultured, explainable, and ethical human-centered Technology, Intel Research (Invited Talk Apr 17, 2017)

Human-centered AI: Understanding People and Designing Technology, HP Labs (Invited Talk Apr 5, 2017)

2016

SWAN: System for Wearable Audio Navigation in VR, Georgia Tech (GVU Research showcase Oct 26, 2016), see poster, see presentation

Connecting Ideas: Towards Sustainable Development in Emerging Ecosystems using Jugaad Innovation Principles, Innovation Lab, Dhaka University (Invited Talk July 29, 2016)

SELECTED PRESS & MEDIA

📰 | 2023 | MIT Tech Review: This driverless car company is using chatbots to make its vehicles smarter

🎙️ | 2023 | All Tech is Human: Charting the Sociotechnical Gap in XAI

📰 | 2023 | Georgia Tech: Algorithmic Aftermath: Researcher Explores the Damage They Can Leave Behind

📰 | 2023 | Builtin: Explainable AI, Explained

📰 | 2023 | Algolia: What is explainable AI, and why is transparency so important for machine-learning solutions?

📰 | 2022 | VentureBeat: Researchers are working toward more transparent language models

📰 | 2022 | The Gradient: Human-Centered Explainable AI and Social Transparency

📰 | 2022 | UXPA Magazine: Explaining the Unexplainable: Explainable AI (XAI) for UX

📰 | 2021 | VentureBeat: Even experts are too quick to rely on AI explanations

📰 | 2021 | Synced Review: The ‘Who’ in Explainable AI: New Study Explores the Creator-Consumer Gap

📰 | 2021 | AIM: How Does Understanding Of AI Shape Perceptions Of XAI?

🎙️ | 2021 | Interaction Hour: Beyond algorithmic transparency: incorporating social factors in XAI

📰 | 2020 | MIT Tech Review: Why asking an AI to explain itself can make things worse

📰 | 2019 | ACM Communications: A Breakthrough in Explainable AI

📰 | 2019 | TechXplore: AI agent offers rationales using everyday language to explain its actions

📰 | 2019 | Vice: Scientists Created a ‘Frogger’-Playing AI That Explains Its Decisions

📰 | 2019 | ScienceDaily: AI agent offers rationales using everyday language to explain its actions

📰 | 2019 | Georgia Tech Press Release: Research Findings May Lead to More Explainable AI

📰 | 2019 | Innovation Toronto: New AI agents seem more relatable and trustworthy to humans

📰 | 2019 | AI in Healthcare: Novel AI agent can rationalize its actions

🎙️ | 2019 | Tech Unbound: AI Agent Plays Frogger & Convinces Spectators It Knows What It’s Doing

📰 | 2019 | The Institution of Engineering & Technology: AI agent talks through its decisions in simple English

📰 | 2017 | Quartz: This AI translates its internal monologue for humans to understand—and plays Frogger

📰 | 2017 | AAAS Science: How AI detectives are cracking open the black box of deep learning
