Designing Explainable AI

Vunet

Product Designer, June 2020 - 2021
My most notable work at Vunet was conceptualizing business insight cards through the lens of explainability for our AI predictions. While advocating for increased user transparency, I also designed the ML Model Management experience for users of varying expertise levels.

Team

Ashwin Ramachandran, Santhosh Vasudeva Mallan, Jithesh Kaveetil.
I collaborated with the Product Team, Machine Learning Team, Sales Team and an external design agency called Lollypop Design.

Contributions

Led an engagement with a UX design studio, created a companion mobile app for the AIOps product vuSmartMaps, and redesigned the web platform.

Why do we need 'Explainable' AI?

The Problem

AI is a 'Black Box Technology'

Our tool replaces 'eyeball monitoring' with AI-based monitoring. A change like this disrupts trust among employees: without explainability, they are likely to feel threatened by an AI system that could potentially replace their jobs. Our goal was to replace this fear with a greater awareness, among everyday end users, of how the AI works.

how might we

"For an AI driven analytics product, how might we design insights that establish trust & transparency between user and the system?"

My Approach

01 - define

Background Research

02 - research

User Interviews

Task Analysis

User Persona Document

03 - design

Insight Card Structure

ML Management

What kind of research was performed?

I conducted audits and critiques of business insight platforms and organized brainstorming sessions with cross-functional stakeholders such as ML Engineers, CXOs & the Product Team.

Typical Design Issues in AI Tech

1. Unexplainable algorithms

Typical outputs of AI tech involve probabilities and raw numbers. AI systems that translate these numerical insights into a paragraph of text are useful because users find textual information easier to comprehend (see the sketch after this list).

2. Context of the parameter being measured

In transaction analysis, one event can affect multiple other events. End users benefit from seeing the historical data and other correlated events that the model considered before making a decision.

3. Credibility of the prediction

AI tech is often put on a pedestal and assumed to always be accurate. These systems must give users an easy way to understand the credibility of each decision the system makes.
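
As a rough illustration of points 1 and 3, the TypeScript sketch below shows how a raw numerical output could be templated into plain-language text that carries its own credibility label. All names and thresholds here are my own assumptions for illustration, not the product's implementation.

```typescript
// Hypothetical shape of a raw model output (field names are assumptions).
interface Prediction {
  metric: string;         // e.g. "transaction volume"
  predictedValue: number; // the raw numerical prediction
  confidence: number;     // model confidence score in [0, 1]
}

// Template the numbers into a sentence that also surfaces credibility,
// so the user never sees a bare probability.
function describePrediction(p: Prediction): string {
  const credibility =
    p.confidence >= 0.9 ? "high" : p.confidence >= 0.7 ? "moderate" : "low";
  return (
    `Expected ${p.metric}: ${p.predictedValue.toLocaleString()} ` +
    `(${credibility} credibility, ${Math.round(p.confidence * 100)}% confidence).`
  );
}

// describePrediction({ metric: "transaction volume", predictedValue: 12400, confidence: 0.84 })
// returns "Expected transaction volume: 12,400 (moderate credibility, 84% confidence)."
```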

Research

User Persona Document

METHOD

Contextual Inquiries (3)

ANALYSIS

Task Analysis

I conducted contextual inquiries with three on-site users of our product. I also held calls with the ML Team and the Sales Team to understand the typical goals of ML Engineers & CXOs. I synthesized the data from the user interviews into a 'User Persona Document'. To build a complete understanding of each user's goals, touchpoints, and actions, I created a three-part description of each user type.

The three parts consisted of a User Persona, a Universe of Trigger Points, and a Task Map. Made for each user type, together they provide a holistic view of the end user on our platform.

Design

Insight Cards

During our brainstorming sessions, we used the research data to finalize the components an ML insight can be broken down into. I translated this into a design concept that presented these insights in a card format, and after multiple rounds of feedback and design iteration, we finalized the insight cards (a sketch of the resulting card structure follows the three card types below).

01 - predictive insights

The wave microinteraction was inspired by the graph forms typically used in prediction charts

02 - proactive insights

The radial microinteraction was inspired by the radar displays used in detection interfaces

03 - Feedback on insight

User feedback on the insight is fed back into the ML Model to facilitate better predictions
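
To make the structure concrete, here is a hypothetical TypeScript sketch of the components a finalized card carries; every field name below is an assumption for illustration, not the product's actual schema.

```typescript
type InsightKind = "predictive" | "proactive";

interface InsightCard {
  kind: InsightKind;          // drives the wave vs. radial microinteraction
  summary: string;            // plain-language explanation of the insight
  confidence: number;         // credibility of the prediction, in [0, 1]
  correlatedEvents: string[]; // related events the model considered
  feedback?: "helpful" | "not helpful"; // user rating fed back into the ML Model
}
```

The optional `feedback` field captures the loop described in point 03: the card is complete without it, and it is filled in only once the user has rated the insight.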

ML Management

Research showed that managing ML Models requires thorough knowledge of the technical metrics and parameters that measure the health of these systems. Beyond imposing a constraint, this contributes to the lack of trust and transparency between the system and the user: it excludes the user from checking the credibility of the decision-making process, something users once did when they relied on traditional eyeball monitoring. To build transparency, I created a design concept based on two levels of abstraction.

Level One

The first level of abstraction is represented in the main dashboard screen, where any user, irrespective of their technical expertise in Machine Learning, can understand the health of the models the system is using. This user can interpret the health and the probable cause (whether it's performance, training, or input data), and can use the suggested actions to perform corrective measures.

Level Two

The second level of abstraction is represented in the pop-up that shows the details of each model. This section lists the granular parameters that ML Engineers need in order to understand the health of a model at a deeper level than regular users.
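
As a data-shaped sketch of how the two levels could relate (hypothetical TypeScript, all names assumed), the main dashboard would consume only the Level One summary, while the detail pop-up exposes the Level Two parameters.

```typescript
type HealthStatus = "healthy" | "degraded" | "failing";
type ProbableCause = "performance" | "training" | "input data";

// Level One: legible to any user, regardless of ML expertise.
interface ModelHealthSummary {
  modelName: string;
  status: HealthStatus;
  probableCause?: ProbableCause; // shown when the status is not healthy
  suggestedActions: string[];    // plain-language corrective measures
}

// Level Two: granular parameters for ML Engineers, shown in the detail pop-up.
interface ModelHealthDetail extends ModelHealthSummary {
  accuracy: number;            // e.g. evaluation accuracy on recent data
  driftScore: number;          // how far live inputs have drifted from training data
  lastTrainedAt: string;       // ISO timestamp of the last training run
  trainingSampleCount: number; // size of the most recent training set
}
```

Modelling Level Two as an extension of Level One mirrors the design intent: the expert view adds depth without hiding the summary that everyone else relies on.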

Learnings

1. Structure beats ambiguity

Because the design + AI space is still largely unexplored, there is a lot of ambiguity around the notion of transparency in ML and what it looks like in practice. Having structure in my process, from the stakeholders I would contact to the design meetings and the design decisions that would be taken, helped us unravel the ambiguity slowly over time.

2. Transparency is often overlooked

One of the most important things I learnt from this project was to always bring the power back to the user. From the research and interviews, I realised how readily users believe an AI system's prediction without looking at any performance or credibility metrics. This concerned me, and as a designer, I learnt to notice such imbalanced technologies and that the way to combat them is to include the user in the credibility process (for instance, we did this here with the insight card's feedback mechanism).

3. Designing = Translation

During this project, I often felt like a translator between languages: from ML to plain human language. This made me realise the importance of UX copy in any UX work. Much of the transparency we achieved came from focusing on cleaning and simplifying the UX copy.