
Vunet Systems

Explainable AI - B2B SaaS

UX Research

Dashboard

Mobile

Designing for AI

In this mid-sized startup, I took on many roles as the only designer among 70 employees. I delved into research, concept development, wireframing, and the design of the company's AI-led business insights and ML Management Dashboard. I worked hand-in-hand with the ML division and top executives to ensure the design offers transparency and explainability for AI-driven decisions to end users.

Role

Product Designer

Duration

Jun 2020 - 21

I worked closely with the CEO to understand the vision, the ML Team to understand algorithm metrics and the Development team to understand what is possible to build.

The Problem

“What would an interface between AI algorithm predictions and the current dashboard look like?”

The product is used by major Indian banks for end-to-end monitoring of business journeys. We wanted to introduce the insights generated by our AI algorithms into the user workflow, and to explore different ways this ‘AI–human’ interface could be designed.

The Process

1

Research

ML Team Interviews

Competitor Analysis

ML Tools Research

2

Define

Design requirements

Success metrics

3

Ideate

Sketching ideas

Quick prototyping

Testing and feedback on the iterations

4

Design

Wireframes

Final design

Animations

Prototyping

Research

Our initial goal was to find a suitable form for the insights delivered by the AI algorithm. However, our research revealed an important opportunity that made us revisit the problem: trust.

Monitoring happens manually, a process called ‘eyeball monitoring’

When failures happen, recovery is also a very human, manual process

TRUST

When we introduce AI into a workflow that is mostly human, trust becomes paramount.

Our users had a mixed perception of AI. Both too much trust and too little are a result of not understanding how algorithms make decisions for a job that our end user is accountable for.

“I think AI is always right.”

“I feel threatened by AI.”

“I don’t know how it works, so it is difficult to trust it.”

“It makes my work easier.”

Opportunity

Trust, ethics and explainability are important factors to consider when implementing AI in the real world. Our research showed an opportunity to make AI explainable to all user types: those with low, medium and high AI expertise.

“How can we make AI interfaces explainable?”

Significance

AI can attract many new customers, but a trustworthy product makes them stay. A human-centred approach to AI design helps Vunet stand out amongst competitors. We soften the expectation of perfection placed on AI algorithms by showing users how decisions and predictions are made.

The Solution

1

AI Insight Cards

Proactive and predictive insights

For users interested in insights about business journeys

Confidence score, correlated events and historical data charts make it explainable

2

ML Management Dashboard

Managing ML Algorithms

For users wishing to learn more about or manage the algorithms

Having 3 levels of abstraction makes it explainable to people with different levels of ML expertise

Ideate

In collaboration with the ML team and the Product team, we went through 3 iterations of AI insights. The ideation process was very quick: we would run brainstorming sessions, after which I would quickly prototype our ideas. We would then meet again, review the design together and discuss feedback for the next iteration.

Iteration 1: Insights in natural language

Machine-generated insights are quantitative data, whereas our minds make sense of this data qualitatively. We wanted an insight card to bridge this gap and reduce the interpretation time required.

Pros

Easier to read

Very effective if accurate

Cons

Subjective and open to misinterpretation

Lack of additional necessary contextual data

Iteration 2: Insights under a panel

We were inspired by Google Analytics’ AI insights panel. Before it could even be tested, the idea was shot down by the development team due to framework scalability and feasibility issues.

Panel based insights

Pros

  • Within context of dashboard data

  • Can be shown or collapsed, as per preference


Cons

  • Not scalable for high number of insights

  • Constrained to use in interactive or touch screen devices only

Iteration 3: Insights as a card component

Framework constraints and users’ familiarity with the existing design came up often in the design reviews. I worked closely with the CEO to understand the vision, the ML Team to understand algorithm metrics and the Development team to understand what is possible to build.


What information does an insight card contain?

Each insight card tells us more than just a metric and a timestamp. The cards are designed to be a visual representation of the algorithms. The following sections help achieve this and make our AI decisions explainable.

Card Title

Timestamp

Metric Value

Description Text: describes the event that occurred, its possible cause and impact

Time Series Chart: graphical representation of the prediction

Confidence Factor: percentage value that defines how confident the model is about the prediction

Correlated Events: lists other events that occurred at the same time and may be related to this event

Suggested Action: based on the prediction, the system suggests the action to be performed next

Feedback Mechanism: users can provide feedback to the system to maintain accuracy
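The card's contents can be read as a small data model. A minimal sketch in Python (a hypothetical illustration; the field names, types and example values are assumptions, not Vunet's actual schema):

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class InsightCard:
    """One AI-generated insight, plus the context that makes it explainable."""
    title: str                        # Card Title: what the algorithm detected or predicts
    timestamp: datetime               # Timestamp of the observed or predicted event
    metric_value: float               # Metric Value the insight refers to
    description: str                  # Description Text: event, possible cause and impact
    confidence: float                 # Confidence Factor: model confidence, as a percentage
    correlated_events: list[str] = field(default_factory=list)  # Correlated Events
    suggested_action: str = ""        # Suggested Action: recommended next step
    user_feedback: str | None = None  # Feedback Mechanism: e.g. 'helpful' / 'not helpful'

# Hypothetical example instance
card = InsightCard(
    title="Transaction failures predicted to spike",
    timestamp=datetime(2021, 3, 4, 10, 30),
    metric_value=12.5,
    description="Failure rate may exceed the threshold due to rising gateway latency.",
    confidence=87.0,
    correlated_events=["Gateway latency rise", "Queue backlog growth"],
    suggested_action="Escalate to the payments infrastructure team",
)
```

Keeping the explainability fields (confidence, correlations, suggested action, feedback) alongside the raw metric is what distinguishes an insight card from a plain alert.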

How did we end up with our final design?

The card component went through 4 iterations. I designed these in collaboration with the ML Team.

Iteration 1

Title unclear

Confidence factor is not seen

Iteration 2

More context required

Confidence factor taking too much space

Iteration 3

Make title clearer

We can have multiple card types

Final

More examples of the insight cards

ML Management Dashboard: An extension of AI insights

An insight card works well for users who frequently monitor the dashboard and have low ML expertise. Our client companies also often have data scientists and ML engineers who want a more granular view of the algorithms. Based on these varying needs, we designed an ML management dashboard with 3 layers.

Layer 1 - Overall Health

These components can be used by users with low ML expertise to escalate issues when system health is down

Layer 2 - Model Performance

Used by mid-level ML users to understand performance, training and input data quality. They can drill down further by clicking the ‘More’ button.

Layer 3 - Individual Model Details

The highest level of granularity. Used by users with high ML expertise to redesign, analyse or fix algorithms.
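The three layers can be thought of as progressively deeper views over the same model data. A minimal sketch, assuming a hypothetical model record (the field names here are illustrative, not the production dashboard's API):

```python
# Hypothetical model records; field names are illustrative assumptions.
models = [
    {"name": "anomaly-detector", "healthy": True, "accuracy": 0.94, "drift": 0.02,
     "params": {"window": "15m", "threshold": 3.0}},
    {"name": "failure-predictor", "healthy": False, "accuracy": 0.81, "drift": 0.11,
     "params": {"horizon": "1h", "features": 42}},
]

# Layer 1 - Overall Health: a single flag a non-expert can escalate on.
def overall_health(models):
    return all(m["healthy"] for m in models)

# Layer 2 - Model Performance: per-model metrics for mid-level ML users.
def performance_view(models):
    return [{"name": m["name"], "accuracy": m["accuracy"], "drift": m["drift"]}
            for m in models]

# Layer 3 - Individual Model Details: the full record for ML experts.
def model_details(models, name):
    return next(m for m in models if m["name"] == name)
```

Each layer strips away detail the audience above it does not need, which is what makes the same dashboard legible to operators, analysts and ML engineers alike.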

All together

Takeaways

A strong foundation of user research makes ambiguous problems less challenging. AI is changing the world and how we work at remarkable speed, and design plays an important role in bringing that power to people. Trust, transparency and explainability are crucial concepts in designing for AI.

Made with love, Figma and Framer.
