Vunet Systems

Making AI Explainable - B2B SaaS

UX Research
Dashboard
Mobile
Designing for AI

In this fast-paced, mid-sized AI startup, I was a solo designer. I designed AI interfaces focused on transparency and explainability, working closely with the CEO, with the ML team to understand algorithm metrics, and with the development team to understand what was feasible to build.

B2B SaaS product in the financial sector

Vunet's products are used by major Indian banks for end-to-end monitoring of business journeys. We wanted to introduce the insights generated by our AI algorithms into the user workflow, and to explore different ways this 'AI - human' interface could be designed.

Problem

“How can we introduce AI-powered insights onto our primarily dashboard-centric platform?”

  1. AI Insight Cards

  • Insights on dashboards

  • For users interested in insights about business journeys

  • Confidence score, correlated events and historical data charts make it explainable

  2. AI Management Dashboard

  • An all-new dashboard

  • For users wishing to learn more about or manage the algorithms

  • Three levels of abstraction make it explainable to people with different levels of AI expertise

The Process

  • The real process

    The actual process was a little messy. Navigating ambiguity was one of the most challenging parts of this project. We did not have requirement or spec documents to begin with; instead, there was a vision we wanted to bring to life. Collaboration with cross-functional partners was key, and it was not linear. What followed was a journey of collaborative iteration that led us to the final design.


Discovery

Research Insights

Our initial goal was to find a suitable form for the insights delivered by our AI algorithms. However, our research revealed an important opportunity that made us revisit the problem: trust.

Our users had a mixed perception of AI. Too much trust and too little trust both stem from not understanding how algorithms make decisions for a job that the end user is accountable for.

“I think AI is always right.”

“I don’t know how it works, so it is difficult to trust it.”

“It makes my work easier”

“I feel threatened by AI”


Monitoring happens manually, a process called ‘eyeball monitoring’
When failures happen, the recovery is also a very human and manual process
When we introduce AI into a workflow that is mostly human, trust becomes paramount.

Opportunity

Trust, ethics and explainability are important factors to consider when implementing AI in the real world. Our research showed us an opportunity to make AI explainable to all user types - with low, medium and high AI expertise.

Significance

AI can attract a lot of new customers, but a trustworthy product makes them stay. A human-centred approach to AI design helps Vunet stand out amongst competitors. We soften the expectation of perfection placed on AI algorithms by showing users how decisions and predictions are made.

Ideation, Design & Prototyping

In collaboration with the ML team and the Product team, we went through three iterations of AI insights. The ideation process was very quick: we would run brainstorming sessions, after which I would quickly prototype our ideas. We would then meet again, review the design together, and discuss feedback for the next iteration.

  1. Insight Cards


Iteration 1 -
Insights in natural language

Machine-generated insights are quantitative data, whereas our minds make sense of this data qualitatively. We wanted the insight card to bridge this gap and reduce the time needed to interpret an insight.

Pros
  • Easier to read

  • Very effective if accurate

Cons
  • Subjective and open to misinterpretation

  • Lack of additional necessary contextual data

Iteration 2 -
Insights under a panel

We were inspired by Google Analytics’ AI insights panel. Before it could even be tested, it was shot down by development due to framework scalability and feasibility issues.

Panel based insights

Pros
  • Within context of dashboard data

  • Can be shown or collapsed, as per preference

  • Technically feasible

Cons
  • Not scalable for high number of insights

  • Constrained to use in interactive or touch screen devices only


Iteration 3 -
Insights as a card component
Algorithm outputs -> visual information

A single glimpse of this card can help the users prepare for adverse events, or analyse events that have occurred. It helps answer the following questions:

  1. What is the insight and when did it happen/will occur?

  2. Why did the algorithm predict or analyze this insight?

  3. How confident is the algorithm?

  4. What should the user do next?

The card is made up of the following elements:

  • Card Title

  • Timestamp

  • Metric Value

  • Description Text - describes the event that occurred, its possible cause and its impact

  • Correlated Events - lists other events that occurred at the same time and may be related to this one

  • Confidence Factor - a percentage value that defines how confident the model is about the prediction

  • Time Series Chart - a graphical representation of the prediction

  • Suggested Action - based on the prediction, the system suggests the action to be performed next

  • Feedback Mechanism - users can provide feedback to the system to maintain accuracy
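
To make this anatomy concrete, here is a minimal sketch, in TypeScript, of the data such a card might consume. The type and field names are illustrative assumptions, not Vunet's actual schema.

    // Minimal sketch of the data an insight card might consume.
    // All type and field names are illustrative assumptions, not Vunet's actual schema.
    interface CorrelatedEvent {
      name: string;       // e.g. another alert raised in the same time window
      occurredAt: Date;
    }

    interface InsightCard {
      title: string;                              // Card Title
      timestamp: Date;                            // when the event occurred or is predicted to occur
      metricValue: number;                        // Metric Value the insight refers to
      description: string;                        // event, possible cause and impact in natural language
      correlatedEvents: CorrelatedEvent[];        // other events that may be related
      confidence: number;                         // Confidence Factor, as a percentage (0-100)
      timeSeries: { t: Date; value: number }[];   // data behind the Time Series Chart
      suggestedAction: string;                    // what the system suggests doing next
      feedback?: "accurate" | "inaccurate";       // Feedback Mechanism, used to maintain accuracy
    }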

These are information-heavy components, and they required several iterations to reach a point of easy scannability and legibility.

Iteration 1

  • Title unclear

  • Confidence factor is not seen

Iteration 2

  • More context required

  • Confidence factor taking too much space

Iteration 3

  • Make title clearer

  • We can have multiple card types

Final

More examples of the insight cards
  2. AI Management Dashboard
Why do we need this?

An insight card is great for users who frequently monitor the dashboard and have low ML expertise. Our client companies often have Data Scientists and ML Engineers, who would likely want a more granular view of the algorithms. Based on these varying needs, we designed an ML management dashboard with three layers.

Layer 1 - Overall Health

These components can be used by users with low ML expertise to escalate issues when system health is down.

Layer 2 - Model Performance

Used by users with mid-level ML expertise to understand model performance, training, and input data quality. They can drill down further by clicking the ‘More’ button.

Layer 3 - Individual Model Details

The highest level of granularity, used by users with high ML expertise to redesign, analyse or fix algorithms.
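
As a rough sketch, the three layers can be read as a simple configuration mapping each layer to its intended audience. The widget names below are assumptions based on the descriptions above, not the actual dashboard spec.

    // Sketch of the dashboard's three layers of abstraction and their intended audiences.
    // Widget names are assumptions based on the descriptions above, not the actual spec.
    type MlExpertise = "low" | "medium" | "high";

    interface DashboardLayer {
      name: string;
      audience: MlExpertise;
      widgets: string[];
    }

    const aiManagementDashboard: DashboardLayer[] = [
      {
        name: "Overall Health",
        audience: "low",
        widgets: ["system health status", "issue escalation"],
      },
      {
        name: "Model Performance",
        audience: "medium",
        widgets: ["model performance", "training status", "input data quality", "'More' drill-down"],
      },
      {
        name: "Individual Model Details",
        audience: "high",
        widgets: ["per-model metrics", "controls to redesign, analyse or fix a model"],
      },
    ];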

Takeaways
  1. This project made me a research first designer.

    A strong foundation of user research helps make ambiguous problems less challenging. AI is changing the world and how we work at light speed, and design plays an important role in bringing that power to people. Trust, transparency and explainability are crucial concepts in designing for AI.

  2. Designing at startups is like shapeshifting in and out of different teams

    I learned early on that each team speaks a different language. Especially in enterprise teams, the voice of the user can be easily forgotten.

  3. Presenting my work regularly helped me soothe my defensiveness

    I'll admit that often, I unintentionally get attached to an idea or concept. Presenting and sharing my work opened a safe space for ideas to flow through, failing early on bad ideas and building up quickly on the good ones.


Made with love, figma and framer.
