Designing AI business insights for VuNet Systems, a B2B SaaS finance tech company.

At this fast-paced, mid-sized AI startup, I was the solo designer. I designed AI interfaces focused on transparency and explainability, working closely with the CEO, the ML team to understand algorithm metrics, and the development team to understand what was possible to build.

The Problem

How might we design AI insights for data-heavy finance dashboards?

The Solution

Insight Cards on Dashboard

Proactive and predictive insights that explain both what could go wrong and what did go wrong. A confidence score, correlated events and historical data charts make each insight explainable.

ML Management Dashboard

ML model management for users who want to learn more about or manage the algorithms. Three levels of abstraction make it explainable to people with different levels of technical skill.

Generative Research

I like to tackle initial ambiguity with qualitative research methods that give me a clear idea of what a day in the life of our end users looks like.

Contextual Inquiries

  • Interviews with various stakeholders - CXOs, Development team, Sales team

  • 5 semi-structured interviews & cognitive walkthroughs with our (proxy) users

  • Building user flows, touch points, time and clicks per task

Research Findings

Our research revealed an important opportunity that made us revisit the problem: trust. End users had a mixed perception of AI. Both too much trust and too little trust stem from not understanding how algorithms make decisions for a job our end users are accountable for.

Opportunity

Trust, ethics and explainability are important factors to consider when implementing AI in the real world. Our research showed us an opportunity to make AI explainable to all user types - with low, medium and high AI expertise.

Significance

AI can attract a lot of new customers, but a trustworthy product makes them stay. A human-centred approach to AI design helps VuNet stand out amongst competitors. We soften the expectation of perfection placed on AI algorithms by showing users how decisions and predictions are made.

Monitoring happens manually, a process called ‘eyeball monitoring’

When failures happen, the recovery is also a very human and manual process

Ideation & Iterations

How much information do we communicate to end users, and what form should this component take on the dashboard interface?

The Iterations

Iteration 1

Insights in natural language

Machine-generated insights are quantitative data, whereas our minds make sense of this data qualitatively. We want an insight card to bridge this gap and reduce the interpretation time required.

Pros

Easier to read

Very effective if accurate

Cons

Subjective and open to misinterpretation

Lack of contextual data

Iteration 2

Insights under a panel

We were inspired by Google Analytics’ AI insights panel. Before it could even be tested, it was shot down by development due to framework scalability issues.

Pros

Maintains context

Can be shown or collapsed

Technically feasible

Cons

Not scalable

Non-conventional

Iteration 3

Insights as a card component

Algorithm outputs → visual information

A single glance at this card can help users prepare for adverse events, or analyse events that have already occurred. It helps answer the following questions:

What is the insight and when did it happen/will occur?

Why did the algorithm predict or analyze this insight?

How confident is the algorithm?

What should the user do next?

Perfecting the Insight Card

There is a lot of information on this insight card. How might we prioritize the data and communicate it in a visually appealing and digestible way for our end users?

I worked closely with the ML and product teams to identify the most important metrics users need to know. My background in CS also helped me make a lot of decisions on the fly. We designed an insight card that answers the key questions a user would ask when analyzing an event; a rough sketch of the card's underlying data follows the list below.

  1. What is the insight and when did it happen/will occur?

  2. Why did the algorithm predict or analyze this insight?

  3. How confident is the algorithm?

  4. What should the user do next?
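
To make the mapping concrete, here is a minimal sketch of the data a card like this could be driven by, with each field tied to one of the four questions above. The type and field names are illustrative assumptions, not VuNet's actual schema.

```typescript
// Hypothetical shape of the data behind a single insight card.
// Field names are illustrative, not VuNet's production schema.
interface InsightCard {
  // 1. What is the insight and when did it happen / will it occur?
  title: string;
  kind: "predictive" | "reactive";
  eventTime: Date;

  // 2. Why did the algorithm predict or analyse this insight?
  correlatedEvents: string[];   // related events shown as supporting evidence
  historicalSeries: number[];   // data points behind the historical chart

  // 3. How confident is the algorithm?
  confidence: number;           // 0 to 1, surfaced as the confidence score

  // 4. What should the user do next?
  recommendedActions: string[];
}
```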

There were multiple iterations in the process of perfecting the insight card. These were mainly discussions on visual cues, information architecture and data visualization.

Going the Extra Mile

I wanted to take this project further and address our users' concerns about trust. It made me think: AI can be imperfect; it can fail, mislead or misdirect. To trust it, we must be able to periodically check its health.

Why do we need this?

An insight card is great for users who frequently monitor the dashboard and have low ML expertise. Our client companies often have data scientists and ML engineers who would likely want a more granular view of the algorithms. Based on these varying needs, we designed an ML management dashboard with 3 layers.

Layer 1 - Overall Health

These components can be used by users with low ML expertise to escalate issues when system health degrades.

Layer 2 - Model Performance

Used by users with mid-level ML expertise to understand performance, training and input data quality. They can drill down further by clicking the ‘More’ button.

Layer 3 - Individual Model Details

The highest level of granularity. Can be used by users with high ML expertise to redesign, analyse or fix algorithms.
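
To summarise the three layers in rough data terms, here is a minimal sketch of what each layer could surface; the type names and fields are illustrative assumptions rather than the shipped design.

```typescript
// Illustrative sketch of what each layer of the ML management dashboard
// could surface. Names and fields are assumptions, not the shipped design.

// Layer 1: overall health, enough for low-expertise users to escalate issues.
interface OverallHealth {
  status: "healthy" | "degraded" | "down";
  modelsRunning: number;
  lastChecked: Date;
}

// Layer 2: per-model performance for users with mid-level ML expertise.
interface ModelPerformance {
  modelName: string;
  headlineMetric: number;                     // e.g. accuracy or F1
  lastTrained: Date;
  inputDataQuality: "good" | "fair" | "poor";
}

// Layer 3: individual model details for data scientists and ML engineers.
interface ModelDetails extends ModelPerformance {
  hyperparameters: Record<string, string | number>;
  trainingHistory: { trainedAt: Date; metric: number }[];
  featureImportance: Record<string, number>;
}
```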

Impact

I led UX across all of VuNet's products

I redesigned the live web platform for the product vuSmartMaps, which involved quick deployment and bug fixing with the dev team as well as a lot of data visualization, and I created dashboard design guidelines for and with solution engineers.
I also designed and shipped a mobile app using a 0-to-1 approach, collaborating with the developers and PM.

Led external engagement with UX design studio

Being at the center of the different teams at the company made me a good liaison for external engagements. Good communication, and translating the languages spoken by different teams, is as important as designing.

Introduced the company to UX research

I pitched the importance of UX research methods by highlighting how differently every department in our company thought about our users and their goals. I built standardised research questionnaires into our roadmap as part of a user research document that included personas, journey maps, task flows and product touch points. Making this accessible to all teams gave everyone in the company a single source of truth about our users.

Takeaways

This project made me a research-first designer

A strong foundation of user research helps make ambiguous problems less challenging. AI is changing the world and how we work at light speed, and design plays an important role in bringing that power to people. Trust, transparency and explainability are crucial concepts when designing for AI.

Designing at startups is like shapeshifting in and out of different teams

I learned early on that each team speaks a different language. Especially in enterprise teams, the voice of the user can be easily forgotten.

Presenting my work regularly helped me soothe my defensiveness

I'll admit that I often get attached to an idea or concept without realising it. Presenting and sharing my work opened a safe space for ideas to flow, failing early on bad ideas and building quickly on the good ones.
