Making AI Explainable - B2B SaaS
In this fast-paced, mid-sized AI startup, I was the solo designer. I designed AI interfaces focused on transparency and explainability. I worked closely with the CEO, with the ML team to understand algorithm metrics, and with the development team to understand what was feasible to build.
B2B SaaS product in the financial sector
Vunet's products are used by major Indian banks for end-to-end monitoring of business journeys. We wanted to introduce the insights generated by our AI algorithms into the user workflow, and to explore different ways this ‘AI-human’ interface could be designed.
“How can we introduce AI-powered insights onto our primarily dashboard-centric platform?”
AI Insight Cards
Insights on dashboards
For users interested in insights about business journeys
A confidence score, correlated events and historical data charts make it explainable
AI Management Dashboard
An all-new dashboard
For users wishing to learn more about or manage the algorithms
Three levels of abstraction make it explainable to users with different levels of AI expertise
Our initial goal was to find a suitable form for the insights delivered by the AI algorithm. However, our research revealed an important opportunity that made us revisit the problem: trust.
Our users had a mixed perception of AI. Both too much trust and too little trust result from not understanding how algorithms make decisions for a job that our end users are accountable for.
Trust, ethics and explainability are important factors to consider when implementing AI in the real world. Our research revealed an opportunity to make AI explainable to all user types: those with low, medium and high AI expertise.
AI can attract a lot of new customers, but a trustworthy product makes them stay. A human-centred approach to AI design helps Vunet stand out amongst competitors. We soften the expectation of perfection placed on AI algorithms by showing users how decisions and predictions are made.
Ideation, Design & Prototyping
In collaboration with the ML team and the Product team, we went through three iterations of AI insights. The ideation process was very quick: we would hold brainstorming sessions, after which I would quickly prototype our ideas. We would then meet again, review the designs together, and discuss feedback for the next iteration.
AI Management Dashboard
Why do we need this?
An insight card works well for users who frequently monitor the dashboard and have low ML expertise. Our client companies, however, often have data scientists and ML engineers who would likely want a more granular view of the algorithms. Based on these varying needs, we designed an ML management dashboard with three layers.
Layer 1 - Overall Health
These components can be used by low-expertise ML users to escalate issues when system health is down.
Layer 2 - Model Performance
Used by mid-level ML users to understand performance, training and input data quality. They can drill down further by clicking the ‘More’ button.
Layer 3 - Individual Model Details
The highest level of granularity. Can be used by high-expertise ML users to redesign, analyse or fix algorithms.
This project made me a research first designer.
A strong foundation of user research makes ambiguous problems less challenging. AI is changing the world, and how we work, at light speed, and design plays an important role in bringing that power to people. Trust, transparency and explainability are crucial concepts in designing for AI.
Designing at startups is like shapeshifting in and out of different teams
I learned early on that each team speaks a different language. Especially in enterprise teams, the voice of the user can be easily forgotten.
Presenting my work regularly helped me soothe my defensiveness
I'll admit that I often, unintentionally, get attached to an idea or concept. Presenting and sharing my work opened a safe space for ideas to flow: failing early on bad ideas and building quickly on the good ones.