Product Designer, June 2020 - 2021
My most notable work at Vunet was conceptualizing business insight cards through the lens of explainability for our AI predictions. In advocating for greater user transparency, I also designed the ML Model Management experience for users of varying expertise levels.
how might we
"For an AI-driven analytics product, how might we design insights that establish trust & transparency between the user and the system?"
01 - define
02 - research
User Persona Document
03 - design
Insight Card Structure
I conducted audits and critiques of business insight platforms and organized brainstorming sessions with cross-functional stakeholders such as ML Engineers, CXOs & the Product Teams.
1. Unexplainable algorithms
AI systems typically output probabilities and raw numbers. Systems that translate these numerical insights into plain-language text are more useful, because users find textual information far easier to comprehend.
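To make this concrete, here is a minimal sketch of what translating a raw prediction into a textual insight might look like. The function name, metric, and thresholds are all hypothetical, not Vunet's actual logic:

```python
# Illustrative sketch: turning raw model output into a readable insight.
# All names and values here are hypothetical, not Vunet's actual system.

def to_insight(metric: str, predicted: float, actual: float, confidence: float) -> str:
    """Translate a numeric prediction into a plain-language insight string."""
    deviation = (actual - predicted) / predicted * 100
    direction = "above" if deviation > 0 else "below"
    return (
        f"{metric} is {abs(deviation):.0f}% {direction} the predicted value "
        f"({actual:,.0f} vs. {predicted:,.0f}), "
        f"with {confidence:.0%} model confidence."
    )

print(to_insight("Transaction volume", predicted=12000, actual=9000, confidence=0.87))
# → Transaction volume is 25% below the predicted value (9,000 vs. 12,000), with 87% model confidence.
```

A sentence like this, rather than a bare probability, is what the insight card would surface to the end user.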
2. Context of the parameter being measured
In transaction analysis, one event can have an effect on multiple events. End users can benefit from seeing the historical data and other correlated events that the model considered before making a certain decision.
3. Credibility of the prediction
AI systems are often put on a pedestal and assumed to always be accurate. These systems must give users an easy way to understand the credibility of each decision the system makes.
Research showed that managing ML models requires thorough knowledge of the technical metrics and parameters that measure the health of these systems. This constraint contributes to the lack of trust and transparency between the system and the user: it excludes the user from verifying the credibility of the decision-making process, something users could do when they relied on traditional eyeball monitoring. To build transparency, I created a design concept based on two levels of abstraction.
The first level of abstraction is represented in the main dashboard screen, where any user, irrespective of their technical expertise in Machine Learning, can understand the health of the models the system is using. From here, a user can interpret the health, identify the probable cause (whether it's performance, training or input data) and use the suggested actions to take corrective measures.
The second level of abstraction is represented in the pop-up that shows the details of each model. This section lists the granular parameters that ML Engineers need in order to understand a model's health at a deeper level than regular users.
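The two levels of abstraction above could be sketched as a single record with a summary layer and a detail layer. The field names and metrics below are purely illustrative, not the actual schema:

```python
# Hypothetical sketch of the two levels of abstraction described above.
# Field names and metrics are illustrative, not Vunet's actual schema.
from dataclasses import dataclass, field

@dataclass
class ModelHealth:
    # Level 1: plain-language summary shown to any user on the main dashboard.
    name: str
    status: str              # e.g. "Healthy", "Degraded"
    probable_cause: str      # "performance", "training", or "input data"
    suggested_action: str
    # Level 2: granular metrics surfaced in the pop-up for ML Engineers.
    metrics: dict = field(default_factory=dict)

model = ModelHealth(
    name="txn-anomaly-detector",
    status="Degraded",
    probable_cause="input data",
    suggested_action="Review recent schema changes in the transaction feed.",
    metrics={"precision": 0.81, "recall": 0.64, "data_drift_score": 0.32},
)

# The dashboard (level 1) renders only the summary fields;
# the pop-up (level 2) additionally exposes model.metrics.
print(f"{model.name}: {model.status} (probable cause: {model.probable_cause})")
```

Keeping both layers on one record means the dashboard and the pop-up stay in sync, while each audience sees only the depth of detail it needs.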
1. Structure beats ambiguity
Because the design + AI space is still largely unexplored, there is a lot of ambiguity around the notion of transparency in ML and what it should look like. Having structure in my process - from the stakeholders I would contact to the design meetings and the design decisions to be made - helped us unravel the ambiguity slowly over time.
2. Transparency is often overlooked
One of the most important things I learnt from this project was to always bring the power back to the user. From the research and interviews, I realised how readily users believe an AI system's prediction without looking at any performance or credibility metrics. This concerned me, and as a designer I learnt to notice such imbalanced technologies and to combat them by including the user in the credibility process (for instance, we did this here with the feedback mechanism of the insight card).
3. Designing = Translation
During this project, I often found myself feeling like a translator of languages - from ML to spoken human language. This made me realise the importance of UX copy in any UX work. Much of the transparency we achieved came simply from focusing on cleaning and simplifying the UX copy.