In my last blog, I talked about the challenges of building AI products as a product manager. This blog will highlight how aligning the technical side of an AI product with human needs and expectations builds trust.
I've worked with many generous & knowledgeable data bods who have let me in on the inner workings of developing AI models. What's been clear across the board is a lesson from my computer vision days: rubbish in, rubbish out. In the real world, though, this becomes more complex.
Complexity of real-world data
Today, with AI, we're recognising the power of multimodal data (images, text, test results, clinical codes) to support clinical decision-making. However, this wealth of information brings new complexities. Each data type requires careful consideration of its context, from how clinical staff record information to how representative the training datasets are.
Where should the AI model fit into this puzzle? Should it sit early in the clinical pathway, working from raw data, or closer to the decision-making point, after other software has processed the data? And is it even feasible to use certain data types for decision support, both logistically and from a user perspective?
Beyond these questions, real-world data is often messy and incomplete, and that’s before we get to privacy concerns. These headaches can seriously mess with building a top-notch AI model!
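To make that messiness concrete, here's a minimal sketch of the kind of completeness audit a team might run before deciding whether a data source can support decision-making at all. Everything in it is hypothetical - the field names, records and threshold are invented for illustration, not taken from a real system:

```python
# A toy audit of messy, incomplete real-world data.
# All field names and values below are hypothetical, invented for illustration.
import pandas as pd

# Stand-in for a multimodal extract: free text, clinical codes, test results.
records = pd.DataFrame({
    "patient_id":    [1, 2, 3, 4],
    "free_text":     ["knee pain", None, "fatigue", None],
    "clinical_code": ["M25.56", "R53", None, None],
    "test_result":   [None, 4.2, 5.1, None],
})

# How complete is each modality? High missingness may rule a source out.
print(records.drop(columns="patient_id").isna().mean())

# Which records are too sparse to support a decision? (Threshold is arbitrary.)
sparse = records.drop(columns="patient_id").isna().mean(axis=1) > 0.5
print(records.loc[sparse, "patient_id"].tolist())
```

Even a quick check like this can change the product conversation: if half the records are missing a data type, that's a design constraint, not a modelling detail.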
To answer these technical questions, we need to focus on the human side of AI in healthcare - the fun bit!
Building trust with users
In the real world, there's a lot more nuance. Take sick note requests, for example. It seems like a perfect use case for AI to sort and prioritise them, allowing patients to submit requests and staff to handle them in batches, saving time. But when we spoke to GPs and patients, we found they had reservations.
It turns out GPs do more than just look at the request. They consider the patient's history and might call to check in. Relying on the request alone misses the whole picture. And once we expand to other data sources, people understandably have questions about how their information is used. One patient asked whether a health condition they'd resolved years ago would affect decisions made about their health - should it?
It's clear that building trust in AI means understanding the nuances of how people work and what matters to them. We can't just rely on broad consent; we need to be transparent and respect people's concerns.
Translating the meaningful into the technical
Building a successful AI product requires a deep understanding of users' needs and the context in which the product will be used. Collaborating closely with domain experts and the communities they work in helps identify the right data sources and outputs, mitigate biases, and optimise workflows.
Trust is paramount. Explain your AI development like you're talking to a friend - or at least a peer. Clinicians don't need to understand the model the way a data scientist does; they need to understand how it arrives at its conclusions so they can be confident using it in the right situations. Letting them into the process catches oversights early and creates something that works for the users.
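As a sketch of what "showing your working" could look like in practice, here's a hypothetical example - the model, feature names and data are all invented for illustration, not a real triage system. A simple linear model's per-feature contributions can be read back to a clinician in plain terms:

```python
# A minimal sketch of explaining *why* a model flagged one request for review.
# The model, feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["days_since_last_review", "repeat_request", "long_term_condition"]

# Toy training data: 1 = request needs clinician review before processing.
X = np.array([[30, 1, 0], [400, 0, 1], [10, 1, 1], [200, 0, 0]], dtype=float)
y = np.array([0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

# For one incoming request, show each feature's contribution to the score,
# phrased so a clinician can sanity-check it against their own judgement.
request = np.array([365.0, 0.0, 1.0])
contributions = model.coef_[0] * request
for name, value in sorted(zip(features, contributions), key=lambda p: -abs(p[1])):
    print(f"{name}: {value:+.3f}")
```

A real product would need richer explanation methods than this, but the principle holds: surface the drivers of a decision in the users' own vocabulary, so they can challenge it when it doesn't match their judgement.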
Co-designing the product with end users means the solution meets their actual needs and builds trust from the ground up. Co-designing an AI product presents challenges: doing it right takes time, and this can conflict with the urge to be first to market. The temptation is to build an exciting AI solution to wow users and then fix any reservations they may have. However, without investing time in understanding user perspectives, the exciting AI solution won't be adopted in the first place.
To build trust and ensure adoption, product managers must:
Clean up your data act: Real-world data is messy, so figure out what's important and make it shine to give the best value to users.
Get to know your users: Collaborate with clinicians, healthcare staff and patients, and learn their world - what are their needs, workflows and concerns?
Be totally transparent: Explain your AI like you're talking to a friend, not a robot. Let people in on the process.
Focus on what matters: Solve real problems for real people.
So, to create an AI product that people actually want to use and trust, you need to be both a tech wizard and a people person. Win those hearts and minds!
In the next blog, I'll delve deeper into a case study where I've used co-design.