Health inequality: people die 20% earlier in the North of England than in the South, whilst the North gets less funding for health infrastructure and research. A recent report by the Northern Health Science Alliance highlighted these and other serious inequalities in health provision, research and outcomes between northern and southern England. The report made for compelling reading for the team at OneHealthTech Manchester, so when we paired up with Manchester Futurists to run a joint event on Artificial Intelligence, we wanted to focus on the potential of AI to address – or perpetuate – these inequalities.
OneHealthTech Manchester hub member Ruth Norris was our chair for the evening, and kicked us off by reflecting on the comments of a Harvard Law School professor, Jonathan Zittrain. Could AI be the next asbestos - installed everywhere, with unexpected side effects, and hard to remove?
We were joined by four fab speakers with different takes on the subject, and by an enthusiastic audience, kindly hosted by Autotrader Manchester.
Robots, algorithms and machine learning have an emerging role in our lives and our healthcare. Gary Leeming, CTO at Connected Health Cities, set the stage by exploring what AI is and how machine learning might be used in healthcare. He began by observing that we have all become so used to the idea of robots being part of the digital ecosystem that we accept having to confirm via reCAPTCHA that we are not robots.
In digital health, the foundations of AI lie in robust, well-proven and explainable medical statistics. These have developed into applied statistics such as decision-support algorithms: fixed, labour-intensive to develop, and equally labour-intensive to maintain as populations and our understanding of treatments change over time. The next step along this path is the use of machine learning and neural networks – good at pattern recognition, e.g. identifying potential malignancies on a scan, and able to evolve over time, but much harder to unpick when trying to understand how the machine makes its decisions.
Gary highlighted that the digital start-up philosophy of ‘move fast and break things’ doesn’t work when there is a risk that the thing you break is a human. We have to look at the building blocks that go into making AI – the people building it, the data used to train it, and the wider environment it’s deployed into – and take a systems-thinking approach to deployment if we’re to get the best out of this new technology.
After this excellent introduction to AI and its use in healthcare, Hannah Davies, Head of External Affairs at the Northern Health Science Alliance (NHSA), talked to us about North–South health inequalities and the role the NHSA plays in trying to address these gaps. The starting point for this talk, and for our event, was the NHSA’s Health for Wealth report, tackling the connection between low productivity and poor health in the North. The first half of Hannah’s talk contained some shocking statistics: health inequality between North and South has grown over the last 20 years, and if you become ill in the North of England you are over 39% more likely to lose your job than an equivalent person in the South. However, she also highlighted how much fantastic research is already happening: the North has more top-200 universities than France, Spain and Italy combined. There are also excellent opportunities to make a difference – small increases in NHS budgets and research funding would have a big impact on job creation and on reducing working hours lost to ill health. Hannah finished by asking ‘how can we help the health of the North?’ The answers lie in public health prevention programmes, getting people back to work, working with local SMEs on job retention, and pushing central government to act on NHS budgets and the allocation of research funding.
Having heard about the challenges the North faces in health inequality, we moved on to Jess Morley, AI Lead at NHSX, to look at how AI might help or hinder improvements in equality. Jess began by apologising for being from Oxford, but assured us her grandparents from the North East would approve of the discussions! She talked about the use of AI to provide tools to manage individual care and to support public and population health, but cautioned that AI tools are nowhere near as advanced as the general press and public opinion would suggest. Perhaps it’s a good thing that it’s early days – Jess talked about the mechanisms, policies and governance that need to be developed in parallel with the technology in order to prevent bias and inequity. She highlighted how critical the development of trust is, and that we need accountability, evidence of careful data use and clear benefits in order to support trust. Jess also talked about the need to make rules, but then to monitor them and accept that we may need to change them if they turn out to be wrong. It is hard to make rules about emerging technologies, and hard for governments to admit when they get things wrong.
Gary and Jess both shared plenty of examples of AI projects that have suffered from accusations of bias, from Amazon's recruitment algorithm to Watson's US-centric model of cancer care, and it’s clear that AI tools can all too easily entrench existing biases and inequalities. To avoid this, we will have to actively choose to design, develop and deploy AI in a more considered and careful manner. Our final speaker, Malcolm Oswald, Director of Citizens Juries CIC, shared one mechanism for enabling citizens to participate in making challenging ethical decisions. Citizens’ juries were developed by the Jefferson Center in the USA. The jury process brings together small groups of citizens with expert witnesses and facilitators to deliberate on an issue. Juries have been used around the world as a mechanism for participatory democracy, involving citizens in shaping policy design and constitutional changes.
Malcolm described a recent cycle of two citizens’ juries run in Manchester, looking at the trade-off between the accuracy of decision-making by complex AI tools and the difficulty of explaining how those decisions are made. After hearing evidence from a series of subject-matter experts, the juries considered this trade-off in four different scenarios across healthcare, criminal justice and recruitment. Both juries, after listening to evidence and debating, had generally become more favourable towards AI. Jury members felt that explainability remained an important consideration in criminal justice, but that accuracy of diagnosis was more important in healthcare. Mechanisms such as citizens’ juries could clearly play a role in helping to shape public thinking and policies around the use and governance of AI.
So – my thoughts at the end of this event: perhaps it’s true that the hype around AI has oversold its immediate potential for impact in healthcare, but the technology is developing, and may yet help address public-health and patient-level inequalities in accessing healthcare and receiving treatment. That the technology isn’t quite there yet is not necessarily a bad thing – we haven’t always been great at anticipating the potential harms of new technologies (social media, smartphones, I’m looking at you…), but at least some people are attempting to consider the wider system impact of AI up front, rather than diving in head first.
Given all we’d heard over the course of the evening – about AI’s potential applications in healthcare, the need for better healthcare and greater investment in the North, and the importance of public involvement in shaping governance and policies – it would be great to see more of this research and development taking place up here. Hopefully a subject for a future OHT Manchester meet-up. We hope to see you there!
Sarah Thew and Ruth Norris, OHT Manchester