How do we innovate responsibly when it comes to AI? This was the focus of the recent Anthropology + Technology Conference at Bristol’s Watershed on 3 October, which I organised to foster collaboration between social scientists and technologists working on emerging technology projects, both in academia and industry. I posed the question: how do we design innovative new digital technologies that have a positive impact on people and society, making people’s lives better?
I’m delighted to be sharing some of the topics and discussions from that conference with the One HealthTech Bristol community by collaborating on their upcoming event, AI in Health and Care – How do we innovate responsibly on 6 November. In fact, two of the conference speakers, Dr Laura Sobola and Ellie Foreman, have agreed to give lightning talks based on their conference presentations.
Taking place as part of the Bristol Technology Festival, One HealthTech Bristol’s event is aimed at both subject newcomers and experts from diverse professional and personal backgrounds, and will explore practical applications as well as debate complex issues.
Social impact of algorithmic decision-making
Making people’s lives better is certainly the focus of healthcare professionals and those working in the health tech space – or, if not better, certainly not worse – whether supporting people in times of uncertainty or enabling them to achieve their health goals.
From my perspective as an anthropologist, the social impact of algorithmic decision-making and the injustice of “being targeted by an algorithm...the sense that an electronic eye is turned towards you but you can’t put your finger on exactly what’s amiss”, as Eubanks writes in Automating Inequality, is something we should all care about. As Eubanks suggests, we don’t all experience “this new regime of digital data” in the same way. In one chilling quote, a working-class woman turns to Eubanks and says, “You should pay attention to what happens to us. You’re next”.
If we genuinely want to help those who are either in our care or whom we seek to serve through digital health technologies, we should take responsibility for creating technologies that don’t amplify the existing inequalities in society. Take, for example, the Glow app (designed by men), which failed to take into account that not all women want to track their menstrual cycle in order to get pregnant, and thus alienated a significant proportion of its potential customer base.
Whose ethics are we talking about?
While health professionals must navigate other cultures’ ideas about health and medicine with skill and care on an almost daily basis, those designing digital health technologies might be less mindful. So, as a social scientist, I feel strongly that the conversations around this “fourth industrial revolution” and its social impact should include the scientists who study culture and society, namely anthropologists and sociologists.
Furthermore, the current discourse around AI focuses on ethics. Ethics is essentially about what is right and what is wrong, some of which is enshrined in our laws and our religious texts, or, if we are not religious, in the moral codes we live by. But it’s important to recognise that ethics is cultural: when we talk about ethics, whose ethics are we referring to? Our Western ethics?
Collectively anthropologists and sociologists have “an enormous amount of knowledge about human lives”. Let’s use it.
Secure your ticket now
Registration is now open for AI in Health and Care – How do we innovate responsibly, taking place on 6 November, 2.30-5pm, at Engine Shed, Bristol.
Spaces are filling up quickly, and we’ve heard rumours that there may be AI-themed baked goods!