Humanising AI in Healthcare: Incorporating social sciences in algorithms

By Duncan Reynolds 

On January 19th 2024, Duncan Reynolds from the Apollo Social Science Team and Lizzie Remfry from the Digital Environment Research Institute (DERI) ran an event in central London called ‘Humanising AI in Healthcare: Incorporating social sciences in algorithms’.

Artificial Intelligence (AI) systems promise much to healthcare: improved diagnostic accuracy, faster decision-making, saved resources, and much more. However, creating and using these systems raises many questions that cannot be answered in purely technical terms by data scientists and clinicians alone. Social scientists have an essential role to play in answering questions such as: ‘Whose voice is heard in the creation of AI?’, ‘How and why do people trust AI?’, ‘What social structures are built into algorithms?’, or ‘What happens when AI is implemented?’. Yet social scientists are often not involved in creating or implementing AI systems. Through talks from academics at UK institutions, this event aimed to shine a light on what the social sciences can bring to algorithms in healthcare and to convince people of the importance of applying a social science lens to them.

"Whose voice is heard in the creation of AI systems?"


"How and why do people trust AI?”

When Lizzie and I first came up with the idea for the event, we weren’t sure whether it would appeal beyond a social science audience, so we put a lot of effort into advertising it widely to clinicians, data scientists, and people in industry, as well as to social scientists. We were delighted (and a little bit surprised!) that the event sold out within a week, and by the time the event came around, the waitlist was longer than the number of original tickets we had available. We had attendees from multiple UK universities, the NIHR, Genomics England, industry, patients, and members of the public, which made for a fantastic and multidisciplinary event.

The format of the event was three talks from academics who work in the social sciences and research the use of artificial intelligence in healthcare, followed by a Q&A.

Why AI systems fail – Dr Alina Geampana

The first speaker was Dr Alina Geampana (Durham University). Alina began her talk by saying that most AI systems implemented in healthcare fail. The reason for this is often seen to be technical; for example, the data on which the algorithm was built was not good enough, or there were bugs in the code. However, Alina argued that if we want to fully understand why these systems do not always work as intended, we must understand not just the tools, but the practices, techniques and contexts in which they exist. To show this, Alina gave examples from her research into the use of algorithms in IVF treatment in the UK. She made three main points about the importance of the social sciences: 

  1. Social science research can help us understand how people interact with technologies and the complex issues that arise in those interactions.
  2. Social science research can show how and why AI technologies are successfully embedded in care practice (or not).
  3. The social sciences can uncover the impact of technologies beyond immediate users and intended purpose.

Real-world biases, inequalities, and AI – Prof David Leslie

In the second talk, Professor David Leslie (Alan Turing Institute and QMUL) spoke about how biases and inequalities can be inscribed in algorithms. David argued that AI is well known to be prone to algorithmic biases, and that deploying AI uncritically can exacerbate these problems. To reduce these unwanted biases, David said we must understand an iterative relationship: real-world patterns of inequality and discrimination produce discriminatory data, which can lead to biased AI design and implementation, which creates injustices in application, which in turn feed back into those real-world patterns of inequality. This led to a lively discussion on different biases and what can be done about them. One example was the lack of representativeness in datasets, which can lead to higher error rates for marginalised communities. There was hope that better communication of AI models’ limitations by data engineers and scientists would help people understand these inscribed biases, and support future changes that reduce the potential of AI to increase inequalities.

Tackling ethical issues in the development of AI – Dr Duncan Reynolds

Finally, on the speakers’ front, Dr Duncan Reynolds (QMUL) spoke about a project he is involved with that is attempting to create AI to help patients who take multiple medications and have multiple long-term conditions. Duncan’s ethnographic observations showed how, in practice, many moral and ethical decisions that need to be made when building AI for healthcare can become bureaucratic and technical. Instead of making difficult moral decisions about whom to include in and exclude from the algorithm, technical questions such as “What should the prevalence of a disease be for us to include it?” were asked and answered. Duncan spoke about how, at first, moral questions were replaced with technical ones, but over time, through collaboration between doctors, data scientists, and patients, the team came to realise they had to face the ethical questions head-on. This was done by no longer relying solely on technical questions and answers (“What is the prevalence? How many decimal places shall we go to? Which model works ‘best’ according to our scoring system?”), but by building consensus panels of doctors, data scientists, and patients to look at different types of questions, such as “Are we perpetuating current health inequalities if we exclude this group of people?”.

"Are we perpetuating current health inequalities if we exclude this group of people”

Trying and testing a diagnostic AI – Foresight

As well as the talks and Q&As, we were very keen on incorporating an interactive element to the day to get the diverse audience thinking like social scientists. Lizzie introduced the Foresight tool, created by King’s College London (KCL). It describes itself as a “generative transformer model trained on ~1M patients from King’s College Hospital and ~20k patients from South London and the Maudsley Mental Health NHS Foundation Trust.” In simple terms, it is like ChatGPT for diagnosing diseases, or a much smarter Dr Google. KCL does stress that the model is not to be used for diagnostic purposes. It is publicly available to test the capabilities of the underlying models. The room split into groups and had vibrant discussions around why the system was created, what ethical considerations might have taken place when making the system, how and why would you trust (or not trust) the algorithm, and what implications implementation might have? The event concluded with a visit to the London Science Gallery’s exhibition “AI: Who’s looking after me?”, where we went around exhibitions such as “Does AI care?”, “Heartificial Intelligence”, and “Cat Royale”.

The event was a great success, and we were delighted by the kind and positive feedback we received. Overall, attendees rated the event 4.8/5, and everyone expressed an interest in future events. Some of the feedback (which we were happiest about!) included people saying that their main takeaway was seeing how AI is not purely technical but social as well, and that understanding this may help improve the creation and implementation of these systems in the future. People also noted that they really enjoyed the multidisciplinary audience, as they could speak to people they might not ordinarily interact with. And… we will also take on board the comment that the event would have been improved if there had been more coffee!

We look forward to running similar events in the future, and hope to see you there!
