AI can both strengthen and undermine trust in healthcare

Photo: a doctor examining an X-ray.
Is the patient experience affected if test results are analysed by AI or by a physician? This is one of several questions that researchers are investigating.

When used as a diagnostic aid, artificial intelligence (AI) can help physicians save time and make more accurate diagnoses. However, physicians should also understand and be able to explain the computer’s decision to the patient to avoid jeopardising trust, says AI researcher Stefan Larsson. AI also puts us at a crossroads: do we want to reflect the world or change it?

Stefan Larsson is a researcher at the Faculty of Engineering at Lund University (LTH) in the field of technology and social change. He is particularly interested in how autonomous decision-making systems, i.e., independent and self-learning systems, affect society and individuals.

In medicine, researchers weigh the risks and benefits of using AI to detect tumours in digital X-ray images, select treatment options for acute chest pain and draw conclusions from comprehensive records on people’s health. Key issues are reliability, transparency, representativeness, division of responsibility – and trust. The aim is often to complement the strengths of humans with the powerful search capabilities of machines.

Transparency may increase the prospect of trust

An issue that interests Stefan Larsson is trust and people’s experiences. Trust is a key concept, not least in the healthcare sector.

“The whole approach is based on trust. The patient, who is usually in a very weak and vulnerable position, allows treatment and interventions which may be considered very intrusive and which are carried out by individuals who, because of their profession, are in a strong position”, says Stefan Larsson.

Is trust jeopardised when the tools gradually become even harder for laypeople to assess? A great deal of research is under way in this area. According to Stefan Larsson, not every patient needs to understand the rationale behind each computer-generated recommendation, but exactly how information and decision-making are balanced is a key issue for the trustworthy use of AI.

“Transparency is something we need to understand better in relation to trust. We are trying to understand which parts of the process matter most in terms of transparency, including the explainability of the reasoning behind an individual decision and accountability across the whole system.”
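
To make this concrete: for a simple linear risk model, “explaining an individual decision” can mean listing how much each input contributed to one patient’s score. The following is a minimal illustrative sketch in Python; the model, feature names and numbers are all invented, not taken from Larsson’s research.

# Hypothetical linear risk model: each feature's weight times its value
# gives that feature's contribution to the patient's overall score.
weights = {"age": 0.04, "blood_pressure": 0.02, "smoker": 0.8}
patient = {"age": 62, "blood_pressure": 135, "smoker": 1}

contributions = {f: weights[f] * patient[f] for f in weights}
score = sum(contributions.values())

# Print each feature's share of the score, largest first, as one
# possible "explanation" of this individual decision.
for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:.2f} of a total score of {score:.2f}")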

One risk is that the approach becomes skewed, putting certain groups at a disadvantage...

Another recurring issue in these and other projects is bias, or "social bias" as it is sometimes called. Awareness of the risk of systematic distortion in applied AI systems and machine learning is relatively new, according to Stefan Larsson.

Just four years ago, an American study concluded that commercial facial recognition software was considerably more accurate if the face belonged to a white man than to a dark-skinned woman.

Another study has shown that software for detecting skin cancer works better on light skin than on dark skin. A third study has shown how a risk assessment algorithm used by US courts systematically and wrongly assessed the risk of recidivism as higher for African Americans than for white Americans.
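
The kind of gap these audits measure can be illustrated with a short sketch: compute a classifier’s accuracy separately for each demographic group and compare. This is a minimal Python illustration with invented data, not code from any of the studies above.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Classification accuracy computed separately for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Invented predictions for two hypothetical groups, A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.4} – a disparity of this kind is what the audits of
# facial recognition and skin-cancer detection software revealed.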

“The policy response to these types of problems is often to call for more representative data for all groups. Most recently, this was seen in the EU Commission’s proposed AI Act and in recommendations from the WHO. But the question remains whether this actually solves the problem, or whether it creates new challenges.”

…On the other hand, there is a risk that properly programmed AI reinforces injustices

As computer programs become ever better at reflecting reality, the next problem arises: do we want to reproduce the biases in society, even when they happen to be accurate? Do we want advertisements for highly paid jobs to be targeted at men, who more frequently search for highly paid jobs and have higher wages? The consequence of faithfully reflecting the state of affairs is that AI not only mirrors injustices but risks reinforcing them.

“This issue has a different character. Social structures may be the source of the problem here. It raises a more normative issue that does not necessarily have a technical or optimised solution within reach.”

At the same time, people have become increasingly aware that cultural and social aspects need to be incorporated at an early stage.

“It is no longer sufficient to realise in the final phase that it would have been good to have an ethicist in the project. It appears that a multidisciplinary approach is required in order to build good AI products.”

Translation from Swedish: Shawana Badat

Version in Swedish at lu.se

AI projects that Stefan Larsson contributes to:

He leads a WASP-HS project on AI transparency and consumer trust, and participates as a researcher in AIR Lund, an AI project based on registry research that is led by Jonas Björn and Matthias Ohlsson.

He is the main supervisor for Charlotte Högberg, a doctoral student in technology and society at LTH, in a project on transparency and fairness in applied AI.

Together with Charlotte Högberg, he will also investigate how AI-supported mammography screening is experienced by both patients and radiologists in the so-called MASAI project, which is led by Kristina Lång.

He is one of the initiators of AI Lund, a university-wide research network, which, among other things, is holding a conference on AI in the public sector at the Internet Days on 22 November. https://internetdagarna.se/event/ai-i-offentlig-sektor-mojligheter-och-…

Stefan Larsson at the Lund University research portal.