Show simple item record

Field | Value | Language
dc.contributor.author | Aquino, Yves SJ |
dc.contributor.author | Carter, Stacey |
dc.contributor.author | Houssami, Nehmat |
dc.contributor.author | Braunack-Mayer, Annette |
dc.contributor.author | Win, Khin Than |
dc.contributor.author | Degeling, Chris |
dc.contributor.author | Rogers, Wendy A |
dc.date.accessioned | 2023-03-13T04:44:49Z |
dc.date.available | 2023-03-13T04:44:49Z |
dc.date.issued | 2023 | en_AU
dc.identifier.uri | https://hdl.handle.net/2123/30194 |
dc.description.abstract | Background: There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives: Our objectives are to canvas the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology: The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results: Findings reveal considerable divergent views on three key issues. First, views on whether bias is a problem in healthcare AI varied, with most participants agreeing bias is a problem (which we call the bias-critical view), a small number believing the opposite (the bias-denial view), and some arguing that the benefits of AI outweigh any harms or wrongs arising from the bias problem (the bias-apologist view). Second, there was disagreement about the strategies to mitigate bias, and about who is responsible for such strategies. Finally, there were divergent views on whether to include or exclude sociocultural identifiers (eg, race, ethnicity or gender-diverse identities) in the development of AI as a way to mitigate bias. Conclusion/significance: Based on the views of participants, we set out responses that stakeholders might pursue, including greater interdisciplinary collaboration, tailored stakeholder engagement activities, empirical studies to understand algorithmic bias, and strategies to modify dominant approaches in AI development, such as the use of participatory methods and increased diversity and inclusion in research teams and research participant recruitment and selection. | en_AU
dc.language.iso | en | en_AU
dc.publisher | BMJ Publishing Group | en_AU
dc.relation.ispartof | Journal of Medical Ethics | en_AU
dc.rights | Creative Commons Attribution 4.0 | en_AU
dc.subject | Decision Making | en_AU
dc.subject | Ethics | en_AU
dc.subject | Information Technology | en_AU
dc.subject | Policy | en_AU
dc.title | Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives | en_AU
dc.type | Article | en_AU
dc.identifier.doi | 10.1136/jme-2022-108850 |
dc.type.pubtype | Publisher's version | en_AU
dc.relation.nhmrc | 1181960 |
usyd.faculty | SeS faculties schools::Faculty of Medicine and Health::Sydney School of Public Health | en_AU
usyd.citation.volume | 0 | en_AU
usyd.citation.spage | 1 | en_AU
usyd.citation.epage | 9 | en_AU
workflow.metadata.only | Yes | en_AU


Associated file/s

There are no files associated with this item.

Associated collections

There are no previous versions of the item available.