Show simple item record

Field: Value [Language]
dc.contributor.author: Aquino, Yves SJ
dc.contributor.author: Rogers, Wendy A
dc.contributor.author: Braunack-Mayer, Annette
dc.contributor.author: Frazer, Helen
dc.contributor.author: Win, Than Win
dc.contributor.author: Houssami, Nehmat
dc.contributor.author: Degeling, Christopher
dc.contributor.author: Semsarian, Christopher
dc.contributor.author: Carter, Stacey
dc.date.accessioned: 2023-03-13T22:48:08Z
dc.date.available: 2023-03-13T22:48:08Z
dc.date.issued: 2023 [en_AU]
dc.identifier.uri: https://hdl.handle.net/2123/30200
dc.description.abstract: Background: Alongside the promise of improving clinical work, advances in healthcare artificial intelligence (AI) raise concerns about the risk of deskilling clinicians. The purpose of this study is to examine the issue of deskilling from the perspectives of a diverse group of professional stakeholders with knowledge and/or experience in the development, deployment and regulation of healthcare AI. Methods: We conducted qualitative, semi-structured interviews with 72 professionals with AI expertise and/or professional or clinical expertise who were involved in the development, deployment and/or regulation of healthcare AI. Data analysis, using a combined constructivist grounded theory and framework approach, was performed concurrently with data collection. Findings: Our analysis showed that participants had diverse views on three contentious issues regarding AI and deskilling. The first involved competing views about the proper extent of AI-enabled automation in healthcare work, and which clinical tasks should or should not be automated. We identified a cluster of characteristics of tasks considered more suitable for automation. The second involved expectations about the impact of AI on clinical skills, and whether AI-enabled automation would lead to worse or better quality of healthcare. The third tension implicitly contrasted two models of healthcare work: a human-centric model and a technology-centric model. These models assumed different values and priorities for healthcare work and its relationship to AI-enabled automation. Conclusion: Our study shows that a diverse group of professional stakeholders involved in healthcare AI development, acquisition, deployment and regulation are attentive to the potential impact of healthcare AI on clinical skills, but have different views about the nature and valence (positive or negative) of this impact. Detailed engagement with different types of professional stakeholders allowed us to identify relevant concepts and values that could guide decisions about AI algorithm development and deployment. [en_AU]
dc.language.iso: en [en_AU]
dc.publisher: Elsevier [en_AU]
dc.relation.ispartof: International Journal of Medical Informatics [en_AU]
dc.rights: Creative Commons Attribution-NonCommercial 4.0 [en_AU]
dc.subject: Artificial Intelligence [en_AU]
dc.subject: Automation [en_AU]
dc.subject: Clinical Skills [en_AU]
dc.subject: Ethics [en_AU]
dc.subject: Healthcare [en_AU]
dc.subject: Medicine [en_AU]
dc.title: Utopia versus dystopia: Professional perspectives on the impact of healthcare artificial intelligence on clinical roles and skills [en_AU]
dc.type: Article [en_AU]
dc.identifier.doi: 10.1016/j.ijmedinf.2022.104903
dc.type.pubtype: Publisher's version [en_AU]
dc.relation.nhmrc: 1181960
usyd.faculty: SeS faculties schools::Faculty of Medicine and Health::Sydney School of Public Health [en_AU]
usyd.citation.volume: 169 [en_AU]
usyd.citation.spage: 104903 [en_AU]
workflow.metadata.only: Yes [en_AU]

Associated file/s

There are no files associated with this item.

Associated collections

There are no previous versions of the item available.