
Field | Value | Language
dc.contributor.author | Montero-Manso, Pablo |
dc.contributor.author | Vazquez-Hernandez, Carlos |
dc.date.accessioned | 2023-04-26T02:07:30Z |
dc.date.available | 2023-04-26T02:07:30Z |
dc.date.issued | 2023-04-26 |
dc.identifier.uri | https://hdl.handle.net/2123/31144 |
dc.description.abstract | The uncertainty surrounding a new product in the market makes judging its success a complex endeavour. The extant literature does not clearly establish whether such uncertainty can be managed with the help of artificial intelligence (AI). In this paper, we aim to measure the extent to which new product success judgments improve when information provided by an AI model is present. We conducted three pilot experiments to measure the effects of different amounts of information given by the AI. In the first experiment, participants are presented with the AI's predicted probability of success. In the second experiment, participants are presented with the AI's predicted probability of success coupled with an explanation of how the AI reached its prediction based on attributes of the product. In the third experiment, we measured the improvement in participants' own judgments after (not while) being exposed to the information provided by the AI. We use new wine products as the context for the experiments. Ground truth for success is based on a large database of historical product launches. Participants were recruited via a panel and exposed to new product launch scenarios in an online service. We found that judgments of success improved significantly (p-value: 0.011) when AI information was provided. We also found that participants improved significantly (p-value: 0.05) after receiving the AI stimulus. However, we did not find strong evidence that exposing participants to an explanation is better than exposing them to just a probability of success. With our pilot experiments, we also identified the sample sizes and modifications to the experimental design required to increase statistical power. Our findings contribute empirical evidence on the affordances of AI in improving new product success judgments and on the effect of applying novel AI explainability techniques with real-world users. Further, our findings pave the way for further experimentation in human-AI interaction for augmenting new product judgments. | en_AU
dc.language.iso | en | en_AU
dc.relation.ispartof | IPDMC 2022 | en_AU
dc.rights | Copyright All Rights Reserved | en_AU
dc.subject | Judgments | en_AU
dc.subject | Cognitive Augmentation | en_AU
dc.subject | Artificial Intelligence | en_AU
dc.subject | Behavioural Science | en_AU
dc.title | Augmented Judgments: The affordances of artificial intelligence in improving accuracy of new product launch decisions. | en_AU
dc.type | Conference paper | en_AU
dc.subject.asrc | ANZSRC FoR code::52 PSYCHOLOGY::5204 Cognitive and computational psychology::520402 Decision making | en_AU
dc.type.pubtype | Author accepted manuscript | en_AU
usyd.faculty | SeS faculties schools::The University of Sydney Business School | en_AU
workflow.metadata.only | No | en_AU
