Augmented Judgments: The affordances of artificial intelligence in improving accuracy of new product launch decisions.
Access status: Open Access
Type: Conference paper

Abstract
The uncertainty surrounding a new product in the market makes judging its success a complex endeavour. The extant literature does not accurately explain whether such uncertainty can be managed with the help of artificial intelligence (AI). In this paper, we aim to measure the extent to which new product success judgments improve when information provided by an AI model is present. We conducted three pilot experiments to measure the effects of different amounts of information given by the AI. In the first experiment, participants are presented with the AI's predicted probability of success. In the second experiment, participants are presented with the AI's probability of success coupled with an explanation of how the AI reached its prediction based on variables of the product. In the third experiment, we measured the improvement in participants' own judgments after (not while) being exposed to the information provided by the AI. We use new wine products as the context for the experiments. Ground truth for success is based on a large database of historical product launches. Participants were recruited via a panel and exposed to new product launch scenarios in an online service. We found that judgments are significantly improved (p = 0.011) when AI information is provided. We also found that participants improved significantly (p = 0.05) after receiving the AI stimulus. However, we did not find strong evidence that exposing participants to an explanation is better than exposing them to just a probability of success. With our pilot experiments, we also identified the required sample sizes and the modifications to the experimental design needed to increase statistical power. Our findings contribute empirical evidence on the affordances of AI in improving new product success judgments, and on the effect of applying novel AI explainability techniques with real-world users.
Further, our findings pave the way for further experimentation in human-AI interaction for augmenting new product judgments.
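The third experiment's before/after comparison can be illustrated with a minimal sketch. All data and variable names below are hypothetical (not from the paper's dataset): judgment accuracy is taken here as the mean absolute error between a participant's judged probability of success and the binary launch outcome, measured before and after exposure to the AI stimulus.

```python
# Illustrative sketch only; hypothetical data, not the authors' analysis.
# Accuracy = mean absolute error between a judged probability of success
# and the ground-truth outcome (1 = successful launch, 0 = failure).
from statistics import mean

ground_truth = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical launch outcomes
judgment_before = [0.4, 0.6, 0.5, 0.3, 0.5, 0.6, 0.7, 0.4]
judgment_after = [0.7, 0.3, 0.6, 0.6, 0.3, 0.8, 0.4, 0.3]  # after AI stimulus

def mean_abs_error(judgments, truth):
    """Mean absolute error of probability judgments against outcomes."""
    return mean(abs(j - t) for j, t in zip(judgments, truth))

err_before = mean_abs_error(judgment_before, ground_truth)
err_after = mean_abs_error(judgment_after, ground_truth)
print(f"MAE before AI: {err_before:.3f}, after AI: {err_after:.3f}")
```

In the actual study the observed improvement would then be tested for statistical significance (the paper reports p = 0.05 for this before/after effect); the error measure above stands in for whatever accuracy metric the authors used.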
Date: 2023-04-26
Source title: IPDMC 2022
Licence: Copyright All Rights Reserved
Faculty/School: The University of Sydney Business School