A validity study of a medical student objective structured clinical examination using the many-facet Rasch model
Field | Value | Language |
dc.contributor.author | Rothnie, Imogene Phylsa | |
dc.date.accessioned | 2021-05-18T03:53:25Z | |
dc.date.available | 2021-05-18T03:53:25Z | |
dc.date.issued | 2020 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/25065 | |
dc.description.abstract | Performance-based assessments of clinical ability, such as the Objective Structured Clinical Examination (OSCE), play a significant role in certifying student progress in medical education programs. In high-stakes assessment situations, decisions about student performance are used to infer levels of competence and have consequences for student progression. It is incumbent upon the developers of performance-based assessments to provide evidence that the interpretations of performance ratings, and the decisions based on them, are valid; that is, that they are accurate and fair representations of a student's clinical ability. To produce such assurances, and to improve assessment practice where necessary, medical education research and practice engage with theories of competence, validity and measurement, all of which are dynamic and evolving. However, the prevailing approach to validation studies in performance-based assessment practice and research still invokes outdated definitions of validity, evaluated using classical test theory as the measurement model. Whilst traditional approaches have provided many useful insights, recent developments in validity theory and modern measurement theory offer perspectives that overcome some of their recognised limitations. The research presented in this thesis provides an empirical example of a contemporary, argument-based approach to a validation study of examiner ratings from a medical student OSCE. The many-facet Rasch model (MFRM), from Rasch measurement theory, was used as a measurement-based approach to evaluating the factors that affect the validity of these ratings. Together, these themes constitute the conceptual framework for the study. The focus of the study was a summative medical student OSCE administered to 281 students at an Australian medical school in November 2015.
The study applied Kane's two-step approach to validation by specifying an interpretation/use argument for the assessment ratings and then evaluating the validity claims in that argument. A relevant MFRM was specified for the study; it provided the evaluative lens, the criteria for the measurement-based validity argument and the sources of evidence. Facets, specialised software for MFRM investigations, was used to run the statistical analysis. Key findings showed that the instruments and processes used to assess students in the OSCE could produce measures of proficiency related to a single construct of competence, as defined by the rating scales. The study also found, however, relative differences in the way examiners applied the rating scales, which threatened the validity of interpreting those ratings as accurate measures of students' clinical competence. Different types of examiner behaviour were detected in the evidence produced by the MFRM analysis of the rating data. The study further found that the standard-setting process used in the assessment unnecessarily isolated domains of competence, and that the accuracy of decisions about competence could be improved by applying techniques that quantify and control for the measurement error introduced by examiner behaviours. Implications of the findings for practice and theory development include recommendations to improve scale targeting, to use students' overall proficiency measures for decision making, and to provide feedback to examiners who rate performances inconsistently. This research adds to the conversation in medical education by demonstrating the critical importance of identifying and making explicit the conceptualisations of validity, and the assumptions, that underpin the interpretation of assessment scores.
The research also shows how applying the MFRM can create a single, theory-based frame of reference to view different kinds of evidence in validation studies of performance-based assessments. | en_AU |
dc.language.iso | en | en_AU |
dc.subject | validation studies | en_AU |
dc.subject | medical education | en_AU |
dc.subject | performance based assessment | en_AU |
dc.subject | many facet Rasch model (MFRM) | en_AU |
dc.title | A validity study of a medical student objective structured clinical examination using the many-facet Rasch model | en_AU |
dc.type | Thesis | |
dc.type.thesis | Doctor of Philosophy | en_AU |
dc.rights.other | The author retains copyright of this thesis. It may only be used for the purposes of research and study. It must not be used for any other purposes and may not be transmitted or shared with others without prior permission. | en_AU |
usyd.faculty | SeS faculties schools::Faculty of Medicine and Health::Northern Clinical School | en_AU |
usyd.degree | Doctor of Philosophy Ph.D. | en_AU |
usyd.awardinginst | The University of Sydney | en_AU |
usyd.advisor | Roberts, Christopher |
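The abstract does not reproduce the specific model specified in the thesis. As a generic illustration only, a many-facet Rasch rating-scale model for an OSCE with student, station and examiner facets is commonly written as:

```latex
% Generic MFRM rating-scale formulation, as commonly specified in the
% Rasch measurement literature; the facets and parameterisation used in
% this thesis may differ.
\[
  \ln\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right)
    = \theta_n - \delta_i - \alpha_j - \tau_k
\]
% P_{nijk}     : probability that student n receives category k on station i from examiner j
% P_{nij(k-1)} : probability of the adjacent lower category k-1
% \theta_n     : proficiency of student n
% \delta_i     : difficulty of station i
% \alpha_j     : severity of examiner j
% \tau_k       : threshold between rating categories k-1 and k
```

Under this kind of model, the examiner severity parameter \(\alpha_j\) is what allows ratings to be adjusted for differences in how examiners apply the rating scales, as described in the abstract.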