Dataset for 'Questionable and open research practices in criminology'
Field | Value | Language |
dc.contributor.author | Chin, Jason | |
dc.coverage.temporal | 2010-2020 | en_AU |
dc.date.accessioned | 2021-05-27T04:02:14Z | |
dc.date.available | 2021-05-27T04:02:14Z | |
dc.date.issued | 2021 | en_AU |
dc.identifier.uri | https://hdl.handle.net/2123/25107 | |
dc.description.abstract | Dataset - To provide initial evidence about how criminologists view QRPs and OSPs, and about whether they use them, we conducted a preregistered study of researchers who publish criminological science. The population of interest for our study was researchers who published in criminology and criminal justice journals during the past 10 years. Our study, the first survey research on QRPs and OSPs in criminology, can be used to shed light on whether there are particular strengths and weaknesses in criminology’s current practices. The findings can also be used as benchmarks to be revisited as the field changes. | en_AU
dc.language.iso | en | en_AU |
dc.publisher | The University of Sydney Law School | en_AU |
dc.relation.ispartof | https://doi.org/10.31235/osf.io/bwm7s | |
dc.rights | Creative Commons Attribution 4.0 | en_AU |
dc.title | Dataset for 'Questionable and open research practices in criminology' | en_AU |
dc.type | Dataset | en_AU |
dc.subject.asrc | 1602 Criminology | en_AU |
dc.description.method | To provide initial evidence about how criminologists view QRPs and OSPs, and about whether they use them, we conducted a preregistered study of researchers who publish criminological science. The population of interest for our study was researchers who published in criminology and criminal justice journals during the past 10 years. Our study, the first survey research on QRPs and OSPs in criminology, can be used to shed light on whether there are particular strengths and weaknesses in criminology's current practices. The findings can also be used as benchmarks to be revisited as the field changes. As we will describe, we asked participants about 10 QRPs (Table 3) that have been widely studied elsewhere, two of which border on research fraud (filling in missing data without reporting it and hiding known problems with the data), and about five OSPs recently studied in surveys of education (Makel et al., 2021) and communication (Bakker et al., 2020) researchers.

As stated in our preregistration (https://osf.io/fbhkq), our study's primary aim was descriptive. Specifically, we aimed to provide estimates of criminologists' self-reported use of the 10 QRPs and five OSPs examined ("use"), their perceptions of other criminologists' use of these practices ("prevalence"), and their levels of endorsement of these practices ("support"). We also specified two hypotheses in advance of data collection. Our first hypothesis was that use of and support for QRPs would be negatively correlated with use of and support for OSPs. This hypothesis flows from a deterrence theory of open practices: they arose, in part, to make QRP use transparent and therefore to discourage it (Simmons et al., 2012, pp. 1362-63). Our second hypothesis was that methodological training would be associated with use of and support for both QRPs and OSPs, independent of career stage. Training might make researchers more aware of the negative effects of QRPs (and the benefits of OSPs). Alternatively, QRP use could be enabled by greater methodological knowledge and skill. Given that the effect of training could plausibly go in either direction, we refrained from making a directional hypothesis.

Methodology

Sample

Our research design follows those used to study QRPs and OSPs in other fields (Table 1). Our materials and de-identified data are publicly available in the Open Science Framework (OSF) repository (https://osf.io/qvcdg/). Our study received human ethics approval from the University of Sydney (https://osf.io/n5svq/). We used a computerized, self-administered survey because research suggests that it is the best mode for obtaining honest answers (Tourangeau, Conrad, & Couper, 2013). Our population of interest was active researchers in criminology, defined as researchers who had published at least one article in a criminology or criminal justice journal in the previous 10 years. Defining the population of interest this way is similar to Fraser et al. (2018), Makel et al. (2021), and Bakker et al. (2020), who also surveyed researchers who had published in journals in their field(s) of interest. We selected criminology journals using the Web of Science's "Criminology and Penology" category (Web of Science, 2018) and two academic studies of criminology journals (DeJong & St. George, 2018; Sorenson, 2009). From these lists, we excluded 23 journals we determined were not sufficiently related to criminology (e.g., Journal of Forensic Psychiatry & Psychology) and 14 journals for other reasons (e.g., publication in a language other than English).
As a result, we sampled from 67 criminology journals. This process and the exclusion justifications were detailed in our preregistration (https://osf.io/fbhkq) and are further explained in our supplementary materials (https://osf.io/myhx9/). From the 67 journals, we extracted 16,157 unique author email addresses. For journals indexed by the Web of Science, we obtained emails through its database of article information. For others, we adapted code written by Makel et al. (https://osf.io/83mwk/) that scrapes journal websites for email addresses (https://osf.io/qvcdg/); a simplified sketch of this kind of extraction follows below. In some cases, we also obtained email addresses by hand-coding author information (https://osf.io/myhx9/).

Survey invitations and follow-up reminders were sent on August 10, 20, and 28, 2020. We closed data collection on September 12, 2020. Of the 16,157 obtained email addresses, 17 failed and 2,370 bounced back, resulting in a total of 13,770 successful email account contacts. However, some of those accounts may not have been actively monitored by their owners during the survey period (August 2020) because, for instance, some owners may have retired. In total, we received 1,612 responses. This response rate (12%) is small, but it is similar to those of other recent studies sampling authors or editors (Makel et al., 2021; Hopp & Hoover, 2017; Horbach & Halffman, 2020) and exceeds the rates often obtained by professional polling organizations (Keeter et al., 2017). A large body of research shows that "nonresponse bias is rarely notably related to [the] nonresponse rate" (Krosnick et al., 2015, p. 6). However, given our survey's topic (research behavior), nonresponse may have resulted in bias. Any such bias is likely to result in underestimates of QRP use and support, and in overestimates of OSP use and support, because, if anything, support for the credibility revolution would have increased individuals' likelihood of responding to our survey.

As in Makel et al. (2021), we asked respondents at the start of the survey: "Have you conducted quantitative research that involves null-hypothesis significance testing?" Unlike Makel et al. (2021), we excluded from our main report those who reported that they did not do quantitative research involving null-hypothesis significance testing (n = 479), because they were not asked all of the questions (they were asked only about HARKing, underreporting results, hiding data problems, hiding imputation, and all the OSPs). This exclusion is not listed in our preregistration because we did not anticipate the difficulties created by asking only a subset of the questions of the non-quantitative respondents. After collecting the data, but before looking at the results, we decided that limiting the analysis to respondents who received the same questionnaire would increase comparability. However, the data for all respondents, quantitative and non-quantitative, are provided online in the supplementary materials (https://osf.io/8me9w/) and, where possible, the analyses below have been reproduced on the whole dataset and on the non-quantitative sample. Another 50 respondents were excluded because they indicated they did not want their data used. Finally, there was item non-response, which further reduced the full analytic sample to between 579 and 711, depending on the analysis.
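The scraping step described above adapts real code from Makel et al. (https://osf.io/83mwk/); what follows is only a minimal illustrative sketch, not the study's actual script. It assumes hypothetical article-page URLs and uses the third-party requests library with a simple regular expression to collect unique author email addresses:

```python
import re
import requests

# Hypothetical article landing pages for one journal; the real study
# built these lists journal by journal (see https://osf.io/qvcdg/).
ARTICLE_URLS = [
    "https://example-journal.org/article/1",
    "https://example-journal.org/article/2",
]

# Loose pattern for email addresses embedded in page HTML.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(urls):
    """Fetch each page and collect the unique email addresses it contains."""
    found = set()
    for url in urls:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()
        found.update(addr.lower() for addr in EMAIL_RE.findall(resp.text))
    return sorted(found)

if __name__ == "__main__":
    print(f"{len(extract_emails(ARTICLE_URLS))} unique addresses found")
```

For the bookkeeping reported above, the counts are internally consistent: 16,157 - 17 - 2,370 = 13,770 successful contacts, and 1,612 / 13,770 ≈ 11.7%, which rounds to the reported 12% response rate.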
To provide a better idea of the composition of our sample, Table 2 breaks down respondents' career level and the number of statistics and methods courses they reported having taken. As can be seen, our sample was predominantly mid-career and senior researchers with a high degree of methods or statistical training. The modal respondents in our sample were senior researchers who had taken ten or more methods courses, an important subsection of quantitative criminologists who are likely to publish regularly and to be influential in the discipline. (Details on the non-quantitative sample are included in the supplementary materials.)

Measures

We asked participants about 10 QRPs (Table 3) that were also included in prior surveys in other fields (Table 1). These practices likely vary in the degree to which we would expect the community to proscribe them. For instance, it is easier to construct innocent explanations for rounding down p-values (e.g., .054 to .05) than for filling in missing data. We also asked about five OSPs (Table 4) that Makel and colleagues (2021) included in their survey. The order of the presented practices was randomized between participants. Tables 3 and 4 provide the exact question wording for the specific QRPs and OSPs, along with the abbreviations (variable names) that we use in the figures.

For each practice, as in prior research, we measured self-reported use, perceived prevalence, and support. Use was measured with two questions. The first asked: "Have you ever engaged in this practice?" (1 = yes, 0 = no). The second was a contingency question asked of those who answered affirmatively to the first question: "What PERCENT of studies you have conducted—that is, how many out of 100—would you say that you used this practice?" In the descriptive analysis, we analyzed responses to these two behavioral questions separately, but for the correlational analysis we combined them by coding respondents who reported not doing the practice as "0%" on the percent-of-studies variable (a sketch of this coding step follows below). Perceived prevalence was measured with the question: "What percent of criminologists—that is how many out of 100—would you say have engaged in this practice on at least one occasion?" Finally, support for the practice was measured by asking: "How frequently SHOULD criminologists use this practice?" There were four response options: almost always (coded 4), often (3), rarely (2), and never (1).

To maintain respondents' anonymity, we asked only two background questions. The first assessed career stage: "Which of the following best describes your current position?" There were four response options: senior research academic/researcher (coded 4), mid-career academic/researcher (3), earlier career academic/researcher (including post-doctoral fellows) (2), and graduate student (1). The second question measured methodological training: "How many university courses (undergraduate or graduate) on methodology or statistics have you taken?" There were eleven numerical response options, ranging from "0" to "10 or more." | en_AU
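The use-variable coding and the preregistered H1 correlation described in the method field above can be made concrete with a short sketch. This is an illustrative reconstruction, not the study's analysis code (which, with the de-identified data, is at https://osf.io/qvcdg/); the column names are invented, and Spearman is just one plausible estimator for the hypothesized negative correlation:

```python
import pandas as pd

# Toy stand-in for the de-identified survey data; column names are hypothetical.
df = pd.DataFrame({
    "has_used_qrp1": [1, 0, 1, 0],         # "Have you ever engaged in this practice?" (1 = yes)
    "pct_qrp1": [30.0, None, 10.0, None],  # contingency question, asked only of "yes" respondents
    "has_used_osp1": [0, 1, 1, 1],
    "pct_osp1": [None, 50.0, 80.0, 20.0],
})

def combined_use(frame, has_col, pct_col):
    """For the correlational analysis: code respondents who reported not
    doing the practice as 0% on the percent-of-studies variable."""
    return frame[pct_col].where(frame[has_col] == 1, other=0.0)

df["use_qrp1"] = combined_use(df, "has_used_qrp1", "pct_qrp1")
df["use_osp1"] = combined_use(df, "has_used_osp1", "pct_osp1")

# H1 predicts a negative correlation between QRP use and OSP use.
print(df["use_qrp1"].corr(df["use_osp1"], method="spearman"))
```

Any NaNs that remain after this coding (users who skipped the percent question) correspond to item non-response and are dropped pairwise by pandas' corr.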
usyd.faculty | SeS faculties schools::The University of Sydney Law School | en_AU |
workflow.metadata.only | No | en_AU |