Testing Methods of Testing Replication Success
dc.contributor.author | Kim, Sheri | |
dc.date.accessioned | 2021-03-29T07:49:43Z | |
dc.date.available | 2021-03-29T07:49:43Z | |
dc.date.issued | 2021 | |
dc.description.abstract | The Replication Crisis refers to a period of doubt about the trustworthiness of published research findings. Replicability, in its simplest form, means the consistency of a finding. The crisis of confidence in research findings rests on the assumption that a consistent finding confirms an effect and an inconsistent finding disputes it. However, the performance of the methods used to test replicability has been largely ignored. Different tests of replicability give different outcomes for the same data, which shows that these tests do not measure the same thing. This thesis explored how sampling variability and study variables (such as sample size, effect size index, and base rates) affect expected rates of replication success and failure. Four approaches to testing replicability were examined using simulation in R: comparing p-values, comparing effect sizes, using heterogeneity tests, and comparing Bayes factors across original and replication studies. The relationship between sampling variability, various study characteristics, and the expected outcomes of these tests was complex. Contrary to the assumption that consistency in findings confirms an effect and inconsistency disputes it, the simulation studies in this thesis demonstrated that, under certain combinations of study characteristics, replication success was unlikely and replication failure was likely even when the effect was truly consistent. When a replication failure occurs, it is difficult to determine whether it reflects true and meaningful inconsistency in effects or factors unrelated to the consistency of the effect. Because replication outcomes are difficult to interpret, replication cannot conclusively verify or disconfirm prior findings. It is suggested that the importance of replication in psychology lies in its role in improving research practice, not in its ability to conclusively verify prior results. | |
dc.identifier.other | b71501381 | |
dc.identifier.uri | http://hdl.handle.net/1885/228671 | |
dc.language.iso | en_AU | |
dc.title | Testing Methods of Testing Replication Success | |
dc.type | Thesis (PhD) | |
local.contributor.authoremail | u4837961@anu.edu.au | |
local.contributor.supervisor | Smithson, Michael | |
local.contributor.supervisorcontact | u9700675@anu.edu.au | |
local.identifier.doi | 10.25911/CNF9-7T43 | |
local.identifier.proquest | No | |
local.identifier.researcherID | ORCID 0000-0003-2955-2645 | |
local.mintdoi | mint | |
local.thesisANUonly.author | 641c7d90-7dee-4391-8324-f47f959ed9d8 | |
local.thesisANUonly.key | c645e353-b27b-5d87-5f90-03ef176bcd2e | |
local.thesisANUonly.title | 000000013334_TC_1 |
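The abstract's central point — that a "failed" replication by the p-value criterion can be likely even when the underlying effect is truly consistent — can be illustrated with a small simulation. The thesis used R; the sketch below is a minimal Python analogue, not the author's code. The parameters (a true standardized effect of 0.4, 50 participants per group, a two-sided z-test approximation) are illustrative assumptions chosen to give a modestly powered study.

```python
import math
import random

def sim_pvalue(delta, n, rng):
    """Simulate one two-group study with true standardized mean
    difference `delta` and `n` per group; return a two-sided
    p-value from a z-test (normal approximation)."""
    se = math.sqrt(2.0 / n)                    # SE of the mean difference
    diff = rng.gauss(delta, se)                # sampled mean difference
    z = diff / se
    # two-sided p-value via the normal CDF (math.erf)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

def replication_failure_rate(delta=0.4, n=50, sims=5000, seed=1):
    """Among study pairs whose original result is significant
    (p < .05), the proportion whose replication is nonsignificant --
    a 'failed' replication despite a truly consistent effect."""
    rng = random.Random(seed)
    sig_orig = failed = 0
    for _ in range(sims):
        if sim_pvalue(delta, n, rng) < 0.05:       # original "succeeds"
            sig_orig += 1
            if sim_pvalue(delta, n, rng) >= 0.05:  # replication "fails"
                failed += 1
    return failed / sig_orig

print(round(replication_failure_rate(), 3))
```

With these assumed parameters the study has power of roughly 0.5, so nearly half of significant original findings are followed by a nonsignificant replication even though the effect never changed — the pattern of "likely replication failure under a truly consistent effect" that the thesis examines across several test criteria.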
Downloads
Original bundle
- Name: PhD_KIM_Corrected FULL.pdf
- Size: 7.57 MB
- Format: Adobe Portable Document Format
- Description: Thesis Material