Testing Methods of Testing Replication Success

dc.contributor.author: Kim, Sheri
dc.date.accessioned: 2021-03-29T07:49:43Z
dc.date.available: 2021-03-29T07:49:43Z
dc.date.issued: 2021
dc.description.abstract: The Replication Crisis refers to a period of doubt about the trustworthiness of published research findings. Replicability, in its simplest form, is taken to mean the consistency of a finding. The crisis of confidence in research findings assumes that a consistent finding confirms an effect and an inconsistent finding disputes an effect. However, the performance of the methods used to test replicability has been largely ignored. Different tests of replicability give different outcomes for the same data, which shows that these tests do not measure the same thing. This thesis explored how sampling variability and study variables (such as sample size, effect size index, and base rates) affect expected rates of replication success and failure. Four approaches to testing replicability were explored using simulation in R: comparing p-values, comparing effect sizes, using heterogeneity tests, and comparing Bayes Factors across original and replication studies. The effects of sampling variability and various study characteristics on the expected outcomes of these tests were complex. Contrary to the assumption that consistency in findings confirms an effect and inconsistency disputes an effect, the simulation studies in this thesis demonstrated that under certain combinations of study characteristics, replication success was unlikely and replication failure was likely even though the effect was truly consistent. When a replication failure occurs, it is difficult to interpret whether it reflects a true and meaningful inconsistency in effects, or factors unrelated to the consistency of the effect. Because replication outcomes are difficult to interpret, replication cannot conclusively verify or disconfirm prior findings. It is suggested that the importance of replication in psychology lies in its role in improving research practice, not in its ability to conclusively verify prior results.
dc.identifier.other: b71501381
dc.identifier.uri: http://hdl.handle.net/1885/228671
dc.language.iso: en_AU
dc.title: Testing Methods of Testing Replication Success
dc.type: Thesis (PhD)
local.contributor.authoremail: u4837961@anu.edu.au
local.contributor.supervisor: Smithson, Michael
local.contributor.supervisorcontact: u9700675@anu.edu.au
local.identifier.doi: 10.25911/CNF9-7T43
local.identifier.proquest: No
local.identifier.researcherID: ORCID 0000-0003-2955-2645
local.mintdoi: mint
local.thesisANUonly.author: 641c7d90-7dee-4391-8324-f47f959ed9d8
local.thesisANUonly.key: c645e353-b27b-5d87-5f90-03ef176bcd2e
local.thesisANUonly.title: 000000013334_TC_1
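The abstract describes simulating how study characteristics affect expected rates of replication success under a "compare p-values" criterion. The thesis itself used R; the following is only a minimal stdlib-Python sketch of that general idea, with all parameter values and function names chosen for illustration, not taken from the thesis:

```python
import math
import random

rng = random.Random(1)  # fixed seed so the simulation is reproducible

def study_pvalue(effect, n):
    """Two-sided p-value from a two-sample z-test (population SD assumed
    to be 1) for one simulated study with n participants per group."""
    treat = [rng.gauss(effect, 1.0) for _ in range(n)]
    control = [rng.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(treat) / n - sum(control) / n
    z = diff / math.sqrt(2.0 / n)
    # Standard normal CDF via the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

def success_rate(effect, n_orig, n_rep, reps=2000, alpha=0.05):
    """Share of simulated original/replication pairs in which BOTH studies
    reach p < alpha -- the 'consistent significance' success criterion."""
    hits = sum(
        study_pvalue(effect, n_orig) < alpha
        and study_pvalue(effect, n_rep) < alpha
        for _ in range(reps)
    )
    return hits / reps

# Even when the effect is truly consistent across studies, a small effect
# and modest samples make "replication failure" by this criterion common:
rate = success_rate(effect=0.3, n_orig=30, n_rep=30)
print(round(rate, 2))
```

This illustrates the abstract's point: the low success rate here is driven entirely by sampling variability and low power, not by any real inconsistency in the simulated effect.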

Downloads

Original bundle

Name: PhD_KIM_Corrected FULL.pdf
Size: 7.57 MB
Format: Adobe Portable Document Format
Description: Thesis Material