Standardizing measurement in psychological studies: On why one second has different value in a sprint versus a marathon

Date

2020-04-27

Authors

Goodhew, Stephanie Catherine
Dawel, Amy
Edwards, Mark

Publisher

Springer Verlag

Abstract

Here we highlight the importance of considering relative performance and the standardization of measurement in psychological research. In particular, we highlight three key analytic issues. The first is that the popular method of calculating difference scores can be misleading because current approaches rely on absolute differences, neglecting what proportion of baseline performance this change reflects. We propose a simple solution of dividing absolute differences by mean levels of performance to calculate a relative measure, much like a Weber fraction from psychophysics. The second issue we raise is that there is an increasing need to compare the variability of effects across studies. The standard deviation (SD) represents the average amount by which scores differ from their mean, but it is sensitive to units, and to where a distribution lies along a measure even when the units are common. We propose two simple solutions to calculate a truly standardized SD (SSD): one for when the range of possible scores is known (e.g., scales, accuracy), and one for when it is unknown (e.g., reaction time). The third and final issue we address is the importance of considering relative performance when applying exclusion criteria to screen overly slow reaction-time scores from distributions.
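The relative difference measure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the relative measure is the absolute difference between condition means divided by the overall mean level of performance, as the abstract describes; the example data are hypothetical reaction times in milliseconds.

```python
import statistics

def relative_difference(condition_a, condition_b):
    """Express a difference score as a proportion of mean performance,
    akin to a Weber fraction (assumed form: |mean A - mean B| / grand mean)."""
    diff = abs(statistics.mean(condition_a) - statistics.mean(condition_b))
    grand_mean = statistics.mean(list(condition_a) + list(condition_b))
    return diff / grand_mean

# Hypothetical data: the same 20 ms absolute difference represents a
# larger relative effect against a fast baseline than against a slow one.
fast_a, fast_b = [300, 310, 320], [320, 330, 340]  # sprint-like baseline
slow_a, slow_b = [900, 910, 920], [920, 930, 940]  # marathon-like baseline
print(relative_difference(fast_a, fast_b))  # 0.0625
print(relative_difference(slow_a, slow_b))  # ~0.0217
```

The point of the sketch is the comparison in the final two lines: an identical absolute difference yields different relative effects depending on baseline performance, which is the "one second in a sprint versus a marathon" intuition from the title.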

Keywords

correlational, difference scores, experimental, measurement, methodology, reaction time, relative performance, validity

Source

Behavior Research Methods

Type

Journal article

Access Statement

Open Access

DOI

10.3758/s13428-020-01383-7
