ERP Reliability Analysis Toolbox

Matlab toolbox for obtaining dependability estimates for ERP measurements

The ERP Reliability Analysis (ERA) Toolbox is an open-source Matlab program that uses generalizability (G) theory to evaluate the reliability of ERP data. The purpose of the toolbox is to characterize the dependability (the G-theory analog of reliability) of ERP scores, to facilitate the calculation of dependability estimates on a study-by-study basis, and to increase the reporting of these estimates.
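For intuition, consider the simplest single-facet design in which persons are crossed with trials. A minimal sketch of the dependability coefficient for scores averaged over n trials, written in illustrative notation rather than the toolbox's own, is:

```latex
% Dependability of ERP scores averaged over n trials, given
% between-person variance \sigma^2_p and trial-level error
% variance \sigma^2_e (illustrative single-facet form):
\Phi(n) = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_e / n}
```

Averaging over more trials shrinks the error term, so dependability approaches 1 as trials accumulate; how quickly it does so depends entirely on the variance components of the data in hand.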

The ERA Toolbox provides information about the minimum number of trials needed for dependable ERP scores and describes the overall dependability of ERP measurements (Clayson & Miller, 2017a). All of this information is stratified by group and condition so that the user can directly compare dependability (e.g., one group may require more trials than another to achieve an acceptable level of dependability). The code used by the toolbox is based on the formulas discussed in Baldwin, Larson, and Clayson (2015), and the algorithms and their application in the ERA Toolbox are covered in detail in Clayson & Miller (2017a).
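As a rough illustration of how a minimum-trial estimate falls out of a formula like the one above, the sketch below assumes the variance components have already been estimated (the toolbox estimates them from trial-level data; the component values and the .70 cutoff here are invented for the example, and this is not the toolbox's code):

```matlab
% Illustrative sketch only -- not the ERA Toolbox's code.
% Assumes variance components are already in hand; the values
% below are made up for the example.
var_person = 1.2;   % between-person variance (sigma_p^2)
var_error  = 6.0;   % trial-level error variance (sigma_e^2)
cutoff     = 0.70;  % desired dependability level

n   = 1:50;                                        % candidate trial counts
phi = var_person ./ (var_person + var_error ./ n); % dependability at each n

min_trials = find(phi >= cutoff, 1);  % smallest n meeting the cutoff
fprintf('Minimum trials for dependability >= %.2f: %d\n', cutoff, min_trials);
```

With these made-up components, dependability first crosses .70 at 12 trials; different components would give a different answer, which is exactly why the toolbox reports estimates separately for each group and condition.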

Check out the wiki.

Why another toolbox?

Reliability is a property of scores (the data in hand), not a property of measures. This means that no ERP component, whether the P3, the error-related negativity (ERN), the late positive potential (LPP), or (insert your favorite component here), is reliable in some "universal" sense (Clayson & Miller, 2017b). Because reliability is context dependent, demonstrating the reliability of LPP scores in undergraduates at UCLA does not mean that LPP scores recorded from children in New York can be assumed to be reliable. Measurement reliability needs to be demonstrated on a population-by-population, study-by-study, component-by-component basis.

The purpose of the ERA Toolbox is to facilitate the calculation of dependability estimates that characterize observed ERP scores. ERP psychometric studies have been useful for suggesting trial cutoffs and characterizing the reliability of ERP components in the samples they examined. When designing a study, researchers can use that information to guide decisions about, for example, how many trials to present to participants from a given population. However, the fact that observed data meet a previously recommended trial cutoff does not mean that the resulting ERP scores are reliable (Clayson & Miller, 2017b). ERP score reliability cannot be inferred from trial counts alone, as the toy example below illustrates.
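To make that concrete, here is a toy comparison using the single-facet formula sketched earlier. Both hypothetical samples contribute 8 trials per person, but their (invented) variance components differ:

```matlab
% Hypothetical variance components; all values are invented.
n_trials = 8;
phi = @(vp, ve, n) vp ./ (vp + ve ./ n);  % dependability formula from above

phi_sample1 = phi(1.0,  2.0, n_trials)  % 1/(1 + 0.25) = 0.80
phi_sample2 = phi(1.0, 10.0, n_trials)  % 1/(1 + 1.25) ~ 0.44
```

The same 8-trial "cutoff" yields acceptable dependability in one sample and poor dependability in the other; only the data in hand can tell you which situation applies.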

My hope is that the ERA Toolbox will make it easier to demonstrate the reliability of ERP scores on a study-by-study basis.

Mismeasurement of ERPs leads to misunderstood phenomena and mistaken conclusions (Clayson & Miller, 2017b). Poor ERP score reliability stemming from mismeasurement compromises validity. Improving ERP measurement by ensuring score reliability can improve our trust in the inferences drawn from observed scores and the likelihood that findings will replicate.

SPR Poster

Here is a link to a .pdf of the poster I presented at SPR in Minneapolis.

A Little History

This project was started in December 2015 by Peter Clayson (@peclayson). What started as some in-house code turned into a flexible GUI that I hope can help others. After all, there's no need to reinvent the wheel!

A Little More

I've also created a web page for the toolbox on my personal site, if you're interested. I'll do my best to keep both up to date for now.