Front | Back |
Reliability
|
The consistency or repeatability of a measurement; the extent to which a measurement is free from random error.
|
Validity
|
The degree to which an instrument measures what it is intended to measure.
|
Relevance
|
The degree to which a test pertains to its objectives.
|
The observed score (X) is a function of 2 components:
|
A true score (T) and an error component (E):
X = T + E, where the error may be positive or negative
|
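Below is a minimal Python sketch of this model; the true score of 50, the error standard deviation of 2, and the 1,000 trials are hypothetical values chosen for illustration. It shows that random errors with mean zero tend to cancel, so the average observed score approaches the true score.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 50.0                      # T: the subject's stable true score
errors = rng.normal(0, 2, size=1000)   # E: random error with mean zero
observed = true_score + errors         # X = T + E on each trial

# With mean-zero random error, trial-to-trial errors tend to cancel,
# so the mean observed score converges toward the true score.
print(f"Mean observed score over 1000 trials: {observed.mean():.2f}")
```
|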
Measurement error
|
The difference between the true value and the observed value
|
Systematic error
|
A predictable form of error; it is constant across trials and usually can be corrected (e.g., a scale that consistently reads 2 kg too high).
|
Random error
|
Due to chance; can affect a subject's score in an unpredictable way from trial to trial, e.g., fatigue, inattention, mechanical inaccuracy, simple mistakes.
Random influences that will, hopefully, cancel out across trials include fatigue, motivation, temperature, and noise.
Reliability focuses on the degree of random error that is present within a measurement system.
|
SOURCES OF MEASUREMENT ERROR
Attributed to 3 components of the measurement system:
|
The rater (the individual taking the measurement), the measuring instrument, and variability of the characteristic being measured.
|
Measurement error can be minimized by:
|
Training raters, calibrating instruments, and standardizing the testing protocol.
|
Regression Toward The Mean
Suggests
|
That extreme scores on a pretest are expected to move closer, or regress, toward the group average on a second test.
|
Reliability Coefficients
|
Variance is a measure of the variability among scores within a sample. The larger the variance, the greater the dispersion of scores; the smaller the variance, the more homogeneous the scores (illustrated below).
|
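As a quick illustration, the two hypothetical samples below have the same mean but very different spreads; the sample variance separates the homogeneous set from the dispersed one.

```python
import numpy as np

homogeneous = np.array([20, 21, 19, 20, 20])  # scores cluster tightly
dispersed = np.array([10, 25, 15, 30, 20])    # scores spread widely

# ddof=1 gives the sample variance (dividing by n - 1).
print(np.var(homogeneous, ddof=1))  # small variance -> homogeneous scores
print(np.var(dispersed, ddof=1))    # large variance -> greater dispersion
```
|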
Reliability Equation
|
Reliability = T / (T + E)
            = true score variance / (true score variance + error variance)
With zero error the ratio will produce a coefficient of 1.00.
< .50 = poor
.50 to .75 = moderate
> .75 = good
|
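A minimal sketch of the ratio and the benchmarks above; the variance components (9.0 and 3.0) are hypothetical values chosen for illustration.

```python
# Hypothetical variance components.
true_score_variance = 9.0
error_variance = 3.0

# Reliability = true score variance / (true score variance + error variance).
reliability = true_score_variance / (true_score_variance + error_variance)

# Benchmarks from the card above.
if reliability < 0.50:
    rating = "poor"
elif reliability <= 0.75:
    rating = "moderate"
else:
    rating = "good"

print(f"Reliability = {reliability:.2f} ({rating})")  # 0.75 (moderate)
```
|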
Types of Reliability
Test-retest reliability
|
· Stability of the measuring instrument
· Repeated administrations of a test by one rater
· One sample tested repeatedly
· Depends on the stability of the phenomenon being measured, e.g., pain
· The assumption is that the phenomenon being measured remains the same from test to test and that any change is due to random error
· Time intervals important – want to avoid genuine changes but also fatigue etc.
· Carryover and Testing Effects:
o Carryover – training, motivation
o Testing effects – pain ensues, stretching of soft tissues
· Reliability coefficients (see the sketch below):
o Intraclass correlation coefficient (ICC) model 3
o With nominal data, use percent agreement and the kappa statistic
|
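The sketch below computes ICC(3,1) (two-way mixed effects, consistency, single measures, per the Shrout & Fleiss conventions) from a small test-retest matrix using numpy; the scores are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical test-retest data: rows = subjects, columns = trials.
scores = np.array([
    [10.0, 11.0],
    [14.0, 15.0],
    [18.0, 17.0],
    [22.0, 23.0],
    [26.0, 25.0],
])
n, k = scores.shape
grand = scores.mean()

# Two-way ANOVA decomposition.
ss_subjects = k * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_trials = n * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_total = ((scores - grand) ** 2).sum()
ss_error = ss_total - ss_subjects - ss_trials

ms_subjects = ss_subjects / (n - 1)
ms_error = ss_error / ((n - 1) * (k - 1))

# ICC(3,1): two-way mixed effects, consistency, single measures.
icc_3_1 = (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)
print(f"ICC(3,1) = {icc_3_1:.3f}")
```
|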
Rater reliability
Intrarater Reliability
|
1. One individual over 2 or more trials
2. Usually over a short interval of time
|
Rater reliability
Interrater Reliability
|
1. 2 or more raters measuring the same group of subjects
2. Best done during a single trial
3. Important for generalizability (see the sketch below)
|
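For nominal interrater data, percent agreement and the kappa statistic can be computed as below; the ratings are hypothetical, and `cohen_kappa_score` is scikit-learn's implementation of Cohen's kappa.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical nominal ratings by two raters on the same 10 subjects
# (e.g., 0 = "impaired", 1 = "not impaired") during a single trial.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Percent agreement ignores chance agreement; kappa corrects for it.
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Percent agreement = {agreement:.2f}, kappa = {kappa:.3f}")
```
|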