Reliability and Validity

The second lecture



Cards In This Set

Reliability
  • Is the extent to which a measurement is consistent and free from error.
  • Is an indicator of the ability of an instrument to produce similar scores on repeated testing occasions under similar conditions.
  • AKA accuracy, consistency, stability, precision, reproducibility, or dependability.
  • A test is reliable under particular circumstances, administered in a particular way, and with a specific group of people.
Validity
  • Is the test measuring what it is intended to measure?
  • It is the degree of truthfulness of a test score.
  • Validity is dependent upon 2 characteristics:
    1) Reliability
    2) Relevance
Relevance
The degree to which a test pertains to its objectives.
Observed score (X) is a function of 2 components:
A true score (T) and an error component (E): X = T ± E
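
To make the decomposition concrete, here is a minimal Python sketch (all values are invented for illustration) that builds observed scores from a fixed true score plus random error:

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 50.0                   # T: the hypothetical true value (assumed)
error = rng.normal(0, 2.0, size=5)  # E: random error, mean 0, SD 2 (assumed)
observed = true_score + error       # X = T ± E: error can raise or lower X

print(observed)         # five trials scattered around 50
print(observed.mean())  # the mean of repeated trials approaches T
```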
Measurement error
The difference between the true value and the observed value.
Systematic error
Is a predictable form of error; it is constant and usually can be corrected.
Random error
Is due to chance and can affect a subject’s score in an unpredictable way from trial to trial, e.g., fatigue, inattention, mechanical inaccuracy, simple mistakes. Random influences that will hopefully cancel out include fatigue, motivation, temperature, and noise. Reliability focuses on the degree of random error that is present within a measurement system.
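
The two error types from the cards above can be separated in a short simulation; the bias of 3 units and the noise SD of 5 are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0
n_trials = 1_000

# Systematic error: constant and predictable, e.g., an instrument that
# always reads 3 units high; it can be corrected once identified.
bias = 3.0

# Random error: due to chance, varying unpredictably from trial to trial.
noise = rng.normal(0, 5.0, size=n_trials)

observed = true_value + bias + noise

print(observed.mean())  # ~103.0: random error averages out, the bias remains
print(observed.std())   # ~5.0: the spread reflects only the random error
```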
Sources of measurement error
Attributed to 3 components of the measurement system:
  1) The individual taking the measurement (tester or rater)
  2) The measuring instrument
  3) Variability of the characteristics being measured
Measurement error can be minimized by:
  1) Detailed procedures
  2) Training and practice
  3) Careful planning
  4) Clear operational definitions
  5) Inspection of equipment
Regression toward the mean
Suggests that extreme scores on a pretest are expected to move closer, or regress, toward the group average on a second test.
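
A simulated illustration (all distribution parameters are invented): subjects selected for extreme pretest scores were partly lucky, and the luck does not repeat on the posttest.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

true_scores = rng.normal(70, 10, size=n)          # hypothetical true abilities
pretest = true_scores + rng.normal(0, 8, size=n)  # each test adds random error
posttest = true_scores + rng.normal(0, 8, size=n)

# Select subjects with extreme (top 5%) pretest scores.
extreme = pretest >= np.quantile(pretest, 0.95)

print(pretest[extreme].mean())   # well above the group mean of 70
print(posttest[extreme].mean())  # lower: the lucky error does not repeat,
                                 # so scores regress toward the group average
```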
Reliability Coefficients
Reliability coefficients are based on components of variance. Variance is a measure of the variability, or differences, among scores within a sample. The larger the variance, the greater the dispersion of scores; the smaller the variance, the more homogeneous the scores.
Reliability Equation
Reliability = T / (T + E) = true score variance / (true score variance + error variance)
With zero error, the ratio will produce a coefficient of 1.00.
Benchmarks: < .50 = poor; .50 to .75 = moderate; > .75 = good.
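
A quick numeric check of the ratio in Python; the true score variance of 9 and error variance of 3 are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

true_scores = rng.normal(0, 3.0, size=n)      # true score variance T = 9
errors = rng.normal(0, np.sqrt(3.0), size=n)  # error variance E = 3
observed = true_scores + errors

reliability = true_scores.var() / observed.var()  # estimates T / (T + E) = 9 / 12
print(round(reliability, 2))  # ~0.75: at the moderate/good boundary
```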
Types of Reliability: Test-retest reliability
  • Stability of the measuring instrument
  • Repeated administrations of a test by one rater
  • One sample tested repeatedly
  • Depends on the stability of the phenomenon being measured (e.g., pain)
  • The assumption is that the phenomenon being measured remains the same from test to test and that any change is due to random error
  • Time intervals are important – want to avoid genuine changes but also fatigue, etc.
  • Carryover and testing effects:
    o Carryover – training, motivation
    o Testing effects – pain ensues, stretching of soft tissues
  • Reliability coefficients:
    o Intraclass correlation coefficient (ICC), model 3 (see the sketch after this card)
    o With nominal data, would determine percent agreement and the kappa statistic
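
A minimal sketch of ICC(3,1) computed from two-way ANOVA mean squares (Shrout-Fleiss model 3, single measures; the score matrix is hypothetical):

```python
import numpy as np

def icc_3_1(X):
    """ICC(3,1): two-way mixed effects, consistency, single measures.
    X is an n_subjects x k_trials array of scores."""
    n, k = X.shape
    grand = X.mean()
    ss_rows = k * ((X.mean(axis=1) - grand) ** 2).sum()      # between subjects
    ss_cols = n * ((X.mean(axis=0) - grand) ** 2).sum()      # between trials
    ss_error = ((X - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    ms_rows = ss_rows / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# Hypothetical test-retest data: 5 subjects measured on 2 occasions.
scores = np.array([[10.0, 11.0],
                   [14.0, 15.0],
                   [18.0, 17.0],
                   [22.0, 23.0],
                   [30.0, 29.0]])
print(round(icc_3_1(scores), 2))  # ~0.99: trials rank subjects consistently
```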
Rater reliability: Intrarater reliability
  1) One individual over 2 or more trials
  2) Usually over a short interval of time
Rater reliability: Interrater reliability
  1) 2 or more raters measuring the same group of subjects (see the agreement sketch below)
  2) Best done during a single trial
  3) Important for generalizability
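
Percent agreement and the kappa statistic for two raters with nominal data can be computed directly; the ratings below are invented for illustration:

```python
import numpy as np

def percent_agreement(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return (a == b).mean()

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    p_obs = (a == b).mean()
    # Chance agreement from each rater's marginal proportions per category.
    p_exp = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters classifying 10 subjects as (N)ormal or (A)bnormal.
r1 = ["N", "N", "A", "A", "N", "A", "N", "N", "A", "N"]
r2 = ["N", "A", "A", "A", "N", "A", "N", "N", "N", "N"]
print(percent_agreement(r1, r2))       # 0.8
print(round(cohens_kappa(r1, r2), 2))  # ~0.58: lower than raw agreement
```

Kappa falls below raw agreement because it discounts the matches the two raters would produce by chance given their marginal frequencies.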