Front | Back |
Validity
|
The degree to which an instrument measures what it is supposed to measure
-content
-criterion-related
-construct |
Content validity
|
Established when researchers determine that the instrument measures the concept it is intended to measure.
-give the instrument to a panel of experts, who judge the instrument by rating each item being measured |
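The expert ratings from this step can be summarized numerically. One common summary, not named on this card and used here only as an illustration, is the item-level content validity index (the share of experts who rate an item as relevant). A minimal Python sketch with made-up ratings:

```python
# Hypothetical expert relevance ratings (1 = not relevant ... 4 = highly relevant)
# for three items; the items, raters, and 4-point scale are assumptions.
ratings = {
    "item_1": [4, 3, 4, 4, 3],
    "item_2": [4, 4, 4, 4, 4],
    "item_3": [2, 3, 2, 4, 2],
}

for item, scores in ratings.items():
    # Item-level CVI: proportion of experts rating the item 3 or 4 ("relevant").
    relevant = sum(1 for s in scores if s >= 3)
    print(f"{item}: I-CVI = {relevant / len(scores):.2f}")
```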
Content validity: Face validity
|
-established by letting other colleagues or subjects examine the instrument and indicate whether it appears to measure the concept
-less desirable because it relies on an intuitive approach |
Criterion-related Validity
|
The degree to which the observed score and the true score (the criterion) are related.
-tested for: concurrent validity and predictive validity |
Criterion-related validity: Concurrent validity
|
-researchers simultaneously administer two different instruments measuring the same concept.
-use correlations to compare scores from the two instruments (a high correlation indicates agreement; a low correlation indicates the instruments are measuring different concepts) |
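The correlation check described on this card can be run directly; a minimal sketch with invented scores (both instruments and all values are assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores from two instruments measuring the same concept,
# administered to the same subjects at the same time.
instrument_a = np.array([12, 18, 25, 30, 22, 15, 28, 20])
instrument_b = np.array([14, 20, 27, 33, 21, 16, 30, 19])

# A high correlation suggests agreement; a low correlation suggests
# the instruments are measuring different concepts.
r, p = pearsonr(instrument_a, instrument_b)
print(f"r = {r:.2f}, p = {p:.3f}")
```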
Predictive validity
|
Refers to whether a current score is correlated with a score obtained in the future
-e.g., give an instrument today and again in two weeks; some correlation is expected, though weaker than with concurrent measurement because of the time lapse |
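The same correlation approach works across time; a sketch with assumed data, relating today's scores to scores collected two weeks later:

```python
import numpy as np

# Hypothetical scores for the same subjects, measured today and two weeks later.
score_today = np.array([55, 62, 70, 48, 66, 59, 73, 51])
score_two_weeks = np.array([52, 60, 65, 50, 61, 57, 68, 49])

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry
# is the correlation between the current and future scores.
r = np.corrcoef(score_today, score_two_weeks)[0, 1]
print(f"correlation across the two-week gap: r = {r:.2f}")
```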
Construct Validity
|
-focuses on theory
-theoretical concepts are tested empirically
-to what extent does the instrument measure the theoretical concept or trait? |
Construct validity: Hypothesis testing
|
Researchers use theories to make predictions about the concept being measured
-data are gathered, and a determination is made as to whether the findings support the hypothesis |
Convergent Testing
|
-researchers use two or more instruments to measure the same theoretical component
-focuses on how the observed score compares to the theory
-similar to concurrent testing |
Divergent Testing (discriminant)
|
Comparing scores from two or more instruments that measure different theoretical constructs
-ex: depression and happiness |
Multitrait-multimethod testing
|
-when convergent and divergent testing are combined
-helpful in reducing systematic error
-involves three instruments |
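Convergent and divergent evidence can be read off a single correlation matrix, which is the core of the multitrait-multimethod idea; a sketch with invented scores for three instruments (two measuring depression, one measuring happiness; all values are assumptions):

```python
import numpy as np

# Hypothetical scores for the same subjects on three instruments.
depression_a = np.array([30, 42, 55, 25, 48, 37, 60, 33])  # depression, method A
depression_b = np.array([28, 45, 52, 27, 50, 35, 58, 31])  # depression, method B
happiness = np.array([62, 70, 55, 64, 48, 72, 58, 66])     # a different construct

labels = ["dep_A", "dep_B", "happy"]
corr = np.corrcoef(np.vstack([depression_a, depression_b, happiness]))

# Convergent evidence: dep_A vs dep_B should correlate highly.
# Divergent (discriminant) evidence: correlations involving happiness
# should be noticeably weaker than the convergent correlation.
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]} vs {labels[j]}: r = {corr[i, j]:.2f}")
```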
Known group approach
|
-administer instruments to individuals known to be either high or low on the characteristic being measured
-expect there to be a significant difference between them |
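The expected significant difference is typically checked with a between-groups test; a minimal sketch with assumed scores from a group known to be high on the trait and a group known to be low:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical instrument scores from two groups known to differ on the trait.
high_group = np.array([78, 85, 72, 90, 81, 76, 88])
low_group = np.array([45, 52, 38, 60, 49, 41, 55])

# Independent-samples t-test (Welch's); a clear difference in the expected
# direction supports construct validity via the known-group approach.
t, p = ttest_ind(high_group, low_group, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```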
Factor analysis
|
-used to identify questions that group around different factors
-a statistical approach
-items that group together have high correlations
-questions that do not fit into a group are altered or eliminated |
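A minimal factor-analysis sketch; the simulated item responses, the two-factor choice, and the use of scikit-learn's FactorAnalysis are all assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 200 subjects answering 6 questions: items 1-3 are driven by one
# latent trait, items 4-6 by another, plus noise.
trait_1 = rng.normal(size=(200, 1))
trait_2 = rng.normal(size=(200, 1))
items = np.hstack([trait_1 * [1.0, 0.9, 0.8], trait_2 * [1.0, 0.9, 0.8]])
items += rng.normal(scale=0.5, size=(200, 6))

# Fit a two-factor model; items that group together load strongly on the same
# factor, while items with weak loadings everywhere are candidates to be
# altered or eliminated.
fa = FactorAnalysis(n_components=2).fit(items)
print(np.round(fa.components_.T, 2))  # rows = items, columns = factor loadings
```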
Reliability
|
-when researchers obtain consistent measurements over time
-the instrument can be reliable without being valid (e.g., a scale measuring your weight but not your anxiety)
-expressed in correlation coefficients (0-1): 0 = no correlation, 1 = perfect correlation
-0.70 or greater is considered acceptable |
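One widely used reliability coefficient for the 0.70 rule of thumb is Cronbach's alpha (internal consistency); it is not named on this card, so treat that choice, and the made-up data, as assumptions. A minimal sketch:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) matrix of scores."""
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of total scores
    k = item_scores.shape[1]
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 5 subjects to a 4-item scale.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])

print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")  # 0.70+ is acceptable
```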