Exam 2

Question/ Term Definition
What makes up a good test or statistics measure? 1. Norm-referenced test
2. Validity
3. Reliability
Norm-referenced test Comparing an individual's score to the scores of others
Validity 1. Is the test measuring the right thing, i.e., what it is supposed to measure?
2. Is it appropriate for the intended target population?
Reliability Is the measure consistent?
FACE VALIDITY 1. Judging the test just by looking at the measure
2. Content-related and Logical
Content-related Validity 1. Does the test adequately represent all of the important areas of measure?
2. Cognitive and affective domains
Steps to create content-related validity 1. Take the test yourself; determine any areas of confusion
2. Ask questions: Are important contents missing? Are unimportant areas included? Should certain areas receive more emphasis than others? Are all areas educationally important?
Logical Validity 1.The extent to which a test measures the most important components of skills necessary to perform a motor task
2. Psychomotor domain
Steps to create Logical Validity 1. Define a good performance
2. Determine the components of a good performance
3. If the test measures each component, it is a good test
4. Domain-referenced
CRITERION-RELATED VALIDITY 1. Comparing the performance of one test with the direct measures of the characteristics or behaviors in question
2. Concurrent, Predictive & Decision Validity
Concurrent Validity 1. the degree to which a test correlates with a criterion test
Concurrent Validity Steps 1. The criterion must be proven valid, a "gold standard" of measurement
2. Administering the criterion directly may be impractical
3. Have an appropriate sample complete both tests, then correlate the scores
4. The standard for adopting a test is a correlation coefficient of .80 or higher
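The correlation step can be sketched in Python. All scores below are invented for illustration, and `pearson_r` is a hand-rolled helper, not a library call:

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical data: new field test vs. criterion ("gold standard") scores
new_test  = [12, 15, 11, 18, 14, 16, 13, 17]
criterion = [40, 48, 38, 55, 45, 50, 42, 53]

r = pearson_r(new_test, criterion)
print(round(r, 3))
# Adopt the new test only if r is .80 or higher
```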
Predictive Validity 1. Degree to which the criterion can be predicted using a score from the predictor test
2. Predictive formula is usually included
Predictive Validity Steps 1. Example: knowing the time it took a student to run 1 mile, an equation may be used to predict VO2 max
2. represented by how well the predicted score correlates to the actual score of the criterion test
3. Formula must be appropriate for the sample tested
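A minimal sketch of the predictive-validity idea, assuming invented mile-time and VO2 max data and a simple least-squares prediction equation (not any published formula):

```python
from statistics import mean, stdev

# Hypothetical data: mile-run times (minutes) and measured VO2 max (ml/kg/min)
mile_time = [6.0, 7.5, 8.0, 9.5, 10.0, 11.5, 12.0]
vo2_max   = [55.0, 50.0, 48.0, 43.0, 41.0, 37.0, 35.0]

# Fit a least-squares prediction equation: VO2 max = a + b * time
mt, mv = mean(mile_time), mean(vo2_max)
b = sum((t - mt) * (v - mv) for t, v in zip(mile_time, vo2_max)) / \
    sum((t - mt) ** 2 for t in mile_time)
a = mv - b * mt

predicted = [a + b * t for t in mile_time]

def pearson_r(x, y):
    """Pearson correlation between paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Predictive validity: how well predicted scores track the actual criterion
print(round(a, 2), round(b, 2), round(pearson_r(predicted, vo2_max), 3))
```

Note that `b` is negative: slower mile times predict lower VO2 max.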
Decision Validity 1. Is the test able to correctly classify masters/ non-masters of criterion behavior
2. Treatment/ no treatment approach
3. Judgmental Empirical Method
Treatment/ No treatment Approach 1. Give test to both trained and untrained individuals in a certain behavior
2. The point at which the two groups' scores overlap is the cutoff (criterion) score
Judgmental Empirical Approach 1. Instructor determines performance expectations based on ability, previous experiences
2. Same approach as instructed/ uninstructed
3. With this info, determine a validity coefficient using a contingency table
CONSTRUCT-RELATED VALIDITY 1. Degree to which a test measures an attribute that cannot be directly measured
2. Discriminant Validity, Convergent & Divergent Validity
Discriminant Validity 1. Is the test able to distinguish between different groups?
2. Independent-samples t-test
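The t-test can be sketched with a pooled-variance t statistic; the two groups' skill-test scores below are hypothetical:

```python
from statistics import mean, variance
from math import sqrt

def independent_t(group1, group2):
    """Pooled-variance independent-samples t statistic."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) \
                 / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical skill-test scores: varsity players vs. beginners
varsity   = [18, 20, 17, 19, 21, 18]
beginners = [10, 12, 9, 11, 13, 10]

t = independent_t(varsity, beginners)
print(round(t, 2))  # a large |t| suggests the test distinguishes the groups
```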
Convergent test 1. Use multiple indicators of an attribute
2. tests should correlate well
Divergent test 1. use indicator of an opposite or different behaviors
2. Test should have no relationship
3 types of reliability 1. Split-half
2. Test-retest
3. Internal consistency
Split-half reliability 1. One occasion: split the test into parts, then calculate the correlation between the parts
2. First half/second half & odd/even splits
Spearman-Brown Prophecy 1. Estimates the reliability of the entire test from the reliability of a part
2. Rxx* = (n * Rxx) / (1 + (n - 1) * Rxx), where n is the factor by which the test is lengthened (n = 2 for a split-half correlation)
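The prophecy formula is easy to verify numerically; the .70 split-half correlation below is just an example value:

```python
def spearman_brown(r_part, n=2):
    """Spearman-Brown prophecy: estimated reliability of a test n times
    as long as the part whose reliability is r_part.
    For a split-half correlation, n = 2 gives full-test reliability."""
    return (n * r_part) / (1 + (n - 1) * r_part)

# Example: a split-half correlation of .70 prophesies full-test reliability
print(round(spearman_brown(0.70), 3))  # → 0.824
```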
Internal Consistency reliability 1. intraclass correlations
2. total correlation among all test factors
3. Alpha must be above .70 to be considered acceptable
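Coefficient alpha can be computed by hand from the item variances and the variance of the total scores; the item scores below are invented:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from per-item score columns
    (item_scores[i][j] = person j's score on item i)."""
    k = len(item_scores)
    item_var = sum(variance(item) for item in item_scores)
    totals = [sum(person) for person in zip(*item_scores)]  # per-person totals
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical data: 3 test items scored for 5 people
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # acceptable if above .70
```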
Factors influencing reliability 1. Type of test: difficult task = less reliability
2. Length of the test: longer = more reliable
3. Ability levels: assess broad ranges = unreliable
Objectivity/ Bias 1. Intra-rater reliability
2. Inter-rater reliability
Intra-rater reliability 1. single judge assesses the same test or trial on multiple occasions
Inter-rater reliability 1. Correlation of the scores given for the same test by multiple judges
Contingency Table 1. Used for criterion-referenced tests instead of the correlation coefficient used for norm-referenced tests
2. If classifications were randomly assigned, roughly 50% agreement would occur by chance
Kappa Coefficient 1. accounts for the influence of chance
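A sketch of Cohen's kappa for a 2x2 master/non-master contingency table, with hypothetical cell counts:

```python
def kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 contingency table:
       a = both classify as master, d = both classify as non-master,
       b and c = the two kinds of disagreement."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Agreement expected by chance, from the marginal proportions
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical counts: 40 master/master, 10 and 5 disagreements, 45 non/non
print(round(kappa(40, 10, 5, 45), 2))  # → 0.7
```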
Steps for constructing a psychomotor test 1. write objectives
2. review literature
3. Design test – determine how to assess skill components, scale, criterion or norm reference
4. Standardize directions – anyone can replicate
5. Pilot test
6. Validity/reliability
7. Develop norms (norm-referenced tests only)
Rating scale 1. examine process that is hard to objectively measure
2. Checklist: indicates presence or absence of skill
3. assign rating level of performance
4. Helps subjects understand the skill components and see descriptions of each performance level
Rating Scale Steps 1. Determine purpose of the scale
2. Identify the important components
3. weigh the components
4. identify # of categories/ ratings
5. Assign numerical values
6. Develop rating sheet
7. prepare raters
Exercise recommendations 1. 30-60 mins moderate exercise 5 days per week
2. 20-60 mins vigorous exercise 3 days per week
3. One continuous session or multiple short sessions of at least 10 mins
4. 2-3 days/week neuromotor exercise
Fitness Assessment 1. must be valid, reliable & practical
2. Physiological measures or self-report