Week 2: Guidelines for measurement and evaluation process; Validity, reliability and objectivity Flashcards

1
Q

Describe a needs assessment (14 points)

A
  • Initial assessment to determine needs of the program - What do we want to improve and why is that important?
  • Exercise science:
    • Initial fitness levels
    • Skill levels
    • Prevention of injuries
    • Knowledge
  • Sports management:
    • Attendance
    • Decreased costs associated with injury
    • Merchandise sales
    • Public perception of your club
    • Participation levels
  • Serve as an initial pre-test to obtain baseline data
  • Enables progress to be monitored
2
Q

Describe program development process (7 points)

A
  • An active plan of program development should be in place in any activity based setting.
  • Ongoing process is:
    1. Establish program philosophy
    2. Develop program goals
    3. Plan program activities
    4. Deliver the program
    5. Measure and evaluate the program
3
Q

Describe establishing a program philosophy (8 points)

A
  • Perhaps the most important task: ask yourself why.
  • Several philosophical aims would likely be included in any quality program:
    • Provide optimal participation for clientele
    • Educate the participants about benefits of regular activity
    • Systematic evaluation and improvement of the program
  • Philosophy of a program should reflect characteristics and needs of the participants.
  • Need to consider how much emphasis to place on each of the learning domains.
  • A logical end to this stage = preparation of a mission statement.
4
Q

Describe developing program goals (4 points)

A
  • Program goals evolve from the philosophical aims
  • Primary link between philosophy and activities of program
  • Program goals should be ‘SMART’
  • Start with one main goal that can then be broken down into smaller goals
5
Q

Describe planning program activities (9 points)

A
  • When choosing activities:
    • Sequence activities in logical order
    • Determine desirable outcomes for each activity
  • Must consider:
    • Participant needs, characteristics and interests
    • Availability of facilities and equipment
    • Length of program
    • Instructor expertise
    • Climate
6
Q

Describe delivering the program (3 points)

A
  • Constantly monitor the program throughout the delivery phase.
  • Make modifications as necessary based on assessments and professional observations.
  • For example, coaches constantly assess their players in practice and game situations; during the season, the coach and athletes make constant adjustments.
7
Q

Describe Evaluating & Improving the Program (8 points)

A
  • Complete the evaluation after the program has been delivered; this needs to be done consistently and regularly
  • When is the best time to evaluate in:
    • School/ academic settings = end of year/semester
    • Management = end of financial year
    • Adult fitness settings = six weeks
    • Athletic training programs = at the end of a training block (mesocycle)
  • Evaluating the effectiveness of a program is an ongoing process; the program continually needs fine-tuning to improve and remain current
  • With the wide range of measurement tools available, ensure that the assessment procedures you use are appropriate.
8
Q

Describe the Guidelines for the Measurement & Evaluation Process (9 points)

A
  • Relevant – to program outcome statements
  • Learning experience for client
  • Enjoyable – motivational
  • Discriminative – scores on a continuum, not a satisfactory/unsatisfactory (S/U) split
  • Economically feasible - cost efficient
  • Independent – items in battery measure different variables
  • Gender appropriate – account for differences in a nondiscriminatory way
  • Performance not reliant on another’s performance
  • Safety – physical e.g. exhaustion, psychological
9
Q

Describe quantitative and qualitative tests (4 points)

A
  • Quantitative = number
  • Examples - survey, multiple choice tests, performance statistics
  • Qualitative = words
  • Examples - interview, focus group, open-ended questionnaire
10
Q

Describe Criterion-Referenced Measurement and Norm-Referenced Measurement (5 points)

A

Criterion-Referenced Measurement
- Compare against a standard or criterion.
- Not compared to the performance of other individuals
- Example: assessment rubric for an assignment

Norm-Referenced Measurement
- Compares individuals’ performances.
- Examples of norms are percentiles, z-scores and T-scores.
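The two approaches can be sketched in code. This is a hypothetical Python illustration (the norm-group data and the cutoff are invented): a criterion-referenced result is checked against a fixed standard, while norm-referenced z-scores and T-scores locate a raw score within a reference group.

```python
# Hypothetical sketch of criterion- vs norm-referenced scoring.
# All data (norm group, cutoff) are invented for illustration.
from statistics import mean, stdev

def meets_criterion(raw, cutoff):
    """Criterion-referenced: pass/fail against a fixed standard."""
    return raw >= cutoff

def z_score(raw, norms):
    """Norm-referenced: standard deviations above/below the group mean."""
    return (raw - mean(norms)) / stdev(norms)

def t_score(raw, norms):
    """Norm-referenced: z-score rescaled to mean 50, SD 10."""
    return 50 + 10 * z_score(raw, norms)

# Invented beep-test levels for a reference squad
norms = [8.5, 9.0, 10.5, 11.0, 12.0, 9.5, 10.0, 11.5]

print(meets_criterion(11.0, cutoff=10.0))   # True  (standard reached)
print(round(z_score(11.0, norms), 2))       # 0.61  (above the group mean)
print(round(t_score(11.0, norms), 1))       # 56.1
```

The same raw score of 11.0 gives a pass under the criterion-referenced view and a position within the group (about 0.6 standard deviations above the mean) under the norm-referenced view.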

11
Q

What are the advantages and disadvantages of criterion referenced testing? (5 points)

A

Advantages:
- Performance linked to specific outcomes
- If standards not met – evaluations made to improve
- Competition is based on reaching standard, not bettering someone else’s performance

Disadvantages:
- Scores always involve some subjective judgement
- Motivation based on reaching/not reaching cut-offs

12
Q

What are the advantages and disadvantages of norm referenced testing? (5 points)

A

Advantages:
- Administration procedures standardised
- Quality of test items generally high
- Statistical rigor can generally be assumed

Disadvantages:
- Norms reflect the time at which they were established – they need to be updated periodically
- Individuals may not see link between what they have learnt and assessment

13
Q

List the three criteria that must be met when developing and/or finding a measure (3 points)

A

Ensure that it is:

  1. VALID – measures what you intend.
  2. RELIABLE – gives consistent/trustworthy results.
  3. OBJECTIVE – consistent results regardless of who administers the test.
14
Q

List the types of validity (8 points)

A
  1. Content validity
  2. Construct validity
  3. Criterion validity
    1. Concurrent validity
    2. Predictive validity
  4. Ecological validity
  5. External validity
  6. Trustworthiness
15
Q

Describe Content validity (5 points)

A
  • Also known as logical or face validity
  • Relies on logic and comparison
  • The content of the test must measure all the aspects of what you want to know
  • Weakest test of validity
  • Example: when choosing a test to measure swimming ability, the test would involve swimming and not running
16
Q

Describe Construct validity (4 points)

A
  • Correspondence between the constructs for the characteristic and the actual measurement that is being used.
  • What is a construct? – An idea or theory containing conceptual elements.
  • For example: Sports management – “Service Quality”; “Social Responsibility”
  • For example: Exercise Science- “Muscle soreness”; “Fatigue”; “Wellness”
17
Q

Describe Criterion validity (3 points)

A
  • Correlating scores for a test to specified criteria that accurately measures the same property or attribute of interest.
  • How well does this test compare to an established test that measures the same thing?
  • Can be broken down into two sub components: concurrent and predictive validity
18
Q

Describe Concurrent validity (4 points)

A
  • Remember: the prefix “con” means “with”
  • How does this test compare with the current one?
  • New test is measured up against an established test
  • E.g. beep test and sub-max aerobic test
19
Q

Describe Predictive validity (4 points)

A
  • Scores are matched up with some measure taken in the future
  • Longitudinal
  • Strongest type of validity
  • Examples: a pre-competition time trial; an ATAR score predicting future academic success
20
Q

Describe Ecological validity (1 point)

A
  • Testing situation must be similar to real-life situation that is being studied.
21
Q

Describe External validity (1 point)

A
  • Ability of results to be generalized to broader population.
22
Q

Describe Trustworthiness (9 points)

A
  • Triangulation – use of various forms of data to validate the trustworthiness of the data:
    1. Interview
    2. Document analysis
    3. Observation
  • Example: I interview coaches about their coaching style and philosophy. I then observe a training session to see whether they do what they say they do.
  • This is common in the job-hiring process:
    1. Interview
    2. Key selection criteria
    3. Observation in the setting
23
Q

Describe Reliability (8 points)

A
  • A test that gives consistent results is said to be reliable.
  • Some qualities or attributes can be measured more reliably than others.
  • Measurement techniques and conditions should be standardized to reduce measurement error.
  • A number of different methods have been developed to estimate reliability
  • Test-retest reliability is the most common method; at least two sets of scores are needed to determine reliability
  • The two sets of data are correlated using the Pearson product-moment correlation or the intraclass correlation
  • Higher the correlation coefficient, the better the reliability
  • Over the period between test and retest, scores can be influenced by factors such as fatigue, maturation, learning, or changes in physical condition
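A minimal sketch of the test-retest approach, using invented vertical-jump scores for five athletes measured one week apart. The Pearson product-moment correlation mentioned above is computed by hand so no external libraries are needed.

```python
# Hypothetical test-retest reliability check using the Pearson
# product-moment correlation. All data are invented for illustration.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

test   = [12.1, 11.4, 13.0, 10.8, 12.5]  # trial 1 scores
retest = [12.3, 11.1, 13.2, 10.9, 12.4]  # same athletes, one week later

r = pearson_r(test, retest)
print(round(r, 3))  # the closer to 1.0, the more reliable the test
```

With these invented data the coefficient comes out close to 1.0, which would suggest the measure gives consistent results across the two sessions.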
24
Q

Describe objectivity (9 points)

A
  • A subset of reliability, sometimes called inter-rater reliability
  • Reliability between different raters or judges
  • Certain forms of measurement are more objective than others. For example, True or False questions would be more objective than judges’ scores in gymnastics
  • Maximising objectivity:
    • Complete and clear instructions
    • Trained testers and administrators
    • Simple measurement procedures
    • Appropriate measurement tools
    • Results expressed as numerical scores
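Objectivity as inter-rater reliability can be illustrated with a small sketch, assuming invented scores from two judges rating the same ten gymnastics routines: agreement is quantified both as the proportion of routines scored within a tolerance and as the correlation between the two judges' score sets.

```python
# Hypothetical inter-rater reliability (objectivity) check.
# Judge scores and the 0.3-point tolerance are invented for illustration.
from statistics import mean

judge_a = [8.5, 7.0, 9.2, 6.8, 8.0, 7.5, 9.0, 6.5, 8.8, 7.2]
judge_b = [8.4, 7.2, 9.0, 6.9, 8.5, 7.4, 9.1, 6.4, 8.6, 7.4]

def percent_agreement(a, b, tolerance=0.3):
    """Proportion of performances both raters scored within `tolerance`."""
    hits = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
    return hits / len(a)

def pearson_r(x, y):
    """Pearson correlation between the two raters' score sets."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return cov / var

print(percent_agreement(judge_a, judge_b))  # 0.9 (judges differ on one routine)
print(round(pearson_r(judge_a, judge_b), 2))
```

Low agreement between raters would point back to the checklist above: clearer instructions, better training, or simpler measurement procedures.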