Lecture 10 Flashcards
What are the 9 steps of a research proposal?
- introduction
- problem statement
- hypothesis
- literature review
- methods
- limitations
- significance
- references
- appendix items
(blank) – The extent to which a test (or indicator/instrument) accurately measures what it is supposed to measure – 4 types
validity
(blank) Validity
• Degree to which a measure ‘obviously’ involves the performance being measured
• Weakest type of validity
• The test “seems” to be valid
• No quantification about how well the test measures the dependent variable
– e.g., 50 m sprint used to assess running speed
• Taken at face value
face/logical
(blank) Validity
• Degree to which an instrument accurately measures a theoretical construct or trait it was designed to measure
– e.g., depression, anxiety, intelligence
• Used when the dependent variable is difficult to measure and there is no established gold standard
• Often assessed by:
– Correlation
– Known group difference method
• Comparing test scores between groups that should differ
construct
(blank) Validity
– Degree to which a measure/test is related to the criterion (gold standard)
– A method to establish the validity of a new test
– Both tests performed on the same sample at the same time (concurrently)
• Body fat: BIA vs. DXA
• CV Fitness: Step test vs. VO2max
• Physical & Mental Health: SF-8 vs. SF-36
concurrent
(blank) Validity
– Degree to which scores on a predictor accurately predict the criterion (can compare to gold standard)
– A test is developed to predict a criterion measure
– Correlation between the test and criterion is used to determine validity
• Injury prediction: do scores on the Functional Movement Screen predict injury?
predictive
Which 2 types of validity are content and criterion related?
content related:
face validity
construct validity
criterion related:
concurrent validity
predictive validity
(blank) – Measures the consistency or repeatability of test scores or data.
– Keep in mind that measures can be reliable but NOT valid... BUT measures can never be valid if not reliable
• Methods of establishing:
– Stability (test-retest)
– Alternate forms (parallel)
– Internal consistency
– Inter-rater
reliability
(blank) Reliability
• Same test is administered on two separate occasions and the results are correlated
– Test-retest method
• Not good for tests where learning is a performance factor
• Good to evaluate the measurement skill of a laboratory device or technician
stability
(blank) Forms
• Measures the correlation between two ‘equivalent’ versions of a test
– You use it when you have two different assessment tools or sets of questions designed to measure the same thing
• If there is a high correlation between the tests, they can be said to be consistent/reliable
alternate
(blank) Consistency
• Used to show how consistent the scores of a test are within itself
– Correlation between multiple items in a test intended to measure the same construct
• The questions within themselves are consistent
• Split-Half Method
– A correlation is performed on the results of two halves of one test. If they are highly correlated, the test has internal consistency
• Good for written tests
• Numerous physical performance trials
internal
The (blank) method assesses the internal consistency of a test, such as psychometric tests and questionnaires. … This is done by comparing the results of one half of a test with the results from the other half. A test can be split in half in several ways, e.g. first half and second half, or by odd and even numbers.
split-half
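A minimal sketch of the split-half computation in Python. The respondent scores and the `pearson_r` helper are invented for illustration; the Spearman–Brown step is a standard adjustment (not named in the lecture) that estimates full-test reliability from the half-test correlation.

```python
# Split-half method: correlate odd-item and even-item half-scores
# of a single test. Scores are made up (8 respondents x 6 items).
scores = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 4, 5, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 4, 4, 5],
    [2, 3, 2, 2, 3, 3],
    [5, 4, 5, 5, 4, 4],
]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split by odd vs. even item positions and total each half
odd_half  = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

r_halves = pearson_r(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full test
full_test = 2 * r_halves / (1 + r_halves)
print(round(r_halves, 3), round(full_test, 3))
```

A high correlation between the two halves indicates the test is internally consistent.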
(blank) Reliability
• Test of the objectivity between testers
inter-rater
• (blank) – Do the same thing twice; is it stable?
• (blank) – If you used another option, are scores related?
• (blank) – Within itself it is reliable
• (blank) – Multiple researchers making observations or ratings about the same topic
stability
alternate
internal consistency
inter-rater
• When you step on a scale 1 minute later the number is the exact same (blank)
• When two people evaluate someone’s performance after the job interview, they rate on the same scale (blank)
• Whether you complete the Pittsburgh or Edinburgh depression scale, you get the same diagnosis (blank)
stability
inter-rater reliability
alternate
(blank) correlation (r): relationship between two variables
• Coefficient values may range from -1 to +1
• where 0 is no relationship and ±1 is a perfect relationship
pearson
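The coefficient can be computed directly from its definition, r = cov(X, Y) / (sd(X) · sd(Y)). The long-jump and power numbers below are invented to illustrate a strong positive correlation, echoing the long-jump example on the next card.

```python
# Pearson correlation from its definition:
# r = cov(X, Y) / (sd(X) * sd(Y)), ranging from -1 to +1.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

distance = [4.1, 4.8, 5.2, 5.9, 6.3, 6.8]       # long jump distance (m), invented
power    = [900, 1020, 1100, 1250, 1330, 1400]  # power test score (W), invented
print(round(pearson_r(distance, power), 3))  # close to +1: positive correlation
```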
Positive correlation
– (blank) number on variable X and Y
– E.g. Long jump: relationship between distance and power test is positive. Why?
high
Negative correlation
– (blank) number on variable X
– (blank) number on variable Y
– E.g. Long jump: relationship between jumping distance and running time is almost always negative. Why?
high
low
(blank) Coefficients (R)
• Comparing two values for the same variable
• The two scores are correlated and the reliability coefficient is produced
• For example: R > .85 (or higher) for maximal physical effort tests and precise laboratory tests
reliability
Cronbach’s (blank) α: Internal Consistency
– Reliability of questionnaire items/scales
– α > .70 (acceptable)
alpha
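Cronbach's α can be sketched from its standard formula, α = k/(k-1) · (1 − Σ item variances / variance of total score). The questionnaire responses below are invented; `statistics.variance` (sample variance) is used throughout.

```python
# Cronbach's alpha for a small questionnaire (invented responses).
from statistics import variance

items = [  # rows = respondents, columns = questionnaire items
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 1],
    [4, 4, 4, 5],
]

k = len(items[0])                                  # number of items
item_vars = [variance(col) for col in zip(*items)] # variance of each item
total_var = variance([sum(row) for row in items])  # variance of total scores

alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))  # alpha > .70 would be considered acceptable
```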
(blank) is concerned with getting the right assessment and (blank) is getting the assessment right
validity
reliability
4 sources of (blank):
– Participants
• Mood, motivation, health, fatigue, prior knowledge, familiarity with test
– Testing
• Standardization of all test activities for all participants
– Scoring
• Competence, experience, attention to detail of scorers (RAs)
– Instrumentation
• Maintenance and calibration
error
While (blank) validity relates to how well a study is conducted, (blank) validity relates to how applicable the findings are to the real world
internal
external
• (blank)
– Assigning numbers to various levels of a particular concept
– Provides an indirect measure of the concept of interest
– e.g. On a scale of 1–5 rank your mood
• Used to obtain information on almost any topic, object, or subject
– Attitude, opinion, behaviour, performance, perception
scaling
- (blank) Scale
• Measures degree of agreement or disagreement
• Can be considered Ordinal or Interval – every score has a meaning!
• 5- or 7-point Likert scales are most common
– Can have up to 9 points
• Provides a wider choice of expression than yes/no
likert
(blank) Differential Scale
• Measures attitudes and concepts
• Interval or ordinal score – a meaning or # is not assigned to each score
• Uses bipolar adjectives describing a topic; usually along a 7-point scale
semantic
(blank) Scale
• Numerical, verbal, checklist or ranking
• Items rated by selecting a point on the scale corresponding to their impression of the item
rating
(blank) Order Scale
• Items ranked, usually in terms of preference or importance
• Ordinal scores
• Best for ranking 5 to 7 items
– Ranking a larger number of items results in less accuracy
rank
(blank) Errors
• Leniency
– Overly generous rating
• Central tendency errors
– Most ratings in middle of scale
• i.e., Avoiding low or high ratings
• Halo effect
– Previous impressions/knowledge influence ratings
• Proximity errors
– Characteristics are rated more similarly when they follow in close proximity
• Observer bias error
– Rating influenced by personal bias
• Observer expectation error
– Rating influenced by what you expect to see
rating
What is (blank)? • “the study of the distribution and determinants of health-related events or disease in specified populations, and the application of this study to the control of health problems”
epidemiology
• Distribution
– Frequency
• Prevalence: # of existing cases (proportion)
– Tells us how much = burden of disease
• Incidence: # of new cases (rate)
– Tells us how fast something is spreading
• Mortality rate: death rate
– Patterns: Person, place, time
• Determinants
– Defined characteristics associated with change
in health
• Application
– Translation of knowledge to practice
What are these 3 characteristics of?
epidemiology
(blank)
• Measured using the case fatality ratio = case fatality rate = CFR
• The number of deaths due to a disease as a proportion of the number of people diagnosed with the disease
virulence
(blank) (IFR): The number of individuals who die of the disease among all infected individuals (symptomatic and asymptomatic).
infection fatality ratio
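The CFR/IFR distinction comes down to the choice of denominator: diagnosed cases versus all infections. The outbreak numbers below are invented for illustration.

```python
# CFR = deaths / diagnosed cases; IFR = deaths / all infected
# (symptomatic AND asymptomatic). When many infections go
# undiagnosed, IFR is smaller than CFR. Numbers are invented.
deaths = 50
diagnosed_cases = 1_000   # people diagnosed with the disease
all_infected = 5_000      # includes asymptomatic/undetected infections

cfr = deaths / diagnosed_cases   # 50 / 1000  = 5%
ifr = deaths / all_infected      # 50 / 5000  = 1%
print(f"CFR = {cfr:.1%}, IFR = {ifr:.1%}")  # prints "CFR = 5.0%, IFR = 1.0%"
```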
difference between prevalence and incidence?
Prevalence refers to proportion of persons who have a condition at or during a particular time period, whereas incidence refers to the proportion or rate of persons who develop a condition during a particular time period.
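The same distinction in code, with invented numbers: prevalence divides existing cases by the whole population at one point in time, while incidence divides new cases over a period by the group initially free of the condition (at risk).

```python
# Prevalence vs. incidence with invented numbers.
population = 10_000
existing_cases_jan1 = 500     # cases already present on Jan 1 (burden)
new_cases_during_year = 200   # people who develop the condition that year

prevalence_jan1 = existing_cases_jan1 / population   # proportion with disease
at_risk = population - existing_cases_jan1           # only disease-free people can become new cases
incidence_rate = new_cases_during_year / at_risk     # rate of new cases

print(f"prevalence = {prevalence_jan1:.1%}")         # how much disease
print(f"incidence  = {incidence_rate:.2%} per year") # how fast it spreads
```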
Development of (blank) Epidemiology
• Early studies
– *Framingham Heart Study
– *London Busmen/British Civil Servants
– Tecumseh Health Study
– *Harvard Alumni Health Study
– Minnesota studies
• More recent health studies
– INTERHEART Study
– Nurses Health Survey
– Canadian Community Health Survey
– Canadian Health Measures Survey
exercise
Purposes of (blank) Methods
• Quantifying the magnitude of health problems
• Identifying the factors that cause disease
• Providing quantitative guidance for the allocation of public health resources
• Monitoring the effectiveness of prevention strategies using population-wide surveillance programs
epidemiologic
– (blank ) study design:
• Describes relationship between basic characteristics and disease states
• Useful for developing and crudely testing hypotheses
– (observational) study design
• The development of disease or health outcome is observed and compared among those who participate in different levels of physical activity.
– Levels of physical activity participation are self-selected by the individual and not under the control of the investigator.
– (experimental) study design
• Random assignment of physical activity levels to individuals without the disease or health outcome of interest
• These individuals are then followed for a period of time to compare their development of the disease or health outcome of interest.
Commonly Used (blank) Designs in Epidemiological Studies
• Observational study designs:
– Cross-sectional
– Case-control
– Cohort
• Experimental study design:
– Clinical trial
descriptive
observational
experimental
research