Exam 1 Flashcards

1
Q

Evaluating Evidence

A
  • thousands/millions of articles on many clinical research topics
  • multiple designs and many interventions
  • requires a system to evaluate the evidence
  • Many systems that rank evidence
  • no absolute correct or agreed upon system
  • use as guide
  • continually changing and being updated
2
Q

Greenhalgh suggested what hierarchy of studies?

A
  • systematic reviews and meta-analyses
  • RCTs with definitive results
  • RCTs with non definitive results
  • cohort studies
  • case-control studies
  • cross-sectional studies
  • case reports
3
Q

Oxford Centre for Evidence-Based Medicine 2011 Levels of evidence

A
  • Level 1: systematic review of RCTs
  • Level 2: randomized trial or observational study with dramatic effect
  • Level 3: non-randomized controlled cohort/follow-up study
  • Level 4: case-series, case-control studies, or historically controlled studies
  • Level 5: mechanism-based reasoning (formerly known as expert opinion)
4
Q

Evaluating Research Reports

A
  • critical analysis of research report (determines validity of report and applicability for clinical decisions)
  • guidelines for reporting of studies (CONSORT statement, enables reader to better assess validity of the results; many others, e.g. STROBE)
  • success of evidence-based practice dependent on incorporating research findings into clinical decision making
5
Q

Quality of the journal

A
  • when evaluating scientific merit of an article, consider journal’s reputation
  • peer-reviewed/refereed journals (content experts, accepted based on recommendation of reviewers, processes ensure that articles meet standards)
6
Q

Evaluating Components of a study

A
  • clinicians and researchers evaluate literature from a number of perspectives
  • what is the study’s intent? (problem under investigation)
  • is the study sound in its methodology? (if not, results may not be valid; details of how subjects were selected, inclusion/exclusion criteria, random assignment, blinding, reliable and valid measures, equal group treatment, etc)
  • are results meaningful? (was there an effect of the intervention? clinically and statistically important?)
  • can results be applied to my pt.? (depends on whether the pt. is similar to those studied, whether treatment is feasible in clinic, etc)
7
Q

Searching for the evidence - good resources

A
  • PubMed
  • Google Scholar
  • Cochrane Library
  • PEDro
  • CINAHL
  • Embase
  • Scopus
  • PsycINFO
8
Q

What is clinical research?

A
  • structured and systematic
  • objective process
  • examines clinical conditions and outcomes
  • establishes relationships among clinical phenomena
  • provides evidence for clinical decision making
  • provides impetus for improving practice
  • relates to clinical phenomena (MMT, ROM, fall risks, balance confidence, glucose level - - should be able to doc and track most of these things)
  • look at relationships…i.e. how strength/balance affects ADLs, how strength affects balance, etc
9
Q

Shifts in Research Priorities in 20th century

A
  • focus on OUTCOMES research to document EFFECTIVENESS
  • application of MODELS of health and disability
  • attention to EVIDENCE-BASED PRACTICE (EBP)
10
Q

Measurement of Outcomes

A
  • typically rehabilitation outcomes were related to improvements in pathologies or impairments
  • Now outcomes include WHO definition of health to include physical, social, and psychological well-being
  • consider patient satisfaction, self-assessment of functional capacity, quality of life
  • now clinicians must document outcomes to substantiate effectiveness of treatment
11
Q

Outcomes Research

A
  • how successful are our interventions in clinical practice, specifically in terms of disability and survival?
  • studies use large databases including info not only on functional outcomes, but also on utilization of services, insurance coverage, etc
12
Q

Outcome Measures

A
  • measure the effectiveness of treatment in terms of patient satisfaction and outcomes as well as in terms of revenue/costs; staff productivity
  • questionnaires are often used to measure outcomes in terms of function and health status
  • health status scales, e.g. instruments such as the Medical Outcomes Study Short Form-36 (SF-36), reflect physical function, mental function, social function, and other domains (pain, etc)
13
Q

Biomedical Model

A
  • focuses on relationship between pathology and impairments
  • physical aspects of health
  • no consideration for how patient is affected by illness
14
Q

Disablement Model: Nagi

A
  • pathology: interference with normal bodily processes or structures
  • impairment: anatomical, physiological, or psychological abnormalities
  • functional limitation: inability to perform an activity in a normal manner
  • disability: limitation in performance of activities within socially defined roles
15
Q

ICF (International Classification of Functioning, Disability and Health)

A
  • describes how people live with their health condition
  • has parallels to Nagi model
  • health condition: pathology
  • body function/structure: impairments
  • activity: functional limitation
  • participation: disability
  • includes references to environmental and personal factors affecting function (contextual factors)
16
Q

ICF Outcomes

A
  • outcomes may be related to (targeted to) the impairment level (improving tone, ROM, strength, balance)
  • but must also establish functional outcomes that influence performance at the activity or at the participation levels
  • i.e. increasing strength and balance will allow the person to ambulate in the community and socialize with friends (activity level: improved ambulation; participation level: enhanced socialization)
17
Q

Evidence-Based Practice (EBP)

A
  • provision of quality care depends on ability to make choices that have been confirmed by sound scientific data, and that decisions are based on best evidence currently available
  • begins by asking a relevant clinical question related to: patient diagnosis, prognosis, intervention, validity of clinical guidelines, safety or cost-effectiveness of care
18
Q

what does “PICO” stand for: a good clinical question

A
  • P: patients/population
  • I: intervention
  • C: comparison/control
  • O: outcome of interest
  • i.e. In a patient 2-weeks post-hip replacement (P), is active exercise (I) more effective than passive ROM exercise (C) for improving hip ROM (O)?
19
Q

PICO

A
  • question is precursor to searching for the best evidence to facilitate optimal decision making about a patient’s care
  • terms in PICO can be used as search terms in a literature search for best evidence
  • Clinicians search and access literature
  • Critically appraise studies for validity
  • Determine if research applies to their patient
20
Q

Components of EBP for clinical decision making

A
  • clinical expertise
  • best research evidence
  • clinical circumstances and setting
  • patient values and preferences
21
Q

Sources of knowledge for clinical decisions and to guide clinical research….How do we “know things”?

A
  • tradition (always done this way)
  • authority (expert opinion)
  • trial and error (try something and if it fails try something else)
  • logical reasoning
  • scientific method
22
Q

Logical Reasoning

A
  • A method of knowing which combines experience, intellect, and thought
  • systematic process to answer questions and acquire new knowledge
  • 2 types: deductive and inductive
23
Q

Deductive Reasoning

A
  • acceptance of a general proposition and the inferences that can be drawn in specific cases
  • general observation & specific conclusion
  • Exp: poor balance results in falls, exercise improves balance, therefore exercise will reduce risk of falls
24
Q

Inductive Reasoning

A
  • specific observation & general conclusion
  • Exp: patients who exercise don’t fall, patients who don’t exercise fall more often, therefore exercise is associated with improved balance and fewer falls
25
Q

Reasoning used in research manuscript

A
  • both forms of reasoning (deductive and inductive) are used in research to design studies and interpret data
  • Introduction section of research manuscript: deductive logic is used when developing research hypotheses from existing general knowledge
  • Discussion section of a research manuscript: inductive reasoning is used when researchers propose generalizations and conclusions from data in a study
26
Q

What type of reasoning is used in the introduction section of a research manuscript?

A

deductive

27
Q

What type of reasoning is used in the discussion section of a research manuscript?

A

inductive

28
Q

Scientific Method

A
  • rigorous process used to acquire new knowledge

- is empirical, systematic, has a control, and has critical examination

29
Q

2 Assumptions the Scientific Method is based on

A
  • nature is orderly/regular and events are consistent and predictable
  • events/conditions are not random and have causes that can be discovered
30
Q

The Scientific Method is Empirical

A
  • in research refers to direct observation to document data objectively
31
Q

Scientific Method is Systematic

A
  • the systematic nature of research implies a sense of order

- logical sequence to identify a problem, collect and analyze data, interpret findings

32
Q

Scientific Method has a Control

A
  • control of extraneous factors
33
Q

Scientific Method has Critical Examination

A
  • scrutiny of your findings by other researchers
34
Q

Scientific Approach is defined as what

A
  • systematic, empirical, controlled and critical examination of hypothetical propositions about the associations among natural phenomena
35
Q

How to classify research

A
  • based on a number of schema according to PURPOSES and OBJECTIVES
36
Q

Quantitative Research

A
  • measurement under standardized conditions
  • can conduct statistical analysis
  • what PTs do
37
Q

Qualitative research

A
  • understanding through narrative description
  • less structured
  • interviews
  • anthropologists
38
Q

Basic Research

A
  • bench research
  • not practical immediately
  • may be useful later in developing treatments
39
Q

Applied Research

A
  • solving immediate practical problems

- most clinical research

40
Q

Translational research

A
  • scientific findings are applied to clinical issues
  • also generating scientific questions based on clinical issues
  • bedside to bench and back to bedside
  • collaboration among basic scientists and clinicians
41
Q

Experimental Research

A
  • researcher manipulates one or more variables and observes
  • major purpose is to compare conditions or intervention groups to suggest cause and effect relationships
  • RCT is the gold standard of experimental designs
42
Q

Quasi-Experimental research

A
  • limited control but can get interpretable results
43
Q

Non-experimental research

A
  • investigations that are descriptive or exploratory in nature
44
Q

Exploratory research

A
  • examine a phenomenon of interest including its relationship to other factors
  • in epidemiology researchers examine associations to predict risk for disease by conducting cohort and case-control studies
45
Q

Exploratory Methodological studies

A
  • use correlational methods to examine reliability and validity of measuring instruments
46
Q

Exploratory Historical research

A
  • studies reconstruct the past on the basis of archives and other records to suggest relationships of historical interest to a discipline
47
Q

Descriptive Research

A
  • describes individuals to document their characteristics, behaviors, and conditions
  • has several designs: descriptive surveys, developmental research, normative studies, qualitative research, case studies
48
Q

Descriptive Surveys (descriptive research design)

A
  • use questionnaires and interviews to gather data
49
Q

Developmental research (descriptive research design)

A
  • patterns of growth and change over time in a segment of the population
  • natural history of a disease
50
Q

Normative studies (descriptive research design)

A
  • used to establish normal values for diagnosis and treatment
51
Q

Qualitative research (descriptive research design)

A
  • interview and observation to characterize human experiences
52
Q

Case study (descriptive research design)

A
  • or case series

- focus on one individual

53
Q

Types of Research (9)

A
  • quantitative
  • qualitative
  • basic
  • applied
  • translational
  • experimental
  • Non-experimental
  • Exploratory
  • Descriptive (has 5 of own types)
54
Q

Collecting Data

A
  • collect data based on subject’s performance on defined protocol
  • surveys
  • questionnaires
  • secondary analysis of large databases: use data collected for another purpose to explore relationships
55
Q

5 Major steps of the Research Process

A
  • identify the research question
  • design the study
  • methods
  • data analysis
  • communication
56
Q

Theory

A
  • created because we need to organize and give meaning to complex facts and observations
  • interrelated concepts, definitions, or propositions that specify relationships among variables and represent a systematic view of specific phenomena
57
Q

Scientific theory deals with what

A
  • empirical observation

- requires constant verification

58
Q

Why use Theory

A
  • to generalize beyond specific situations and to make predictions about what we expect to happen
  • provide framework for interpretation of observations
  • give meaning to research findings and observations
  • stimulate development of new knowledge
  • theoretical premise to generate new hypotheses which can be tested
59
Q

Concepts of Theories

A
  • building blocks of a theory
  • allow us to classify empirical observations
  • we “label” behaviors, objects, processes that allow us to identify them and refer to/discuss them
  • concepts can be non-observable: known as constructs
60
Q

Constructs

A
  • non-observable concepts of theories

- they are abstract variables (i.e. intelligence)

61
Q

Propositions

A
  • once concepts are identified they are formed into generalizations or propositions
  • propositions state relationships between variables
  • hierarchical proposition (Maslow's hierarchy of needs)
  • temporal proposition (stages of behavioral change)
62
Q

Models

A
  • concepts can be highly complex, use models to simplify them
  • models are symbolic representations of the elements in a system
  • can represent processes (e.g. ICF, Nagi models)
63
Q

How is Development of Theories done?

A
  • by inductive or deductive processes
64
Q

Inductive Theories

A
  • data based
  • begin with empirically verifiable observations
  • multiple studies and observations (patterns emerge)
  • patterns develop into a systematic conceptual framework which forms basis for generalizations
65
Q

Deductive Theories

A
  • intuitive approach
  • hypothetical-deductive theory is developed with few or no observations (not developed from existing facts; must be tested constantly)
66
Q

Most theories are formulated using what process? inductive or deductive?

A

BOTH!

- observations initiate theory and then hypotheses tested

67
Q

“Testing” Theories

A
  • theories are not testable
  • test HYPOTHESES that are deduced from theories
  • if a hypothesis is supported, then theory from which it was deduced is also supported
68
Q

Theory as a foundation for understanding research findings

A
  • results of studies must be explained and interpreted by authors within the realm of theory
  • authors must help readers understand context within which results can be understood
  • researchers should offer interpretation of findings
  • contribute to the growth of knowledge
69
Q

Researcher’s Integrity

A
  • relevant research question
  • meaningful research
  • competent investigators
  • personal bias in measurement
  • misconduct (falsification of data, manipulation of stats)
  • publish findings
  • authorship
70
Q

3 Principles of protection of human rights in research

A
  • autonomy
  • beneficence
  • justice
71
Q

Autonomy in research

A
  • self-determination and capacity of individuals to make decisions
  • authorized decision maker is available to make decisions for subjects such as those unable to understand
  • protection of human right!
72
Q

Beneficence in research

A
  • attend to well-being of individuals
  • risk/benefit
  • protection of human rights!!
73
Q

Justice in research

A
  • fairness in research process
  • selection of subjects appropriate for a given study
  • also in randomization process
  • protection of human right!!
74
Q

Regulations for conduct of research with Humans

A
  • Nuremberg Code of 1949
  • Declaration of Helsinki 1964
  • National Research Act 1974, 1976
  • Belmont Report 1979
  • HIPAA 1996
75
Q

IRB

A
  • review research proposals
  • scientific merit
  • competence of investigators
  • risk to subjects
  • feasibility of the study
  • risk-to-benefit ratio (risks to subjects are minimized and outweighed by potential benefits of the study)
76
Q

Full vs expedited vs exempt review

A
  • full review: high risk studies
  • Expedited review: low risk studies
  • exempt review: surveys, interviews, studies of existing records provided data are collected in such a way that subjects can’t be identified
77
Q

Informed Consent - Information elements

A
  • subjects are fully informed
  • subject info kept confidential and anonymous
  • consent form in lay language
  • researcher answers questions at any time
78
Q

Informed Consent - Consent Elements

A
  • consent must be voluntary
  • special consideration for vulnerable subjects
  • mental illness
  • free to withdraw at any time
79
Q

Why measure

A
  • clinical decision making
  • compare
  • draw conclusions
  • process governed by rules
80
Q

Numeral

A
  • symbol or label
  • e.g. 1= elders, 2 = younger adults
  • numeral becomes a number when it represents a quantity e.g. 25 kg grip strength
  • variables differentiate objects or individuals, take on different values either qualitatively or quantitatively
81
Q

Continuous Variables

A
  • quantitative variable that can take on any value along a continuum
  • limited by precision (exactness) of the instrument
  • i.e. gait speed
82
Q

Discrete variables

A
  • described in whole units
  • i.e. HR (bpm)
  • qualitative variables represent discrete categories e.g. faller/non-faller. Dichotomous variables are qualitative variables with 2 values
83
Q

Precision

A
  • how exact is a measure
  • indicates the number of decimal places
  • relates to the sensitivity of the measuring instrument
  • also depends on the variable
  • Eg HR measure in whole numbers
  • Eg strength measure to half of a kilogram (or ¼)
84
Q

Indirect nature of measurement

A
  • some variables are measured directly: ROM, Height, Distance walked in the 6MWT
  • some variables are measured indirectly (not directly observable): Temperature, Balance , Strength of a muscle, Health/disability
85
Q

4 Scales of measurement

A
  • hierarchical, based on relative precision of assigned values (nominal up to ratio)
  • NOIR
  • nominal/naming
  • ordinal
  • interval
  • ratio
86
Q

Nominal Scale

A
  • numbers are labels for ID
  • 0=male, 1=female
  • categories are mutually exclusive
  • can count each number in category (proportions/frequencies)
87
Q

Ordinal Scale

A
  • numbers indicate rank order
  • e.g. MMT
  • lack of arithmetical properties
  • ordinal values represent relative position only
  • i.e. MMT of 4 is not 2X greater than 2 in terms of strength
  • i.e. distances between grades aren't equal
  • appropriate for descriptive analysis only
  • i.e. average rank of a group of subjects
88
Q

Interval Scale

A
  • has rank order characteristics of ordinal scale
  • has equal distances/intervals between units
  • i.e. temp, calendar
  • no true/natural zero
  • cannot say 20 deg is twice as hot as 10 deg
  • no ratios allowed
  • can quantify the difference between interval scale values i.e. from 20 to 25 means something
89
Q

Ratio Scale

A
  • interval scale with zero point that has empirical meaning
  • equal distances/intervals between units
  • zero means absence of what is being measured; start measuring at zero
  • no negative values
  • e.g. height, weight, strength, age
  • represent actual amounts being measured
  • can say age of 20 is twice as much as 10 years
90
Q

Reliability

A
  • to what extent a measurement is consistent and free from error
  • conceptualize as reproducibility, dependability, agreement
91
Q

Consistency without reliability

A
  • no confidence in the data

- can’t draw conclusions from the data

92
Q

Measurement Error

A
  • rarely 100% reliable
  • human element
  • instruments
  • observed score = true score +/- error
93
Q

Systematic Error

A
  • predictable errors
  • constant and biased
  • unidirectional
  • consistent so not a problem for reliability
  • can correct your readings i.e. subtract a constant amount from each reading you take
  • are a problem for validity as measurements do not truly represent quantity being measured
94
Q

Random Error

A
  • chance
  • unpredictable
  • due to mistakes, instrument inaccuracy, fatigue etc.
  • assume that if we take a lot of measurements, random errors will eventually cancel each other out and the average of many scores will be a good estimate of the true score
95
Q

Sources of Measurement Error

A
  • individual rater/tester
  • measuring instrument
  • variability of the characteristic (response variable) being measured
  • environment (noise and temperature), subject's motivation and fatigue
  • we assume these factors are random and their effect cancels out over time
  • minimized by training, equipment inspection, maintenance
96
Q

Reliability coefficients

A
  • range from 0.00 (no reliability) to 1.00 (perfect reliability)
  • < 0.50 = poor reliability
  • 0.50 to 0.75 = moderate
  • > 0.75 = good
  • these are guidelines only
97
Q

Correlation vs Agreement

A
  • correlation means association

- agreement = reliability

98
Q

Intrarater

A
  • stability of data recorded by 1 person on 2 or more trials
99
Q

Interrater

A
  • variation/agreement between 2+ raters who measure same group of subjects
  • preferably at same time
100
Q

Classification of the ICC

A
  • intraclass correlation coefficient (ICC)

- don’t worry about this

101
Q

Alternate forms reliability

A
  • reliability of 2 equivalent forms of a measuring instrument
  • sometimes referred to as parallel forms
  • based on administering 2 alternate forms of the test to a single group at a single session
  • correlating paired observations
  • usually used in educational/psych. testing
102
Q

Correlation Coefficients

A
  • have usually been used to evaluate alternate forms reliability
  • determination of limits of agreement can estimate range of error expected when using 2 different forms of an instrument
103
Q

Internal consistency or homogeneity

A
  • items in an instrument (e.g. a scale) measure various aspects of the same characteristic and nothing else
  • physical functioning scale should have items only relating to physical functioning
  • no items relating to psychosocial characteristics
  • if psychological characteristics were included then items are not homogeneous
104
Q

Cronbach’s coefficient alpha

A
  • most often assesses internal consistency
  • evaluates items in a scale to determine if they measure the same construct
  • Cronbach’s alpha can be used to determine which items could be removed from a scale to improve scale homogeneity
  • internal consistency can also be assessed by conducting an item-to-total correlation
105
Q

Measurement Validity

A
  • instrument or test measures what it is intended to measure
  • i.e. a hand-held dynamometer is a valid instrument for measuring strength because we can assess strength from pounds of force; MSL is a valid measure of balance
  • validity: measurement is relatively free from error (i.e. valid tests are reliable, low reliability is automatic evidence of low validity)
  • invalid tests may be reliable
106
Q

Specificity of Validity

A
  • An instrument or test is usually valid for a given purpose or situation or population
  • validity is not a universal characteristic of an instrument
  • i.e. assessing disability in patients with PD with instrument A doesn't mean that instrument A is valid for assessing disability in people with SCI
107
Q

Is MMT a valid measure of strength in all patient populations?

A
  • NO

- i.e. Parkinson's patients have lead-pipe rigidity, which is just tone, not actual strength

108
Q

Types of measurement validity

A
  • face validity
  • content validity
  • criterion-validity (concurrent, predictive)
  • construct
109
Q

Face Validity

A
  • instrument appears to test what it's supposed to be testing and is a plausible method
  • weakest form of validity: difficult to quantify how much face validity an instrument has (no standards to judge it), subjective
  • often established through direct observation (i.e. instrument that measures ROM; gait)
110
Q

Content Validity

A
  • variables have a universe of content (characteristics and information that are observable about that variable)
  • established if an instrument covers all parts of the universe of content (reflects the relative importance of each part, important in questionnaires, surveys, interviews, etc)
  • test must not include factors irrelevant to the purpose of measurement
  • test contains all the elements that reflect variable being studied (i.e. visual analog scale reflects only one element of pain vs. McGill Pain Questionnaire)
  • subjective process to establish content validity (no statistics available; experts determine if the questions cover the content domain; content validity is established when experts agree that content is adequately sampled)
111
Q

Criterion-Related Validity

A
  • most practical and objective approach to testing validity
  • test to be validated is the target test
  • this test compared to the “gold standard” or criterion measure that is already valid
  • correlations are calculated
  • high correlations imply the target test is a valid predictor of performance on the criterion test (can be used as a substitute for established test)
112
Q

Concurrent Validity

A
  • target and criterion measure are assessed at approximately the SAME TIME (e.g. MSL is a valid measure of balance)
  • Useful to establish concurrent validity when the target test may be more efficient, easier to administer, more practical, safer than the gold standard and can be used instead of gold standard
  • i.e. sensory screening (monofilaments; vibration versus electrophysiological testing)
113
Q

Predictive validity

A
  • a measure will be a valid predictor of some FUTURE criterion score
  • i.e. college admission criteria as a predictor of future success
114
Q

Construct Validity

A
  • ability of an instrument to measure an abstract concept (construct)
  • i.e. health, disability, confidence
  • constructs are not observable but exist as concepts to represent an abstract trait
  • constructs are usually multi-dimensional (i.e. how do you measure health or function)
  • support of construct validity when a test can discriminate between individuals who are known to have a condition and those who do not
  • construct validity of a test can also be evaluated in terms of how it relates to other tests of the same and different constructs
  • determine what a test does and doesn't measure (convergent and discriminant validity)
115
Q

Known Groups Method of Construct Validity

A
  • construct validation/validity is determined by the degree to which an instrument can demonstrate different scores for groups known to vary
  • i.e. difficulty walking vs. no difficulty walking
  • you would expect the average ASCQ score of those with poor walking ability to be lower than that of those who have no difficulty walking
116
Q

Convergent and Discriminant Validity

A
  • 2 measures reflecting same phenomenon will correlate (convergent validity)
  • convergence not enough for construct validity (must show that the construct can be differentiated from other constructs)
  • measures that assess different characteristics will have low correlations (discriminant validity)
117
Q

When asking a question what do you start with?

A
  • start with a topic of interest
  • Patient population (e.g. Geriatrics, HD, CMT)
  • Intervention
  • Clinical assessments
118
Q

Identifying a research problem

A
  • broad statement to focus the direction of the study
  • what is known and what is not about the topic
  • clinical experiences
  • clinical theory (i.e. motor control theory, effects of practice/repetition)
  • lit review
119
Q

Processes of Research

A
  • ask a question
  • start with topic of interest
  • identify a research problem
120
Q

Literature review

A
  • initially a preliminary review (not extensive; an orientation to the issues)
  • gaps (areas without information to make clinical decisions)
  • conflicts/contradictory findings/flawed studies
  • replication
121
Q

Reviewing the literature

A
  • initial review is very preliminary: general understanding of the state of knowledge in the area of interest
  • once research problem is formulated begin a full and extensive review
  • provides complete understanding of background to assist in formulating the research question
122
Q

What does the problem provide?

A
  • the problem provides foundation for a specific research question answerable in our study
123
Q

Components shaping a question

A
  • importance: results should be meaningful and useful
  • answerable: questions involving judgments and philosophy are difficult to study
  • feasible for study: researchers must have the necessary skill and resources…are there sufficient subjects?
124
Q

Target population

A
  • must be a well defined population

- obvious who will be included in the study

125
Q

Development of a research rationale

A
  • Begin a full review of the literature once the research problem is delineated which will establish the background for the research question
  • This clarifies the rationale for the study
  • research rationale presents a logical argument that shows why and how a question was developed
  • shows why question makes sense
  • provides a theoretical framework for the study
  • if no rationale, difficult to interpret results
126
Q

Variables to be studied in research

A
  • variables are building blocks of research question
  • independent variable: predictor variable (i.e. condition that patients have, intervention, characteristics of patients) that will predict or cause an outcome
  • outcome variable: dependent variable; varies depending on the independent variable (does NOT have levels)
127
Q

Levels of independent variable

A
  • i.e. males/females

- the levels of an independent variable are the groupings/characteristics

128
Q

Definition of Conceptual

A
  • general
  • leg flexibility is the degree of motion in the leg
  • no info on what measure of flexibility/motion is used in a particular research study i.e. ROM, SLR
129
Q

Definition of Operational

A
  • variable defined according to its meaning within a particular study
  • i.e. balance impaired: cannot stand on one leg over 5 secs in that specific study
130
Q

Research Objectives

A
  • clarifying objective of study is final step in delineating a research question
  • objectives may be presented as: hypotheses, specific aims, purpose of research, research objectives
  • must specifically and concisely delineate what study is expected to accomplish
131
Q

4 general types of research objectives

A
  • descriptive: to characterize clinical phenomena or conditions in population
  • measurement properties of instruments: investigation of reliability and validity
  • explore relationships: to determine interactions among clinical phenomena (i.e. strength and balance)
  • comparisons: outlining cause and effect relationships using experimental model. evaluates group differences regarding effectiveness of treatment
132
Q

Hypotheses

A
  • in experimental studies and exploratory studies researcher proposes an educated guess about study outcomes (Statement that predicts relationship between independent and dependent variables)
  • Purpose of study is to test the hypothesis and provide evidence to either accept or reject it
  • not proposed on the basis of speculation
  • derived from theory
  • suggested from previous research, clinical practice/experience
133
Q

Stating the Hypotheses - research hypotheses

A
  • states the researcher's true expectations of results
  • more often than not research hypotheses propose a relationship in terms of a difference
  • some research hypotheses predict no difference between variables
  • analysis of data is based on testing the statistical (=null) hypothesis
134
Q

Non-directional hypothesis

A
  • do not predict a direction of change
135
Q

Directional hypothesis

A
  • predict a direction
136
Q

Simple hypothesis

A
  • includes 1 ind variable and 1 dependent var
137
Q

Complex hypothesis

A
  • has more than 1 independent and/or dependent variable
138
Q

Reviewing the Literature

A
  • Initial review is preliminary: General understanding of the state of knowledge in the area of interest
  • Once research problem is formulated begin a full and extensive review: Provides complete understanding of background to assist in formulating the research question
139
Q

Scope of Literature review

A
  • depends on your familiarity with the topic
  • how much has been done in the area and how many relevant references there are
  • review info on: patient population, methods, equipment used, operational definition of variables, statistical techniques used
140
Q

Primary vs. Secondary sources

A
  • primary: research articles (i.e. the gold standard; found in databases like PubMed)
  • secondary: review articles, text books
141
Q

Validity in experimental design

A
  • issues in experimental control that must be addressed so that we have confidence in the validity of the experimental outcomes
  • do you have confidence in this study?
  • does this study have good controls?
142
Q

Experiment

A
  • most rigorous form of investigation to test hypotheses is the scientific experiment (researcher manipulates and controls variables)
  • experiment: looking for cause and effect relationship between independent and dependent variable
  • experiments are designed to control for extraneous (nuisance) variables that exert a confounding (contaminating) influence, e.g. types of treatment received simultaneously
143
Q

3 Characteristics of experiments

A
  • Independent variable is manipulated by the researcher
  • Random assignment
  • Control (comparison) group must be incorporated into the study design
144
Q

Independent variable is manipulated by the researcher

A
  • intervention (ind variable) is administered to one group (experimental) and not to another (control)
  • active variable: IV with levels that can be manipulated and assigned, i.e. experimental treatment vs. usual care
  • attribute variable: IV with levels that cannot be manipulated as they represent subject characteristics (i.e. occupation, gender, age)
145
Q

Random Assignment

A
  • in assigning subjects to groups
  • each subject has an equal chance of being assigned to any group
  • no systematic bias exists that might differentially affect the dependent variable
  • if groups are equivalent at the beginning of the study, differences observed at the end of the study are not due to differences that existed before the study began
  • eliminates bias by creating a balanced distribution of characteristics across groups
146
Q

Control (comparison) group must be incorporated into the study design

A
  • to rule out extraneous effects, use control group to compare to experimental group
  • must assume equivalence between control and experimental groups at start of study
  • if we see a change in treatment group (experimental) but not control group, can assume it’s the treatment
147
Q

Research Protocol for Experimental Control

A
  • to control for extraneous (confounding) factors in a study
  • eliminate them as much as possible, implies researcher must know what they are
  • standardize research protocol so that each subject has a similar testing experience
  • ensure they affect all groups equally
  • maximize adherence to protocol to limit loss of data (data losses compromise effect of random assignment, decreases power of study)
  • reasons for loss of data (i.e. subjects may drop out during the study, subjects may cross over to another treatment, subjects may refuse the assigned treatment after allocation, subjects may not be compliant with assigned treatments)
148
Q

Experimental control: Blinding

A
  • to avoid observation bias
  • participant’s knowledge of treatment status or investigator’s expectations can influence performance or recording of outcomes
  • double-blind study: Subjects and investigators are unaware of the identity of groups until after study
  • single-blind study: Only investigator/measurement team is blinded
149
Q

Experimental control: randomized assignment

A
  • Eliminates bias by creating a balanced distribution of characteristics across groups
  • Not perfect and may not always work
  • Groups may not be balanced on important variables
  • Can use other strategies beside randomization
150
Q

Design studies for controlling intergroup variability

A
  • Choose homogeneous subjects: e.g. only males or only older adults
  • Matching (age and gender) across groups
  • Use subjects as own controls: Repeated measures design
  • Analysis of covariance (ANCOVA): not a design strategy but a statistical strategy that adjusts for covariates
151
Q

Repeated Measures Design

A
  • (within-subjects design)
  • benefits outweigh negatives: every individual can expect to receive the treatment, and the design decreases the variability between subjects
  • each participant completes the control condition and then the experimental condition, or vice versa
  • i.e. the individual does both sides of the experiment, serving as their own control
152
Q

4 Types of design validity that form a framework for evaluating experiments

A
  • statistical conclusion validity
  • internal validity
  • construct validity
  • external validity
153
Q

Is there a relationship between independent and dependent variables? Statistical conclusion validity

A
  • concerns potential inappropriate use of statistics leading to invalid conclusions about relationship between independent and dependent variables
  • threats to statistical conclusion validity
154
Q

Threats to statistical conclusion validity

A
  • low statistical power: power is the ability of a test to reject the null hypothesis (low power may be due to inadequate sample size; see the note after this list)
  • violation of assumptions of statistical tests (e.g. too small a sample size or using the wrong threshold for statistical significance)
  • error rates due to multiple tests
  • unreliable measurements
155
Q

Is there evidence of a causal relationship between independent and dependent variables? internal validity

A
  • threats to internal validity refer to the potential for confounding factors to interfere with the relationship between independent and dependent variables
156
Q

Single group threats to internal validity

A
  • when only one group of subjects is tested
  • history effect
  • maturation effect
  • attrition effect
  • testing effect
  • instrumentation effect
157
Q

Multiple group threats to internal validity

A
  • can have selection interactions: factors other than the intervention that affect groups differently and can influence posttest differences between groups
  • selection-history effects
  • selection-maturation effects
  • selection-attrition effects
  • selection-testing effects
  • selection-instrumentation effects
158
Q

Social threats to internal validity

A
  • study results may be affected by interaction of subjects and investigators
  • social threats: refers to pressures that can occur in research situations that may lead to differences between groups
  • threats occur because those involved are aware of other group’s situation or are in contact with each other
159
Q

History effect single group threat

A
  • confounding effects of specific events other than the experimental treatment between pre and post test (another exercise program)
160
Q

Maturation effect single group threat

A
  • processes that occur as a function of time

- subjects respond differently on a second test because they grew older or stronger

161
Q

Attrition effect single group threat

A
  • may result in differential loss of subjects (based on intervention) and introducing bias by changing composition of the sample
162
Q

Testing effects in single group threat

A
  • pretest learning

- familiarity

163
Q

Instrumentation effects in single group threat

A
  • reliability of the measurements (test-retest and rater reliability)
  • testers better at testing on posttest versus pretest
164
Q

Diffusion or imitation of treatment social threat

A
  • controls motivated to begin to exercise
165
Q

Compensatory equalization of treatments social threat

A
  • therapists work harder on those in the “lesser” group
166
Q

Compensatory rivalry and resentful demoralization of respondents receiving less desirable treatments social threats

A
  • patients work harder

- or patients work less hard

167
Q

Ruling out threats to internal validity

A
  • use a random assignment and control groups
  • threats cancel out when both groups are equivalent at the start of the study and are likely to be affected equally by events occurring during the study
  • Use blinding of subjects and investigators
168
Q

Construct validity of causes and effects

A
  • refers to the theoretical conceptualization of the treatment and outcome variables and whether these are developed sufficiently to allow interpretation and generalization of their relationship
169
Q

Operational definition of variables

A
  • e.g. construct of pain
  • when measuring pain using 1 method, can't generalize results to general pain measures
  • also timeframe of pain: if assessing pain for 1 week, can't generalize to longer time frames
170
Q

Construct validity is affected when a study involves what?

A
  • when it involves multiple treatments or measurements
  • generalization of relationship between treatment and outcome variables is limited by the possibility of multiple treatment interactions creating a carry over or combined effect
171
Q

Order effects

A
  • subjects receive treatments in a specific order creating possible influences on responses/outcomes
  • cannot generalize responses to situation of only a single treatment given
172
Q

Length of follow-ups

A
  • are trends maintained over time?

- can only conclude based on what the length of treatment was that was provided

173
Q

Hawthorne Effect

A
  • individuals singled out for special attention perform better because they are being observed
  • people act differently if know they are being observed (i.e. if know they are in experimental group)
  • testers may also act more positively to subjects in experimental group (verbal cues, smiling)
174
Q

External Validity

A
  • can results of a study be generalized to other people/settings/times??
  • can it be generalized beyond the experimental situation itself
175
Q

Threats to external validity

A
  • involve interaction of treatment with the specific types of subjects tested, the specific setting in which the experiment is conducted, and the time when the study is done