Exam 1 Flashcards
What are the Oxford Centre for Evidence-Based Medicine levels of evidence?
- Level 1: systematic review of RCTs
- Level 2: Randomized trial or observational study with dramatic effect
- Level 3: Non-randomized controlled cohort/follow-up study
- Level 4: Case-series, case-control studies, or historically controlled studies
- Level 5: Mechanism-based reasoning (formerly known as expert opinion)
What does critical analysis of research reports do?
- Determines validity of the report
- Determines applicability for clinical decisions
What do the guidelines for reporting of studies do?
- CONSORT statement
- Enables reader to better assess validity of the results
- Many others (Ex. STROBE)
Describe evaluating research reports
- Critical analysis of research report
- Guidelines for reporting of studies
- Success of evidence-based practice dependent on incorporating research findings into clinical decision making
How do you assess the quality of a journal?
- When evaluating scientific merit of an article, consider journal’s reputation
- Peer-reviewed/refereed journals
- Content experts
- Accepted based on recommendation of reviewers
- Processes ensure that articles meet standards (importance of study, originality, methods, interpretations/conclusions)
What questions should be asked when evaluating components of a study?
- What is the study’s intent?
- Is the study sound in its methodology?
- Are results meaningful?
- Can the results be applied to my patient?
What is the question ‘what is the study’s intent’ looking at?
The problem under investigation
What should be considered when asking ‘is the study sound in its methodology’?
- If not, results may not be valid
- Details of subjects (how selected/inclusion/exclusion criteria)
- Random assignment? Blinding?
- Reliable and valid measures?
- Equal treatment of groups (apart from intervention)?
What should be considered when asking ‘are results meaningful’?
- Was there an effect of the intervention?
* Clinically significant and statistically significant
What should be considered when asking ‘can the results be applied to my patient’?
- Depends if your patient is similar to the patients studied
- Is treatment feasible in my clinic?
- Is treatment feasible for my patients based on their preferences?
What are the characteristics of clinical research?
- Structured and systematic
- Objective process
- Examines clinical conditions and outcomes
- Establishes relationships among clinical phenomena (ex. how strength affects balance)
- Provides evidence for clinical decision making
- Provides impetus for improving practice
What are examples of clinical phenomena?
Manual muscle testing, ROM, propensity for falls/balance, balance confidence
*Things we can document and keep track of over time
How did clinical research shift in the 20th century and what influenced the shift?
- The shift was influenced by changing research priorities
- Focus on outcomes research to document effectiveness
- application of models of health and disability
- attention to evidence based practice (EBP)
What were rehabilitation outcomes traditionally related to?
*Traditionally related to improvements in pathologies or impairments
What do outcomes include now?
- Broadened per the WHO definition of health to include physical, social, and psychological well-being
- consider patient satisfaction, self-assessment of functional capacity, quality of life (QOL)
- Now clinicians must document outcomes to substantiate effectiveness of treatment
What does outcomes research do?
- Examines how successful our interventions are in clinical practice, specifically in terms of disability and survival
- Studies use large databases including info not only on functional outcomes, but also on utilization of services, insurance coverage etc.
- Measure the effectiveness of treatment in terms of patient satisfaction and outcomes as well as in terms of revenue/costs; staff productivity
- Questionnaires are often used to measure outcomes in terms of function and health status
- Health status scales (ex. instruments such as the Medical Outcomes Study Short Form-36 reflect physical function, mental function, social function, and other domains)
What are the models in research and what do they focus on?
- Biomedical Model
- Focuses on relationship b/w pathology and impairments
- Physical aspects of health
- No consideration for how patient is affected by illness
- Disablement Model
- Pathology, impairment, functional limitation, disability
In the Nagi disablement model, describe pathology, impairment, functional limitation, and disability
- Pathology- interference with normal bodily processes or structures
- Impairment- anatomical, physiological, or psychological abnormalities
- Functional limitation- inability to perform an activity in a normal manner
- Disability- limitation in performance of activities within socially defined roles
What does the ICF model do?
- Describes how people live with their health condition
- Includes references to environmental and personal factors affecting function
- Contextual factors
- Has parallels to Nagi model
What are the parallels between the ICF and Nagi models?
- Health condition: pathology
- Body function/structure: impairments
- Activity: functional limitation
- Participation: disability
Describe ICF outcomes
- Outcomes may be related to (targeted to) the impairment level
- BUT must also establish functional outcomes that influence performance at the activity or at the participation levels
- Ex. increasing strength and balance will allow the person to ambulate in the community and socialize with friends (activity level and participation level)
What does evidence based practice (EBP) do?
*Provision of quality care depends on ability to make choices that have been confirmed by sound scientific data, and that decisions are based on best evidence currently available
How does EBP begin?
*It begins by asking a relevant clinical question related to Patient diagnosis, prognosis, intervention, validity of clinical guidelines, safety or cost effectiveness of care
PICO frames a good clinical question; what does it stand for?
P = patients/population; I = intervention; C = comparison/control; O = outcome of interest
In a patient 2-weeks post hip replacement, is active exercise more effective than passive ROM exercise for improving hip ROM; what represents PICO in this inquiry?
P = patient 2 wks post hip replacement; I = active exercise; C = passive ROM exercise; O = improving hip ROM
Describe what PICO does
- Question is a precursor to searching for the best evidence to facilitate optimal decision making about a patient’s care
- Terms in PICO can be used as search terms in a literature search for best evidence
- Clinicians search and access literature
- Critically appraise studies for validity
- Determine if research applies to their patient
What are the components of EBP for clinical decision making?
- Clinical expertise
- Best research evidence
- Clinical circumstances and setting
- Patient values and preferences
What are some sources of knowledge for clinical decisions and to guide clinical research?
- Tradition (always done this way)
- Authority (expert opinion)
- Trial and error (Try something and if it fails try something else)
- Logical reasoning
- Scientific method
What is logical reasoning?
- A method of knowing which combines
- Experience
- Intellect
- Thought
- Systematic process to answer questions and acquire new knowledge
- 2 types of reasoning
- Deductive
- Inductive
Describe Deductive reasoning
- Acceptance of a general proposition and the inferences that can be drawn in specific cases
- General observation- specific conclusion
- Ex.= poor balance results in falls, exercise improves balance, therefore exercise will reduce risk of falls
Describe Inductive reasoning
- Specific observation- general conclusion
- Ex.= Patients who exercise don’t fall, Patients who don’t exercise fall more often, Therefore exercise is associated with improved balance (and fewer falls)
What kind of reasoning is used in the introduction section of a research manuscript?
*Deductive logic is used when developing research hypotheses from existing general knowledge
What kind of reasoning is used in the discussion section of a research manuscript?
*Inductive reasoning is used when researchers propose generalizations and conclusions from data in a study
What is the scientific method?
- Rigorous process used to acquire new knowledge
- Based on 2 assumptions
- Nature is orderly/regular and events are consistent and predictable
- Events/conditions are not random and have causes that can be discovered
What is the scientific approach defined as?
*Systematic, empirical, controlled and critical examination of hypothetical propositions about the associations among natural phenomena
What does the systematic nature of research imply?
- Implies a sense of order to ensure reliability
- Logical sequence to identify a problem, collect and analyze data, interpret findings
In the scientific method, what are the empirical, control, and critical examination?
Empirical: in research refers to direct observation to document data objectively
Control: control of extraneous factors
Critical examination: scrutiny of your findings by other researchers
How do you classify research?
*Can classify research based on a number of schema according to purposes and objectives
What are qualitative and quantitative research?
- Quantitative: measurement under standardized conditions- can conduct statistical analysis
- Qualitative: understanding through narrative description; less structured- interviews
What are basic and applied research?
- Basic: ‘bench research’- not practical immediately, may be useful later in developing treatments
- Applied: solving immediate practical problems- most clinical research
What is translational research?
- Translational:
- Scientific findings are applied to clinical issues
- Also generating scientific questions based on clinical issues
- ‘bedside to bench and back to bedside’
- Collaboration among basic scientists and clinicians
What is experimental research?
- Experimental: researcher manipulates one or more variables and observes
- major purpose is to compare conditions or intervention groups to suggest cause and effect relationships
- RCT is the gold standard of experimental designs
- Quasi-experimental: limited control but can get interpretable results
What is non-experimental research?
*Non-experimental: investigations that are descriptive or exploratory in nature
What is exploratory research?
- Exploratory: examine a phenomenon of interest including its relationship to other factors
- In epidemiology researchers examine associations to predict risk for disease by conducting cohort and case-control studies
- Methodological studies use correlational methods to examine reliability and validity of measuring instruments
- Historical studies reconstruct the past on the basis of archives and other records to suggest relationships of historical interest to a discipline
What is descriptive research?
- Descriptive: describe individuals to document their characteristics, behaviors and conditions
- Several designs
What are the several designs in descriptive research?
- Descriptive surveys: use questionnaires, interviews
- Developmental research: patterns of growth and change over time in a segment of the population, natural history of a disease
- Normative studies: to establish normal values for diagnosis and treatment
- Qualitative research: Interview and observation to characterize human experiences
- Case study or case series
How do you collect data?
- Collect data based on subject’s performance on a defined protocol
- Surveys
- Questionnaires
- Secondary analysis of large databases: use data collected for another purpose to explore relationships
What is a research process and what are the 5 major steps?
- Logical framework for a study’s design
- 5 major steps:
- Identify the research question
- Design the study
- Methods
- Data analysis
- Communication
Why were theories created?
*Created because we need to organize and give meaning to complex facts and observations
What do theories entail?
*Interrelated concepts, definitions, or propositions that specify relationships among variables and represent a systematic view of specific phenomena
What does scientific theory deal with?
*Scientific theory deals with empirical observation and requires constant verification
Why do we use theory?
- Use theory to generalize beyond specific situations and to make predictions about what we expect to happen
- Provide framework for interpretation of observations
- Giving meaning to research findings and observations
- Stimulate development of new knowledge
- Theoretical premise to generate new hypotheses which can be tested
What are the components of theories?
- Concepts: Building blocks of a theory
- Allow us to classify empirical observations
- We ‘label’ behaviors, objects, processes that allow us to identify them and refer to/discuss them
- Concepts can be non-observable
- Known as constructs
- Constructs are abstract variables (Ex. intelligence)
What are propositions?
*Once concepts are identified they are formed into a generalization or proposition
What do propositions do and what are the kinds?
- They state relationships btwn variables
- Hierarchical proposition (Maslow’s needs)
- Temporal proposition (stages of behavioral change)
What are models and why do we use them?
- Models are symbolic representations of the elements in a system
- Can represent processes
- Ex. ICF, Nagi models
- Concepts can be highly complex so use models to simplify them
- Ex. double helix model in genetics
How do you develop theories?
- By inductive or deductive processes
- Most formulated using both processes
- Observations initiate theory and then hypotheses tested
What are inductive theories?
- Data based
- Begin with empirically verifiable observations
- Multiple studies and observations (Patterns emerge)
- Patterns develop into a systematic conceptual framework which forms basis for generalizations
What are deductive theories?
- Intuitive approach
- A hypothetico-deductive theory is developed with few or no observations
- Not developed from existing facts, must be tested constantly
Do you test theories?
- Theories themselves are not directly testable
- Instead, test hypotheses that are deduced from theories
- If hypothesis is supported then theory from which it was deduced is also supported
What is theory a foundation for?
*It’s a foundation for understanding research findings
Describe the importance of authors in research
- Results of studies must be explained and interpreted by authors within the realm of theory
- Authors must help readers understand context within which results can be understood
- Researchers should offer interpretation of findings
- Contribute to the growth of knowledge
What constitutes researcher integrity?
- Relevant research question
- Meaningful research
- Competent investigators
- Avoidance of personal bias in measurement
- Avoidance of misconduct
- No falsification of data
- No manipulation of statistics
- Publish findings
- Authorship
What are the 3 principles that protect human rights in research?
- Autonomy
- Beneficence
- Justice
What is autonomy?
- Self determination and capacity of individuals to make decisions
- An authorized decision maker is available for subjects unable to make decisions for themselves
What is beneficence?
- Attend to well-being of individuals
- Risk/benefit
What is justice?
- Fairness in research process
- Selection of subjects appropriate for a given study
- Also in randomization process
What are some regulations for conduct of research with humans?
- Nuremberg Code of 1949
- Declaration of Helsinki 1964
- National Research Act 1974, 1976
- Belmont Report 1979
- HIPAA 1996
What does the IRB do?
- Reviews research proposals
- Scientific merit
- Competence of investigators
- Risk to subjects
- Feasibility of the study
- Risk-benefit ratio
- Risks to subjects are minimized and outweighed by potential benefits of the study
Compare Full vs. Expedited vs. Exempt review
- Full Review: high risk studies
- Expedited: Low risk
- Exempt: surveys, interviews, studies of existing records provided data are collected in such a way that subjects can’t be identified
What are the information elements in informed consent?
- Information elements
- Subjects fully informed
- Subject info confidential and anonymous
- Consent form in lay language
- Researcher answers questions at any time
What are the consent elements in informed consent?
- Consent elements
- Consent must be voluntary
- Special consideration for vulnerable subjects
- Ex. Mental illness (Legal guardian consents)
- Free to withdraw at any time
Why should you use measurements in research?
- Clinical decision making
- Compare
- Draw conclusions
- Process governed by rules
What is a numeral in measurement?
- A symbol or label (ex. 1=older adults)
- A numeral becomes a number when it represents a quantity (ex. 25 kg grip strength)
What do variables do in research?
- They differentiate objects or individuals
- take on different values either quantitatively (ex. height in inches) or qualitatively (ex. gender, fall status)
What are the different types of variables?
- Continuous variable
- quantitative variable that can take on any value along a continuum
- limited by precision (exactness) of the instrument
- Ex. gait speed (TUG)
- Discrete variables
- Described in whole units
- Ex. HR
- Qualitative variables represent discrete categories (ex. faller/non-faller)
- Dichotomous variables are qualitative variables with 2 values
What is Precision?
- How exact is a measure
- it indicates the number of decimal places
- Relates to the sensitivity of the measuring instrument
- also depends on the variable
- Ex. HR measure in whole numbers
- Ex. Strength measure to half of a kg
What are examples of variables measured directly?
- ROM
- Height
- Distance walked in the 6MWT
What are examples of variables measured indirectly (not directly observable)?
- Temperature
- Balance
- Strength of a muscle
- Health/disability
What are the 4 scales of measurement and why do we need them?
- They form a hierarchy based on the relative precision of assigned values (nominal up to ratio)
- Nominal
- Ordinal
- Interval
- Ratio
What is the Nominal scale?
- Numbers are labels for identification (Ex. 0=male)
- Categories are mutually exclusive
- Mathematically can count number in each category (proportions, frequencies, etc.)
What is the Ordinal scale?
- Numbers indicate rank order (ex. MMT scale)
- Lack of arithmetical properties
- Ordinal values represent relative position only
- Ex. MMT of 4 is not 2x greater than 2 in terms of strength
- Appropriate for descriptive analysis only (Ex. average rank of a group of subjects)
What is the Interval Scale?
- Has rank order characteristics of ordinal scale
- Equal distances/intervals btwn units (Temp, calendar)
- No true/natural zero
- So cannot say 20 degrees is twice as hot as 10 degrees
- ex. no ratios “allowed”
- Can quantify the difference between interval scale values (ex. from 20-25 means something)
What is the ratio scale?
- Interval scale with zero point that has empirical meaning
- Equal distances/intervals btwn units
- Zero means absence of what is being measured; start measuring at zero
- No negative values
- Ex. height, weight, strength, age
- Represent actual amounts being measured (can say age of 20 yrs is twice as much as 10 yrs)
What is reliability?
- It’s the extent to which a measurement is consistent and free from error
- Conceptualized as reproducibility, dependability, agreement
- Reliability applies to the patient, examiner, or instrument
What happens without reliability?
- No confidence in the data
* Can’t draw conclusions from the data
Why are measurements rarely 100% reliable?
- Human element/error
* Instruments
What is the observed score?
*True score +/- error
What is systematic error?
- Predictable errors
- Constant and biased
- Unidirectional
- Consistent so not a problem for reliability
- Can correct your readings (Ex. subtract a constant amount from each reading you take)
- Are a problem for validity as measurements do not truly represent quantity being measured
What is random error?
- Chance
- Unpredictable
- Due to mistakes, instrument inaccuracy, fatigue, etc
- Assume that if take a lot of measurements then random errors will eventually cancel each other out and average of a lot of scores will be a good estimate of the true score
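The idea that random errors cancel out while systematic error does not can be illustrated with a short simulation (a sketch only; the true score, bias size, and noise level are invented values):

```python
import random

random.seed(42)  # reproducible example

TRUE_SCORE = 25.0  # hypothetical true grip strength in kg
SYSTEMATIC = 1.5   # constant bias, ex. a miscalibrated dynamometer

def one_measurement():
    # Random error: unpredictable, mean-zero noise (fatigue, mistakes, etc.)
    return TRUE_SCORE + SYSTEMATIC + random.gauss(0, 2.0)

# A single reading can be far off, but the average of many readings
# converges on true score + bias: the random error cancels out, while
# the systematic error remains (a validity problem, not reliability).
readings = [one_measurement() for _ in range(10_000)]
average = sum(readings) / len(readings)
print(round(average, 1))  # close to 26.5 (= 25.0 + 1.5)
```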
What are the sources of measurement error?
- Individual (rater/tester)
- Measuring instrument
- Variability of the characteristic (response variable) being measured
- Ex. blood pressure, HR, balance confidence
- Environment (noise and temp)
- Subject motivation and fatigue
How are sources of error minimized?
- They’re minimized by training, equipment inspection, maintenance, etc.
- We assume these factors are random and effect cancels out over time
What are reliability coefficients?
*Range from 0.00 (no reliability) to 1.00 (perfect reliability)
*<0.50 = poor reliability
*0.50-0.75 = moderate
*>0.75 = good
(These are guidelines only)
What is the relationship between correlation and agreement?
*Perfect correlation does not always mean agreement; raters can be perfectly correlated yet systematically differ, which still means poor reliability
What is test-retest reliability?
- To establish that an instrument is capable of measuring with consistency
- Individuals given same test on 2 occasions under identical conditions
- Raters not involved
- Ex. self-report surveys of balance confidence (ABC scale); measures of mechanical or digital readouts
What is intrarater reliability?
*stability of data recorded by 1 person on 2 or more trials
What is interrater reliability?
- Variation/agreement btwn 2 or more raters who measure same group of subject(s)
- Preferably at exact same time
- They don’t discuss results with each other until done recording data
What is the classification of the ICC?
- Intraclass correlation coefficient
- Designated as ICC (1,1) ICC(2,1), or ICC(3,1)
- 1st number is the model and 2nd number is the form
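As a sketch of what the ICC captures, one common form, ICC(2,1) (two-way random effects, absolute agreement, single measures, per the Shrout & Fleiss formula), can be computed from two-way ANOVA mean squares; the rating data below are invented:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape  # n subjects (rows), k raters (columns)
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # raters
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters in perfect agreement -> ICC = 1.0
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))

# Rater B always scores 1 point higher: correlation is perfect, but the
# systematic difference lowers absolute agreement -> ICC = 2/3
print(round(icc_2_1([[1, 2], [2, 3], [3, 4]]), 2))
```

The second example repeats the correlation-vs-agreement point: ICC(2,1) penalizes systematic rater differences that a plain correlation coefficient ignores.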
What is alternate forms reliability?
- Reliability of 2 equivalent forms of a measuring instrument
- Sometimes referred to as parallel forms
- Ex. GRE or SAT- given a number of times a year but in different forms, need to establish reliability
- Correlation coefficients have been used to evaluate them
What is alternate forms reliability based on?
- Based on administering 2 alternate forms of the test to a single group at a single session
- Correlating paired observations
- Usually used in educational/psych testing
- Clinical examples: alternative forms (parallel forms) of strength evaluations and/or gait evaluation
What can determination of limits of agreement estimate?
*Can estimate range of error expected when using 2 different forms of an instrument
What is internal consistency (or homogeneity)?
- Items in an instrument (ex. scale) measure various aspects of the same characteristic and nothing else
- Physical functioning scale should have items only relating to physical functioning
- No items relating to psychological characteristics
- If psychological characteristics were included then items are not homogeneous
- Can be assessed conducting an item-to-total correlation
What is Cronbach’s coefficient alpha and how is it assessed?
- The statistic most often used to assess internal consistency
- Evaluates items in a scale to determine if they measure the same construct
- Can be used to determine which items could be removed from a scale to improve scale homogeneity
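A minimal sketch of the calculation, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using invented Likert-type item scores and the usual n-1 sample variance:

```python
import statistics

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per person
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Three items answered by four respondents (hypothetical Likert scores)
item1 = [4, 3, 3, 1]
item2 = [4, 4, 3, 2]
item3 = [5, 4, 2, 1]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # 0.93 -> homogeneous items
```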
What is measurement validity?
- Instrument or test measures what it’s intended to measure
- Ex. Hand held dynamometer is a valid instrument for measuring strength because we can assess strength from pounds force
- MSL is a valid measure of balance
- Invalid tests may be reliable
What is validity?
- Measurement is relatively free from error
- Ex. valid tests are reliable
- Low reliability is automatic evidence of low validity
What is specificity of validity?
- An instrument or test is usually valid for a given purpose or situation or population
- Validity is not a universal characteristic of an instrument
- Ex. Assessing disability in pts with PD with instrument A does not mean that instrument A is valid for assessing disability in people with SCI
- MMT isn’t a valid measure of strength in every population (ex. ppl with certain diseases)
What are the types of measurement validity?
- Face validity
- Content validity
- Criterion-related validity
- Concurrent or predictive
- Construct validity
What is face validity?
- Instrument appears to test what it’s supposed to (a plausible method)
- Weakest form of validity
- Difficult to quantify how much face validity an instrument has (no standards to judge it)
- Subjective assessment
- Often established through direct observation
- Ex. instrument that measures ROM; strength; gait
What is content validity?
- Variables have a universe of content
- characteristics and info that are observable about that variable
- Established if an instrument covers all parts of the universe of content
- Reflects the relative importance of each part
- Important in questionnaires, exams, interviews
What must content validity not include?
- Test must not include factors irrelevant to the purpose of measurement
- Ex. test of motor performance should not contain items assessing psychological function
What does content validity imply?
- Implies test contains all the elements that reflect variable being studied
- Ex. visual analog scale reflects only one element of pain (intensity)
- McGill Pain questionnaire has greater content validity as it assesses many elements of pain such as location, intensity, duration, etc.
What kind of process is content validity?
- Subjective process to establish content validity
- No statistics available
- Experts determine if the questions cover the content domain
- When experts agree that content is adequately sampled, content validity is established
What is Criterion-related validity?
- Most practical and objective approach to testing validity
- Test to be validated is the target test
- This test is compared to the “gold standard” or criterion measure that is already valid
- Correlations are calculated
- High correlations imply the target test is a valid predictor of performance on the criterion test
- Can be used as a substitute for established test
What is concurrent validity?
- Target and criterion measure are assessed at approximately the same time
- Ex. MSL is a valid measure of balance
- Useful to establish concurrent validity when the target test may be more efficient, easier to administer, more practical, safer than the gold standard and can be used instead of gold standard
- Sensory screening
What is predictive validity?
- A measure will be a valid predictor of some future criterion score
- Ex. College admission criteria as a predictor of future success
- Ex. TUG as a measure of future risk for falls
What is construct validity?
- Ability of an instrument to measure an abstract concept (construct)
- Ex. health, disability, confidence
- Constructs are not observable but exist as concepts to represent an abstract trait
- Constructs are usually multi-dimensional
- Ex. how do you measure “health” or “function”?
When is there support of construct validity?
*When a test can discriminate btwn individuals who are known to have a condition and those who do not
What is the known groups method in construct validity?
*Construct validation is determined by the degree to which an instrument can demonstrate different scores for groups known to vary (Ex difficulty walking versus no difficulty walking)
How can construct validity of a test be evaluated?
- It can be evaluated in terms of how it relates to other tests of the same and different constructs
- Determine what a test does and doesn’t measure- convergence and discrimination
What is convergent validity?
- 2 measures reflecting same phenomenon will correlate
- Convergence not enough for construct validity
- Must show that the construct can be differentiated from other constructs
What is discriminant validity?
*Measures that assess different characteristics will have low correlations
What do you start with when asking a research question?
- start with selecting a topic of interest
- Pt population (ex. geriatrics, HD, CMT)
- Intervention
- Clinical assessments
What is the second step when asking a research question?
- Identify a research problem
- Broad statement to focus the direction of the study
- What is known and what is not about the topic
- Clinical experiences
- Clinical theory (ex. motor control theory) - effects of practice/repetition
- Literature review: initially a preliminary review (not extensive; an orientation to the issues)
- Gaps (areas without info to make clinical decisions)
- Conflicts/contradictory findings/flawed studies
- Replication (to correct for design limitations)
What should be considered when reviewing the literature?
- Initial review is preliminary
- General understanding of the state of knowledge in the area of interest
- Once research problem is formulated begin a full and extensive review
- Provides complete understanding of background to assist in formulating the research question
- Problem provides foundation for a specific research question answerable in our study
What are the components in shaping a question?
Question should be:
- Important = results should be meaningful and useful
- Answerable = questions involving judgements and philosophy are difficult to study
- Feasible for study = researchers must have the necessary skill and resources. Are there sufficient subjects?
What is a target population?
- Must be well defined so it is obvious who will be included in the study
- Ex. community dwelling adults age 60 or more
What should be done for the development of a research rationale?
- Begin a full review of the literature once the research problem is delineated
- A full review of the literature will establish the background for the research question
- this clarifies the rationale for the study
What does research rationale present?
- Presents a logical argument that shows why and how a question was developed
- Shows why question makes sense
- Rationale provides a theoretical framework for the study
- If no rationale for a study, difficult to interpret results
What are variables?
*Building blocks of research question
What is the independent variable?
- Predictor variable
- Condition that ppl have
- Intervention used
- Characteristics of people that will predict or cause an outcome (young vs. old)
What is the dependent variable?
*It’s the outcome variable that varies depending on the independent variable; it does not have levels like an independent variable
What is a conceptual definition?
- Dictionary definition; general
- Leg flexibility is the degree of motion in the leg
- No info on what measure of flexibility/motion is used in a particular research study (ex. ROM or SLR)
What is operational definition?
- Variable is defined according to its meaning within a particular study
- Ex. definition of “balance impairment” in a study
What is the final step in delineating a research question?
- clarifying objective of study
* Research objective must specifically and concisely delineate what a study is expected to accomplish
How might the objectives be presented?
- Hypotheses
- Specific aims
- Purpose of research
- Research objectives
- terms vary according to journals, researchers, disciplines
What are the 4 general types of research objectives?
- Descriptive- to characterize clinical phenomena or conditions in a population
- Measurement properties of instruments - investigation of reliability and validity
- Explore relationships- to determine interactions among clinical phenomena (ex. strength and balance study)
- Comparisons- outlining cause and effect relationships using experimental model
- evaluates group differences regarding effectiveness of treatment
What is the purpose of a study?
*To test the hypothesis and provide evidence to either accept or reject it
What is a hypothesis?
- In experimental and exploratory studies, the researcher proposes an educated guess about study outcomes
- Statement called a hypothesis
- statement that predicts relationship between independent and dependent variables
How is a hypothesis made?
- Not proposed on the basis of speculation
- Derived from theory
- Suggested from previous research, clinical practice/experience, observation
What is a research hypothesis?
- States the researcher’s true expectations of results
- More often than not, research hypotheses propose a relationship in terms of a difference
- Some research hypotheses predict no difference btwn variables
What is analysis of data based on?
- Based on testing the statistical (null) hypothesis
- always predicts no difference or no relationship
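The logic of testing the statistical (null) hypothesis of no difference can be illustrated with a small permutation test. A minimal Python sketch, not from the cards — the function name and the outcome scores are hypothetical:

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate a p-value for the null hypothesis of no difference in means.

    Under the null, group labels are interchangeable: shuffle the pooled
    scores repeatedly and count how often a random relabeling yields a
    mean difference at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical outcome scores for an experimental vs. control group
treatment = [52, 58, 61, 55, 60, 57]
control = [48, 50, 47, 53, 49, 51]
p = permutation_test(treatment, control)
# A small p-value is evidence against the null hypothesis of no difference
```

A small p means a difference this large would rarely arise by chance alone, so the null hypothesis of no difference is rejected.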
What is a non-directional hypothesis?
*Does not predict a direction of change
What is a directional hypothesis?
*Predicts a direction of change
What is a simple hypothesis?
*Includes 1 IV and 1 DV
What is a complex hypothesis?
*More than 1 IV or DV
What is important in reviewing the literature?
- Initial review is preliminary
- General understanding of the state of knowledge in the area of interest
- Once research problem is formulated begin a full and extensive review
- Provides a complete understanding of background to assist in formulating the research question
What should be in the scope of the literature review?
- Depends on your familiarity with the topic
- How much has been done in the area and how many relevant references there are
- review info on: pt population, methods, equipment used, operational definition of variables, statistical techniques used
What is the difference between primary and secondary sources?
- Primary: research articles
* Secondary: review articles (systematic review) and textbooks
What is validity in experimental design?
*Issues in experimental control that must be addressed so that we have confidence in the validity of the experimental outcomes
What is the most rigorous form of investigation to test hypotheses?
- The scientific experiment
- Researcher manipulates and controls variables
What is an experiment?
*Looking for cause-and-effect relationship btwn independent variable and dependent variable
What are experiments designed for?
- Designed to control for extraneous (nuisance) variables that exert a confounding (contaminating) influence
- Ex. 2 types of treatment received simultaneously
What are the 3 characteristics of experiments?
- Independent variable is manipulated by the researcher
- Random assignment
- Control (comparison) group must be incorporated into the study
Describe “independent variable is manipulated by the researcher”
- intervention (independent variable) is administered to one group (experimental) and not to another (control)
- Active variable: IV with levels that can be manipulated and assigned (ex. treatment)
- Attribute variable: IV with levels that cannot be manipulated as they represent subject characteristics (ex. occupation, gender, age group)
- A study whose IV is only an attribute variable is not a true experiment
Describe Random Assignment
- In assigning subjects to groups
- Each subject has an equal chance of being assigned to any group
- No systematic bias exists that might differentially affect the dependent variable
- If groups are equivalent at the beginning of a study
- Differences observed at end of study are not due to differences that existed before the study began
- eliminates bias by creating a balanced distribution of characteristics across groups
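The assignment procedure above can be sketched in a few lines. A minimal Python illustration — the subject IDs and group labels are hypothetical:

```python
import random

def randomly_assign(subjects, groups=("experimental", "control"), seed=None):
    """Give every subject an equal chance of each group while keeping
    group sizes balanced: shuffle the subjects, then deal them out to
    the groups in turn."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    return {subject: groups[i % len(groups)] for i, subject in enumerate(shuffled)}

# Hypothetical subject IDs
subjects = [f"S{i:02d}" for i in range(1, 21)]
allocation = randomly_assign(subjects, seed=42)
# 20 subjects dealt into two balanced groups of 10
```

Because the shuffle is random, no systematic bias determines who ends up in which group, so the groups can be assumed equivalent at the start of the study.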
Describe “control group must be incorporated into the study design”
- to rule out extraneous effects, use control group to compare to experimental group
- Must assume equivalence btwn control and experimental groups at start of study
- If we see a change in treatment group (experimental) but not control group, can assume it’s the treatment
What is the research protocol in experimental control?
- To control for extraneous (confounding) factors in a study
- Eliminate them as much as possible (implies the researcher must know what they are)
- Standardize research protocol so that each subject has a similar testing experience
- Ensure they affect all groups equally (Ex. Testing environment, fatigue, instructions to subjects)
Why do you maximize adherence to research protocol?
- To limit loss of data
- Data losses compromise effect of random assignment
- Decreases power of a study
What are the reasons for loss of data?
- Subjects may drop out during the study
- subjects may cross over to another treatment
- Subjects may refuse assigned treatment after allocation
- Subjects may not be compliant with assigned treatments
Why do you do blinding in experimental control?
- To avoid observation bias
- Participants’ knowledge of treatment status or investigators’ expectations can influence performance or the recording of outcomes
- double blind study = subjects and investigators are unaware of the identity of groups until after study
- Single blind study = only investigator/measurement team is blinded
What are the design strategies for controlling for intergroup variability?
- Random Assignment
- Eliminates bias by creating a balanced distribution of characteristics across groups
- Not perfect and may not always work (groups may not be balanced on important variables)
- Can use other strategies beside randomization
- Choose homogenous subjects (ex. only males or older adults)
- Matching (age and gender) across groups
- Use subjects as own controls (repeated measures design)
- Analysis of covariance (ANCOVA)
- Not a design strategy but a statistical strategy
- Statistically adjusts for covariates (variables on which groups differ at baseline)
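Matching across groups (one of the strategies listed above) can be sketched as follows. An illustrative Python example, assuming a hypothetical (name, gender, age) subject format and matching on gender and decade of age:

```python
import random
from itertools import groupby

def matched_assignment(subjects, seed=None):
    """Form matched strata on gender and decade of age, then randomly
    order members within each stratum and alternate them between groups.

    `subjects` is a list of (name, gender, age) tuples (hypothetical format).
    """
    rng = random.Random(seed)

    def stratum(s):
        return (s[1], s[2] // 10)  # match on gender and decade of age

    assignment = {}
    for _, members in groupby(sorted(subjects, key=stratum), key=stratum):
        members = list(members)
        rng.shuffle(members)
        for i, (name, _, _) in enumerate(members):
            assignment[name] = "experimental" if i % 2 == 0 else "control"
    return assignment

# Hypothetical subjects: two matched pairs (women in their 60s, men in their 40s)
subjects = [("S1", "F", 62), ("S2", "F", 65), ("S3", "M", 41), ("S4", "M", 44)]
allocation = matched_assignment(subjects, seed=1)
# Each matched pair is split across the experimental and control groups
```

This combines matching with random assignment: the groups are balanced on age and gender by construction, while chance still decides which member of a pair receives the intervention.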
What are the 4 types of design validity that form a framework for evaluating experiments?
- Statistical conclusion validity
- Internal validity
- Construct Validity
- External Validity