Midterm Flashcards
Evidence Based Practice
- "The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients" (Loiselle & Profetto-McGrath, 2011, p. 368)
- The "integration of best research evidence with clinical expertise and patient values" (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000, p. 1).
Paradigm
- Particular way of thinking or viewing things
- Perspective
- The lens through which we view the world
Quantitative can also be called..
Positivist, Received View
Quantitative Research
- Scientific way of thinking/inquiry
- Statistical explanation, prediction and control
- Provable/cause and effect
- Demonstrating associations/relationships
- Quantify findings; findings are measurable
- May start with a hypothesis
Qualitative can also be called…
Perceived view/naturalistic
Qualitative Research
Value placed on rich details of the context in which the phenomenon occurs
Time and place (context) are important
Aims for description, understanding, exploration
Answers the question, “What is going on here?”
“Truth” is determined by the individual or cultural group
Subjectivism valued
Multiple realities exist
What is inductive reasoning?
Inductive reasoning moves from the specific to the general.
Particular instances are observed and then eventually combined into a general statement or theory.
What is deductive reasoning?
Moves from the general to the specific.
From a general premise to a particular situation or conclusion.
Deductive reasoning generally moves from a theory (the general thing) about something, to the relationships between variables (the specifics) found within that theory.
What are the 3 stages of research process?
- Planning
- Executing
- Informing
Stage 1- Planning
- Selecting and defining the research problem/topic
- Review and synthesize related literature
- Identify frame of reference/define terms
- Develop aims, objectives, questions & hypotheses
- Select a research design/method (sample & setting, define all measurements, data collection methods, analysis)
- Consider feasibility & ethics
- Finalize the proposal/plan (budget, timeline, dissemination, team)
Stage 2- Executing
- Obtain ethical approval (if human/animal participants)
- Obtain funding (optional)
- Collect data
- Analyze data/Interpret findings
Stage 3- Informing
- Write up results
- Disseminate research findings
Research Mapping
Purpose
The purpose of creating a study map is to help you, as a reader and appraiser, to focus on the essentials of the study
Often a visual representation such as a concept or mind map
3 core ethical principles
- respect for persons
- concern for welfare/beneficence
- justice
Respect for persons
Respect for persons is based on 2 ethical convictions:
- Autonomy
- Special protection
Autonomy…
= INFORMED CONSENT, which requires:
- Information: full disclosure – what does this entail? Anonymity & confidentiality; in some cases full disclosure would totally undermine the study, leading to covert data collection, concealment, or deception
- Comprehension: the information must be understandable
- Voluntariness: freedom from coercion, freedom to withdraw
Special protection
Vulnerable people, e.g., children or persons with disabilities
Concern for Welfare/ Beneficence
Maximize benefits/minimize harms
- RISK: physical, psychological, social, economic
- BENEFIT: direct, indirect, advancement of knowledge
Justice
Fairness and Equality
- Individuals must be treated fairly
- Must receive the minimum standard of care
- Equitable distribution of the benefits & burdens of research
ROLE OF RESEARCHER
- Conduct equitable recruitment of participants
- Question whether groups are considered for inclusion simply because of their availability, their compromised position, or their vulnerability, rather than for reasons directly related to the problem being studied.
- Ensure benefits derived from the study are available to all
Quantitative Methods- Strengths
- Gives you quantity
- Objectivity
- Predictability - cause and effect, correlations
- Can collect data through multiple methods
- Comparisons between populations/groups are possible
- Comparisons over time are possible
- Able to generalize to the larger population
Quantitative Methods- Weaknesses
- Must know topic to ask “right” questions
- What do your results mean?
- Oversimplify a complex reality?
- How do people interpret the question?
- Phenomenon must be measurable
- Can be expensive
Quantitative Methods- Experimental Design
what you’ll see: 3 essential characteristics:
- Randomization (random assignment of participants to groups)
- Control group
- Manipulation of the independent variable (IV), such as a treatment (tx) or education program
- Example: classic experiment; applied as randomized clinical trial (RCT)
Quantitative Methods- Quasi-experimental Design
- what you’ll see: manipulation of independent variable
- does not have randomized sample OR control group
- Example: pretest-posttest design
Quantitative Methods- Non-experimental Design
- what you’ll see: LACKS manipulation of IV, does not test an intervention
- Example: descriptive, correlational research, questionnaire/survey
Experimental Research Design
- Used to test cause-and-effect relationships between variables
- Known as the "scientific method"
Randomized control trial
- A type of experimental research design; the GOLD STANDARD
- A study in which similar people are randomly allocated to two (or more) groups to test a specific treatment/intervention
- The experimental group receives the treatment/intervention to be tested
- The comparison group [e.g. standard treatment] or control group [placebo treatment or no treatment] does not receive the intervention being tested
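A minimal sketch of how random allocation works in an RCT (the participant IDs and group sizes below are made up for illustration):

```python
import random

# Hypothetical sketch: randomly allocate 20 enrolled participants
# to an experimental group and a control group.
participants = [f"P{i:02d}" for i in range(1, 21)]
random.shuffle(participants)                 # randomize the order

half = len(participants) // 2
experimental_group = participants[:half]     # receives the intervention being tested
control_group = participants[half:]          # receives placebo/standard treatment

print("Experimental:", experimental_group)
print("Control:", control_group)
```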
Independent Variable
- Causes/Affects
- The intervention, treatment
- Has an influence on the DV
Dependent Variable
- it is acted upon or produced
- outcome
- caused by the IV
Confounding Variables- How they are controlled
- Random assignment/random allocation of participants to groups (distinct from probability sampling, which refers to how the sample is selected)
- Intentional control over known or presumed confounders
Stratified Random Sampling
A stratified random sample is a population sample that requires the population to be divided into smaller groups, called ‘strata’.
Random samples can be taken from each stratum, or group.
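A rough sketch of the idea (the population, strata, and sampling fraction below are invented for illustration):

```python
import random

# Hypothetical population of hospital staff, divided into strata by role,
# with a simple random sample drawn from each stratum (proportional allocation).
population = (
    [("nurse", f"N{i}") for i in range(60)]
    + [("physician", f"D{i}") for i in range(30)]
    + [("pharmacist", f"R{i}") for i in range(10)]
)

def stratified_sample(pop, fraction):
    strata = {}
    for role, pid in pop:
        strata.setdefault(role, []).append(pid)      # divide the population into strata
    sample = {}
    for role, members in strata.items():
        k = max(1, round(len(members) * fraction))   # proportional sample size per stratum
        sample[role] = random.sample(members, k)     # simple random sample within the stratum
    return sample

print(stratified_sample(population, fraction=0.10))  # ~10% drawn from each stratum
```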
Hypothesis- Critical Features
- A hypothesis is the researcher's best guess as to the outcome of the study
- Points to the research design to be used
- Suggests/predicts relationships among variables
- Identifies the nature of the relationship: can be directional or non-directional
Null Hypothesis
there is no relationship between the variables being tested (no difference exists or population status quo)
What are 4 steps in hypothesis testing?
- Determine hypotheses
- Identify level of significance
- Compute the test statistic
- Make a decision- reject or retain
- Determine hypotheses
- State the research hypothesis
- State the null hypothesis
- Identify level of significance
- Level of confidence that the difference observed between the experimental and the control/standard groups is a real difference.
- To set the criteria for a successful decision we state the level of significance for a test
- This pre-test level of statistical significance is called the “alpha”.
- Alpha is calculated by subtracting the level of confidence from 1.
- There are two standard levels of confidence used in most experimental research: 95% and 99%
1 - 0.95 = an alpha of 0.05 (most common)
1 - 0.99 = an alpha of 0.01
- Test Statistic
- A test statistic tells us how many standard deviations a sample mean is from the population mean
- The larger the value of the test statistic, the further the distance (or number of standard deviations) the sample mean is from the population mean
- This allows us to make a decision about the null hypothesis
- Make a decision
The calculated "P value" is compared to the pre-set alpha.
If the P value is < .05, we reject the null hypothesis & reach significance
If the P value is > .05, we retain the null hypothesis & fail to reach significance
This decision to reject or retain the null hypothesis is what is meant by significance or statistical significance
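A worked sketch of the four steps using a one-sample z-test (the population mean/SD and sample results are made up, and a z-test is only one of many possible test statistics):

```python
from math import sqrt
from statistics import NormalDist

# Step 1: determine hypotheses.
#   H1: the sample mean differs from the population mean of 70.
#   H0: there is no difference.
# Step 2: identify the level of significance: alpha = 1 - 0.95 = 0.05.
pop_mean, pop_sd = 70, 10     # hypothetical population values
sample_mean, n = 74, 40       # hypothetical sample results
alpha = 0.05

# Step 3: compute the test statistic
# (how many standard errors the sample mean lies from the population mean).
z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))
p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-tailed P value

# Step 4: make a decision - compare the P value to the pre-set alpha.
print(f"z = {z:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: statistically significant")
else:
    print("Retain the null hypothesis: not statistically significant")
```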
Type 1 Error
State a difference exists when it really doesn’t (false positive)
The null hypothesis is rejected when it is correct
We state that there is a difference between delayed cord clamping and milking when there really isn’t one
The probability of making a type I error is alpha, which is the level of significance you set for your hypothesis test.
Alpha is usually set at 0.05 or lower
Preventing type 1 errors
To lower this risk, researchers must use a lower value for alpha. However, using a lower value for alpha means that the researchers will be less likely to detect a true difference if one really exists.
Type 2 Errors
States there is no difference when one exists (false negative)
The null hypothesis is retained when it is false
State there is no difference in delayed clamping and milking when there is one
The probability of making a type II error is called β (Beta); power, the level the researchers set for the test, equals 1 - β
Power – the ability of a test to detect a difference when one actually exists; power increases as sample size increases
Power is usually set at 0.8 (an 80% chance of finding a difference when there is one), i.e., β = 0.2
Preventing Type 2 errors
Depends on a number of factors (alpha level, sample size, effect size, unsystematic variability in the sample's data, and the choice of statistical test) that we collectively call power
More power to detect an effect in the population if:
- A larger (less stringent) alpha level is selected (e.g., .05 rather than .01)
- The effect size in the population is large rather than small (e.g., a drug has a large rather than small effect on a disease)
- The sample size is large rather than small (large samples are needed to detect small effects; small samples can detect large effects)
- The data have lower levels of unsystematic variability (e.g., measurement error, individual differences; conducting RCTs in controlled environments helps avoid this)
- The appropriate statistical test is used (some tests make it more difficult to find results)
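A rough simulation sketch of how power behaves (the effect size, group sizes, and number of simulations are arbitrary choices; power is estimated as the proportion of simulated studies that detect a real difference):

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def simulated_power(effect_size, n_per_group, alpha=0.05, n_sims=2000):
    """Estimate power for a two-group comparison of means via simulation."""
    crit = NormalDist().inv_cdf(1 - alpha / 2)       # critical z for a two-tailed test
    hits = 0
    for _ in range(n_sims):
        control = [random.gauss(0, 1) for _ in range(n_per_group)]
        treated = [random.gauss(effect_size, 1) for _ in range(n_per_group)]
        se = sqrt(stdev(control) ** 2 / n_per_group + stdev(treated) ** 2 / n_per_group)
        z = (mean(treated) - mean(control)) / se
        if abs(z) >= crit:
            hits += 1                                # the true difference was detected
    return hits / n_sims

# Power rises with sample size (and would also rise with a larger effect size).
for n in (20, 50, 100):
    print(f"effect = 0.5, n per group = {n}: power ~ {simulated_power(0.5, n):.2f}")
```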
Power should be…
at least 80%
Reliability
The degree of consistency or dependability with which an instrument measures the attribute it is intended to measure (repeatability, stability)
Validity
The degree to which an instrument measures what it is intended to measure (e.g., a scale intended to measure hopelessness measures hopelessness and not depression)
What are 3 ways to measure reliability?
- Internal consistency (items all measure the same attribute)
- Test-retest (good for stable characteristics such as temperament vs. moods or attitudes; a coefficient higher than 0.70 = adequate)
- Inter-rater reliability (2 or more observers or coders make independent observations and then the degree of agreement is calculated; 75% or higher = adequate)
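A small sketch of two of these checks with made-up scores (test-retest via a Pearson correlation, inter-rater reliability via percent agreement):

```python
from math import sqrt
from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# Test-retest: the same instrument given twice; > 0.70 is treated as adequate.
time1 = [12, 15, 14, 20, 18, 16, 13, 19]
time2 = [13, 14, 15, 19, 18, 17, 12, 20]
print(f"Test-retest r = {pearson(time1, time2):.2f}")

# Inter-rater reliability: two coders rate the same events independently;
# 75% agreement or higher is treated as adequate.
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"Inter-rater agreement = {agreement:.0%}")
```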
What are 3 ways to measure validity?
- Content validity (does the measure contain the entire universe of possible items?)
- Criterion validity (examine the relationship between scores on an instrument and an external criterion, i.e., against a "gold standard")
- Construct validity (evidence to establish the "tool" is measuring what it should be, e.g., known-groups technique)
What are 3 main threats to internal validity?
- Participant threats
- measurement threats
- researcher threats
What are 5 types of participant threats?
- selection
- maturation
- history
- mortality (attrition)
- contamination
What are 2 types of measurement threats?
- Instrumentation
- Testing effect
What is 1 type of researcher threat?
- Experimenter Expectancy
External Validity
The ability to generalize the experimental findings to other settings or samples/populations
Threats to external validity (3)
- Reactive effects/Hawthorne effect
- Selection effects – sample not ideal, mortality, maturation
- Placebo effect – reason for blinding
E.g.: An experimental design looked at the effect of long vs. short hospital stay on first-time mothers' decisions to breastfeed. Length of hospital stay did not affect breastfeeding rates, and all rates were higher than city-wide rates.
Random Sampling (4 types)
- Simple random (lottery)
- Systematic random (selection at intervals)
- Stratified random (type of quota sampling)
- Cluster (several stages of random selection)
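A quick sketch contrasting simple random and systematic random selection (the sampling frame of 100 IDs is hypothetical; stratified sampling is sketched earlier under its own card):

```python
import random

frame = list(range(1, 101))          # hypothetical sampling frame of 100 IDs

# Simple random ("lottery"): every unit has an equal chance of selection.
simple = random.sample(frame, 10)

# Systematic random: random start, then selection at fixed intervals.
interval = len(frame) // 10          # sampling interval k = 10
start = random.randrange(interval)   # random starting point within the first interval
systematic = frame[start::interval]

print("Simple random:", sorted(simple))
print("Systematic:", systematic)
```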
P < 0.05
Reject the null hypothesis = SIGNIFICANT
P > 0.05
Retain the null hypothesis = NOT SIGNIFICANT
Confidence interval must not cross line of no difference which is…
1
Odds ratio (OR) or RR (Relative Risk) = 1
NO DIFFERENCE
Odds ratio (OR) or RR (Relative Risk) > 1
Higher odds or higher risk of the outcome in the intervention group compared to the control group
Odds ratio (OR) or RR (Relative Risk) < 1
Lower odds or lower risk of the outcome in the intervention group compared to the control group
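A worked sketch from a made-up 2x2 table (the counts are invented; values below 1 here illustrate lower odds/risk in the intervention group):

```python
# Hypothetical 2x2 table:
#                 outcome    no outcome
# intervention     a = 10      b = 90
# control          c = 20      d = 80
a, b, c, d = 10, 90, 20, 80

risk_intervention = a / (a + b)           # 0.10
risk_control = c / (c + d)                # 0.20
rr = risk_intervention / risk_control     # 0.50 -> lower risk in the intervention group

odds_intervention = a / b                 # 0.11
odds_control = c / d                      # 0.25
odds_ratio = odds_intervention / odds_control    # 0.44 -> lower odds in the intervention group

print(f"RR = {rr:.2f}, OR = {odds_ratio:.2f}")   # a value of exactly 1 would mean no difference
```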