Exam 2 Flashcards
Practitioner’s Conundrum: Constructivism vs. Empiricism
- Constructivism: practitioners generate knowledge from their interpretations of experience
- Empiricism: generate knowledge by systematically testing hypotheses to prove or disprove them
- Need to live in BOTH worlds to maximize care
Deductive vs. Inductive Reasoning
- Deductive: start from general principles/existing knowledge and reason to a specific conclusion
- Inductive: start from specific observations and reason to a general conclusion
Law vs. Theory
- Law: states that something happens, if A then B
- Theory: summarizes and explains findings, stimulates development of new knowledge; theories do not become laws
Scientific Method
- Ask research question
- Do background research
- Construct hypothesis
- Test with an experiment
- Analyze results and draw conclusions
- Report Results
- Think and try again if needed
Components that describe Scientific Method
- systematic = follows an ordered process so results are reliable
- empirical = information is gathered through observation or experiment
- critical examination = results are analyzed with statistics and reported for scrutiny
Hierarchy of Evidence (highest to lowest)
- RCT
- Cohort studies
- Case control studies
- Case series studies
- Expert opinion
Types of Probability Sampling
- simple random sampling
- systematic sampling
- stratified random sampling
- disproportional sampling
- cluster sampling
Simple Random Sampling
a table of random numbers or a computer randomly identifies the starting point and which subjects are selected
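A minimal sketch of the computer version, assuming a hypothetical roster of 10,000 subject IDs (illustrative only):

```python
import random

population = list(range(1, 10_001))   # hypothetical roster of 10,000 subject IDs

# Simple random sampling: every subject has an equal chance of selection;
# random.sample draws 100 subjects without replacement
sample = random.sample(population, k=100)
```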
Systematic Sampling
e.g., a population of 10,000 and a desired sample of 100: the sampling interval is 10,000/100 = 100, so you pick every 100th person from a random starting point
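The same idea as a sketch, assuming a hypothetical numbered roster (interval = N/n = 10,000/100 = 100):

```python
import random

population = list(range(1, 10_001))    # hypothetical roster of 10,000 people
n = 100                                # desired sample size
interval = len(population) // n        # sampling interval = 10,000 / 100 = 100

start = random.randrange(interval)     # random starting point within the first interval
sample = population[start::interval]   # take every 100th person from that start
```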
Stratified Random Sampling
randomly select subjects within each stratum (e.g., students from different schools); the strata are not necessarily equally or proportionally represented
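A minimal sketch, assuming hypothetical school rosters; subjects are drawn at random within each stratum (school):

```python
import random

# Hypothetical strata: student IDs grouped by school
schools = {
    "School A": [f"A{i}" for i in range(600)],   # 600 students
    "School B": [f"B{i}" for i in range(300)],   # 300 students
    "School C": [f"C{i}" for i in range(100)],   # 100 students
}

# Stratified random sampling: randomly select within each stratum;
# here 10% per school, so larger schools contribute more subjects
sample = []
for school, students in schools.items():
    sample.extend(random.sample(students, k=len(students) // 10))
```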
Disproportional Sampling
selecting the same number of subjects from each stratum, which is disproportionate to the population (e.g., population of 10 girls and 6 boys, pick 2 of each)
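The flashcard's girls/boys example as a sketch (hypothetical names):

```python
import random

girls = [f"girl_{i}" for i in range(1, 11)]   # 10 girls in the population
boys = [f"boy_{i}" for i in range(1, 7)]      # 6 boys in the population

# Disproportional sampling: the same number is drawn from each stratum,
# so the sample does not mirror the population's girl/boy proportions
sample = random.sample(girls, k=2) + random.sample(boys, k=2)
```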
Cluster Sampling
population → several clusters that are similar and roughly equal in size → take a sample from each cluster
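A minimal sketch, assuming hypothetical clinics as clusters of roughly equal size:

```python
import random

# Hypothetical clusters: 8 clinics, 50 patients each (similar, equal in size)
clusters = {
    f"clinic_{c}": [f"clinic{c}_pt{i}" for i in range(1, 51)]
    for c in range(1, 9)
}

# Cluster sampling: take a sample from each cluster
sample = []
for clinic, patients in clusters.items():
    sample.extend(random.sample(patients, k=5))
```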
Nonprobability Sampling Types
- Convenience sampling
- Quota sampling
- Purposive sampling
- Snowball sampling
Convenience Sampling
based on availability, potential bias due to self selection
Quota Sampling
picking an adequate number for each group
Purposive Sampling
handpicked subjects for a purpose that is extremely specific
Snowball Sampling
original sample provides selection for more subjects
Validity
- believability
- degree to which the relationship between IV and DV is free from effects of extraneous factors
Types of Experimental Design Validity
- Statistical Conclusion Validity
- Internal Validity
- Construct Validity
- External Validity
- Face Validity?
Statistical Conclusion Validity
- inappropriate use of statistics
- low statistical power (see the sketch after this list)
- violated assumptions of statistical tests
- low reliability of measures / high variance
- inflated error rate from overuse of statistical tests
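For the low statistical power threat, a hedged sketch of an a priori sample-size calculation using statsmodels (the effect size and alpha here are illustrative, not from the course):

```python
from statsmodels.stats.power import TTestIndPower

# How many subjects per group does an independent-samples t-test need
# to detect a medium effect (d = 0.5) with alpha = .05 and power = .80?
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 per group under these assumptions
```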
Internal Validity
- is the relationship between the IV and DV truly causal, or could it be explained by factors outside what the study is looking at?
1. Single Group Threats
2. Multiple Group Threats
3. Social Threats
Internal Validity: Group Threats to Validity- assignment, history, maturation, attrition, testing, instrumentation, regression
- assignment- when putting participants in groups, are group characteristics equal or reported?
- history- events outside the control of the study (e.g., things happening in participants’ daily lives/ADLs)
- maturation- change internal to participants that occur over time
- attrition- drop out rate/mortality, threat to intervention study
- testing- learning effect, when having multiple tests & improvement in tests
- instrumentation- changes in (or a poor choice of) the measuring device or approach affect the measurements
- regression to the mean- extreme baseline values result in next measurement closer to mean
Internal Validity: Social Threats
- diffusion or imitation of treatments- participants have contact and change treatment
- compensatory equalization of treatment- control group receives something to make up for experiment group
- compensatory rivalry or resentful demoralization- control group hears about the experimental group and either competes (“we’re gonna do better”) or becomes demoralized (“why bother”) and drops out
Construct Validity
- to what theoretical constructs can results be generalized?
1. poor operational definitions- can’t replicate
2. multiple treatment interactions- different interventions influence results
3. length of follow up- time may decrease importance of effect
4. experimenter bias- e.g., the Hawthorne effect, where people modify their behavior because they know they are being observed in the experiment
External Validity
- can the results be generalized to other persons, settings, or times?
1. biased sample selection- sample different than representative population
2. setting differences- can results from the setting where the study was conducted be generalized to the clinic?
3. time- older research may no longer apply; circumstances and healthcare change over time
Strategies to Minimize Threats to Validity
- random assignment- randomly assigning subjects to groups keeps the groups similar at baseline
- control groups- limit maturation
- blinding- keep participants/investigators unaware of group assignment to limit interaction and bias
- operational definitions- define things well
- dealing with attrition
- minimizing intersubject differences
Strategies to Minimizing Intersubject Differences
- selection of homogenous subjects- groups well defined & similar
- blocking- look at men separately from women
- matching- male to male
- using subjects as their own control- reduce variability
- analysis of covariance- statistically adjust for (limit the influence of) extraneous covariates
Handling Missing Data (reasons, what to do)
- Reasons: drop out of the study, switch to another treatment group, refuse the assigned treatment, non-compliance
- What to do: document the reasons for attrition, compare groups to look for differences, LOCF (last observation carried forward, treating the participant as if they did not improve after their last measurement; see the sketch below), intention-to-treat analysis
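A minimal LOCF sketch with pandas on hypothetical repeated-measures scores (subjects and values are made up):

```python
import pandas as pd

# Hypothetical outcome scores; None marks visits missed after dropout
scores = pd.DataFrame(
    {"week_0": [40, 55, 38], "week_4": [45, 60, None], "week_8": [50, None, None]},
    index=["S1", "S2", "S3"],
)

# LOCF: carry the last observed value forward to later visits,
# i.e., analyze the participant as if they did not change after dropping out
locf = scores.ffill(axis=1)
```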
CONSORT Statement
a checklist (Consolidated Standards of Reporting Trials) used to go through each aspect of a study and identify what needs to be described or done in the study