Quiz 5 Flashcards
Reliability
Issues related to the soundness of the data collection procedures
Validity
The authenticity of the results
- External Validity
- Internal Validity
External Validity
The extent to which the results of the study will be true for different groups of people or similar people in different settings. A researcher's decisions about research design and sampling method will impact the degree of external validity in each study
Internal Validity
The extent to which the results of the study are true. When a researcher conducts a study, they want to make sure the results are due to the intervention rather than to some confounding variable.
Confounding Variable
A variable the researcher is unaware of that has an impact on the outcome of the study
Threats to Internal Validity
- History
- Maturation
- Testing
- Instrumentation
- Statistical Regression
- Placebo effect
- Hawthorne Effect
- Selection Bias
- Attrition (loss to follow up)
History
-Threat to Internal Validity
-An outside event that occurred during the research study that can impact the results of the study.
Controlling for this threat: Control Group
Maturation
-Threat to Internal Validity
-Time passed and the participants grew “older, wiser, stronger, more experienced”
Controlling for this threat: Control Group
Testing
-Threat to Internal Validity
-Subjects become better at a test because they become more familiar with it
Controlling for Testing: Select an instrument that has high validity & reliability
Instrumentation
-Threat to Internal Validity
-Changes in accuracy of measurement from start to conclusion
Controlling for Instrumentation: select an instrument that has high validity and reliability
Statistical Regression
- Threat to Internal Validity
- Participants with extreme scores on the first test tend to score closer to the average on the second test (high scorers score lower, low scorers score higher)
- Controlling for Statistical Regression: select a data collection tool that has very high validity and reliability
Placebo Effect
- Threat to Internal Validity
- The participants’ or researchers’ expectations that something will work can impact the results of the study.
Controlling for the Placebo Effect:
- Double-Blind: Neither the participants nor the researchers know which treatment each group is receiving
- Placebo-controlled: The experimental group gets the real treatment and the control group gets a fake treatment.
Hawthorne Effect
- Threat to Internal Validity
- Research participants will change their behavior simply because they know they are being observed
- Controlling for the Hawthorne Effect: Select a research design that includes a control group & placebo. In observational studies, conduct sustained observations and employ unobtrusive observation methods.
Selection Bias
-Threat to Internal Validity
-How the researcher selects people to participate impacts the study
Controlling for Selection Bias: Use a probability sampling method
Attrition (loss to follow-up)
- Threat to Internal Validity
- People leaving the study
- Controlling for Attrition: N/A
Validity and reliability are exclusive to quantitative research
True
Qualitative research establishes Trustworthiness through four criteria:
- Transferability
- Credibility
- Dependability
- Confirmability
Transferability
The reader of the research study determines the extent to which findings can be transferred to their settings or group. In other words, the person who reads the research determines if the findings of the study are a good fit to their situation.
One way to help the reader make this determination is to include detailed information about the participants, the research setting, and the findings in the journal article. This is known as THICK Description.
Credibility
The confidence in the truth of the findings. Two strategies:
1. Triangulation:
collecting different types of data (verbal, textual, images, etc.), collecting data at different times, and/or having 2 researchers collect and analyze data. Looking at many different types of data helps the researcher have confidence that they have uncovered all the data required to gain a complete understanding of the research question.
2. Member Checking:
sharing the data analysis with the research participants and/or experts in the field working with participants. When the researcher begins to see meaning develop from the data, the researcher asks the participants if the meaning revealed from the data is true. Assists the researcher in uncovering any hidden bias.
Dependability
Relies on whether the results of the study make sense to another researcher. Here the practice is not for a researcher to replicate the study and achieve identical results; rather it is to ask the question “are the results consistent with the data collected?”
One way to accomplish this is through an Audit Trail
Audit Trail: A detailed reporting of how the researcher conducted the study, especially the collection and analyses of the data.
Sample
A group selected from a population in the hopes that the smaller group [the sample] will be representative of the entire population
Population
A group that shares a common characteristic as defined by the researcher
Inclusion Criteria
QUAN- Determines who is suitable to be a participant based on certain characteristics the researcher defines according to the purpose of the study; high internal validity (credibility), low external validity (generalizability) if the criteria are too strict
Exclusion Criteria
QUAN- people who have met inclusion criteria but should not be included in the study (ex. if a medicine could harm a pregnant person, then pregnancy would be an exclusion criterion)
Probability Sampling Methods
QUAN only; focuses on cause and effect; high internal and external validity
Allows the researcher to obtain a random selection of individuals from the population (random sample)
Simple Random
-probability methods (ONLY QUAN).
Simple random: every member of the population has an equal chance of being selected
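A minimal sketch of simple random selection in Python; the population of ID numbers and the sample size are made-up placeholders, not from the source.

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 500 people.
population = list(range(1, 501))

# random.sample draws without replacement, so every member of the
# population has an equal chance of selection -- the defining feature
# of simple random sampling.
sample = random.sample(population, k=50)
print(sample[:10])
```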
Stratified Random
- probability methods (ONLY QUAN).
The researcher identifies a subgroup or subgroups in the population and wants to ensure that the sample represents the subgroup(s) found in the population
Proportional Stratified
- probability methods (ONLY QUAN).
The researcher identifies a subgroup or subgroups in the population that are very unequal in size and wants to ensure that the sample will represent the population
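A hedged sketch of proportional stratified selection, assuming two made-up strata of unequal size; each stratum contributes participants in proportion to its share of the population.

```python
import random

# Hypothetical population split into unequal subgroups (strata).
strata = {
    "subgroup_a": list(range(0, 300)),    # larger subgroup
    "subgroup_b": list(range(300, 400)),  # smaller subgroup
}
total = sum(len(members) for members in strata.values())
sample_size = 40

# Draw from each stratum in proportion to its size, so the sample
# mirrors the subgroup proportions found in the population.
sample = []
for name, members in strata.items():
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))
print(len(sample))  # 40: 30 from subgroup_a, 10 from subgroup_b
```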
Systematic
- probability methods (ONLY QUAN).
The researcher selects every kth person from a list of the population, starting from a randomly chosen number
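A minimal sketch of systematic selection under the every-kth-person reading of this card; the list of 500 people is a made-up placeholder.

```python
import random

# Hypothetical sampling frame of 500 people.
population = list(range(1, 501))
sample_size = 50

# Sampling interval k = population size / sample size.
k = len(population) // sample_size

# Start at a randomly chosen position, then take every k-th person.
start = random.randrange(k)
sample = population[start::k]
print(len(sample), sample[:5])
```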
Cluster
- probability methods (ONLY QUAN).
Selecting an intact homogeneous group from within the population
example on page 100
Non-probability methods
- Can be either QUAN or QUAL or BOTH
- No random Sampling
- Utilizing a non-prob sampling method yields a QUAN study with low levels of external validity, meaning the results of the study might not generalize to other groups of people or similar groups of people in a different setting
-In terms of a QUAL study, external validity does not apply
Convenience (Non-probability method)
QUAN&QUAL- sampling people the researcher has easy access to
Quota (Non-probability method)
- QUAN
- the researcher needs to fill discrete groups at a predetermined number of participants. The groups have certain characteristics that are needed to answer the research question
Purposive (Non-probability method)
QUAL- The researcher purposefully selects individuals w/ specific characteristics or specific experiences who can best answer the research question, at the beginning of the study
Theoretical Sampling(Non-probability method)
QUAL- the practice of selecting participants over the course of the study (in phases) based on the results of the emerging data analysis
What is another characteristic of Theoretical Sampling that sets it apart from the other sampling methods? It's done in phases -> data analysis is done to determine who needs to be sampled next
Snowball (Non-probability method)
QUAN&QUAL- Referral
The researcher identifies an individual w/ specific characteristics of interest or a specific life situation. The researcher then asks that person to refer similar people to the researcher. The researcher continues to ask for referrals from each person referred until the researcher has an adequate number of participants.
Data Saturation
QUAL
The researcher continues to enroll participants, collect data, and analyze it until no new information is revealed; the sample size must be large enough to answer the research question
In Purposive sampling, the researcher reaches data saturation and then interviews just one more person. Remember, if you are still learning new things then you keep sampling, collecting data, and analyzing it.
true
QUAN sample size
The size of the sample is determined by how large the population is.
The general rule states that as the population gets larger, a proportionally smaller share of randomly selected participants is required in the sample (see the estimator sketch after this list).
a. Power analysis: Considers the type of data and the statistical test used to analyze the data, along with the alpha level, the amount of power in the study, and the effect size
b. Sample size estimator: Considers the confidence levels and confidence intervals
c. Confidence level: Sample size estimator; how confident the researcher is that the results obtained from the sample will be true for the population (90%, 95%, or 99%)
d. Confidence interval: Sample size estimator; margin of error in the results
e. Effect size: Degree of impact the independent variable has on the dependent variable; most difficult aspect of sample size planning
The size of the sample is directly related to the quality and validity of the data analysis results
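As a sketch of how a sample size estimator combines confidence level, confidence interval, and population size, here is Cochran's formula with a finite population correction; the function name and the example population sizes are illustrative, not from the source.

```python
import math

def estimate_sample_size(population, confidence=0.95, margin=0.05, p=0.5):
    """Cochran's sample-size formula with a finite population correction.

    confidence -> z: standard normal critical values for common levels;
    margin is the confidence interval (margin of error); p is the assumed
    population proportion (0.5 is the most conservative choice).
    """
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # finite correction

# As the population grows, the required sample levels off rather than
# growing in proportion -- the "general rule" stated above.
for N in (500, 5_000, 50_000):
    print(N, estimate_sample_size(N))
```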
Prolonged Engagement
QUAL data collection
-the researcher spends enough time in the field with people so that the researcher can develop an overall understanding of the environment. Important in providing a thick and rich description of the study setting. Allows time for the participants to build trust with the researcher and feel comfortable enough to let their guard down
Persistent Observation
Collection of data that enhances credibility; once the researcher has developed a detailed understanding and built trust with participants, the focus shifts to collecting data in depth. Related practices: Triangulation, Interviews, Data Saturation
If prolonged engagement provides scope, persistent observation provides depth
true
QUAN data collection
The tool or instrument the researcher uses to collect data is directly related to internal validity
Instrument Reliability
Instrument Reliability: measurements are consistent; the instrument consistently measures the attribute, variable, or construct it is supposed to measure
Interrater
Instrument Reliability
QUAN- Required if data collection involves judgement or rating by different observers; a statistical comparison of the scores recorded by the different people using the same data collection tool; independent ratings are compared
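One common statistic for this comparison is Cohen's kappa, which corrects raw agreement for chance; a minimal sketch with made-up ratings from two hypothetical observers.

```python
from collections import Counter

# Hypothetical ratings of the same 10 subjects by two independent observers.
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # raw agreement

# Agreement expected by chance, from each rater's marginal frequencies.
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(round(kappa, 2))  # values near 1.0 indicate strong interrater reliability
```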
Test-retest
Instrument Reliability
QUAN- data are collected using the same tool/instrument at different times with the same people; repeat testing should yield identical measurements/scores
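Test-retest reliability is typically quantified with a correlation between the two administrations; a minimal sketch using Pearson r on made-up scores (statistics.correlation requires Python 3.10+).

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from the same 8 people on the same instrument,
# administered at two different times.
time1 = [12, 18, 15, 22, 9, 14, 20, 17]
time2 = [13, 17, 15, 21, 10, 15, 19, 18]

# A Pearson r close to 1.0 indicates stable, test-retest reliable scores.
print(round(correlation(time1, time2), 2))
```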
Equivalent forms
Instrument Reliability
Data collection tool/instrument has 2 versions that are almost identical; pretest and posttest
Internal Consistency
Instrument Reliability
QUAN- the scores on a group of items measuring the same concept within a tool/instrument are highly correlated; e.g., two questions asking the same thing in opposite ways
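Internal consistency is commonly reported as Cronbach's alpha; a hedged sketch computing it by hand on made-up questionnaire responses (the 0.7 rule of thumb is a convention, not from the source).

```python
from statistics import variance

# Hypothetical responses: rows = respondents, columns = items measuring
# the same concept on a 1-5 scale (reverse-scored items recoded first).
items = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 3],
]

k = len(items[0])
item_vars = [variance(col) for col in zip(*items)]  # variance of each item
total_var = variance([sum(row) for row in items])   # variance of total scores

# Cronbach's alpha: values >= ~0.7 conventionally suggest the items
# are measuring the same underlying concept.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```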
Instrument Validity
Tool measures what it is supposed to measure
Content
Instrument Validity
QUAN- how thoroughly the concept can be measured using this instrument (ex. asking an expert to review the questionnaire to make sure it is accurate)
Criterion
Instrument Validity
QUAN- testing the new tool/instrument against another tool/instrument or other measurement if another tool/instrument does not exist. The scores from the new tool/instrument should correlate to the other tool
Construct
Instrument Validity
QUAN- the accumulation of evidence from numerous studies using a specific measuring instrument; expected patterns compared
Code
QUAL data analysis
A word or short phrase that summarizes the meaning of the segment of data.
Immersion
QUAL data analysis
Includes a review of the research purpose statement and research question, in conjunction with reading and rereading the data numerous times. Allows the researcher to scope the data and reflect on it prior to analysis.
Leads to the insights required for first-cycle coding
First Cycle Coding
QUAL data analysis
Dissecting and examining the data for similarities and differences; keeping an ongoing memo; assigning a meaning unit to each segment of the data; continues until patterns emerge in the data; a cycle of collecting data, coding data, reflecting, and comparing codes from previous data collection
Memo: notes
Second Cycle Coding (themes)
QUAL data analysis
Themes/patterns emerge from the numerous codes; codes reveal patterns and coalesce into the emerging themes; reveals meaning; the process continues until the researcher reaches data saturation; themes answer the research question
Immersion and first cycle coding begin as soon as the researcher begins collecting the data. The researcher then repeats these steps (meaning collecting data, coding the data and comparing the codes to previous data sets) until themes begin to emerge from the data, these are referred to as second cycle coding.
true
Nominal
QUAN data analysis
-in name only
Labels differences without putting a value on the difference; no value just representation
Ordinal
QUAN data analysis
identifies difference by ranked order; data shows a difference but not by how much; best to worst
Interval
QUAN data analysis
measures the exact difference in increments that are consistent and can be measured
Ratio
QUAN data analysis
same as interval but the tool/instrument has a true zero value (ex. Biomedical variables (tools to measure vitals))
Unlike nominal and ordinal, INTERVAL and RATIO allow the researcher to measure the exact difference. These two have increments that are consistent and can be measured. Interval and Ratio data can be analyzed using inferential statistics.
true
Parametric Data
-Interval and Ratio
descriptive statistics: central tendency and description of relative position; inferential statistics: significant difference, t-test, ANOVA, ANCOVA, MANOVA, regression or multiple regression, Pearson r, odds ratio
Nonparametric Data
-nominal and ordinal
descriptive statistics: frequencies, percentages; inferential statistics: Mann-Whitney U test, chi-square
Descriptive Stats
a. Mean: computed by adding all the scores and dividing the total by the number in the group
b. Median: is the midpoint among all the scores; the midpoint might not be a whole number
c. Mode: is the most commonly occurring score
d. Range: represents the full range of scores, the lowest value to the highest value in a data set
e. Standard deviation: the dispersion of data around the mean
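A minimal sketch of these five descriptive statistics using Python's standard library; the scores are made up.

```python
import statistics

scores = [72, 85, 85, 90, 64, 78, 88]  # hypothetical test scores

print("mean:", round(statistics.mean(scores), 2))      # sum / count
print("median:", statistics.median(scores))            # midpoint score
print("mode:", statistics.mode(scores))                # most common score
print("range:", max(scores) - min(scores))             # highest - lowest
print("std dev:", round(statistics.stdev(scores), 2))  # dispersion around mean
```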
Inferential Stats
QUAN- data analysis techniques that draw conclusions from the data (causation, association, correlation); used to test hypotheses and require the researcher to set an alpha level (alpha levels, statistical significance, effect size, confidence levels and intervals, type 1 and 2 errors, applied/clinical significance)
Alpha levels (also known as the p-values)
Researchers set alpha levels/p-values to determine when the null hypothesis can be rejected because the results of the study did not occur by random chance (statistical significance); p < .05: only 5 results out of 100 might have occurred by chance, not as a result of the experiment; p < .01: only 1 result out of 100 might have occurred by chance, not as a result of the experiment; p < .001: only 1 result out of 1000 might have occurred by chance, not as a result of the experiment
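A minimal sketch of comparing a computed p-value to a preset alpha level, using SciPy's independent-samples t-test on made-up group scores.

```python
from scipy import stats

# Hypothetical outcome scores for a treatment and a control group.
treatment = [24, 27, 31, 29, 26, 30, 28, 25]
control = [21, 23, 20, 24, 22, 25, 23, 21]

alpha = 0.05  # threshold set before the study
t_stat, p_value = stats.ttest_ind(treatment, control)

# If p < alpha, reject the null hypothesis: the observed difference is
# unlikely to have occurred by random chance (statistical significance).
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < alpha}")
```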
If the obtained p-value falls WAY BELOW the alpha level, are the results of the study due to random chance?
no
Type 1 errors
Occurs when the null hypothesis is falsely rejected; the researcher believes there was a significant difference when none actually exists; concludes results were not due to chance when they were; mitigate this by lowering the alpha level
What’s a good way to reduce Type I errors?
Lowering alpha level
Type 2 errors
Occurs when the null hypothesis is falsely accepted; concludes results were due to chance when they were not; mitigate by using a power analysis to determine how many participants are appropriate (increase sample size) (ex. the analysis says there is no difference between drugs when there actually is)
What’s a good way to reduce Type II errors?
Using a power analysis to determine how many participants are appropriate (increase sample size)
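A minimal sketch of such a power analysis using statsmodels; the medium effect size (Cohen's d = 0.5) and 80% power are conventional illustrative inputs, not from the source.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the sample size per group needed to detect a medium effect
# (Cohen's d = 0.5) with alpha = .05 and 80% power; larger samples
# reduce the risk of a Type 2 error.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # roughly 64 participants per group
```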
Applied/clinical significance
Applied/clinical significance: Refers to the practical applied value or effect of the treatment or intervention; does the treatment make a real-world difference in the quality of patient care; factors that determine clinical significance: size of sample, confidence intervals, effect sizes, etc.
The determination of applied/clinical significance only comes AFTER the researcher finds STATISTICALLY SIGNIFICANT results. If no statistically significant results are found, there is NO REASON to look for applied/clinical significance.
true