Final Deck 1 Flashcards
major types of bias:
- confirmation bias
- response bias
- selection/sampling bias
- recall bias
- misinterpretation/confounding bias
- publication bias
confirmation bias
- the tendency to gather or weight evidence that CONFIRMS preexisting or preferred expectations, while dismissing or failing to find contradictory evidence.
- it is easier for people to discount contradictory information than to seek out new information.
- EXAMPLE: being more likely to consider positive reviews about an item vs the negative reviews.
- EXAMPLE: being more likely to accept positive info about a preferred presidential candidate than negative.
recall bias
- retrospective reporting error when people recall information.
- EXAMPLE: parents of children diagnosed with ASD may be more likely to recall events from before the diagnosis, especially if they are not ready to face it.
response bias
- when responses of the participants are influenced by variables other than the construct being measured.
- EXAMPLE: people responding to a survey inaccurately because their responses are influenced by other variables, such as the environmental or social pressure to respond a certain way.
selection/sampling bias
- systematic and directional error in the choice of units/cases/participants from the larger population of interest
- affects external validity
- EXAMPLE: sending a survey to only your classmates, friends, and family because it is not a diverse sample; these people are likely to think the same as you so it creates bias.
confounding/misinterpretation bias
- incorrectly attributing an association between two variables when a third factor is independently associated with both the independent variable and the dependent variable
- affects internal validity.
- EXAMPLE: a study that determines if there is a relationship between shoe size and height that does not account for age.
publication bias
- the tendency for published study results to be more likely to show positive or statistically significant findings
- EXAMPLE: journals only publish studies that are statistically significant
post hoc ergo propter hoc
- Latin: after this, therefore because of this.
- the fallacy of assuming that because one event followed another, the first caused the second.
- correlation does not equal causation.
- EXAMPLE: got in a wreck because it was raining; failed the class because of the teacher.
scientific method
- a set of procedures, guidelines, assumptions, and attitudes required for the organized and systematic collection, interpretation, and verification of data and the discovery of reproducible evidence.
- question/observation
- research
- hypothesis
- experiment
- analysis
- report
Where does the literature review fall in the scientific method?
- the research phase.
- the literature must be reviewed to know what is out there already so that a hypothesis can be formed based on the gaps.
What is the difference between theory vs. hypothesis?
- A hypothesis is more specific, a theory is more broad.
- hypothesis: predicting results specific to one study.
- theory: phenomenon based on multiple studies.
- theory and hypothesis feed into one another.
Experimental vs. Non-experimental Design
- experimental has control over the independent variable
- non-experimental does not
Experimental Design
- Manipulates the independent variable to see changes in the dependent variable.
- Has control and experimental groups.
- Has random sampling and assignment.
- Can be blind/double blind.
Non-experimental Design
- Less variable control
- More descriptive/applied
- Correlation studies
- Types: surveys, polls, interviews, case studies.
Quantitative vs Qualitative
- Quantitative uses numerical data, usually deductive
- Qualitative uses data collected in words and is used to make observations, analyze narratives, and make themes, usually inductive
Quantitative pros and cons
- pros: standardization, reliability, easy to analyze statistically, larger samples collected quickly.
- cons: less depth; less ability to capture individual characteristics and context.
Qualitative pros and cons
- pros: greater depth and exploration
- cons: more time consuming, more intensive work
translational research
- a bridge between basic research (research without a specific problem in mind) and applied research (research conducted to address a specific problem in society, e.g., testing a specific intervention or medication).
Ethics
- the principles of morally correct conduct accepted by a person or group considered appropriate to a specific field.
- EXAMPLE: in psychological research, proper ethics requires that participants be treated fairly and without harm, and that investigators report results and findings honestly.
research ethics
- the values, principles, and standards that guide the conduct of individual researchers in several areas, including the design and implementation of studies and the reporting of findings.
- EXAMPLE: research ethics stipulate that studies involving data collection from human participants must be evaluated by institutional review boards.
code of ethics
- each organization has their own provisions and code of ethics to follow.
- the goal is to outline behaviors that take place in the field.
- it takes the four principles of ethics and translates them into specific behaviors that can be good or bad.
Two principles from ASHAs Code of Ethics
- “Individuals shall honor their responsibility to hold paramount the welfare of persons they serve professionally or who are participants in research and scholarly activities, and they shall treat animals involved in research in a humane manner.”
- Summary: Prioritize the welfare of the participants of the study.
- “Individuals shall honor their responsibility to achieve and maintain the highest level of professional competence and performance.”
- Summary: Maintain professional competence.
Tuskegee
- A study that observed Black sharecroppers who already had syphilis and withheld treatment from them, even after penicillin was shown to be effective.
- The study caused many people to get sick and die.
- It led to an enormous public outrage and caused Congress to pass the National Research Act (created the group of people who developed the Belmont Report).
The Belmont Report
- The Belmont Report led to the eventual development of 4 ethical principles.
- (beneficence, nonmaleficence, autonomy (informed consent), and justice (fairness, equitable distribution of benefits and risks of the study)).
The Common Rule
- The standardization of participant protections.
- Came about in the 1990s.
- Applies to research conducted by almost all federal departments.
Vulnerable Populations
Additional protections for research with:
- prisoners
- children
- pregnant women
List/Describe 4 ethical principles
- Beneficence: Doing good, being kind, improving well-being
- Nonmaleficence: Avoid doing harm, focus on NOT reducing wellbeing
- Autonomy: A person’s right to make his/her own decisions; competence; the autonomous rights of one person should not infringe on the rights of another (respect for persons informed consent)
- Justice: Fair distribution of risks/benefits of study across the population
Ethical Dilemma
when two or more of the ethical principles come in conflict
Give an ethical dilemma example
- My school district is in a rural part of the state and has a difficult time recruiting and retaining ASHA certified SLPs. Therefore, by necessity, the district is hiring less qualified clinicians and support personnel and delegating supervisory responsibilities to me.
- beneficence and nonmaleficence are in conflict
What is an IRB?
- Their scope is to protect the participants of the planned study.
- They review every aspect of research to make sure it complies with the requirements of the government and of the university
How does the IRB work?
Ensures:
- Risks are minimized
- Risks are reasonable compared to benefits
- Selection of participants is equitable
- Informed consent will be obtained
- Confidentiality is adequately maintained
Allows for exemptions if participants cannot be identified:
- Surveys, interviews, questionnaires
- Studies of existing records
- Research on normal educational processes
Also allows for expedited review if risk is minimal.
Describe a literature review and 3 reasons to conduct one
- A literature review goes through all of the previous work in an area of interest.
- A standalone literature review summarizes and updates the current research.
Reasons to conduct one include
- understand previous literature and your place in it
- identify useful or flawed measures
- avoid “dead end” topics
- get ideas on reporting structure
How is a literature review different from a systematic review vs. meta-analysis?
- A literature review aims to create an overall narrative summary of the literature and identify trends/gaps; it is broader in scope.
- A systematic review follows a predefined, structured protocol to answer a specific research question.
- A meta-analysis is a type of study that uses quantitative techniques: it takes previous studies and re-analyzes their statistical results according to effect size.
- It does this to pool consistent studies and compute averaged effects.
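The effect-size pooling a meta-analysis performs can be sketched numerically. This is a minimal illustration, not any specific study's method: the effect sizes and variances below are invented, and it uses simple inverse-variance (fixed-effect) weighting.

```python
# Minimal sketch of pooling study effect sizes into one weighted
# average (fixed-effect, inverse-variance weighting). All numbers
# are hypothetical illustration values.

def pooled_effect(effects, variances):
    """Inverse-variance weighted mean of study effect sizes."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Three hypothetical studies: effect size d and its variance
effects = [0.30, 0.50, 0.40]
variances = [0.04, 0.02, 0.08]

print(round(pooled_effect(effects, variances), 3))
```

More precise studies (smaller variance) pull the average toward their result, which is why the pooled value sits closest to the middle study here.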
EBSCO vs. Database vs. Journal.
- EBSCO is a search engine/search service for articles (like Google).
- A database is a curated collection of journals (MEDLINE, PsycINFO, CINAHL, etc.).
- A journal is an individual publication that publishes studies (ex: New England Journal of Medicine).
Define a predatory journal
- Predatory journals are publications that claim to be legitimate scholarly journals, but misrepresent their publishing practices.
- Some common forms of predatory publishing practices include falsely claiming to provide peer review, hiding information about Article Processing Charges (APCs), misrepresenting members of the journal’s editorial board, and other violations of copyright or scholarly ethics.
List/describe 3 characteristics of a predatory journal
- False claims about peer review
- misrepresenting numbers/members of the editorial board
- violations of copyright
List 3 factors that contributed to the development of predatory journals
- Proliferation of research and open access journals (ex: journals that do not require a subscription); groups launched very large, totally free open access journals on the internet and began to misrepresent their publishing practices.
- Rising journal subscription costs to libraries: prices of journals and databases have gone up, making it harder for libraries to maintain their databases.
- Tenure pressure: certain researcher-paid open access journals offer lax or no peer review.
*The internet allowed predatory journals to really take off
Predatory journals vs. online journals
- These are not the same thing
- Journals can be online and open access and still be legitimate
Impact Factor
- A calculation based on citations to a journal's published articles, using citation data indexed in Scopus (Elsevier).
- An impact factor is given to a journal, and it is a measure of the number of times people cite that journal's articles.
- More citations lead to a higher impact factor, which makes the journal appear more reliable.
- It can be used to help identify predatory journals, which typically lack a legitimate impact factor.
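The standard two-year impact factor is simple arithmetic: citations received this year to the journal's previous two years of articles, divided by the number of articles published in those two years. The counts below are made up for illustration.

```python
# Sketch of the standard two-year impact factor formula, with
# invented citation and article counts.

def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    """Citations this year to the journal's previous two years of
    articles, divided by the article count in those two years."""
    return citations_to_prev_two_years / articles_prev_two_years

# e.g., 600 citations in 2024 to articles published in 2022-2023,
# during which the journal published 200 articles:
print(impact_factor(600, 200))  # 3.0
```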
Research interest –> research topic –> research question
- Identify a research interest (ex: Alzheimer’s Disease) and search it in a database
- To get to a research topic, add specific details to make it more narrow (who, what, where, when, why)
- A research question is the most specific
- *Library resources help by providing research, and reading abstracts helps you get ideas of what variables to choose.
What does a research question specify?
- Who you are going to be evaluating
- What specific construct are you researching (operational definition)
- Where will you conduct the study
- When will you conduct the study and how long will it take?
- At minimum, the 2 variables involved
What is the FINER criteria for a research question?
- Feasible (ample participants, time, $)
- Interesting
- Novel (confirms/refutes/extends previous findings, or finds something new)
- Ethical
- Relevant (research, clinical)
How does operationalizing variables impact the statistical design?
When you operationalize the constructs, it lends itself to the design because
- defining the construct can tell you what type of measurements to use (interviews, assessments, surveys)
- the type of data you will need (ordinal, nominal etc)
- how you will compare changes in the construct (pre-test post-test, correlation etc.)
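A minimal sketch of the last point: once a construct like anxiety is operationalized as a numeric assessment score, a pre-test/post-test design reduces to comparing scores before and after. All scores here are hypothetical.

```python
# Hypothetical pre-test/post-test comparison: anxiety operationalized
# as a numeric assessment score, so "change in the construct" becomes
# the mean of the per-participant score differences.

pre_scores = [62, 58, 70, 65, 60]    # before intervention
post_scores = [55, 52, 66, 59, 57]   # after intervention

diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(diffs) / len(diffs)
print(mean_change)  # negative = anxiety scores dropped on average
```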
Define Unit of Analysis
- the unit that you are collecting your data on → this is usually individual people (ex: parent perceptions of child progress in speech-language services).
- The unit of analysis can also be a group (ex: school district report cards- the individual district is rated).
- For this research class project, the unit of analysis is individuals taking the survey.
- *It helps with design because once you know the construct, you can operationalize it.
- Ex: students are the unit of analysis; anxiety is the construct, operationalized by scores on an anxiety assessment.
Operational definition vs. variable
- Operational definitions need to define the observable traits that can be measured to gain information about a construct
- a variable is something that can change in and between participants (ex: level of anxiety)
Conceptual vs. Operational Definitions
- Conceptual definitions explain the background, understanding, and theoretical framework of an idea
- An operational definition explains how the idea will be observed and measured in the study
- background vs measure
constant vs variable
- A constant is a group that does not change throughout the experiment.
- A variable is something that can change or is manipulated to change.
List 2 types of variables
Discrete, continuous
Define discrete variable
Has a finite range/set categories (ex: Likert scale)
- discrete variables can be further broken down into dichotomous and polytomous variables
- dichotomous: having 2 categories
- polytomous: having 3 or more categories
Define continuous variable
- Has an infinite range (ex: temperature-can be in between degrees)
Contrast independent and dependent variable
- Independent variable is the thing you test or change
- Dependent variable is what happens as a result of the independent variable manipulation
What is the key difference between mediator and moderator variables?
- A mediator lies on the causal pathway between the independent and dependent variable (it explains the relationship).
- A moderator is not on that pathway; it changes the strength or direction of the relationship.
Define/give an example of a mediator variable
- A mediator variable has a direct line of relationship that affects the dependent variable.
- The independent variable causes the mediator and then causes the dependent.
- Ex: temperament. Having a certain personality may make a specific type of emotional regulation more likely; temperament then directly affects emotional regulation.
- Ex: motivation → study time → test score (study time mediates the effect of motivation on the score).
Define/give an example of a moderator variable
- Can affect the relationship between the independent-mediator-dependent variable.
- There is no direct cause from the independent variable to the moderator (not part of the chain, but can affect the relationship).
- Ex: In a relationship between emotional reactivity and emotional dysfunction, studies have demonstrated that there is a relationship between the independent variable (reactivity) and dependent variable (dysfunction).
- Emotional coping mechanisms → there is no direct relationship between emotional reactivity and coping mechanisms.
- But coping mechanisms will influence the relationship between reactivity and dysfunction (people who are taught coping mechanisms are likely to be less reactive). The coping mechanisms are a MODERATOR because they are not part of the chain.
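The moderator idea above can be shown numerically: the slope of the reactivity → dysfunction relationship differs depending on the moderator group. The data points below are invented purely to illustrate the pattern.

```python
# Hypothetical sketch of moderation: the reactivity -> dysfunction
# slope is steep without coping mechanisms and shallow with them.
# All (reactivity, dysfunction) pairs are made-up illustration data.

no_coping = [(1, 2), (2, 4), (3, 6), (4, 8)]        # steep slope
has_coping = [(1, 1.5), (2, 2), (3, 2.5), (4, 3)]   # shallow slope

def slope(pairs):
    """Least-squares slope of y on x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    den = sum((x - mx) ** 2 for x, _ in pairs)
    return num / den

print(slope(no_coping), slope(has_coping))
```

The moderator never appears as a cause of dysfunction itself; it only changes how strongly reactivity predicts dysfunction, which is exactly the "not part of the chain" point.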
List the 4 scales of measurement
Nominal, Ordinal, Interval, Ratio
Nominal Scale- describe/give example
- Nominal Scale-most basic scale of measurement.
- The values are arbitrary labels; the only requirement is that two different points on the scale cannot have the same value.
- property: identity
- mathematical operation: count
- descriptive stats: mode
Ordinal Scale-describe/give example
- Follows the rule of magnitude (order). One value is higher than another.
- They have a magnitude relationship between the different values on the scale.
- The variables have a relationship of magnitude but it is not a set/equal amount between them.
- Example-Likert scale: usually a 5-7 point ordinal scale (strongly agree/agree/neutral; there is no equal/set distance between values)
- property: identity, magnitude
- mathematical operations: rank order
- descriptive stats: median, mode
Interval Scale-describe/give example
- has identity, magnitude (order) and equal intervals between the points. This is something like an IQ score (have an average of 100 and a SD of 15).
What is the difference between interval and ordinal scales?
- An ordinal scale has an order, but there is no equal distance between the values (agree/strongly agree etc)
- An interval scale also has an order, and the intervals between values are equal and meaningful (ex: IQ scores)
Ratio Scales-describe/give example
- Has a starting point at absolute zero
- ex: the number of times someone stutters (they have to start at 0 first)
What is the difference between an interval and ratio scale?
The interval scale does not start at absolute zero, where the ratio scale does
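The four scales above map onto different descriptive statistics; here is a small sketch with made-up data using Python's statistics module.

```python
# Sketch matching each scale of measurement to an appropriate
# descriptive statistic. All data values are invented examples.
import statistics

nominal = ["blue", "green", "blue", "red"]   # labels: mode only
ordinal = [1, 2, 2, 3, 5]                    # Likert-style ranks: median
interval = [100, 115, 85, 100]               # IQ-style scores: mean is valid
ratio = [0, 3, 6]                            # stutter counts: true zero

print(statistics.mode(nominal))    # most frequent category
print(statistics.median(ordinal))  # middle rank; mean not appropriate
print(statistics.mean(interval))   # equal intervals make the mean valid
print(ratio[2] / ratio[1])         # "twice as many" is meaningful
```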