Research Methods I Flashcards
Belief Perseverance
Being certain about one's knowledge and maintaining a belief even when the belief is shown to be false.
Confirmation Bias
Seek out info that confirms one’s beliefs, ignoring info that contradicts
Availability heuristic
Judging how often events occur by how easily they come to mind, overestimating the frequency of memorable events.
Illusory Correlation
Believing a relationship exists when there is none
Objectivity
Without bias from experimenter or participants
Data-Driven
Conclusions are based on data (objective information).
Scientific Method
Way of knowing through objective, empirical methods to search for causes of natural events
Theory
- Logically consistent statements about a phenomenon that:
- Summarizes existing empirical knowledge
- Forms knowledge into precise statements of relationships
- Proposes explanation of phenomenon
- Serves basis for making predictions
Deduction
Reasoning from general statements toward prediction of an event
Hypothesis
Prediction of research result under certain circumstances
Induction
Reasoning from specific events (results of individual studies) to the general (the theory)
A. Beneficence & Nonmaleficence
Weigh the benefits & costs of research; seek to achieve the greatest good with little harm
B. Fidelity & Responsibility
Aware of responsibility to society; highest standards of professional behavior.
C. Integrity
Honest in all research
D. Justice
Treat everyone with fairness; maintain a level of expertise to reduce bias.
E. Respect for People’s Rights & Dignity
Safeguard confidentiality & protect the rights of those volunteering as research participants.
8.01 Institutional Approval
Provide accurate information about research proposals and obtain approval before conducting research.
8.02 Informed Consent to Research
Inform participants of the purpose, their ability to decline or withdraw, the consequences of declining or withdrawing, factors that may influence their willingness to participate, research benefits, confidentiality limits, incentives, and whom to contact with questions about the research and their rights.
8.03 Recording Voices & Images
Obtain informed consent before recording, unless the research consists solely of naturalistic observation in public places or the design includes deception (in which case consent is obtained during debriefing).
8.04 Client, Student & Subordinate Res. Subj.
Protect participants from consequences of declining/withdrawing. Give alternative & equitable activities if participation is given in the form of course credit or extra credit.
8.05 Dispensing with Informed Consent
Informed consent may be dispensed with when the research would not create distress or harm and involves normal educational practices, when participation does not put one at risk for criminal or civil liability, when the study concerns job or organizational effectiveness, or where otherwise permitted by law or federal or institutional regulations.
8.06 Offering Inducements
Avoid making excessive or inappropriate inducements; when offering professional services as an inducement, clarify the services as well as the risks, obligations, and limitations.
8.07 Deception
Deception is used only when justified by the study's value, must be an integral feature of the design, and must not be expected to cause physical pain or severe emotional distress.
8.08 Debriefing
Participants are debriefed about the research, and any misconceptions are corrected. If information must be withheld, psychologists reduce the risk of harm. If research procedures harmed a participant, they attempt to minimize the harm.
8.09 Humane Care & Use of Animals
Animals are acquired, cared for, and handled in accordance with state, federal, and professional regulations and standards. Individuals handling animals have received training. Discomfort and pain of animals are minimized; surgical procedures are performed so as to avoid infection and minimize pain. When an animal's life must be terminated, it is done rapidly, with an effort to minimize pain.
8.10 Reporting Research Results
No fabricated data. If there are errors, they are corrected.
8.11 Plagiarism
Cannot present portions of another's work or data as one's own, even if the other work or data source is cited occasionally.
8.12 Publication Credit
Credit is taken only for work one has actually performed or substantially contributed to; authorship should reflect the relative contributions of those involved.
8.13 Duplicate Publication of Data
No publication of data that’s already published.
8.14 Sharing Research Data for Verification
Do not withhold data from other competent professionals who want to reanalyze them.
8.15 Reviewers
Those who review material must respect the confidentiality of the information and the rights of those who submitted it.
Scales of Measurement
Categorize events (qualitative) or describe the size of events (quantitative).
Qualitative
Categorical
Quantitative
Numerical; describes amount or size.
Nominal
- Labeled & categorized
- Male/female, hair color, names of people, etc.
Ordinal
- Ranking in size or magnitudes.
- Socio-economic status, education level, income level, satisfaction ratings, etc.
Interval Scale
- Equal differences on a scale reflecting differences in magnitude.
- Temperature, SAT score, credit score, rating scale, etc.
Ratio Scale
- Reflect ratio of magnitudes. Absolute zero point.
- Height, weight, etc.
Discrete variables
Separate categories. No values exist between two neighboring categories.
Dichotomous variable
Two categories of a variable.
Continuous variables
Infinitely divisible: an infinite number of possible values exists between any two observations. Each reported score corresponds to an interval with a lower and an upper real limit.
Construct
- Can’t be observed directly, inferred from certain behaviors.
- Measurement can be replicated; faithful proxy of construct.
- Use different kinds of measurement for faithful proxies of what we want to measure.
Reliability
Reproducibility of a measure; measures of the same phenomenon are consistent & repeatable; high reliability = minimum measurement error.
Classical Reliability Theory
Assumptions:
- True score = constant
- Error = Random
- Correlation between true scores & errors = 0
- Correlation between errors of different measurements = 0. Errors are assumed random.
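A minimal simulation sketch (assuming Python with NumPy; the score distributions are invented for illustration) of these assumptions: under the classical model, observed score = true score + random error, reliability equals true-score variance over observed-score variance, and the correlation between two parallel measurements approximates that reliability.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

true = rng.normal(100, 15, n)   # true scores, constant across measurements
err1 = rng.normal(0, 5, n)      # random error on measurement 1
err2 = rng.normal(0, 5, n)      # independent random error on measurement 2

obs1 = true + err1              # observed score = true score + error
obs2 = true + err2

reliability = true.var() / obs1.var()          # Var(T) / Var(X)
parallel_r = np.corrcoef(obs1, obs2)[0, 1]     # correlation between parallel forms

print(round(reliability, 3), round(parallel_r, 3))  # both come out near 0.90
```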
Item Response Theory
- Based on response patterns of items.
- Item based: items are scored differently based on their difficulty level.
- Items differ in their ability to discriminate between test takers.
- Discriminates well for all parts of distribution.
Validity
- Measuring the intended construct.
- Validity assumes reliability. Measures can be reliable, but not valid.
- “Does reaction time represent speed of processing in semantic memory?”
Content Validity
The measure adequately covers the content of the construct and connects it to a familiar concept – the operational definition.
Multitrait-Multimethod Matrix
Checks construct validity by looking at different traits and methods.
Face Validity
Occurs when a measure appears to be a reasonable measure of some trait.
Criterion Validity
Form of validity in which a psychological measure is able to predict some future behavior or is meaningfully related to some other measure.
Predictive Validity
Your measure predicts future behavior or attitude.
Concurrent Validity
Does your measure give scores that agree with other related measures taken at the same time?
Construct Validity
- Most rigorous.
- When the measure being used accurately assesses some hypothetical construct; refers to whether the construct itself is valid; refers to whether the operational definitions used for independent & dependent variables are valid.
Heterotrait-Monomethod Triangles
Same method, different traits.
Locus of control + Questionnaire, compare with self-esteem + questionnaire.
Heterotrait-Heteromethod Triangles
- Different method, different traits – lowest correlations.
- Self esteem using an interview, locus of control using a collateral report.
Descriptive Statistics
Summary of data from participants.
Inferential Statistics
Draw conclusions about whether the sample data can be generalized to the population.
Population
General group.
Sample
Subset of general group
Random sampling
Probability sampling. Each member of population has equal chance of being selected for sample.
Population Parameter
Describes a characteristic of a population of scores; symbolized by a Greek letter.
Sample Statistics
Describes a characteristic of a sample of scores; symbolized by an English (Roman) letter.
Experiment
- Research procedure in which one independent variable is manipulated, scores on the dependent variable are measured, and all other variables are held constant.
- Determines a cause-and-effect relationship between two variables.
Dependent Variable
Response measurement of experiment. Selected behavior to gauge effect of independent variable, aka criterion variable; y variable.
Independent variable
Variable manipulated or controlled by experimenter aka x variable.
Task variables
Groups are given different tasks.
Instructional Variables
Groups are asked to perform a task in different ways.
Experimental/Treatment Group
Receives experimental treatment manipulation.
Control Group
Produces comparisons. Treatment is withheld to provide a baseline.
Confounding Variable
Nuisance variable. Uncontrolled variable unintentionally allowed to vary systematically with the independent variable.
Correlational Study
Subjects’ scores on two variables are measured to determine a relationship. Does not determine cause-and-effect.
Quasi-experimental method
Independent variable is not directly manipulated. Subjects are not randomly assigned. Examines differences between preexisting groups of subjects or differences between preexisting conditions.
- Comparing ethnic groups, gender, dogs vs. cats.
Statistics Conclusion Validity
- Use statistics to draw correct conclusions.
- “Are the variables under study related?”
- “Is variable A correlated (does it covary) with variable B?”
- Good statistical conclusion validity – then yes to those questions.
- Issues that threaten statistical conclusion validity – random heterogeneity and small sample size.
External Validity
- Findings are generalized beyond the experiment.
- Example: If you conduct a study looking at heart disease in men, can these results be generalized to women?
Internal Validity
- Experiment is sound and free of confounding variables.
- If two variables are related, the next issue is causality: does A cause B?
- If a study lacks internal validity, cause-and-effect statements cannot be made; the study would be descriptive, not causal.
Threats to Internal Validity:
History
Historical events occur during the study; they act as unplanned independent variables that vary across subjects and have differential effects on subjects' responses.
Threats to Internal Validity:
Maturation
Natural changes that occur over the passage of time.
Threats to Internal Validity:
Testing
Occurs only in pre-post designs. A consequence of pretesting program participants is that they change their performance on later tests.
Threats to Internal Validity:
Instrumentation
Changing measurement methods affects what is measured.
Threats to Internal Validity:
Mortality or Attrition
Subjects drop out of study.
Statistical Regression to the Mean Threat
Participants are selected because they scored extremely high or extremely low on the pretest. Retesting produces distribution of scores closer to mean.
The Solomon Design
Assesses the effect of being pretested on the magnitude of the treatment effect. Participants are divided into four groups, each receiving a different combination of pretest and treatment.
- First group: Pretest, treatment, posttest
- Second group: Treatment, posttest
- Third group: Pretest, no treatment, posttest.
- Fourth group: Posttest only.
Subject Selection Threat
When the selection of subjects results in differences between groups related to the different variables studied.
Between-Subjects Design
- Independent variable is reflected by differences observed between subjects. Subjects are allowed to be in only one treatment condition.
- Treatment vs. no treatment, practice vs. no practice.
- Advantages: Each subject is "fresh" and not contaminated by a previous treatment condition.
- Disadvantages: Requires more subjects than a within-subjects design. Differences may be due to individual differences between groups rather than the independent variable.
Random assignment
Each subject has an equal chance of being assigned to the treatment or control group.
Block randomization
Subjects are randomized within blocks so that conditions have equal sample sizes.
Ensures balance.
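A short sketch (assuming Python; the condition names are hypothetical) of block randomization: every condition appears once per block in a shuffled order, keeping group sizes equal as subjects are enrolled.

```python
import random

conditions = ["treatment", "control"]   # hypothetical conditions
n_blocks = 10                           # 10 blocks of 2 -> 10 subjects per condition

random.seed(1)
assignments = []
for _ in range(n_blocks):
    block = conditions[:]               # each block contains every condition once
    random.shuffle(block)               # randomize order within the block
    assignments.extend(block)

print(assignments.count("treatment"), assignments.count("control"))  # 10 10
```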
Matching
Used to evaluate effect of treatment by comparing treated and non-treated units in an observational study or quasi-experiment.
Within-Subject Design
Effects of the independent variable are reflected by differences observed within subjects. Repeated measurements are taken on the same subjects & the effects are associated with differences observed within subjects.
- e.g. learning studies, practice effects
Advantages of doing a within-subjects design
- Requires fewer subjects to obtain a good level of power; useful when the population of interest is scarce.
- “Perfect matching” occurs. Error variance is reduced.
- Reduction in error variance produces an increase in power.
Disadvantages in using the same subjects
- Sequence or order effects
- Progressive effects
- Carry over effects
Sequence or order effects
Effect due to order of treatment conditions given to subjects aka practice effects.
Progressive effects
Performance changes from one treatment to the next.
Carry over effects
Effects of a previously administered condition on a subject’s performance on a condition in a within-subject design.
- Must be controlled for so the effect of the independent variable is not distorted.
- Example: Giving one drug and another drug to the same subjects without waiting for the first drug effects to dissipate.
Counterbalancing
Systematic arrangement of treatment conditions designed to neutralize sequence effects.
Latin Square
Counterbalancing used in arranging orders of presentations of treatment conditions in a within-subjects design; form of incomplete counterbalancing.
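A sketch (assuming Python) of one simple construction, a cyclic Latin square: with n conditions there are n presentation orders, and each condition appears exactly once in every ordinal position, a form of incomplete counterbalancing of order effects.

```python
def latin_square(n):
    # Row i is the presentation order for subject group i; each condition
    # occupies each ordinal position exactly once across the n rows.
    return [[(i + j) % n for j in range(n)] for i in range(n)]

for row in latin_square(4):
    print(row)
# [0, 1, 2, 3]
# [1, 2, 3, 0]
# [2, 3, 0, 1]
# [3, 0, 1, 2]
```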
Cross-Sectional Design
Observational study design. Investigator measures outcome and exposures in the study participants at the same time.
Longitudinal study
Study of a group of variables in the same participants over a period of time.
Experimenter Bias
The researcher unconsciously affects the data, results, or a participant. Differs from being objective, i.e., uninfluenced by personal emotions, desires, or biases.
Subject bias
The participant reacts in a manner they think the experimenter wants. Common side effect when subjects are aware of the purpose of the study.
Hawthorne effect
Individuals alter their behavior when they're aware they're being observed. The attention received from experimenters causes them to change their conduct.
Factorial Experiments
Two or more independent variables are manipulated; permits the examination of interactions.
Interaction
The outcome in which the effect on behavior of one independent variable changes at different levels of the second independent variable.
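A toy numeric sketch (Python; the cell means are invented) of a 2 x 2 interaction: the simple effect of one independent variable differs across levels of the other.

```python
# Hypothetical cell means for a 2 x 2 factorial design (age x therapy)
means = {("young", "A"): 10, ("young", "B"): 20,
         ("old",   "A"): 10, ("old",   "B"): 10}

effect_young = means[("young", "B")] - means[("young", "A")]  # +10
effect_old   = means[("old",   "B")] - means[("old",   "A")]  #   0

# Unequal simple effects indicate an interaction between age and therapy.
print(effect_young, effect_old)  # 10 0
```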
Advantages of Within-Subjects Factorial Design
- Differ from between-subjects factorials in the total number of subjects (N) required.
- Need fewer subjects.
- Save time and effort for training.
- Increased statistical power.
Advantages of a Mixed Factorial Design
- Minimizing carryover effects: the more treatment combinations, the greater the chance of carryover effects.
- Studying learning: Used when the researcher is studying learning and the processes that influence the speed with which learning takes place.
- Studying Changes over Time: Studying changes in depression over three types of therapy.
Correlation
Describes the relationship between two variables. Variables are continuous – interval or ratio scales.
- Two naturally occurring events in the environment. Not manipulated.
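A brief sketch (assuming Python with NumPy; the data are fabricated) computing a Pearson correlation between two measured, non-manipulated variables.

```python
import numpy as np

hours_studied = np.array([2, 4, 6, 8, 10], dtype=float)     # hypothetical interval data
exam_score    = np.array([55, 60, 72, 78, 85], dtype=float)

r = np.corrcoef(hours_studied, exam_score)[0, 1]
print(round(r, 3))   # close to +1: a strong positive linear relationship
```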
Three Characteristics of a Correlation
- Positive or negative direction of the relationship.
- Form of the relation.
- Degree of the relation.
Ceiling effects
The dependent measure puts an artificially low "ceiling" on how high a participant may score.
Floor effect
All the scores are extremely low. The task may be too difficult, producing a failure to find any differences between conditions.
Restriction of range
Both variables must be allowed to vary widely; researchers may fail to find a relationship when they study one or both variables over a highly restricted range.
Coefficient of Determination
r squared (r²) measures the proportion of variability in one variable that can be explained by its relationship with the other variable.
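A one-line worked example (the r value is arbitrary): squaring the correlation gives the proportion of shared variability.

```python
r = 0.50            # hypothetical correlation between two variables
r_squared = r ** 2  # 0.25 -> about 25% of the variability in one variable
print(r_squared)    # is accounted for by its relationship with the other
```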
Linear Transformation
When scores on the variables are transformed into standard scores, or a constant is added or multiplied, the correlation between the two variables remains the same.
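A quick check (assuming Python with NumPy, reusing fabricated data) that a linear transformation (converting to standard scores, or adding and multiplying by constants) leaves the correlation unchanged.

```python
import numpy as np

x = np.array([2, 4, 6, 8, 10], dtype=float)
y = np.array([55, 60, 72, 78, 85], dtype=float)

x_scaled = 3 * x + 7                 # multiply by and add constants
x_z = (x - x.mean()) / x.std()       # standard (z) scores

r_original = np.corrcoef(x, y)[0, 1]
print(np.allclose(r_original, np.corrcoef(x_scaled, y)[0, 1]))  # True
print(np.allclose(r_original, np.corrcoef(x_z, y)[0, 1]))       # True
```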
Bivariate normal distribution
A joint frequency distribution showing the frequency of pairs of scores on two variables x and y.
Naturalistic observation
Studying behaviors in everyday environments.
Participant observation
Researcher observes by joining a group as a participant.
Time sampling
Choosing time intervals for observations systematically or randomly.
Situation or Event sampling
Studying behavior in different locations and different circumstances. Enhances external validity.
Observation without Intervention
Naturalistic observation to describe behavior as it normally occurs and examine relationships. External validity.
Observation with Intervention:
Participant Observation
Undisguised/disguised participant observation. Observe behaviors and situations not usually open to scientific observation.
Observation with Intervention:
Structured Observation
Set up to record behaviors that are difficult to observe using naturalistic observation. Used by clinical and developmental psychologists. When procedures are not followed consistently or variables are not controlled, problems arise in interpreting structured observations.
Observation with Intervention:
Field experiments
One or more independent variables are manipulated in a natural setting.
Physical traces:
Use traces
Physical evidence that results from use (or nonuse) of an item.
Physical traces:
Products
Creations, constructions, or other artifacts of behavior.
- Petroglyphs, MTV, Star Wars action figures
Archival Records:
Running records
Public and private documents that are produced continuously.
- Records for sports teams, stock market prices, etc.
Archival Records:
Records for specific episodes
Documents that describe specific events.
- Birth certificates, marriage licenses, college degrees.
Qualitative Records of Behavior
- Records in the form of written descriptions of behavior, audiotapes, and videotapes of observed behavior. (Labeling types of responses by a therapist.)
- Classify and organize data to test hypotheses.
- Records are made during or soon after behavior is observed, and observers are carefully trained to record behavior.
Quantitative Measures of Behavior
- Researchers use quantitative measures such as frequency or duration of occurrence when they seek to describe specific behaviors or events.
- Rating scales are often treated as if they are interval scales, even when they represent ordinal measurement.
Observer Reliability
- The extent that independent observers agree in their observations.
- Increased by providing clear definitions about behaviors and events.
- High interobserver reliability increases confidence that observations are accurate.
- Reliability assessed by calculating percentage agreement or correlations.
Influence of the Observer
- Individuals change their behavior if they know they’re being observed (“reactivity”)
- Control reactivity with: Nonreactive measurements, adaptation (habituation, desensitization) and indirect observations.
- Consider ethical issues when attempting to control reactivity.
Observer bias
- Observers' biases determine which behaviors they choose to observe, and their expectations can lead to systematic errors in recording behavior.
- Expectancy effects can occur when observers are aware of hypotheses.
- To control observer bias, one must recognize it may be present.
- Can be reduced by keeping observers unaware (blind) of the study's goals and hypotheses.
Reactivity
Respondents know their responses are being recorded.
Social desirability
Respondents respond as they think they should.
Response acquiescence
Participants' tendency to agree.
Correlational research
Subjects’ scores on two variables are measured to determine if there is a relationship.
Convenience sampling
Not every element has an equal chance of being included in the sample.
Simple random sampling
Each element has an equal chance to be in the sample.
Stratified sampling
The population is divided into subpopulations (strata), and random samples are drawn from each.
Cross Sectional Design
One or more samples drawn at one point in time. Allows researchers to describe the characteristics of a population or the differences between two or more populations.
Successive Independent Sample Design
Different samples from the population complete the survey over a period of time; used to study changes in a population over time. Does not allow researchers to infer how individuals change, and the samples may not be equally representative of the population.
Longitudinal Design
The same respondents are surveyed over time to examine changes in their responses. It is difficult to identify the causes of change, and the final sample may not be comparable to the initial sample due to mortality (attrition).
Correlation & Causality
Correlated: Predictions for variables but cannot infer causes.
Correlational evidence can help identify potential causes.
Spurious Correlations
When relation between two variables can be explained by a third variable.
Experimenter Control
Control for all extraneous variables that might impact dependent variables.
Matching instead of randomization
If studying a town, seek a town similar in the same geographic region to act as a comparison group.
Pretest-posttest Design
Other types are analogous to their experimental counterparts.
- Single-factor nonequivalent groups design (aka posttest only): men versus women on a reaction time test.
- Nonequivalent groups factorial design: men versus women, young and old, on a reaction time test.
Time series
Several observations over time.
Interrupted time series design
A program is evaluated by measuring performance several times prior to the institution of the program and several times after the program has been implemented.
- Group 1: O1, O2, O3, O4, T, O5, O6, O7, O8
- Example: # of minorities accepted into a university before and after a change in admission policies
Noninterrupted time series design
There is no program but performance is measured several times.
- Group 2: O1, O2, O3, O4, O5, O6, O7, O8
- Example: # of minorities accepted into a university over the course of 8 years with no change
Interrupted time series design with switching replications
The program is replicated at a different location and at a different time.
- Group 1: O1, O2, T, O3, O4, O5, O6, O7, O8
- Group 2: O1, O2, O3, O4, O5, T, O6, O7, O8
- Example: Two similar colleges examining the # of minorities accepted before and after a change in admission policies
Problems with quasi-experiment
- Common experimental problems with internal validity; e.g. regression to the mean
- If we assign groups to treatments based on their differences, groups may differ in other aspects on variables that we never measured.
Case Studies
- Intensive description and analysis of a single individual.
- Identifies causal influences and interaction effects.
- Recommended as part of a multimethod approach in which the same dependent variable is investigated using multiple additional procedures.
Assumptions of case studies
- Cases selected based on dimensions of a theory or diversity on a dependent phenomenon.
- No generalization to a population.
- Conclusions should be phrased in terms of model elimination, not model variation.
- Difficulty in terms of evaluation of low-probability causal paths.
Advantages of Case Study Method
- Rich source of ideas for developing hypotheses.
- Opportunity for clinical innovation.
- Method for studying rare events.
- Possible challenge to theoretical assumptions.
- Tentative support for a psychological theory.
Disadvantages of the Case Study Method
- Difficulty drawing cause-and-effect conclusions.
- Possible biases when interpreting outcomes.
Single-Case (N = 1) Experimental Designs
- Treatment and baseline conditions for one individual.
- The baseline is used to describe behavior before treatment is provided and to predict what behavior would be without treatment.
ABAB Design
- The frequency of the behavior decreases during treatment, reverses when treatment is withdrawn, and decreases again when treatment is reinstated.
- Baseline and Treatment conditions are contrasted.
ROC Curve
- The ROC technique was developed in signal processing; the term Receiver Operating Characteristic refers to the performance (the "operating characteristic") of a human or mechanical observer (the "receiver") engaged in assigning cases into dichotomous classes.
- Obtained by plotting all sensitivity values (true-positive fraction) on the y-axis against their equivalent (1 - specificity) values (false-positive fraction) on the x-axis for all available thresholds.
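A compact sketch (assuming Python with NumPy; the scores and labels are fabricated) building ROC points by sweeping thresholds and pairing sensitivity (true-positive fraction) with 1 - specificity (false-positive fraction).

```python
import numpy as np

scores = np.array([0.1, 0.3, 0.35, 0.6, 0.7, 0.9])  # hypothetical test scores
labels = np.array([0,   0,   1,    0,   1,   1])    # 1 = has the disorder, 0 = does not

points = []
for t in np.unique(scores):
    predicted_positive = scores >= t
    tp = np.sum(predicted_positive & (labels == 1))
    fp = np.sum(predicted_positive & (labels == 0))
    fn = np.sum(~predicted_positive & (labels == 1))
    tn = np.sum(~predicted_positive & (labels == 0))
    sensitivity = tp / (tp + fn)   # true-positive fraction (y-axis)
    fpr = fp / (fp + tn)           # 1 - specificity, false-positive fraction (x-axis)
    points.append((fpr, sensitivity))

print(points)   # one (x, y) point on the ROC curve per threshold
```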
Sensitivity
Ratio or proportion of true-positive test results divided by all patients with the disorder. Sensitivity = a / (a + c)
- Where a = true positives
- a + c = true positives + false negatives = all people with the disorder
- The better the sensitivity of the test, the fewer the false negatives.
Specificity
A ratio or proportion of the true-negative test results divided by all patients without the disorder.
- Specificity = d / (b + d)
- Where:
- d = true negatives
- b + d = false positives + true negatives = all people without the disease
- The better the specificity of the test, the fewer the false positives.
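A worked arithmetic check (the counts are invented) of both formulas using a 2 x 2 outcome table where a = true positives, b = false positives, c = false negatives, d = true negatives.

```python
a, b, c, d = 80, 30, 20, 70   # hypothetical counts: TP, FP, FN, TN

sensitivity = a / (a + c)     # 80 / 100 = 0.80: 80% of people with the disorder test positive
specificity = d / (b + d)     # 70 / 100 = 0.70: 70% of people without the disorder test negative

print(sensitivity, specificity)   # 0.8 0.7
```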