Exam 4 Flashcards
What Makes Surveys/Interviews Different from Other Research Forms
Surveys/Interviews rely on asking questions directly to the participant rather than making observations or manipulating a variable
What Can Be Studied With Surveys
- Just about anything: preferences, secrets, desires, opinions, etc.
- Aims to get honest/accurate info
- Response rates and honesty depend on survey type/topic
Mail Surveys
- Written + self-administered
- Sent via postal service
- Needs to be self-explanatory
- Needs to be interesting enough that people want to respond
- Inexpensive
Strengths and Weaknesses: Mail Surveys
Strengths
- Decreased likelihood of sampling bias
- Can reach a wide variety of people
- Can be distributed where otherwise unsafe
- May get people only available at certain times
- More complete responses since more time to respond
- More likely to get honest responses on sensitive issues
Weaknesses
- Distribution is not perfect
- Not everyone has an address
- Some people can’t read/write
- Very low response rate
- Maybe 30% tops
- Can do multiple mailings (but costly)
- Decreases participant confidence in anonymity
- Self-Selection
- May lead to biased data (esp w/ low response rate)
- No way to know exactly why people who respond do so
- 50% response rate generally required to conclude sample isn’t overly biased
How to Increase Response Rates: Mail Surveys
- Multiple Mailings
- Hand addressed/signed cover letter
- May cause hand cramps
- First-class postage
- Gets expensive
- Advance notice
- Incentives
- Prize drawings
- A small amount of money
- Self-addressed prepaid envelope
- Make it pretty!
- Neat, well-organized, easy to read
Strengths: Internet Surveys
- Wide + cheap distribution
- Automatic data coding
- Self-administered
- Needs to be self-explanatory + interesting
- Easier to find specific groups
- e.g. disease sufferers, political affiliation, etc.
- Time effective
- More honesty
- Allow for multimedia presentation and responses
- Overcomes some illiteracy issues
Weaknesses: Internet Surveys
- Response rate
- Combine w/ physical mail, incentives
- Response rate may not be known
- Who is responding?
- Limited Participant Pool
- Who has internet? (Becoming less of an issue)
Group Surveys
Survey given to a group of individuals already present
- e.g. class, workshop, etc.
- Higher response rate
- Group may/may not be representative (convenience sample)
- Must still be self-explanatory (clarification invalidates survey)
Strengths and Weaknesses: Phone Interviews/Surveys
Strengths
- Higher Response Rate
- Harder to say no to a person + you can establish rapport and correct issues
- Less expensive than in person interviews
- Can clarify questions, get more flexibility in questions + responses
Weaknesses
- Not everyone has a phone
- When are you calling?
- Many phone #’s are unlisted and random dialing can lead to non-relevant numbers
- Call Screening
- Interviewer Bias
Strengths and Weaknesses: Personal Interviews
On the street, in the mall, in a home, etc.
Strengths
- No need to use list/directory (may be out of date or biased)
- You’re sure who’s providing info
- High response rate (80-90%)
- Even if interview isn’t given, some demographics/characteristics may be visible
- Longer + more in depth info
- Clarification, Follow Up ?’s, ask for more response
Weaknesses: Personal Interviews
- Hard to ensure anonymity
- Greatest opportunity for Interviewer Bias
- Interviewer's behaviors, questions, recording procedures, etc. can cause unrepresentative data
- Unconscious/conscious beliefs and opinions
- Tone of voice, word choice, body language, interpretation of participants, etc.
- Interviewers must be trained
- Higher likelihood of Socially Desirable Responses
- Participants might not have info on hand, may need to consult w/ others
- If interviewers are students/employees/etc., they may not follow procedures correctly
- Sampling Bias
- Time consuming + Expensive
How Should Surveys Be Constructed
Order and Appearance Matters
- Should look easy + interesting
- Organize by topic, don’t jump around
Common Techniques Used in Survey Construction
- Funnel Structure
- Demographic Questions
- Branching
- Filter Questions
Funnel Structure
Start general, then move to specific questions
Demographic Questions
- Descriptive questions about the respondent
- Sometimes seen as intrusive/boring
- Often put @ end so respondents are already committed
- Sometimes used as an icebreaker (phone/personal)
Branching
Determine which questions to ask based on previous responses
Filter Questions
- Determine which following questions will/won't apply to participant
- Avoids “N/A” responses
- Avoids boredom/low response rates
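Filter questions and branching can be thought of as simple conditional logic. A minimal sketch in Python (the question names and answers are illustrative, not from the notes):

```python
def administer(answers):
    """Minimal sketch of a filter question: the follow-up items run
    only when the filter applies, avoiding 'N/A' responses."""
    responses = {}
    responses["owns_pet"] = answers["owns_pet"]  # filter question
    if responses["owns_pet"] == "yes":
        # branch: these items only apply to pet owners
        responses["pet_type"] = answers.get("pet_type")
        responses["vet_visits"] = answers.get("vet_visits")
    return responses
```

A non-owner simply skips the pet items: `administer({"owns_pet": "no"})` returns only the filter response.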
Closed Questions
- Possible responses are provided
- e.g. multiple choice, scale, etc.
Advantages
- Easier to quantify/analyze/perform statistics
Disadvantages
- Must provide enough possible responses (sometimes yes/no isn’t enough)
Open-Ended Questions
- Free answer
- e.g. short answer, essay, etc.
Advantages
- Can provide more info/explanation
Disadvantages
- Must be coded
- Can again use inter-rater reliability to check the coding
Scales
- Make sure lowest/highest values are clearly labeled
- Consider labeling middle point as "don't know"
- Avoid making scale too large (1-100 isn’t always better than 1-10)
- Free Scales
Questions to Avoid in a Survey/Interview
- Loaded Questions
- Leading Questions
- Double-Barreled Questions
Loaded Questions
Include terms that are emotionally laden/non-neutral
- Often show researcher bias
Leading Questions
Suggest that there is a particular desired response
- The organization of questions can also be leading
Double-Barreled Questions
Any question to which a single person could have two separate answers
- Goal is to get unambiguous, unbiased, accurate info from respondents
Reliability
Does the measurement tool provide consistent results/data/etc.
Test-Retest Reliability
Do people respond the same way to a survey when it's given a second time?
- Best done w/ some time or activity between the tests
Alternative-Forms Reliability
How well do two forms/versions of same test yield comparable results?
Split-Half Reliability
Entire assessment aims to measure one thing
- Compare agreement between two halves of the assessment
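One common way to compute this (a sketch, not necessarily the course's exact method): split the items into odd/even halves, correlate respondents' half-totals, then apply the standard Spearman-Brown correction so the estimate reflects the full test length:

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(item_scores):
    """item_scores: one row per respondent, one column per item.
    Correlate odd-item totals with even-item totals, then apply
    the Spearman-Brown correction: r_full = 2r / (1 + r)."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r = pearson_r(odd, even)
    return (2 * r) / (1 + r)
```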
Cronbach’s Alpha
Estimates internal consistency based on the correlations among all test items
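The standard formula is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch:

```python
def cronbach_alpha(item_scores):
    """item_scores: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

When every item moves together perfectly across respondents, alpha reaches its maximum of 1.0.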
Validity
Are you measuring what you say you are measuring?
Face Validity
Common sense measure
- Does it seem like your test actually measures what you intend?
- Not the best measure
Construct Validity
The extent to which the concepts you intend to measure are actually measured
Criterion Validity
How well do results of your instrument correlate with real outcomes/behaviors?
Tips for Survey Development
- Don’t rush
- What info do you want?
- Ask for that info?
- Get rid of excess questions
- Pilot survey
- Get feedback
- Revise
- Pilot again!
What are the Different Sampling Techniques Used to Obtain Participants
- Sampling Frame
- Random Sample
- Systematic Sampling
- Stratified Sampling
- Stratified Random Sampling
- Cluster Sampling
- Convenience Sample
- Quota Sampling
- Snow-Ball Sampling
Sampling Frame
List of all members of a population
- Sample is chosen from the frame
- Something like phone book or census
Elements
Individual members that make up sample
Random Selection
All members of population are equally likely to be chosen for the sample
- Unbiased
Systematic Sampling
Choose elements according to particular plan/strategy
- e.g. every 12th person from sampling frame
- can be partially randomized
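The "every 12th person" plan with a random starting point can be sketched in a few lines (the function name is illustrative):

```python
import random

def systematic_sample(frame, k):
    """Take every k-th element from the sampling frame, starting
    at a random offset (the 'partially randomized' variant)."""
    start = random.randrange(k)
    return frame[start::k]
```

Every selected element is exactly k positions after the previous one, so the plan is fixed once the random start is chosen.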
Stratified Sampling
Used to guarantee that the sample will be representative of specific population characteristics (class, race, gender, etc.)
- make sure you get certain amount from each category
Stratified Random Sampling
Each member of the individual strata has an equal chance of being included
- Opposite of systematic
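A minimal sketch of the idea: group the population by a characteristic, then draw a simple random sample within each stratum (names and the equal-count design are illustrative assumptions):

```python
import random

def stratified_random_sample(population, strata_key, n_per_stratum):
    """Group the population by a characteristic (e.g. class year),
    then randomly sample within each stratum, so every member of a
    stratum has an equal chance of selection."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(n_per_stratum, len(members))))
    return sample
```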
Cluster Sampling
Cluster of representative respondents used
- Random sampling may not be practical
- Should still be representative even if not random
Convenience Sample
Survey/test whatever happens to be available
- Easy but often biased
- AKA haphazard or accidental sample
Quota Sampling
Combines stratified and convenience sampling
- Requires representative numbers from particular subgroups, but these numbers are acquired via convenience
Snow-ball Sampling
Each participant recruits additional participants, who recruit more, and so on
What Sampling Techniques are Considered “Worse” than Others
- Convenience
- Quota
- Snowball
- May be only possible techniques but result in bias
- Not every member of the population has an equal chance of being chosen
- May result in non-representative sample
Nomothetic Research
Conducted on groups in an attempt to identify general laws and principles of behavior
Idiographic Research
Conducted to study behavior of an individual
- Often looking for patterns
- Still aimed @ objective/interpretable measures
Case Study
- Different from single-subjects research designs
- Descriptions of an individual and their experiences
- Do not typically involve manipulation of an independent variable
Single-Subject Designs
Focus on one participant
- Attempt to objectively establish IV/DV relationships
- Individual results
- Replication for generalizability
History of Single-Subject Design Use in Psychology
- Fechner: JND (just noticeable difference)
- Pavlov: Classical Conditioning
- Ebbinghaus: Forgetting Curve
Baselines
Measure taken of the DV before any IV manipulation
- Taken @ least once at the beginning of a study, sometimes more
- Stable/Variable
Stable Baseline
Multiple measures of DV are similar when no IV has been introduced
- No steady increases or decreases
- Steady changes = especially problematic if in same direction as predicted IV
Variable Baseline
Changes in DV across multiple measures, even when no IV manipulation is made
- Keep taking measures until it evens out
- Find the source of variability and control it
Types of Single-Subject Designs
- Time Series
- Withdrawal Design
- Reversal Design
- Alternating-Treatments Design
- Multiple-Baselines Design
- Changing-Criterion Design
Time-Series Designs
Multiple measures are taken before and/or after IV manipulation
Withdrawal Design
Measurements of DV are taken before IV manipulation, during IV manipulation, and after IV is “withdrawn”
- May be repeated multiple times
ABAB design
- A: Baseline B: Intervention
- Baseline, Intervention, Baseline, Intervention
May Have 2nd Intervention
- A: Baseline B: Intervention 1
C: Intervention 2
- ABAC
Placebo Effect
- A: Baseline B: Intervention
C: Placebo
- ABCB
Reversal Design
Similar to withdrawal design
- A: Baseline
- B: Intervention
- C: Opposite Intervention
- ABCB
- Less common in applied research
Alternating-Treatments Design
Alternative form of ABAB design (Baseline may still be included)
- AKA “Between-Series Design”
- A: Baseline
- B: Treatment 1
- C: Treatment 2
- ABCBCBC
Multiple-Baselines Design
Not all behaviors return to baseline when intervention is removed
- Not always appropriate/ethical to stop effective treatment
- Alternative: introduce treatment in diff settings/situations at different rates
OR
- Introduce treatment to address multiple behaviors at different rates
- A: Baseline
- B: Treatment
- Setting/Behavior 1: ABBB
- Setting/Behavior 2: AABB
- Setting/Behavior 3 : AAAB
Changing-Criterion Design
- Avoids withdrawing treatment once started
- Criterion for receiving the intervention changes over time
- Often, target behavior is gradually escalated/de-escalated
Types of Carryover Effects + How they Decrease Internal Validity of Single Subject Designs
- History
- Maturation
- Instrumentation
- Fatigue
- Practice
- Subject Bias/Demand Characteristics
- Experimenter Bias
Non-Reactive Measure
Acquiring data about behavior has no effect on behavior
- Behavior has already occurred
- Science Detective
Physical Trace Measure
When physical evidence is assessed in the absence of individuals whose behavior produced it
Archival Data
Records
- Written
- Digital
- Etc.
- Records are assessed to make inferences about behaviors/attitudes/beliefs/etc.
Physical Trace Studies
Study of physical evidence left by individuals’ behavior
Traces
Evidence left as a by-product of behavior
Products
Objects purposefully created by individuals
Accretion Measure Trace
Accumulation of evidence
- Something is added as a consequence of behavior
Erosion Measure Trace
Wearing away of materials as evidence
- Something is taken away
Controlled Trace
Involves researcher intervention
Natural Trace
Occurs w/o researcher intervention
Confounds Specific to Non-Reactive Studies
Selective Survival and Selective Deposit
Selective Survival
Not all traces/product evidence endures over time
Selective Deposit
Different subgroups produce different products/traces
- Not every trace/product is representative of the whole population
Ethical Considerations Associated with Physical Trace Research
Can traces/products be tied back to specific individuals?
- Damage reputation
- Embarrassment
- Incrimination
Are researchers at risk?
Document
Produced for one’s self
- Not for public consumption
Record
Produced with the purpose of being viewed by others
Continuous Records
Records that are added to on a routine basis
- e.g. sales records, census, tax records
Discontinuous Records
Produced less continuously or only once
- e.g. books, articles, photos
Data Processing in Archival Data Investigations
- Reduce data through coding
Content Analysis
- Researcher develops a coding system that is used to record data regarding the content of the records
- Does this record contain the content of interest?
- Determined w/ operational definition
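A coding system like this can be as simple as checking each record against an operational definition. A minimal sketch (the example terms are illustrative, not from the notes):

```python
def code_records(records, operational_terms):
    """Minimal content-analysis coder: a record is coded 1 if it
    contains any term from the operational definition, else 0."""
    coded = []
    for record in records:
        text = record.lower()
        coded.append(int(any(term in text for term in operational_terms)))
    return coded
```

Real coding schemes are usually richer (multiple categories, trained coders, inter-rater reliability checks), but the core step is the same: apply the operational definition to each record.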
Confounds Associated with Performing Archival Research Studies
Double Edged Sword
- Nonreactive/not made for research
Selective Survival
- Damage: fire, water, lost, corrupted, etc.
- Can cause skewed view of what’s going on
Selective Deposit
- Ex: # of deer tags registered for hunting could be wrong due to limit on tags or due to individuals not applying
Biases/Inaccurate Information
- Trying to portray agenda
- Incompetent
Potential Ethical Issues Associated with Performing Archival Research Studies
Privacy
- Bigger issue w/ newer data
- People seek privacy for themselves and for relatives
Usually a safe bet unless people may be identified