Test 2 Flashcards
T/F Every sample will have an error.
True
People who respond to a survey
- Not only consumers
Respondents
Collecting data by having people answer a series of questions
Survey
A portion of the population that’s being surveyed
sample survey
The sample differs from the population of interest
sampling error
When there’s a flaw in the survey design
Systematic Error
The respondent did or did not do something
Respondent Error
Something a participant DID NOT do
Non-Response Error
Not contacted or refused to do the survey
Non-Respondents
When the participant decides to take the survey or not
Self-Selection Bias
Something a participant DID intentionally or unintentionally
Response Bias
Lying or giving a false answer because you guessed or are bored
Deliberate Falsification
The participant is confused about how to answer because the question is vague or ambiguous
EX: How do you rate your education?
Unconscious Misrepresentation
Participants agreeing with everything
Acquiescence Bias
Scoring at the extremes, higher or lower than their true value
Extremity Bias
Interviewer’s characteristics or body language influence participants’ responses
Interviewer Bias
Responding because it is a socially accepted answer or to gain esteem
Social Desirability Bias
The researcher makes a mistake, intentionally or unintentionally, in how the data were gathered, or the survey design is improper
Administrative Error
A mistake made in the data-entry phase, such as inputting the data incorrectly
Data Processing Error
The researcher selected the sample incorrectly
Sample Selection Error
The source or list from which participants were selected is wrong
Sample Frame Error
There are questions about how the results are being measured
Measurement Bias
The interviewer is doing something wrong, such as changing words in the question or not fully recording responses
Interviewer Error
The interviewer makes up the number of participants or participants’ responses
Interviewer Cheating
Percentage of people who responded out of the total people who were contacted
- This is usually around 5%
Response Rates
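A minimal Python sketch of the response-rate calculation for the card above; the counts below are hypothetical and just illustrate how a rate near 5% arises:

    # Hypothetical counts for illustration only.
    contacted = 2000   # total people who were contacted
    responded = 100    # people who completed the survey

    response_rate = responded / contacted * 100
    print(f"Response rate: {response_rate:.1f}%")   # -> Response rate: 5.0%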
A brief letter that is sent with a survey to explain what the survey is about
Cover Letter
The 5 Cover Letter Functions/Purposes
- Identifies the surveyor & sponsor
- Explains the purpose of the survey
- Why the respondent was selected
- Provides the incentive for participating
- Qualifying/Screening questions
Personal Interview advantages?
- Opportunity for feedback
- Probing complex answers
- Length of interview
- Completeness of questionnaire
- Props and visual aids
Personal Interview Disadvantages?
Disadvantages:
- Interviewer bias
- Anonymity
- Expensive
Personal interviews conducted in a shopping center or similar public area
Mall Intercepts
Personal interviews conducted at respondents’ doorsteps in an effort to increase the participation rate in the survey
Door to Door
Personal interview that is conducted over the telephone
Telephone Interview
What is the only survey method where the researcher is not involved?
Self-administered methods like:
- Internet, cell phone, & email surveys
- Mail questionnaires
- Drop offs
- Point of sale
The interviewer travels to the respondent’s location to drop off questionnaires that will be picked up later
Drop Offs
Survey requests distributed through electronic mail
E-mail Surveys
Email Survey Advantages?
Advantages:
- Speed
- Lower cost
- More flexibility
- Less manual processing
Email Survey disadvantages?
Disadvantages:
- Possible lack of anonymity
- Spam filters
- Problems with successful delivery
A self-administered survey administered using a Web-based questionnaire
Internet Survey
Directs participant to more questions based upon their responses
Branching
Inserts the text of participants’ previous responses
Piped text
CATI
Computer-Assisted Telephone Interviews
- Randomly dial phone numbers
Percentage of people who clicked on the survey
Click-Through Rate
Screening procedure that involves a trial run with a group of respondents to discover problems in the survey design
pretesting
Describing some property of a phenomenon of interest, usually by assigning numbers.
measurement
Degree to which someone meets a certain criterion; a single variable
- IS NOT correlated
EX: Social class
Index Measure
Assigning a value based on a mathematical derivation of multiple variables
- IS correlated
EX: Restaurant satisfaction scales
Composite Measure
Adding everything together, the sum
Summated Scale
Total of the variables / Number of variables
Average
The value assigned for a response is treated oppositely from the other items
Reverse Coding
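A minimal Python sketch tying the cards above together (summated scale, average, reverse coding), assuming a hypothetical 4-item, 5-point scale whose third item is negatively worded:

    # Hypothetical 5-point responses; the third item is negatively worded.
    responses = [4, 5, 2, 4]
    scale_max = 5

    # Reverse coding: a 2 on a 5-point scale becomes (5 + 1) - 2 = 4.
    responses[2] = (scale_max + 1) - responses[2]

    summated_scale = sum(responses)              # the sum of all items -> 17
    average = summated_scale / len(responses)    # total / number of items -> 4.25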
Used to classify something into categories or labels
- Have nominal or ordinal properties
Categorical Questions
Number that expresses a quantity of the property being measured
Metric Questions
Have 5 or more scale points
Metric Scales
T/F You can take the average for a categorical question.
FALSE. A mode, frequency, or percentage must be used.
T/F You can take the average for a metric question.
TRUE
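A small Python illustration of the two T/F cards above, using hypothetical answers: the mode summarizes a categorical question, while the mean is meaningful only for a metric question:

    from statistics import mean, mode

    # Hypothetical survey answers for illustration.
    favorite_store = ["Target", "Walmart", "Target", "Kroger"]   # categorical
    satisfaction = [4, 5, 3, 4]                                  # metric (5-point scale)

    print(mode(favorite_store))   # categorical: report the mode -> Target
    print(mean(satisfaction))     # metric: the average is meaningful (4 here)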
T/F Slider scales are not metric.
FALSE. Yes, they are.
2 response options to choose from, such as yes or no
- CAN select more than 1 answer (Select all that apply questions)
- There’s no correct answer
Dual Choice
3 or more response options to choose from
- ONLY 1 answer can be selected
- There’s 1 correct answer
Multiple Choice
T/F Questions that say “Select all that apply” are considered dual-choice questions.
True
Number is descriptive of the property being measured
- IT IS meaningful
Natural
Number is an artificial measure of some quantity the participant DOES NOT see
- IS NOT meaningful
Synthetic
One variable ranks higher than another
order
A certain score is higher than another
Ex: 2 is 1 point higher than 1
distance
Providing consistent data & reproducible results; precise
Reliability
Represents a measure’s homogeneity or the extent to which each indicator of a concept converges on a common meaning
Internal Consistency
Splitting the scale’s items in half to see whether the two halves produce similar scores
- This shows whether the two halves are correlated or not
Split Half Method
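A minimal Python sketch of the split-half idea, using hypothetical half-scale totals for five respondents (statistics.correlation requires Python 3.10+):

    from statistics import correlation   # Pearson correlation, Python 3.10+

    # Hypothetical totals on each half of a 6-item scale, one pair per respondent.
    half_a = [12, 9, 14, 11, 8]    # items 1-3 summed
    half_b = [13, 8, 15, 10, 9]    # items 4-6 summed

    # A high correlation between the halves (about 0.93 here) suggests the
    # items are internally consistent.
    print(correlation(half_a, half_b))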
Administering the same scale or measure to the same respondents at two separate times to test for stability
Test - Retest Method
The accuracy of a measurement or a score that truthfully represents a concept
- Purchase intent = purchase!
Validity
The items look like what they are intending to measure
Face Validity
1 measure is associated with another measure
Criterion Validity
Satisfaction & future purchases
Predictive Validity
Satisfaction & re-patronage intentions
Concurrent Validity
Correlated items that measure the same thing
Convergent Validity
Items can be correlated, but should not be correlated too highly
Discriminant Validity
The respondent ranks something in order based on overall preference
Ranking Task
Ranking Task issues
Issues:
- Ordinal measurement
- Alternatives not included
- Outside of choice set
- Can’t tell differences
Rating how important the attributes are
Rating Scales
Asking participants to rate the degree of their agreement, such as “strongly agree, agree, neutral, disagree, & strongly disagree”
Likert Scale
A scale where participants describe their attitudes using a series of positive & negative attributes
EX: Happy or Sad, Serious or Fun, Formal or Casual, etc.
Semantic Differential
Rating everything positively
- This can be prevented by flipping the positive & negative attributes
Halo Effect
“Anchors” a participant’s score along a scale of points between 2 anchors
- Metric
- Can be measured by finding the Average
EX: Rating between Poor & Excellent
N-Pointed Anchored Scale
Also called a “Slider Scale”
- Metric
- Can be measured by finding the Average
Graphic Rating Scale
ONLY scoring on 1 end or the other of a scale
End Piling
Having an equal number of both positive & negative options
- Most used by Marketers
Balanced
NOT having an equal number of both positive & negative options
- This reduces the likelihood of end piling
Unbalanced
The participant HAS TO answer the question
- Limited errors
forced choice
More than 1 question
EX: How satisfied are you with Skybar’s music?
How satisfied are you with Skybar’s cleanliness?
How satisfied are you with Skybar’s atmosphere?
Multiple Item
1 question ONLY
EX: What’s your OVERALL satisfaction with Skybar?
Single Item
Will it answer the research questions?
Relevant
How will the data be measured?
accuracy
Close ended questions where respondents are given specific, limited-alternative responses & are asked to choose the one closest to their own viewpoint
Fixed alternative questions
A question that suggests or implies certain answers
- Bandwagon Effect
- Partial mention of alternatives
EX: “Don’t you see problems with using your credit card online?”
Leading Questions
A question that suggests a socially desirable answer or is emotionally charged
EX: “Should people be allowed to protect themselves from harm by using a taser as self-defense?”
Loaded Questions
Wording the question so respondents think “everyone is doing it”
Bandwagon Effect
Introductory statement to a potentially embarrassing question that reduces a respondent’s reluctance to answer by suggesting that certain behavior is not unusual
EX: “Some people have the time to brush their teeth three times per day, but others do not. How often did you brush your teeth yesterday?”
Counterbiasing Statement
Asking 2 things in 1 question
EX: “How would you rate the associate’s knowledge & helpfulness?”
Double-Barreled Question
Wording the question with additional information
- This HELPS a respondent remember their experience
Aided Recall
Wording the question without any additional information
- This DOES NOT HELP a respondent remember their experience
Unaided Recall
- The first questions asked
- Are used to select the participants who meet the specific criteria required to take the survey
Screening Questions
- Are asked immediately after screening questions
- Shows the respondent that the survey is easy to complete & generates interest
Warm up questions
- Are asked after major sections of questions or changes in question format
- Notifies the respondent that the subjects of questions will change
Transitions question
- Are asked in the middle or close to the end
- Respondent is close to completing the survey & is informed there are not many questions remaining
Complicated & Difficult-to-Answer Questions
- Are asked at the very end
- These are personal & possibly offensive questions
Classification & Demographic Questions
Results when how the questions are ordered affects the way a person responds or when the choices provided favor 1 response over another
Order Bias
The ordering of questions throughout a survey
-Asking a question that does not apply to the respondent may be irritating or cause a biased response
Survey Flow
Focusing on 1 answer & comparing all other answers to it
Anchoring
Starting with broad questions then gradually getting into more specific questions
-Allows researchers to understand the respondent’s frame of reference before asking more specific questions
EX: How satisfied are you with your overall life?
How satisfied are you with your finances?
How satisfied are you with your significant other?
How satisfied are you with your career?
Funnel Technique
Screens out respondents who are not qualified to answer a second question
- “Screening questions”
- These usually provide an N/A option for respondents who cannot answer the question
Filter Questions
Software programs like Qualtrics that allow special features to facilitate survey design
Survey Technology
Having a friend take the survey before it’s launched to discover problems
Pretest Composition
A smaller group of people selected from the entire population
sample
A group of people with similar characteristics
population
EVERYONE in a population is selected
EX: 2010 United States income census
Census
Who do we sample?
The people we are trying to understand
Why sample?
- Pragmatic reasons (less cost, less time, etc.)
- Accurate & reliable results
- Destruction of test units
A list of elements from which a sample may be drawn
- Also called “Working population”
Sampling Frame
Occurs when certain sample elements are not listed or are not accurately represented in a sampling frame
- Almost every list excludes some members of the population
Sampling Frame Error
Companies that maintain lists of people who are willing to participate in marketing research
Sampling Services
Lists of respondents who have agreed to participate in marketing research along with the email contact information for these individuals
Online Panels
The difference between the sample result & the result of a census
- Larger sample size decreases these errors
Random Sampling Error
The difference between the sample value & the true value of the population mean
- Function of n
- Margin of error
Chance Variation
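A sketch of one common margin-of-error formula for a proportion at 95% confidence; this particular formula is an assumption added for illustration, not from the cards, but it shows the margin shrinking as n grows:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """95% margin of error for a proportion under simple random sampling."""
        return z * math.sqrt(p * (1 - p) / n)

    print(margin_of_error(400))    # about 0.049 (roughly +/- 5 points)
    print(margin_of_error(1600))   # about 0.0245 -- larger n, smaller error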
Errors in the execution of the study’s design
EX: How the researcher selects the sample
Systematic Non-Sampling Error
Sampling procedure that ensures that various subgroups of a population with a certain characteristic will be represented to the exact extent that the researcher wants
- NOT randomly selected
- Also called “Demographically-matched sampling”
EX: A set number of participants who own cats
Quota Sampling
ARE NOT random samples because they are convenience samples
- People make the choice to participate or not
- Sampling units can be randomly selected; survey software can help
EX: Every nth visitor, or visitors who stay on the page for 30 seconds
EX: Frequent visitors
Website Visitors
Every participant has a chance of being selected
- The chance is KNOWN
Probability Sampling
Every participant has a chance of being selected
- The chance is UNKNOWN
Non-Probability Sampling
T/F Market research usually relies on probability sampling.
FALSE. It usually relies on non-probability sampling.
Sampling people who are easy to find or gather data from
EX: Using Facebook to find participants
Convenience Sampling
Using an experienced researcher’s judgment to select the participants
- Test market cities
- Incidence rates
Judgement Sampling
Specific cities to sample from because they have a diverse population
Test Market Cities
The percentage of participants with the characteristic needed
Incidence Rates
Asking initial respondents to refer additional respondents to take the survey
- Similarity
- Focus groups
- This works best with LOW incidence rates
EX: Asking a respondent who has had plastic surgery to list others they know who have gotten plastic surgery too
Snowball sampling
Assures each element in the population has an equal chance of being selected
- Assigning a number then randomly selecting
- Random digit dialing
EX: Winning the lottery because there’s a 1 out of 10 chance the ball will be your number
Simple Random Sampling
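A minimal Python sketch of a simple random sample drawn from a hypothetical numbered sampling frame:

    import random

    # Hypothetical sampling frame: 1,000 numbered customers.
    sampling_frame = list(range(1, 1001))

    # Every element has the same known chance (100 out of 1,000) of being selected.
    sample = random.sample(sampling_frame, k=100)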
A starting point is selected by a random process & then every nth number on the list is selected
- Initial starting point is created using a random number generator
- Skip interval
Systematic Sampling
A skip interval is calculated by dividing
Population size / Desired sample size
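A minimal Python sketch of systematic sampling with the skip-interval formula above, using a hypothetical 1,000-element frame and a desired sample of 100:

    import random

    sampling_frame = list(range(1, 1001))   # hypothetical working population
    desired_sample_size = 100

    skip_interval = len(sampling_frame) // desired_sample_size   # 1000 / 100 = 10
    start = random.randint(0, skip_interval - 1)                 # random starting point
    sample = sampling_frame[start::skip_interval]                # every nth element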
Simple random sub-samples that are more or less equal on some characteristic are drawn from within each stratum of the population
- Similar to Quota sampling, but this IS randomly selected
- Has a select stratification variable
Stratified Sampling
This must be a characteristic of the population elements
- Is known to impact the DV
- Is a grouping variable
- The mean is analyzed
EX: Customer firm size
Stratification Variable
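A minimal Python sketch of stratified sampling, grouping a hypothetical customer list by the stratification variable (firm size) and drawing a simple random sub-sample within each stratum:

    import random
    from collections import defaultdict

    # Hypothetical customer firms, with firm size as the stratification variable.
    population = [
        {"firm": "A", "size": "small"}, {"firm": "B", "size": "small"},
        {"firm": "C", "size": "large"}, {"firm": "D", "size": "large"},
        {"firm": "E", "size": "small"}, {"firm": "F", "size": "large"},
    ]

    # Group the population elements into strata.
    strata = defaultdict(list)
    for element in population:
        strata[element["size"]].append(element)

    # Draw a simple random sub-sample from within each stratum.
    sample = []
    for stratum in strata.values():
        sample.extend(random.sample(stratum, k=2))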
Randomly selecting clusters or elements within subgroups
- 1 step versus 2 step
Cluster Sampling
Selecting a cluster or multiple clusters based upon where they are geographically
Area Cluster Sampling
A tendency for respondents to agree with the viewpoints expressed by a survey
Acquiescence Bias
An error caused by the improper administration or execution of the research task
Administrative Error
Attempts to try and contact those sample members missed in the initial attempt
Call backs
Letter that accompanies a questionnaire to induce the reader to complete and return the questionnaire
Cover Letter
A category of administrative error that occurs because of incorrect data entry, incorrect computer programming, or other procedural errors during data analysis
Data Processing Error
A survey method that requires the interviewer to travel to the respondent’s location to drop off questionnaires that will be picked up later
Drop off method
A category of response bias that results because some individuals tend to use extremes when responding to questions
Extremity Bias
Communication that allows spontaneous two-way interaction between the interviewer and the respondent
Interactive Survey Approaches
Potential respondents in the sense that they are members of the sampling frame but do not receive the request to participate in the research
No Contacts
Two-way communication by which respondents give answers to static questions that do not allow a dynamic dialog
Noninteractive Survey Approaches
The statistical differences between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond
Nonresponse Error
Refers to some true value of a phenomenon within a population
Population Parameter
Screening procedure that involves a trial run with a group of respondents to iron out fundamental problems in the survey design
Pretesting
A bias that occurs when respondents either consciously or unconsciously answer questions with a certain slant that misrepresents the truth
Response Bias
A more formal term for a survey emphasizing that respondents’ opinions presumably represent a sample of the larger target population’s opinion
Sample Survey
Error resulting from some imperfect aspect of the research design that causes respondent error or from a mistake in the execution of the research
Systematic Error
Errors due to the inadequacies of the actual respondents to represent the population of interest
Sampling Error
Sampling errors are caused by
- Method of sampling used
- Size of the sample
Sampling errors are reduced by
- Increasing the size of the samples
- Using an appropriate sampling method
Types of Survey Methods:
- Person-administered
- Self-administered
- Telephone-administered
Data collection methods that require the presence of a trained human interviewer who asks questions and records the subject’s answers
(in-home, mall-intercept)
Person-Administered Surveys
Advantages of Person-Administered Surveys:
- Adaptability
- Rapport
- Feedback
- Quality of responses
A _________ is a scale type that has respondents describe their attitude using a series of bipolar rating scales.
semantic differential
____ ______ assign numbers and letters for identification
nominal scale