Exam 1 Flashcards
Aims to clarify ambiguous situations or discover ideas that may amount to true business opportunities. Does not provide conclusive evidence from which to determine a particular course of action. Can be useful in helping to better define a marketing problem or identify a market opportunity.
Exploratory Research
Describes characteristics of objects, people, groups, organizations, or environments. Tries to “paint a picture” of a given situation. Addresses who, what, when, where, why, and how questions. Accuracy is critically important. Researchers usually conduct studies with a considerable understanding of the marketing situation.
Descriptive Research
Allows decision-makers to make causal inferences. What brought some event about? Seeks to identify cause-and-effect relationships to show that one event actually makes another happen.
Causal Research
What are the three types of marketing research?
Exploratory Research, Descriptive Research, Causal Research
The process of developing and deciding among alternative ways of resolving a problem or choosing from among alternative opportunities
Decision Making
A situation that makes some potential competitive advantage possible
Market Opportunity
A business situation that makes some significant negative consequence more likely.
Market Problem
Observable cues that serve as a signal of a problem.
Symptoms
List the six major stages of the marketing research process in order
Defining research objectives, Planning a research design, Planning a sample, Collecting data, Analyzing data, Formulating conclusions and preparing a report
A formal, logical explanation of some event(s) that includes predictions of how things relate to one another.
Theory
A formal statement, derived from theory, explaining some specific outcome.
Hypothesis
A single study addressing one or a small number of research objectives
Research Project
Numerous research studies that come together to address multiple, related research questions
Research program
A conclusion that when one thing happens, another specific thing will follow.
Causal Inference
Three critical pieces of causal evidence are:
Temporal Sequence, Concomitant Variation, Nonspurious Association
Deals with the time order of events. Having an appropriate causal order is a necessary criterion for causality.
Temporal Sequence
Occurs when two events “covary,” meaning they vary systematically. In causal terms, when a change in the cause occurs, a change in the outcome is also observed.
Concomitant Variation
Means any covariation between a cause and an effect is indeed because of the cause and not simply owing to some other variable
Nonspurious Association
Causal research should do all of the following:
- Establish the appropriate causal order
- Measure the concomitant variation (relationship) between the presumed cause and the presumed effect
- Examine the possibility of spuriousness by considering the presence of alternative plausible causal factors
Degrees of Causality
- Absolute causality
- Conditional causality
- Contributory causality
When the cause is both necessary and sufficient to bring about the effect, this is:
Absolute causality
When a cause is necessary but not sufficient to bring about an effect, this is:
Conditional causality
Perhaps the weakest form of causality. When a cause need be neither necessary nor sufficient to bring about an effect, this is:
Contributory causality
An association that is not true is:
Spurious Association
What holds the greatest potential for establishing cause-and-effect relationships?
Marketing Experiments
What is a carefully controlled study in which the researcher manipulates a proposed cause and observes any corresponding change in the proposed effect?
An Experiment
Represents a way of describing public opinion by collecting primary data through communicating directly with individual sampling units; provides a snapshot at a given point in time.
A Survey
People who answer an interviewer’s questions verbally or provide answers to written questions through any media delivery (paper or electronic).
Respondents
A more formal term for a survey emphasizing that respondents’ opinions presumably represent a sample of the larger target population’s opinion is:
Sample survey
Which term is most often associated with quantitative research?
Survey
List sources of error in survey research
Total survey error, Sampling error, Systematic Error, Sample bias, Respondent Error, Nonrespondents, Nonresponse error
The sum of two major sources of error: sampling error and systematic error due to some issue with the respondent or the survey administration.
Total survey error
Inadequacies of the actual respondents to represent the population of interest cause:
Sampling error
Error resulting from some imperfect aspect of the research design that causes respondent error or from a mistake in the execution of the research
Systematic Error
A persistent tendency for the results of a sample to deviate in one direction from the true value of the population parameter.
Sample bias
A category of sample bias resulting from some respondent action such as lying or inaction such as not responding
Respondent Error
Sample members who are mistakenly not contacted or who refuse to provide input in the research
Nonrespondents
What are the two major categories of respondent error?
Response bias and nonresponse error
The statistical differences between a survey that includes only those who responded and a perfect survey that would also include those who failed to respond are called:
Nonresponse error
Potential respondents (members of the sampling frame) who never receive the request to participate in the research are called:
No contacts
People who are unwilling to participate in a research project
Refusals
A bias that occurs because people who feel strongly about a subject are more likely to respond to survey questions than people who feel indifferent about it
Self-selection bias
A bias that occurs when respondents either consciously or unconsciously answer questions with a certain slant that misrepresents the truth is called
Response bias
List five types of Response Bias
- Acquiescence bias
- Extremity bias
- Interviewer bias
- Auspices bias
- Social desirability bias
Tendency of a respondent to maintain a consistent response style, often going along with and agreeing with the viewpoint of a survey
Acquiescence bias
A category of response bias that results because some individuals tend to use extremes when responding to questions
Extremity bias
A response bias that occurs because the presence of the interviewer influences respondents’ answers.
Interviewer bias
Bias in responses caused by respondents’ desire, either conscious or unconscious, to gain prestige or appear in a different social role
Social desirability bias
An error caused by the improper administration or execution of the research task
Administrative error
List four Administrative errors
Data processing error, Sample selection error, Interviewer error, Interviewer cheating
A category of administrative error that occurs because of incorrect data entry, incorrect computer programming, or other procedural errors during data analysis
Data processing error
An administrative error caused by improper sample design or sampling procedure execution
Sample selection error
Mistakes made by interviewers failing to record survey responses correctly
Interviewer error
The practice of filling in fake answers or falsifying questionnaires while working as an interviewer
Interviewer cheating
Interactive face-to-face communication in which an interviewer asks a respondent to answer questions.
Personal interview
List Advantages of Personal Interviews
i. Opportunity for Feedback
ii. Probing Complex Answers
iii. Length of Interview
iv. Completeness of Questionnaire
v. Props and Visual Aids
vi. High Participation Rate
List Disadvantages of Personal Interviews
i. Interviewer Influence
ii. Lack of Anonymity of Respondent
iii. High Cost
iv. Need for Several Callbacks
List Ways researchers gather information
i. Mall-intercept interview
ii. Door-to-door interviews
Personal interviews conducted in a shopping center or similar public area
Mall-intercept interview
Personal interviews conducted at respondents’ doorsteps in an effort to increase the participation rate in the survey
Door-to-door interviews
Advantages of conducting surveys using self-administered questionnaires
- Geographic Flexibility
- Lower cost
- Respondent Convenience
- Respondent Anonymity
Disadvantages of conducting surveys using self-administered questionnaires
- Response rate
- Survey error
- Communication Problems
Surveys in which the respondent takes the responsibility for reading and answering the questions without having them stated orally by an interviewer
Self-administered questionnaires
The number of questionnaires returned and completed divided by the number of sample members given a chance to participate in the survey is the
Response rate
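As a quick arithmetic illustration of the definition above (the numbers are hypothetical):

```python
# Response rate = completed questionnaires / sample members given a chance to participate.
def response_rate(completed: int, contacted: int) -> float:
    """Return the response rate as a proportion."""
    return completed / contacted

# 340 completed questionnaires out of 1,000 sample members contacted:
rate = response_rate(340, 1000)
print(f"{rate:.1%}")  # prints "34.0%"
```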
Studies in which various segments of a population are sampled and data collected at a single point in time.
Cross-sectional studies
Studies in which data are collected at different points in time
Longitudinal studies
Tendency for knowledge of who is sponsoring the research to affect respondents’ answers
Auspices bias
List two Categories of Respondent Error
Nonresponse Error and Response bias
List two Categories of Survey Error (Total Error)
Random Sampling Error and Systematic (Non-Sampling) Error
List two categories of Systematic (Non-Sampling) Error
Respondent Error and Administrative Error
List the different communication methods available for data gathering in survey research
- Interviewer-administered survey methods (Personal Interviews and Telephone Interviews)
- (Respondent) Self-administered survey methods (Paper-based and Electronic)
- Mixed-mode surveys
A form of direct communication in which an interviewer asks respondents questions face-to-face.
Personal Interview
Advantages of Telephone Interviews
- Relatively high speed of data collection
- Inexpensive compared to personal interviews
- Better respondent anonymity than personal interviews
- Relatively higher respondent cooperation
- Lower nonresponse compared to personal interviews.
Disadvantages of Telephone Interviews
- Problems in getting representative samples; unlisted phone numbers; random digit dialing as solution
- Problem of answering machines & faxes
- Need for callbacks
- Respondent can easily hang up
- Inability to use visual aids
- Need for shorter forms of questioning
- National “Do-not-call list”
Screening procedure that involves a trial run with a group of respondents to iron out fundamental problems in the survey design.
Pretesting
List 3 Basic Ways to Pretest
- Screen the questionnaire with other research professionals
- Have the client or the research manager review the finalized questionnaire
- Collect data from a small number of respondents
Advantages of Surveys
Speed, Cost, Accuracy, Efficiency
Disadvantages of Surveys
Survey error and Communication Problems
The American Marketing Association’s code of ethics expresses researchers’ obligation to:
- Protect the public from misrepresentation and exploitation under the guise of marketing research
- Protect respondents’ right to privacy
- Avoid the use of deception
- Inform respondents about the purpose of the research
- Maintain confidentiality and honesty in collecting data
- Maintain objectivity in reporting data
- The process of assigning numbers or scores to attributes of people or objects.
- The process of describing some property of a phenomenon of interest by assigning numbers in a reliable and valid way
Measurement
Precise measurement requires the following 3 things:
- Careful conceptual definition
- Operational definition of the concept
- Assignment rules by which numbers or scores are assigned to different levels of the concept that an individual (or object) possesses
A generalized idea about a class of objects, attributes, occurrences, or processes
Concept
A concept that is measured with multiple variables
Construct
Anything that varies or changes from one instance to another; can exhibit differences in value, usually in magnitude or strength, or in direction.
Variable
What must be precisely defined for effective measurement?
Concepts
A definition that gives meaning to a concept by specifying what the researcher must do (i.e. activities or operations that should be performed) in order to measure the concept under investigation
Operational definition
The process of identifying scales that correspond to variance in a concept.
Operationalization
What are 2 Rules of Measurement?
- Guidelines established by the researcher for assigning numbers or scores to different levels of the concept (or attribute) that different individuals (or objects) possess
- The process is facilitated by the operational definition.
To effectively carry out any measurement (whether in the physical or social sciences) we need to use some form of
a scale
Any series of items (numbers) arranged along a continuous spectrum of values for the purpose of quantification (i.e. for the purpose of placing objects based on how much of an attribute they possess) is a
Scale
What are three ways in which the word “scale” is used in marketing research?
- The level at which a variable is measured
- An index or composite measure of a construct
- The response categories provided for a closed-ended question in a questionnaire
Numbers assigned in measurement can take on different levels of meaning depending on which of four mapping characteristics the numbers possess:
- Classification
- Order
- Distance
- Origin
The numbers are used only to group or sort responses. No order exists
Classification (Nominal Scale)
The numbers are ordered. One number is greater than, less than, or equal to another
Order (Ordinal Scale)
Differences between the numbers are ordered. The difference between any pair of numbers is greater than, less than, or equal to the difference between any other pair of numbers
Distance (Interval Scale)
The number series has a unique origin indicated by the number zero
Origin (Ratio Scale)
What are the Four Levels of Scale Measurement
- Nominal Scale
- Ordinal Scale
- Interval Scale
- Ratio Scale
a scale in which the numbers or letters assigned to an object serve only as labels for identification or classification, e.g. Gender (Male=1, Female=2)
Nominal Scale
a scale that arranges objects or alternatives according to their magnitude in an ordered relationship, e.g. Academic status (Freshman=1, Sophomore=2, Junior=3, etc.)
Ordinal Scale
a scale that arranges objects according to their magnitude and distinguishes this ordered arrangement in units of equal intervals, but does not have a natural zero representing absence of the given attribute, e.g. the temperature scale (40°C is not twice as hot as 20°C)
Interval Scale
a scale that has absolute rather than relative quantities and an absolute (natural) zero where there is an absence of a given attribute, e.g. income, age.
Ratio Scale
What’s the measure where the variables need not be strongly correlated with each other?
Index measures
What’s the measure where the variables are typically strongly correlated as they are all assumed to be measuring the construct in the same way?
Composite measures
What’s the construct of an Index Measure?
Social class
Examples of Index Measures
Linear combination (index) of occupation, education, income. Social class = β1Education + β2Occupation + β3Income
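The linear combination above can be sketched in code; the beta weights and component scores below are purely hypothetical, for illustration only:

```python
# A social-class index as a weighted linear combination of its components.
# The weights (betas) are hypothetical placeholders, not estimated values.
def social_class_index(education: float, occupation: float, income: float,
                       b1: float = 0.4, b2: float = 0.3, b3: float = 0.3) -> float:
    return b1 * education + b2 * occupation + b3 * income

# Component scores on a 1-10 scale for a hypothetical respondent:
print(round(social_class_index(education=8, occupation=6, income=7), 1))  # 0.4*8 + 0.3*6 + 0.3*7 = 7.1
```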
What’s the construct of a Composite Measure?
Attitude Toward Brand A
Examples of Composite Measures
Extent of agreement/disagreement with multiple statements:
a) “I like Brand A very much”
b) “Brand A is the best in the market”
c) “I always buy Brand A”
Statements a), b), and c) constitute a “scale” to measure attitudes toward Brand A
A scale created by simply summing (adding together) the response to each item making up the composite measure.
Summated Scale
Means that the value assigned for a response is treated oppositely from the other items.
Reverse Coding
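A minimal sketch of a summated scale with one reverse-coded item, assuming hypothetical responses on a 5-point Likert scale:

```python
# Reverse coding on a 5-point scale: flip a response so 1<->5, 2<->4, etc.
def reverse_code(score: int, scale_max: int = 5) -> int:
    return scale_max + 1 - score

# Responses to three attitude items; the third is negatively worded
# (e.g. "I would never buy Brand A"), so it is reverse coded before summing.
responses = {"item1": 4, "item2": 5, "item3_negative": 2}
summated = responses["item1"] + responses["item2"] + reverse_code(responses["item3_negative"])
print(summated)  # 4 + 5 + (6 - 2) = 13
```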
What are the Three criteria commonly used to assess the quality of measurement scales in marketing research?
- Reliability
- Validity
- Sensitivity
The degree to which a measure is free from random error and therefore gives consistent results or an indicator of the measure’s internal consistency
RELIABILITY
Two keys to Reliability are:
- Stability (Repeatability)
- Internal Consistency
The extent to which results obtained with the measure can be reproduced.
Stability
The degree of homogeneity among the items in a scale or measure
Internal Consistency
How do you test for stability?
Test-Retest Method
Administering the same scale or measure to the same respondents at two separate points in time to test for stability.
Test-Retest Method
Two Test-Retest Reliability Problems
- The pre-measure, or first measure, may sensitize the respondents and subsequently influence the results of the second measure
- Time effects that produce changes in attitude or other maturation of the subjects
Assessing internal consistency by checking the results of one-half of a set of scaled items against the results from the other half.
Split-half Method
The most commonly applied estimate of a multiple item scale’s reliability.
Coefficient alpha (α)
Represents the average of all possible split-half reliabilities for a construct.
Coefficient alpha (α)
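The definition above can be computed from the standard formula, α = k/(k−1) × (1 − Σ item variances / variance of totals). A minimal sketch on hypothetical Likert data:

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
    `items` is a list of columns, one list of scores per scale item."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's summated score
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical responses from five people to three Likert items:
item_a = [4, 5, 3, 4, 2]
item_b = [4, 4, 3, 5, 2]
item_c = [5, 5, 2, 4, 1]
print(round(cronbach_alpha([item_a, item_b, item_c]), 2))  # 0.92
```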
Assessing internal consistency by using two scales designed to be as equivalent as possible.
Equivalent Forms
The accuracy of a measure or the extent to which a score truthfully represents a concept.
VALIDITY
The ability of a measure (scale) to measure what it is intended to measure.
VALIDITY
Establishing validity involves answering the following questions:
- Is there a consensus that the scale measures what it is supposed to measure?
- Does the measure correlate with other measures of the same concept?
- Does the behavior expected from the measure predict actual observed behavior?
Approaches to Establishing Validity
- Face or Content Validity
- Criterion Validity
- Construct Validity
Criterion Validity
- Concurrent
- Predictive
The subjective agreement among professionals that a scale logically appears to measure what it is intended to measure.
Face or content validity
The degree of correlation of a measure with other standard measures of the same construct
Criterion Validity
The new measure/scale is taken at the same time as the criterion measure.
Concurrent Validity
New measure is able to predict a future event / measure (the criterion measure).
Predictive Validity
Degree to which a measure/scale confirms a network of related hypotheses generated from theory based on the concepts.
Construct Validity
Another way of expressing internal consistency; highly reliable scales contain convergent validity.
Convergent Validity
Represents how unique or distinct a measure is; a scale should not correlate too highly with a measure of a different construct.
Discriminant Validity
What is a necessary condition for validity?
Reliability
Reliability is not a sufficient condition for
Validity
What is a necessary but not sufficient condition for Validity?
Reliability
The ability of a measure/scale to accurately measure variability in stimuli or responses
SENSITIVITY
The ability of a measure/scale to make fine distinctions among respondents with/objects with different levels of the attribute (construct).
SENSITIVITY
What is generally increased by adding more response points or adding scale items?
Sensitivity
What are the four Characteristics of Experiments?
- Subjects
- Experimental Conditions
- Main Effect
- Interaction Effect
The sampling units for an experiment, usually human respondents who provide measures based on the experimental manipulation.
Subjects
One of the possible levels of an experimental (independent) variable manipulation.
Experimental Conditions
What are the two types of variables included in the statistical analysis as a way of controlling or accounting for variance due to that variable?
- Blocking variables
- Covariates
Categorical control variables are called
Blocking variables
A continuous control variable is called a
Covariate
The experimental difference in dependent variable means between the different levels of any single experimental variable.
Main Effect
Differences in dependent variable means due to a specific combination of independent variables.
Interaction Effect
What are Basic Issues in Experimental Design?
- Manipulation of the Independent Variable
- Selection and Measurement of the Dependent Variable
- Selection and Assignment of Test Units
- Sample Selection And Random Sampling Errors
- Establishing Control
The way an experimental variable is manipulated.
Experimental treatment
A group of subjects to whom an experimental treatment is administered.
Experimental Group
A group of subjects to whom no experimental treatment is administered.
Control Group
The two types of experimental variables are
Categorical and Continuous
Categorical variables are
described by class or quality
Continuous variables are
described by quantity (level)
A specific treatment combination associated with an experimental group is a
Cell
The following three items describe:
- Several experimental treatment levels (different values of the independent variable) may be used.
- More than one independent variable may be examined.
- Each treatment combination is associated with a cell.
Manipulation of the Independent Variable
The following two items describe:
- Selecting dependent variables that are relevant and truly represent an outcome of interest is crucial.
- Choosing the right dependent variable is part of the problem definition process.
Selection and Measurement of the Dependent Variable
The subjects or entities whose responses to treatment are measured or observed.
Test units
Subject selection, experimental design, and unrecognized extraneous variables cause:
Systematic or nonsampling error
Four ways to overcome sampling errors
- Randomization
- Matching
- Repeated measures
- Control over extraneous variables
List two ways of Establishing Control
- Constancy of Conditions
- Counterbalancing
Subjects in all experimental groups are exposed to identical conditions except for the differing experimental treatments.
Constancy of Conditions
Attempts to eliminate the confounding effects of order of presentation by varying the order of presentation (exposure) of treatments to subject groups.
Counterbalancing
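Counterbalancing can be sketched by giving each subject group a different presentation order; treatments A, B, and C are hypothetical:

```python
from itertools import permutations

# Vary the order of presentation of treatments across subject groups
# so that order-of-presentation effects cancel out.
treatments = ["A", "B", "C"]
orders = list(permutations(treatments))  # all 6 possible presentation orders

for group, order in enumerate(orders, start=1):
    print(f"Group {group}: {' -> '.join(order)}")
```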
When there is an alternative explanation beyond the experimental variables for any observed differences in the dependent variable.
Experimental Confound
Once a potential confound is identified, the validity of the experiment is severely questioned.
Experimental Confound
What can reduce the likelihood of confounds?
Careful experimental design
What are three Sources of Experimental Confound?
- Sampling error
- Systematic error
- Later-identified extraneous variables
An experimental design element or procedure that unintentionally provides subjects with hints about the research hypothesis.
Demand Characteristic
Occurs when demand characteristics actually affect the dependent variable.
Demand Effect
People will perform differently from normal when they know they are experimental subjects.
Hawthorne Effect
List four ways of Reducing Demand Effects
- One treatment/subject
- Use a disguise
- Use a blind administrator
- Isolate subjects
A situation in which the researcher has more complete control over the research setting and extraneous variables.
Laboratory Experiment
Research projects involving experimental manipulations that are implemented in a natural environment.
Field Experiments
A single independent variable and a single dependent variable.
Basic experimental designs
Allows for an investigation of the interaction of two or more independent variables.
Factorial experimental design
List characteristics of Laboratory Experiments
Artificial: Low Realism, Few Extraneous Variables, High Control, Low Cost, Short Duration, Subjects Aware of Participation
List characteristics of Field Experiments
Natural: High Realism, Many Extraneous Variables, Low Control, High Cost, Long Duration, Subjects Unaware of Participation
Involves repeated measures because with each treatment the same subject is measured.
Within-Subjects Design
Each subject receives only one treatment combination.
Usually advantageous, although more costly.
Validity is usually higher.
Between-Subjects Design
Two Issues of Experimental Validity
Internal Validity
Manipulation Checks
The extent that an experimental variable is truly responsible for any variance in the dependent variable.
Internal Validity
A validity test of an experimental manipulation to make sure that the manipulation does produce differences in the independent variable.
Manipulation Checks
List the six Extraneous Variables Affecting Internal Validity
- History
- Mortality
- Selection
- Maturation
- Testing
- Instrumentation
Occurs when some change other than the experimental treatment occurs during the course of an experiment that affects the dependent variable.
History Effect
A change in the dependent variable that occurs because members of one experimental group experienced different historical situations than members of other experimental groups.
Cohort Effect
Effects that are a function of time and the naturally occurring events that coincide with growth and experience.
Maturation Effects
A nuisance effect occurring when the initial measurement or test alerts or primes subjects in a way that affects their response to the experimental treatments.
Testing effects
A change in the wording of questions, a change in interviewers, or a change in other procedures causes a change in the dependent variable.
Instrumentation Effect
A sample bias that results from differential selection of respondents for the comparison groups, or a sample selection error.
Selection Effect
Occurs when some subjects withdraw from the experiment before it is completed.
Mortality Effect (Sample Attrition)
Uncontrollable events occurring in the environment between before and after measurements
History
Changes in subjects during the course of the experiment
Maturation
A before measure that alerts or sensitizes subject to the nature of experiment or second measure.
Testing
Changes in instrument result in response bias
Instrumentation
Sample selection error because of differential selection of comparison groups
Selection
Sample attrition; some subjects withdraw from experiment
Mortality
Example of History Extraneous Variable
A major employer closes its plant in test market area.
Example of Maturation Extraneous Variable
Subjects become tired during the experiment.
Example of Testing Extraneous Variable
A questionnaire about the traditional role of women triggers enhanced awareness of females in an experiment.
Example of Instrumentation Extraneous Variable
New questions about women are interpreted differently from earlier questions.
Example of Selection Extraneous Variable
Control group and experimental group are self-selected groups based on preference for soft drinks
Example of Mortality Extraneous Variable
Subjects in one group of a hair-dyeing study marry rich widows and move to Florida
The accuracy with which experimental results can be generalized beyond the experimental subjects.
External Validity
Trade-Offs Between Internal and External Validity
Artificial laboratory experiments usually are high in internal validity, while naturalistic field experiments generally have less internal validity, but greater external validity.
Ethical Issues in Experimentation
- Debriefing experimental subjects
- Attempts to interfere with a competitor’s test-marketing efforts
A subset, or some part, of a larger population.
Sample
Any complete group of entities that share some common set of characteristics.
Population (universe)
An individual member of a population.
Population Element
An investigation of all the individual elements that make up a population.
Census
List the seven Stages in the Selection of a Sample
- Define the target population
- Select a sampling frame
- Determine if a probability or nonprobability sampling method will be chosen
- Plan procedure for selecting sampling units
- Determine sample size
- Select actual sample units
- Conduct fieldwork
Why Sample?
- Budget and time constraints, Limited access to total population (Pragmatic)
- (Relatively) Accurate and Reliable Results
- Destruction of Test Units
A list of elements from which a sample may be drawn; also called working population.
The Sampling Frame
Occurs when certain sample elements are not listed or are not accurately represented in a sampling frame.
Sampling Frame Error
A single element or group of elements subject to selection in the sample.
Sampling Unit
A unit selected in the first stage of sampling.
Primary Sampling Unit (PSU)
A unit selected in the second stage of sampling.
Secondary Sampling Unit
A unit selected in the third stage of sampling.
Tertiary Sampling Unit
The difference between the sample result and the result of a census conducted using identical procedures.
Random Sampling Error
A statistical fluctuation that occurs because of chance variations in the elements selected for a sample.
Random Sampling Error
Error that results from nonsampling factors, primarily the nature of a study’s design and the correctness of execution; it is not due to chance fluctuation.
Systematic Sampling Error
Random sampling errors and systematic errors associated with the sampling process may combine to yield a sample that is less than perfectly representative of the population.
Less than Perfectly Representative Samples
A sampling technique in which every member of the population has a known, nonzero probability of selection.
Probability Sampling
A sampling technique in which units of the sample are selected on the basis of personal judgment or convenience. The probability of any particular member of the population being chosen is unknown.
Nonprobability Sampling
Four types of Nonprobability Sampling
Convenience Sampling
Judgment (Purposive) Sampling
Quota Sampling
Snowball Sampling
Obtaining those people or units that are most conveniently available.
Convenience Sampling
An experienced individual selects the sample based on personal judgment about some appropriate characteristic of the sample member.
Judgment (Purposive) Sampling
Ensures that various subgroups of a population will be represented on pertinent characteristics to the exact extent that the investigator desires.
Quota Sampling
Possible Sources Of Bias in Nonprobability Sampling
The way respondents are chosen (similar to the interviewer, easily found, willing to be interviewed, middle-class)
Advantages of Quota Sampling
Speed of data collection
Lower costs
Convenience
A sampling procedure in which initial respondents are selected by probability methods and additional respondents are obtained from information provided by the initial respondents.
Snowball Sampling
Five types of Probability Sampling
Simple Random Sampling, Systematic Sampling, Stratified Sampling, Cluster Sampling, Multistage Sampling
Assures each element in the population of an equal chance of being included in the sample.
Simple Random Sampling
A starting point is selected by a random process and then every nth number on the list is selected.
Systematic Sampling
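A minimal sketch of systematic sampling over a hypothetical frame of 100 elements:

```python
import random

def systematic_sample(frame: list, n: int) -> list:
    """Pick a random starting point, then every kth element, where k = len(frame) // n."""
    k = len(frame) // n            # sampling interval
    start = random.randrange(k)    # random starting point in the first interval
    return frame[start::k][:n]

frame = list(range(1, 101))        # sampling frame of 100 elements
sample = systematic_sample(frame, 10)
print(sample)                      # e.g. [4, 14, 24, ...] when the random start is 3
```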
Simple random subsamples that are more or less equal on some characteristic are drawn from within each stratum of the population.
Stratified Sampling
The number of sampling units drawn from each stratum is in proportion to the population size of that stratum.
Proportional Stratified Sample
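Proportional allocation can be sketched with hypothetical strata sizes; each stratum's share of the sample mirrors its share of the population:

```python
# Proportional stratified sampling: draw from each stratum in proportion
# to its share of the population. The strata sizes below are hypothetical.
population = {"freshman": 400, "sophomore": 300, "junior": 200, "senior": 100}
total = sum(population.values())
sample_size = 50

allocation = {stratum: round(sample_size * size / total)
              for stratum, size in population.items()}
print(allocation)  # {'freshman': 20, 'sophomore': 15, 'junior': 10, 'senior': 5}

# A simple random subsample of that size is then drawn within each stratum,
# e.g. random.sample(stratum_members, allocation[stratum]).
```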
The sample size for each stratum is allocated according to analytical considerations.
Disproportional Stratified Sample
An economically efficient sampling technique in which the primary sampling unit is not the individual element in the population but a large cluster of elements. Clusters are selected randomly.
Cluster Sampling
Involves using a combination of two or more probability sampling techniques.
Multistage Area Sampling
What Is the Appropriate Sample Design?
- Degree of Accuracy
- National vs. Local
- Resources
- Knowledge of Population
- Time
Advantages of Convenience Sampling
No need for list of population
Advantages of Judgment (Purposive) Sampling
Useful for certain types of forecasting; sample guaranteed to meet a specific objective
Advantages of Quota Sampling
Introduces some stratification of population, requires no list of population
Advantages of Snowball Sampling
Useful in locating members of rare populations
Disadvantages of Convenience Sampling
Unrepresentative samples likely; random sampling error estimates cannot be made; projecting data beyond sample is relatively risky
Disadvantages of Judgment (Purposive) Sampling
Bias due to expert’s beliefs may make sample unrepresentative; projecting data beyond sample is risky
Disadvantages of Quota Sampling
Introduces bias in researcher’s classification of subjects; nonrandom selection within classes means error from population cannot be estimated; projecting data beyond sample is risky
Disadvantages of Snowball Sampling
High bias because sample units are not independent; projecting data beyond sample is risky
Cost and degree of use of Convenience Sampling
Very low cost, extensively used
Cost and degree of use of Judgment (Purposive) Sampling
Moderate cost, average use
Cost and degree of use of Quota Sampling
Moderate cost, very extensively used
Cost and degree of use of Snowball Sampling
Low cost, used in special situations
Cost and degree of use of Simple Random Sampling
High cost, moderately used in practice (most common in random digit dialing and with computerized sampling frames)
Cost and degree of use of Systematic Sampling
Moderate cost, moderately used
Cost and degree of use of Stratified Sampling
High cost, moderately used
Cost and degree of use of Cluster Sampling
Low cost, frequently used
Cost and degree of use of Multistage Sampling
High cost, frequently used, especially in nationwide surveys
Advantages of Simple Random Sampling
Only minimal advance knowledge of population needed; easy to analyze data and compute error
Advantages of Systematic Sampling
Simple to draw sample; easy to check
Advantages of Stratified Sampling
Ensures representation of all groups in sample; characteristics of each stratum can be estimated and comparisons made; reduces variability for sample size
Advantages of Cluster Sampling
If clusters geographically defined, yields lower field cost; requires listing of all clusters, but of individuals only within clusters; can estimate characteristics of clusters as well as of population
Advantages of Multistage Sampling
Depends on techniques combined
Disadvantages of Simple Random Sampling
Requires sampling frame to work from; does not use knowledge of population that researcher may have; larger errors for same sampling size than in stratified sampling; respondents may be widely dispersed so cost may be higher
Disadvantages of Systematic Sampling
May introduce increased variability if sampling interval is related to periodic ordering of the population
Disadvantages of Stratified Sampling
Requires accurate information on proportion in each stratum; if stratified lists are not already available, they can be costly to prepare
Disadvantages of Cluster Sampling
Larger error for comparable size than with other probability samples; researcher must be able to assign population members to unique cluster or else duplication or omission of individuals will result.
Disadvantages of Multistage Sampling
Depends on techniques combined