CH03 Measurement to Build Marketing Insights Flashcards
what is a measurement?
measurement: the process of assigning numbers or labels to persons, objects, or events in accordance with specific rules for representing quantities or qualities or attributes
eg. questions in a survey that provide a scale of 1 to 5 for possible responses
what is a rule?
rule: a guide, method, or command that tells a researcher what to do
eg. assign the numbers 1 through 5 to people according to their disposition to do household chores, where 1 means fully willing to do chores and 5 means not willing to do chores at all
what are the steps in the measurement process?
- identify the concept of interest
- develop a construct
- define the concept constitutively
- define the concept operationally
- develop a measurement scale
- evaluate the reliability and validity of the measurement
explain step 1 of the measurement process (1. identifying the concept of interest)
- identify the concept of interest
- a concept is an abstract idea derived from specific facts
- concepts are used to group similar sense data together
eg. perceptions of “things to sit on” is a concept
eg. of marketing concepts include brand loyalty, consumer satisfaction, market segmentation, purchase intent, etc.
define constructs
constructs: specific types of concepts that exist at higher levels of abstraction
explain step 2 of the measurement process (2. developing a construct)
- a theoretical or conceptual representation of a specific marketing concept in a measurable form
- created to operationalize or measure abstract concepts that cannot be directly observed or quantified
- constructs help researchers define and quantify the concept they are studying, allowing for empirical investigation and statistical analysis
eg. “degree of comfort” is a construct
eg. of marketing constructs include
- brand equity
- perceived quality
- attitude towards a product
- consumer trust
- customer loyalty
- any specific construct will be of value with regard to observable phenomena to the extent that it contributes to:
- explanation
- prediction, and
- control
generally, constructs themselves are not directly observable
eg. what constitutes “comfortable seating” for our research purposes?
- a desk chair that can be used for hours?
- a theatre seat that reclines?
- a child seat for a car?
define constitutive definition
constitutive definition: statement of the meaning of the central idea or concept under study, establishing its boundaries; also known as a theoretical or conceptual definition
define operational definition
operational definition: a statement of precisely which observable characteristics will be measured and the process for assigning a value to the concept
explain step 3 of the measurement process (3. defining the concept constitutively)
it establishes the meaning and boundaries of a concept by specifying its essential features
- a clear and precise description of what the concept represents
- this ensures consistent interpretation and measurement of concepts across studies
a vague constitutive definition can cause an incorrect research question to be addressed
- defining “comfortable seating” as “no back pain” is an improvement
- … but it’s still too general to serve research purposes
in marketing, a constitutive definition of brand loyalty could be:
“the extent to which a customer exhibits repeat purchasing behavior and positive attitudes towards a particular brand”
explain step 4 of the measurement process (4. defining the concept operationally)
- refers to the process of defining a construct in measurable terms for empirical research
- involves specifying the indicators or variables that will be used to assess the construct
in marketing, an operational definition of brand equity could be:
“the sum of a brand’s perceived quality, brand awareness, and brand loyalty scores measured on a 7-point Likert scale”
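that operational definition is just arithmetic over component scores; a minimal Python sketch, assuming one 7-point rating per component (the component names and ratings below are illustrative, not from the text):

```python
# illustrative 7-point Likert ratings for one brand
ratings = {
    "perceived_quality": 6,
    "brand_awareness": 5,
    "brand_loyalty": 4,
}

# per the operational definition: brand equity = sum of the component scores
brand_equity = sum(ratings.values())  # 6 + 5 + 4 = 15
```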
what is construct equivalence?
construct equivalence deals with how people see, understand, and develop measurements of a particular phenomenon
- a construct in cross-cultural comparisons must have the same meaning across cultural groups being surveyed
- that common theoretical meaning of the construct then must be adequately represented in the measurement instrument
eg. the construct “friend” could mean
- someone who is very close psychologically and can be counted on when needed, or …
- close friends, short-time friends, long-time friends, and acquaintances
give an example of the measurement process using the first 4 steps
- concept: familiarity with brands of ketchup
- construct: brand awareness
- constitutive definition: “the extent to which consumers know the brand, recall the brand when promoted, and have previous experiences with the brand”
- operational definition: percentage of respondents who recognize the designated brand of ketchup
what is a scale?
scale: a set of symbols or numbers so constructed that the symbols or numbers can be assigned by a rule to the individuals (or the behaviors or attitudes) to whom the scale is applied
explain step 5 in the measurement process (5. developing a measurement scale)
- creating a measurement scale begins with determining the level of measurement that is desirable or possible
- there are 4 basic levels of measurement: nominal, ordinal, interval, and ratio
- higher levels are more “powerful”
define nominal data
nominal data: scales that partition data into categories that are mutually exclusive and collectively exhaustive
mutually exclusive (i.e., 2 things cannot happen at the same time):
- avoids overlap in the question options
eg. how old are you?
0-30, 31-55, 56-70, 71+
collectively exhaustive:
- all options are included
eg. 71+ (rather than just ending at 70)
typical statistics:
- frequency counts
- percentages
- modes
define ordinal data
ordinal data: scales that maintain the labelling characteristics of nominal scales but also have the ability to order data
ranking type data examples:
- best liked, worst liked
- win, place, or show
- first, second, or third
- small, medium, and large
- comparison rankings: “rank these movies from best to worst”
typical statistic:
- median
define interval data
interval data: scales that have the characteristics of ordinal scales, plus equal intervals between points
comparison type data eg.
- on a “1 to 10” scale
- age, income, etc. as ranges with equal intervals
eg. how old are you?
- 0-20, 21-40, 41-60, 61-80, 81+
typical statistic:
- average, aka arithmetic mean
define ratio data
ratio data: scales that have the characteristics of interval scales, plus a meaningful zero point
flat numeric type data eg.
- age = 50 (not an age range)
- income = $45,000 (not an income range)
- number of children: _____
typical statistic
- geometric mean, aka a mean (average) that indicates the central tendency or typical value of a set of numbers by using the product of their values (as opposed to the arithmetic mean, which uses their sum)
consider 2 and 8
- arithmetic mean = (2+8)/2 = 5
- geometric mean = sqrt (2 x 8) = 4
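the arithmetic above can be checked directly; a minimal Python sketch:

```python
import math

values = [2, 8]

# arithmetic mean: sum of the values divided by their count
arithmetic_mean = sum(values) / len(values)  # (2 + 8) / 2 = 5.0

# geometric mean: nth root of the product of the n values
geometric_mean = math.prod(values) ** (1 / len(values))  # sqrt(2 * 8) = 4.0
```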
explain step 6 in the measurement process (6. evaluating the reliability and validity of the measurement)
the ideal: M = A
where M refers to the measurement and A stands for complete accuracy
the reality: M = A + E
where E = errors
there are 2 types of errors, what are they?
systematic or random errors
compare systematic errors vs random errors
- systematic error results in a constant bias in the measurements
eg. the measurement instrument is flawed
- random error is transient in nature
eg. someone fills out a survey incorrectly
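the M = A + E idea can be illustrated by simulation; a hedged sketch, assuming a true score of 70, a constant instrument bias of 3, and gaussian random error (all numbers invented for illustration):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

true_value = 70        # A: the true value of the characteristic
systematic_bias = 3    # constant bias, eg. a flawed instrument
# each measurement M = A + systematic error + random error
measurements = [true_value + systematic_bias + random.gauss(0, 2)
                for _ in range(1000)]

mean_m = sum(measurements) / len(measurements)
# random error averages out toward 0; systematic bias does not,
# so the mean lands near 73 rather than the true value of 70
```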
what are the reasons for measurement errors?
- difference due to a stable characteristic of individual respondents (eg. personality, values, and intelligence)
- differences due to short-term personal factors (eg. temporary mood swings, health problems, time constraints, or fatigue)
- differences cause by situational factors (eg. distractions or others present in the interview situation)
- differences resulting from variations in administering the survey (interviewers can ask questions with different voice inflections, causing response variation)
- differences due to the sampling of items included in the questionnaire (eg. when researchers attempt to measure the quality of service at McDonald's, the scales and other questions used represent only a portion of the items that could have been used)
- differences due to a lack of clarity in the measurement instrument (eg. a question may be ambiguous, complex, or incorrectly interpreted)
- differences due to mechanical or instrument factors (eg. blurred questionnaires, lack of space to fully record answers, missing pages in a questionnaire, etc.)
define reliability
reliability: degree to which measures …
- are free from random error, and
- provide consistent data
extent to which the survey responses are internally consistent
- a reliable measurement does not change when the concept being measured remains constant in value
what are the 3 ways to assess reliability?
test-retest, the use of equivalent forms, and internal consistency
define test-retest reliability
test-retest reliability: ability of the same instrument to produce consistent results when used a second time under conditions as similar as possible to the original conditions
define stability
stability: lack of change in results from test to retest
define equivalent form reliability
equivalent form reliability: the ability of two very similar forms of an instrument to produce closely correlated results
define internal consistency reliability
internal consistency reliability: the ability of an instrument to produce similar results when used on different samples during the same time period to measure a phenomenon
what is the split-half technique?
split-half technique: a method of assessing the reliability of a scale by dividing the total set of measurement items in half and correlating the results
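a minimal sketch of the split-half technique, assuming a 6-item scale scored 1 to 5 (the response data and the first-three/last-three split are invented for illustration):

```python
# rows = respondents, columns = 6 scale items scored 1-5 (illustrative data)
responses = [
    [4, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 1, 2, 1],
]

# divide the item set in half and total each half per respondent
half_a = [sum(row[:3]) for row in responses]
half_b = [sum(row[3:]) for row in responses]

def pearson(x, y):
    """Pearson correlation between the two sets of half-scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(half_a, half_b)  # a high correlation suggests reliability
```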
define validity
validity: degree to which what the research was intending to measure was actually measured
- refers to the extent to which the measurement instrument and procedure are free from both systematic and random error
a measuring device is valid only if differences in scores reflect true differences in the characteristic being measured
what is face validity?
face validity: the degree to which a measurement seems to measure what it is supposed to measure
what is content validity?
content validity: the representativeness, or sampling adequacy, of the content of the measurement instrument
what is criterion-related validity?
criterion-related validity: the degree to which a measurement instrument can predict a variable that is designated a criterion
define predictive validity
predictive validity: the degree to which a future level of a criterion variable can be forecast by a current measurement scale
define concurrent validity
concurrent validity: the degree to which another variable, measured at the same point in time as the variable of interest, can be predicted by the measurement instrument
define construct validity
construct validity: the degree to which a measurement instrument represents and logically connects, via the underlying theory, the observed phenomenon to the construct
define convergent validity
convergent validity: the degree of correlation among different measurement instruments that purport to measure the same construct
define discriminant validity
discriminant validity: the measure of the lack of association among constructs that are supposed to be different
what is scaling?
scaling: procedures for assigning numbers (or other symbols) to properties of an object in order to impart some numerical characteristics to the properties in question
- scales are either unidimensional or multidimensional
what are the 2 scaling approaches?
- unidimensional: measures only one dimension of a concept, respondent, or object
- multi-dimensional: measures several dimensions of a concept, respondent, or object
define graphic rating scales
graphic rating scales: a measurement scale that includes a graphic continuum, anchored by two extremes
what are itemized rating scales?
itemized rating scales: measurement scales in which the respondent selects an answer from a limited number of ordered categories
what are noncomparative scales?
noncomparative scales: measurement scales in which judgement is made without reference to another object, concept, or person
what are rank-order scales?
rank-order scales: measurement scales in which the respondent compares 2 or more items and ranks them
what are comparative scales?
comparative scales: measurement scales in which one object, concept, or person is compared with another on a scale
define paired comparison scales
paired comparison scales: measurement scales that ask the respondent to pick one of the 2 objects in a set, based on some stated criteria
what are constant sum scales?
constant sum scales: measurement scales that ask the respondent to divide a given number of points, typically 100, among 2 or more attributes, based on their importance to him or her
what is the semantic differential scale?
semantic differential scale: measurement scales that examine the strengths and weaknesses of a concept by having the respondent rate it between dichotomous pairs of words or phrases that could be used to describe it; the means of the responses are then plotted as a profile or image
define a stapel scale
stapel scale: measurement scales that require the respondent to rate, on a scale ranging from +5 to -5, how closely and in what direction a descriptor adjective fits a given concept
define a likert scale
likert scale: measurement scales in which the respondent specifies a level of agreement or disagreement with statements expressing either a favourable or an unfavourable attitude toward the concept under study
what is a purchase-intent scale?
purchase-intent scale: scales used to measure a respondent’s intention to buy or not to buy a product
what is a net promoter score?
net promoter score: a measure of satisfaction; the percentage of promoters minus the percentage of detractors when answering the question, “would you recommend this to a friend?”
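the NPS arithmetic can be sketched directly; conventionally the recommendation question is answered on a 0-10 scale, with 9-10 counted as promoters and 0-6 as detractors (the ratings below are invented for illustration):

```python
# illustrative 0-10 answers to "would you recommend this to a friend?"
ratings = [10, 9, 9, 8, 7, 7, 6, 5, 3, 10]

promoters = sum(1 for r in ratings if r >= 9)   # 9s and 10s
detractors = sum(1 for r in ratings if r <= 6)  # 0 through 6

# % promoters minus % detractors
nps = 100 * (promoters - detractors) / len(ratings)
# 4 promoters - 3 detractors out of 10 respondents -> NPS of 10
```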
considerations in selecting a scale
the nature of the construct being measured
- the objective of the research study has a fundamental effect on the manner in which scales are used for survey measurement
type of scale
- choice of which type of scale to use (depends on the problem at hand and the questions that must be answered)
- research approach (one might select a scale that can be administered over the telephone or via the internet to save interviewing expense)
- ease of administration and development
define balanced scales
balanced scales: measurement scales that use the same number of positive and negative categories
define nonbalanced scales
nonbalanced scales: measurement scales that are weighted toward one end or the other of the scale
number of scale categories: even vs odd
even number of scale categories:
- there is no neutral point, forcing the respondent to indicate some degree of positive or negative feelings
odd number of scale categories:
- researchers claim that having a neutral point on a scale gives the respondent an easy way out
forced vs non-forced choice
forced choice:
- argument for a forced choice is that the respondent has to concentrate on his or her feelings
non-forced choice
- allows the respondent to have a selection when they do not have an opinion or have no knowledge of the subject
direct questioning: direct vs dual questions
direct questions:
- respondents may be asked to explain their reasons for preferring one product or brand over another
dual questions:
- involves asking 2 questions concerning each product attribute that might be determinant
- first ask what factors consumers consider important
- then ask how these factors differ among the company’s products or brands
define indirect questioning
indirect questioning: any interviewing approach that does not directly ask respondents to indicate the reasons why they bought a product or service
define observation
observation: observing shoppers and recording their movements and statements while interacting with certain products on display
how to choose a method for identifying determinant attitudes:
- direct questioning, indirect questioning, and observation each have some limitations in identifying determinant attitudes
- the marketing researcher should therefore use 2 or more of the techniques