Ch 6 Sallis: Questionnaires Flashcards

1
Q

In the Sallis et al. (2021) textbook there is a diagram showing a theoretical plane and an empirical plane. How does the operational definition of a construct relate to the theoretical definition?

A

It refines the theoretical definition into something measurable.

2
Q

What is content validity?

A

The extent to which the measurement captures the entire theoretical construct. For example: if you’re measuring mathematical ability, a test with only algebra questions would have low content validity because it doesn’t cover geometry, calculus, or other areas of math.

3
Q

What is construct validity based on? (2)

A

It is expressed and evaluated as two subdimensions: convergent validity and discriminant validity.

4
Q

What is construct validity?

A

Construct validity is about whether a test or measurement truly measures the concept or idea (the “construct”) it is supposed to measure.

For example: if a test claims to measure intelligence, it should actually assess skills like reasoning, problem-solving, and understanding—not just memory or trivia knowledge.
It’s like asking: Does this test really capture the thing we’re trying to measure?

5
Q

What is convergent validity?

A

Convergent validity means that when different tools or tests are used to measure the same thing, their results should be similar or strongly related.

For example:
- If two different tests are supposed to measure happiness, people who score high on one test should also score high on the other.

It shows that these tools agree and are truly measuring the same concept.
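
A minimal Python sketch of this idea (not from the textbook; the data and names are invented for illustration): if two noisy scales tap the same underlying construct, their scores correlate strongly.

```python
import numpy as np

# Two hypothetical happiness scales administered to the same 200 respondents.
# Both are the same latent construct plus independent random error.
rng = np.random.default_rng(42)
true_happiness = rng.normal(5, 1, size=200)          # latent construct
scale_a = true_happiness + rng.normal(0, 0.5, 200)   # questionnaire A (with noise)
scale_b = true_happiness + rng.normal(0, 0.5, 200)   # questionnaire B (with noise)

r = np.corrcoef(scale_a, scale_b)[0, 1]
print(f"Correlation between the two scales: {r:.2f}")  # high r -> evidence of convergent validity
```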

6
Q

What is discriminant validity?

A

Discriminant validity means that a test or measurement should not strongly overlap with tests measuring something completely different.

For example:
- If a test measures happiness, it shouldn’t have a high correlation with a test measuring anxiety because they are separate feelings.

It ensures that the test is specific to what it’s supposed to measure and doesn’t get confused with other concepts.

7
Q

What is Face Validity?

A

Face validity is about whether something looks like it measures what it’s supposed to measure, based on first impressions.

For example:
- If a questionnaire is supposed to measure stress, do the questions seem clearly related to stress (like asking about feeling overwhelmed)?

It’s a quick, common-sense check—often done by asking experts or a small group of people if the test seems right. It’s not as detailed as other types of validity but helps ensure the questions make sense.

8
Q

What is statistical conclusion validity?

A

Statistical conclusion validity is about whether the results of a study are backed up by proper statistical analysis. It checks if the conclusions you draw from the data are reliable and accurate.

For example:
- If your measurements are inconsistent or the data isn’t handled properly, your conclusions might be wrong.
- If the study lacks statistical power (for example, because it has too few participants), it might miss real effects, leading to Type II errors (failing to detect something that’s actually there).

In short, it ensures your data and methods are solid enough to trust the conclusions.
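
A small simulated illustration (not from the textbook; the effect size, sample size, and use of scipy are assumptions): with a real but moderate effect and only 10 participants per group, a t-test frequently fails to reach significance, i.e. a Type II error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group = 1000, 10
misses = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(0.5, 1.0, n_per_group)   # a real effect of 0.5 SD exists
    _, p = stats.ttest_ind(treated, control)
    if p >= 0.05:                                 # effect not detected
        misses += 1

print(f"Type II error rate with n={n_per_group} per group: {misses / n_sims:.0%}")
```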

9
Q

What is reliability?

A

Reliability is the extent to which a measurement produces consistent results when repeated. All measurements are subject to random error; a measurement with low random error has high reliability.

If we measure customer satisfaction with a questionnaire, there will be random factors that influence how respondents answer. If we immediately measure it again with the same respondents, assuming that nothing substantive has happened to change it, the results will be slightly different.

In general, if respondents have well-developed opinions about what we are measuring, repeated measures will be quite similar. If they do not have developed opinions, they effectively guess the answers, and randomness increases. Reliability, in this sense, is not just about how good the measurement instrument (the questionnaire) is, but also a function of the context and the respondents.
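
A minimal simulation of the test-retest idea (not from the textbook; the numbers are invented): each respondent has a stable true opinion, and each administration adds random error. More random error means a lower correlation between the two waves, i.e. lower reliability.

```python
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(4, 1, size=500)             # stable underlying satisfaction

for error_sd in (0.3, 1.5):                         # well-formed opinions vs. near-guessing
    wave1 = true_score + rng.normal(0, error_sd, 500)
    wave2 = true_score + rng.normal(0, error_sd, 500)
    r = np.corrcoef(wave1, wave2)[0, 1]
    print(f"error SD {error_sd}: test-retest correlation = {r:.2f}")
```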

10
Q

Which are the four types of validity?

A
  1. content
  2. construct
  3. face
  4. statistical conclusion
11
Q

Which are the four types of measurement scales?

A

Nominal, ordinal, interval, ratio

12
Q

What is nominal level measurement?

A

A nominal level scale is the simplest type of measurement scale, used to label or categorize data without giving any order or value to the categories. It’s just about grouping things into different groups or types.

For example:
- Eye color: blue, green, brown
- Types of fruits: apple, banana, orange
- Gender: male, female, other

The categories have no rank or numerical value—they’re just labels. You can count how many fall into each group, but you can’t say one category is “greater” or “less” than another.

13
Q

What is ordinal level measurement?

A

Ordinal scales rank data in a specific order but don’t show how much one rank differs from another.

For example:
- Education levels: (1) Elementary, (2) High School, (3) University. The order is clear, but the time or effort between these levels isn’t equal or measurable.

Another example is a Likert scale, like rating agreement with a statement from 1 (Strongly Disagree) to 7 (Strongly Agree). The numbers show rank, but the difference between 3 and 4 might not feel the same as between 6 and 7.

You can rank items, but you can’t calculate meaningful averages because the gaps between ranks aren’t consistent.

14
Q

What is interval level measurement?

A

Interval scales rank data and ensure the distances between values are equal and meaningful. However, they don’t have a true zero point, so you can’t make statements like “twice as much.”

For example:
- Temperature: Celsius and Fahrenheit scales have equal intervals between degrees, so you can calculate averages. But because there’s no true zero (0°C doesn’t mean “no temperature”), you can’t say 20°C is twice as hot as 10°C.

In short, interval scales let you measure differences but not proportions.
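
A short worked check of the temperature example (standard unit conversion, nothing assumed beyond it): differences are meaningful in Celsius, but the apparent “twice as hot” ratio disappears once the values are put on a scale with a true zero (Kelvin).

```python
c_low, c_high = 10.0, 20.0
print(c_high - c_low)          # 10.0  -> the difference is meaningful
print(c_high / c_low)          # 2.0   -> misleading "twice as hot"

k_low, k_high = c_low + 273.15, c_high + 273.15   # Kelvin has an absolute zero
print(k_high / k_low)          # ~1.035 -> the actual proportion is far from 2
```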

15
Q

What is ratio level measurement?

A

Ratio scales are like interval scales but with one key difference—they have an absolute zero point, meaning zero represents the complete absence of the thing being measured. This allows you to compare values as multiples of one another.

For example:
- Age: A 40-year-old is twice as old as a 20-year-old.
- Income: Someone earning 50,000 euros earns five times as much as someone earning 10,000 euros.
- Store visits: Visiting a store six times is twice as many visits as going three times.

With ratio scales, you can calculate averages, differences, and ratios because the zero point makes the comparisons meaningful.

16
Q

What is a parametric statistical method?

A

Parametric statistical methods (for example, t-tests, ANOVA, and regression) require data measured at the interval or ratio level because these levels provide precise and meaningful numerical information.

  • You should not use parametric methods with ordinal or nominal data because they don’t meet the necessary requirements (like equal intervals or an absolute zero).
  • A helpful rule is to always measure data at the highest level possible (interval or ratio), as this gives you more flexibility for analysis.
  • Higher-level data (interval/ratio) can be simplified into lower-level data (ordinal/nominal) if needed. For example, you could group exact ages (ratio level) into categories like “child,” “teen,” or “adult” (ordinal level).
  • However, you cannot turn lower-level data into higher-level data. For example, you can’t take nominal categories like “red,” “blue,” or “green” and turn them into precise numerical measurements.

In short: Always aim for higher-level data because it’s more versatile for analysis.
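
A small sketch of downgrading ratio data to ordinal categories (the cut-offs and labels are just an example): exact ages can always be collapsed into groups, but the groups cannot be turned back into exact ages.

```python
ages = [7, 15, 34, 62, 81]              # ratio level: exact years, true zero

def age_group(age: int) -> str:
    """Collapse an exact age into an ordered category (ordinal level)."""
    if age < 13:
        return "child"
    if age < 20:
        return "teen"
    return "adult"

groups = [age_group(a) for a in ages]
print(groups)                            # ['child', 'teen', 'adult', 'adult', 'adult']
# From the categories alone you cannot recover 7, 15, or 34: information is lost.
```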

17
Q

What is attitude and perception measurement?

A

Attitudes are formed over time as a result of how a person perceives phenomena. They are not directly observable and therefore challenging to measure. Nevertheless, research often requires measuring attitudes and perceptions. Single questions do not capture their dimensionality, so most often, a battery of questions is used to measure them. Each question is meant to measure a slightly different aspect of the construct. Generally speaking, this is the case for all complex social constructs.
There are many types of question batteries. Two types of questions have become dominant in social sciences: Likert scales and semantic differential scales.

18
Q

What are Likert scales?

A

A Likert scale works by asking people to respond to a series of statements, usually related to a specific topic, and to indicate how much they agree or disagree with each statement.

For example:

“I enjoy working in teams.”
Strongly disagree (1)
Disagree (2)
Neutral (3)
Agree (4)
Strongly agree (5)
The responses are often ranked numerically, making it easier to analyze trends or overall attitudes. It’s a common tool in surveys and research to measure opinions, feelings, or attitudes.
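
A minimal sketch of how Likert responses are typically handled (the data are invented): several items on the same topic are combined into one composite score per respondent, here a simple mean, which treats the 5-point scale as if its intervals were equal.

```python
# Responses of three hypothetical respondents to four teamwork statements (1-5).
responses = {
    "respondent_1": [4, 5, 4, 3],
    "respondent_2": [2, 1, 2, 3],
    "respondent_3": [5, 5, 4, 5],
}

for person, items in responses.items():
    composite = sum(items) / len(items)      # composite attitude score
    print(f"{person}: {composite:.2f}")
```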

19
Q

What are semantic differential scales?

A

Semantic differential scales are used to measure how people perceive things, like words, concepts, or brands. The process involves identifying key traits (attributes) and then creating a scale with two opposite ends (anchors) for each trait.

For example:
- For price, the anchors could be expensive vs. cheap or high price vs. low price.
- Respondents rate where they think an object, like a brand, falls on the scale.

Typically, a 7-point scale is used, with only the two anchor points labeled. Surveys often include 20–30 traits, but this can make analysis more complicated.

Key points:
- Like Likert scales, semantic differential scales are ordinal, meaning the distances between points are not necessarily equal. Some researchers suggest using the median (middle value) instead of the mean (average) for analysis because the median is less affected by extreme values.
- However, it’s common to assume equal intervals and calculate averages anyway.

Unlike Likert scales, where responses to several questions are combined into one score, semantic differential scales often show results for each trait separately. These results are displayed using a snake diagram, connecting points for each attribute.

This method is often used to compare two brands. For example:
- One brand might be seen as expensive but have the same service quality as a competitor that is seen as cheaper. Such insights help businesses plan strategies to improve their brand perception.
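
A minimal sketch of the brand-comparison idea (the brands, attributes, and ratings are invented): each attribute is summarised separately, here with the median as some researchers recommend for ordinal data; plotting the medians of two brands attribute by attribute gives the snake diagram.

```python
import statistics

# attribute -> {brand: 7-point ratings from four respondents}
ratings = {
    "expensive vs. cheap":      {"Brand A": [2, 3, 2, 1], "Brand B": [5, 6, 5, 6]},
    "poor vs. good service":    {"Brand A": [6, 5, 6, 6], "Brand B": [6, 6, 5, 6]},
    "old-fashioned vs. modern": {"Brand A": [4, 3, 4, 5], "Brand B": [6, 7, 6, 6]},
}

for attribute, brands in ratings.items():
    medians = {brand: statistics.median(scores) for brand, scores in brands.items()}
    print(attribute, medians)     # one point per brand per attribute on the snake diagram
```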

20
Q

What is scale value measurement?

A

Scale value measurement is about using a scale to assign numbers to people’s opinions, feelings, or perceptions so they can be measured and analyzed.

For example:
- A Likert scale might ask how much someone agrees with a statement, like:
  - 1 = Strongly Disagree
  - 5 = Strongly Agree
- A rating scale might ask someone to rate their satisfaction on a scale from 1 to 10.

Respondents choose a point on the scale that best represents their opinion, and this is turned into a number. These numbers are then used to:
- Calculate averages or trends.
- Compare results between groups or items.
- Identify areas that need improvement.

In simple terms, scale value measurement turns opinions into numbers so they can be studied and understood.

21
Q

Which are the two distinct types of scales?

A

Comparative and non-comparative

22
Q

What are comparative scales?

A

Comparative scales ask people to compare different options instead of judging them on their own. This can be done in two main ways:

  1. Ranking scale:
    • People rank a list of options based on their preferences or importance.
    • Example: A bank might ask customers to rank communication methods (e.g., face-to-face, email, phone) for discussing large investments.
    • Key points:
      • The scale shows the order of preference but doesn’t tell how much better one option is than another.
      • If there are too many options, it gets hard for people to rank them accurately.
      • To make it easier, respondents can rank just the top 3 or 4 options.
      • However, forcing people to rank everything might reduce accuracy if they don’t care about all the options.
  2. Constant sum scale:
    • Instead of ranking, people distribute a fixed number of points (e.g., 100) across the options to show their preferences.
    • Example: A customer might give 75 points to face-to-face communication and split the remaining points among email and phone, showing a clear preference.
    • Key points:
      • This method gives both the ranking and a sense of how much better one option is compared to another.
      • It can still be hard for people if there are too many options.
      • During interviews, time pressure or the presence of an interviewer might make responses less accurate.

Even though the constant sum scale uses numbers and shows differences, it’s debatable whether it truly counts as a higher-level (interval) scale for statistical purposes.

In short: Comparative scales help prioritize options, but they can be challenging if there are too many choices or if people feel rushed to decide.
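
A minimal sketch of a constant sum response (the channels and points are invented): the allocation must sum to the fixed total, and it yields both a ranking and a sense of how much more one option is preferred.

```python
# One respondent distributes 100 points across three communication channels.
allocation = {"face-to-face": 75, "email": 15, "phone": 10}

assert sum(allocation.values()) == 100, "points must sum to the fixed total"

ranked = sorted(allocation.items(), key=lambda kv: kv[1], reverse=True)
for channel, points in ranked:
    print(f"{channel}: {points} points ({points}% of the total)")
```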

23
Q

What are some advantages of personal interviews?

A
  • Respondents may be shown relatively complicated visual stimuli.
  • The interviewer can assist the respondent by explaining difficult questions.
  • The interviews can be relatively long and comprehensive.
  • The interviewer can persuade the respondent to complete and answer all
    questions.
  • The interviewer can observe who responds and record reactions.
24
Q

What are some disadvantages of personal interviews?

A
  • The interviewer’s presence can influence how the respondent answers.
  • The interview is time-consuming.
  • Personal interviews take a lot of resources for each respondent (depending on
    duration).
25
Q

What are the advantages of online solutions (web-based data collection)?

A
  • Inexpensive.
  • Flexibility (the respondent answers when time and opportunity permit).
  • Several free and paid solutions are available.
  • The questionnaire can be tailored to how the respondent is answering (directed to
    different questions).
  • Few physical boundaries, so long as respondents have open internet access.
  • Online interviews and focus groups are possible.
  • Many visual and audio aids are available.
26
Q

What are the disadvantages of online solutions (web-based data collection)?

A
  • Response rates may drop due to virus concerns (respondents may distrust unfamiliar links).
  • It is easy to skip the survey.
  • Bias in the sample.
27
Q

What are some advantages of telephone interviews?

A
  • They can be conducted quickly.
  • They are inexpensive compared to personal interviews.
  • Easier to make contact than in-person interviews.
  • The interviewer can clarify misunderstandings.
  • The interviewer’s influence is less than in in-person interviews.
28
Q

What are some disadvantages of telephone interviews?

A
  • You cannot use complex scales.
  • Visual stimuli cannot be used.
  • The respondent does not have time to reflect on answers.
  • The interview cannot be long because the respondent becomes impatient.
29
Q

What are the advantages of online/postal surveys?

A
  • Fairly inexpensive.
  • Flexibility (the respondent answers when time and opportunity permit).
  • Easier to pose sensitive questions.
  • Relatively complicated question and response scales can be used.
30
Q

What are the disadvantages of online/postal surveys?

A
  • Low response rates (reminders required).
  • No control over who answers (spouse, administrative assistant).
  • No possibility to aid the respondent, which can lead to unanswered questions.
  • Priming through question order may not work since the respondent can view the
    entire questionnaire.