Exam 2 Lecture 2 Flashcards

1
Q

True story of being a scientist

A

Asked to critique a study that looked at a group of people who drink too much.

PROTOCOL: Heart rate measured
- During a relaxation exercise
- When they were shown photos of booze
- As they told a favorite story about drinking
- While they smelled and tasted their favorite drink
- Measured craving after all of this

RESULTS: Heart rate changed during the storytelling

CONCLUSION: Craving causes changes in heart rate measures

I don’t believe you!
- The heart and lungs are linked: when you inhale, your heart rate accelerates. What does talking do to your breathing?
- When you concentrate, your heart rate (and BP) changes
- Maybe concentrating on all of this changes HR, not craving

2
Q

We need a way to vet whether a study was done correctly and whether the data are meaningful… whether we should believe what we read/see/hear

A

Validity testing

3
Q

When you are testing something that is vague, murky, complicated, multifaceted, squishy, or subjective (as most things being studied are), there needs to be some way to determine whether you are actually testing that something.

A

VALIDITY TESTING

4
Q

Using a measure that measures what you think you’re measuring

A

Validity

Does an online IQ test really measure intelligence?
Does the Myers-Briggs test really reflect who you are?
Before you believe a conclusion, you need to believe the process

5
Q

Designing a study that studies what you think you’re studying

A

Validity

Studying how well antidepressants work by comparing SAT scores between antidepressant (AD) users and non-users. NOT VALID

Studying how people respond to social situations using a lab-based experiment where people watch movie clips about awkward social situations while hooked up to physiological sensors
MAYBE

6
Q

Before you believe a conclusion, you need to believe __________

A

The process

7
Q

Validity helps measure what?

A

The APPROPRIATENESS of your data and data collection protocol.
The validity of a measurement tool (questionnaire, equipment) must be tested. The validity of a study must also be tested.

There are many different types of validity.

Validity is NOT about precision. It’s about RELEVANCE

8
Q

Is validity about precision or relevance?

A

Relevance

9
Q

Validity of a measurement examples

A

You want to assess stress levels of college students. What data should you collect?

  • Blood glucose
  • How much deodorant they wear
  • Answer to “you stressed rn?”
  • Antibiotic use in past week
  • A 106-item survey to measure stress
  • Salivary cortisol at wake time

All of these have implications: some don't even assess stress levels, and others are unhelpful indicators or may generate too much noise/error compared to the alternatives.

10
Q

Validity of a Study protocol

A

Am I actually answering the question I’m asking?

Ex: Is heart rate (HR) different after a stroke?
Protocol: Collect HR before, during, and after a stroke using:
- an electrocardiogram
- a plethysmogram
- counting the carotid pulse
- asking the spouse about the patient's palpitations

Which are valid? All of them, but these measurement strategies vary in how precise/accurate they are.

11
Q

Validity (measuring what you think you’re measuring). When is it easier to demonstrate and when is it harder to demonstrate?

A
  • Easier to demonstrate when the study is about a tangible/concrete construct (body composition, enzyme activity, heart rate)
  • Harder to demonstrate when measuring complex ideas (fatigue, stress), human behavior, and self-reported data (attitudes, feelings). These days, we measure A LOT of complex things
12
Q

TESTING and PROVING validity is critical when measuring things that are

A

Hard to define, hard to measure, and hard to quantify

13
Q

What are the types of validity?

A

MEASUREMENT VALIDITY aka CONSTRUCT VALIDITY

and

STUDY VALIDITY

14
Q

What are the subcategories of MEASUREMENT AKA CONSTRUCT VALIDITY

A
  • Content validity
  • Face validity
  • Criterion validity (Convergent and discriminant)
15
Q

What are the subcategories of STUDY VALIDITY

A
  • Internal validity
  • External validity
16
Q

What does MEASUREMENT VALIDITY AKA CONSTRUCT VALIDITY aim to ask

A

Are YOUR DATA relevant to what you are trying to measure?

17
Q

What does STUDY VALIDITY aim to ask

A

Is your study design relevant to the question you are asking?

18
Q

Measurement Validity aka Construct Validity
What is the difference between constructs and indicators?

A

Construct= A concept or idea you want to measure
Sometimes it isn’t directly measurable, so you get a variety of relevant (but not exact) data called indicators/items

There are a variety of strategies to determine whether the indicators/items (aka the data you actually collected) really capture the construct (aka the big idea you were trying to measure/your measurement goals)

19
Q

Construct vs. Indicators (Alcohol Example)

A

Construct= Alcohol misuse

Indicator= Frequency of use (days per week)
Indicator= Quantity used (drinks per occasion)
Indicator= Hangover severity (symptoms checklist)
Indicator= Consequences (negative outcomes checklist)
All of the indicators are used to determine the construct.

20
Q

Construct vs. Indicators (Exercise Difficulty Example)

A

Construct= Exercise difficulty

Indicator= Knowledge (breath control)
Indicator= Experience (fitness history)
Indicator= Preparedness (good sneakers)
Indicator= Motivation (enthusiasm to try)

21
Q

Most psychological states require a bunch of questions to get at. This 21-item survey measures 3 psychological states. Explain.

A

21-item survey= 21 indicators
3 psychological states= 3 constructs
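
A minimal sketch of how that scoring works, assuming made-up item groupings, made-up responses, and a 0-3 response scale (the real survey's subscales may differ):

```python
# Hypothetical scoring of a 21-item survey that measures 3 constructs.
# Item-to-construct assignments and responses are invented for illustration.
responses = [2, 1, 3, 0, 2, 1, 3, 2, 0, 1, 2, 3, 1, 0, 2, 3, 1, 2, 0, 1, 3]  # 21 items, 0-3 scale

subscales = {
    "depression": [0, 3, 6, 9, 12, 15, 18],   # 7 indicators per construct
    "anxiety":    [1, 4, 7, 10, 13, 16, 19],
    "stress":     [2, 5, 8, 11, 14, 17, 20],
}

# Each construct score is the sum of its indicators.
scores = {name: sum(responses[i] for i in items) for name, items in subscales.items()}
print(scores)
```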

22
Q

To have construct validity, what three things must you test?

A
  1. Face validity
  2. Content validity
  3. Criterion validity
23
Q

What is construct validity?

A

The way you are trying to answer your question makes sense based on what is currently known

Do the tools you are using to answer your question capture all relevant, current knowledge of the construct?

Constructs change definitions over time as knowledge expands
- Optimal nutrition (“good fat” etc)
- Autism is a spectrum (very complicated history)

Constructs require differentiation from similar constructs
- Anxiety versus depression
- Fatigue from overexertion, depression, sleep disorders, thyroid conditions

24
Q

Face Validity- a component of Construct Validity

A

Face validity= yep. makes sense
- On the surface, your measure seems appropriate for what you are trying to measure
- On the surface, it seems relevant and related

This is a more subjective measure of validity.
- This type of validity might vary depending on who is participating in the study and what the overall study design is
- This is one place where bias can sneak in
- Generational differences
- Racial/ethnic/cultural differences

25
Q

Face Validity Example

A

Asking “you stressed rn” on a stress survey has more face validity for younger people than older people.
Without face validity, there is no construct validity.
With face validity… ok, but that’s not enough

26
Q

Content Validity- a component of Construct Validity

A

Content validity= yep. that’s the whole story. Does your measure represent all aspects of your construct?
Does your measure include all relevant indicators?
To be a complete measurement tool, it must include all the different parts, components, perspectives.
It also must not include irrelevant things.

27
Q

Content validity Example

A

If my next exam does not include items about validity, it will not have content validity
If my next exam includes items about best practices for orthopedic surgeons, it will not have content validity.

Without content validity, you don’t have construct validity.

28
Q

Criterion Validity- a component of construct validity

A

Criterion validity= yep. true no matter how I measure it

  • Study A measures stress by asking “you stressed rn”
  • Study B gives a 106-item stress symptoms survey
  • Study C collects salivary cortisol at wake time

If the same person did all 3 studies, would they get the same answer? That is criterion validity

To test criterion validity, you can use a statistical test: measure the relationship between the studies and make sure a high-stress person from Study A is also labeled high stress in Study B and Study C.
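
As a sketch of that statistical test (invented numbers; `statistics.correlation` requires Python 3.10+), you could correlate the three studies' scores for the same people and look for high agreement:

```python
# Sketch: test criterion validity by correlating three stress measures
# taken on the same people. All data are invented for illustration.
from statistics import correlation  # Python 3.10+

study_a = [1, 0, 1, 1, 0, 1, 0, 0]                       # "you stressed rn?" (yes=1/no=0)
study_b = [82, 40, 75, 90, 35, 70, 45, 30]               # 106-item survey total score
study_c = [14.2, 6.1, 11.8, 15.0, 5.5, 10.9, 7.0, 4.8]   # waking salivary cortisol (nmol/L)

# High pairwise correlations suggest the measures rank people the same way.
print(correlation(study_a, study_b))
print(correlation(study_b, study_c))
print(correlation(study_a, study_c))
```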

29
Q

Am I answering only the question I’m asking? What two types of CRITERION validity describe this (C+D)

A
  1. Convergent validity (same construct, different measures)- Yep, the whole story is consistent.
  2. Discriminant validity (different constructs, different measures)- Nope, we are not testing another similar thing. We are testing the right thing.
30
Q

Convergent Validity

A

Category under Criterion Validity

Convergent validity (same construct, different measures)
- Making sure that what you are measuring is related to other things it is known to be related to (heart rate measured by ECG vs. counted at the radial artery)
Yep, the whole story is consistent.

31
Q

Discriminant Validity

A

Category under Criterion Validity

Discriminant validity (different constructs, different measures)
- Making sure you aren’t accidentally picking up something else with your measure (Does the SAT measure intelligence or test-taking ability?)
- Making sure that two different constructs are not highly related (How can you be sure a person is anxious and not just stressed out?)
- Making sure things that shouldn’t be related aren’t related (A Division 1 soccer player shouldn’t have low fitness scores)
Nope, we are not testing another similar thing. We are testing the right thing.
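
A toy illustration of both patterns, with invented numbers: measures of the same construct should correlate strongly (convergent), while measures of different constructs should not (discriminant).

```python
from statistics import correlation  # Python 3.10+

hr_ecg    = [62, 75, 88, 70, 95, 58]   # HR by ECG (same construct...)
hr_radial = [61, 77, 86, 71, 96, 60]   # ...HR by radial pulse count
anxiety   = [10, 35, 12, 40, 15, 8]    # a different construct entirely

print(correlation(hr_ecg, hr_radial))  # expect high r -> convergent validity
print(correlation(hr_ecg, anxiety))    # expect low r  -> discriminant validity
```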

32
Q

Concurrent vs. Predictive are 2 types of criterion validity. Describe concurrent validity.

A

Concurrent: A new measure/indicator is similar to the ‘gold standard’.
- You have created a new technology that measures step count.
- Does it get the same answer as a FitBit? If so, it has concurrent validity.
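
A hedged sketch of that comparison, using invented step counts and treating the FitBit as the reference device:

```python
from statistics import correlation, mean  # Python 3.10+

# Daily step counts from the same wearer, invented for illustration.
new_device = [8120, 10450, 6230, 12010, 9480]
fitbit     = [8200, 10300, 6400, 11950, 9600]  # treated as the reference here

# Close agreement (high r, small average gap) supports concurrent validity.
print(correlation(new_device, fitbit))
print(mean(abs(a - b) for a, b in zip(new_device, fitbit)))
```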

33
Q

Concurrent vs. Predictive are 2 types of criterion validity. Describe predictive validity.

A

Predictive: Can a measure/indicator (meaningfully) predict a future outcome?
- A moving company measures how much a person can deadlift because they need workers who can lift heavy items. If people who can deadlift more get jobs done quicker, then deadlifting has PREDICTIVE VALIDITY (of job success)
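
A small sketch of how you might check that, with invented numbers (`statistics.linear_regression` requires Python 3.10+):

```python
from statistics import correlation, linear_regression  # Python 3.10+

deadlift_kg = [80, 120, 100, 140, 90, 160]     # measured at hiring
job_hours   = [6.5, 4.8, 5.6, 4.1, 6.0, 3.7]   # time per job, measured later

# A strong (here negative) relationship means the measure predicts
# the future outcome -> evidence of predictive validity.
print(correlation(deadlift_kg, job_hours))
slope, intercept = linear_regression(deadlift_kg, job_hours)
print(slope, intercept)  # predict job time from deadlift strength
```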

34
Q

Criterion Validity- This is a strong form of construct validity but often hard to come by because of what?

A
  1. You need to measure more things, which complicates the study design
  2. You might find out the different measures aren’t all related, or only partially related. Then what? Which one is ‘correct’?
    - You really need a ‘gold standard’ but that doesn’t always exist.
35
Q

The Borg Scale

A

The Borg Scale= rating of perceived exertion
How hard is this exercise?
- On the surface, it makes sense that the more you feel you are exerting, the more difficult it is.
- If you agree with this, you’d say that the Borg scale has face validity.

A little weird though.
How hard is this exercise? “6”, not hard at all
Weird because the ratings go from 6-20

The 6-20 scale is intentional! It was designed to line up with HR (the gold standard): RPE × 10 roughly equals heart rate in bpm for a typical young adult.
If not, criterion validity is poor.
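
A tiny sketch of that design logic, using the commonly cited ×10 rule of thumb (an approximation for a typical young adult, not a measurement):

```python
# The 6-20 RPE range was chosen so that, for a typical young adult,
# heart rate roughly equals RPE x 10 (resting ~60 bpm up to max ~200 bpm).
def estimated_hr(rpe: int) -> int:
    """Rough HR (bpm) implied by a Borg 6-20 rating; a design heuristic only."""
    if not 6 <= rpe <= 20:
        raise ValueError("Borg RPE must be between 6 and 20")
    return rpe * 10

print(estimated_hr(6))   # ~60 bpm: "not hard at all"
print(estimated_hr(13))  # ~130 bpm: "somewhat hard"
print(estimated_hr(20))  # ~200 bpm: maximal effort
```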

36
Q

Complex constructs require __________

A

Multiple indicators!

Does the RPE/Borg score completely capture the construct of difficulty?
If not, content validity is poor.