Chapter 5 Flashcards

1
Q

What are the 3 types of measures?

A
  1. Self-report
  2. Physiological
  3. Observational
2
Q

What is operationalisation?

A

The process of turning a construct of interest into a measured or manipulated variable

3
Q

What are the two definitions of variables in psychological research?

A

The conceptual definition and the operational definition

4
Q

How are variables defined in research?

A

A conceptual definition is first made of the idea we wish to measure (for example, defining happiness as subjective well-being), and an operational definition is then made to specify how the variable will be measured (the quantifiable measure). The same concept can be defined in many different ways.

5
Q

How do self-report measures operationalise variables?

A

They record people’s answers to questions about themselves in a questionnaire or interview

6
Q

How are self-report measures used on children?

A

Self-reports may be replaced by parent reports or teacher reports

7
Q

How do observational measures operationalise variables?

A

They record observable behaviours or physical traces of behaviour

8
Q

How do physiological measures operationalise variables?

A

By recording biological data, usually with equipment to amplify, record, or analyse the data. For example, a physiological measure of intelligence might record brain activity, since high-IQ brains tend to show less activation when working on complex problems

9
Q

How do you know which operationalisation is the best?

A

Variables can often be operationalised in many different ways, so there is no single “best” measure. One sign that a variable is measured well is that results from all three types of measures follow the same pattern.

10
Q

What are the different operational variables?

A

Categorical variables (their levels are categories; any numbers assigned to the categories are arbitrary labels) and quantitative variables (their levels are meaningful numbers)

11
Q

What are the three types of quantitative variables?

A

Ordinal, ratio and interval scales

12
Q

What makes ordinal scales different from the others?

A

Ordinal scales capture rank order: the order of the levels is meaningful, but the intervals between ranks are not necessarily equal or quantifiable.

13
Q

What are the requirements of an interval scale?

A
  1. The numbers need to represent equal intervals between levels
  2. There is no “true” zero; a value of zero does not mean there is nothing there
14
Q

What are the requirements of ratio scales?

A

When the numbers have equal intervals and when the value of zero truly means there is nothing there.

15
Q

What are the 2 aspects of construct validity?

A

Reliability (how consistent the measure is) and validity (whether it measures what it is supposed to)

16
Q

What are the three kinds of reliability?

A
  1. Test-retest reliability (the same score is obtained on repeated tests)
  2. Interrater reliability (consistent scores no matter who does the measuring)
  3. Internal reliability (a consistent pattern of answers no matter how the questions are worded)
17
Q

When does test-retest reliability apply?

A

It can apply to self-report, observational, or physiological measures, but it is most appropriate for constructs that should be stable over time, such as IQ

18
Q

When does interrater reliability apply?

A

It is most relevant for observational measures

19
Q

When does internal reliability apply?

A

It applies to measures that combine multiple items

20
Q

What are two statistical devices researchers can use to assess reliability?

A
  1. Scatterplots
  2. Correlation coefficient r
21
Q

How is evidence for reliability an association claim?

A

Each kind of reliability is an association: between scores at an earlier time and a later time (test-retest), between one coder and another (interrater), or between one version of a measure and another (internal)

22
Q

How can scatterplots and r be used to measure reliability?

A

By comparing the measurements taken at different times, by different people, or with different versions of a measure: if the data points fall along a straight line and r is high, that suggests strong reliability (though never perfect, because of measurement error)
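
As an illustration (not from the chapter), the correlation r behind such a reliability scatterplot can be computed directly; the score lists below are made-up test and retest data for five hypothetical people.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical IQ scores from the same five people at time 1 and time 2
time1 = [100, 95, 110, 120, 105]
time2 = [102, 94, 108, 119, 107]
print(round(pearson_r(time1, time2), 3))  # prints 0.981 -> high test-retest reliability
```

A high positive r (close to 1) across the two occasions is the statistical evidence for test-retest reliability; scores that scattered randomly would yield an r near 0.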

23
Q

How is test-retest reliability assessed with r or a scatterplot?

A

Two measurements taken at two different times are compared on a scatterplot; this is only appropriate for traits that are not expected to change

24
Q

How is interrater reliability assessed with r or a scatterplot?

A

Two sets of ratings made by two different researchers are compared; a negative correlation would be even worse than a low r. For categorical variables, kappa is used instead of r.
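
As an illustration (not from the chapter), Cohen's kappa for two raters' categorical codes can be sketched as below; the behaviour codes are made up.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters on categorical codes,
    corrected for the agreement expected by chance."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n    # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in c1)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes two observers assigned to the same six behaviours
rater1 = ["smile", "smile", "frown", "frown", "smile", "frown"]
rater2 = ["smile", "smile", "frown", "smile", "smile", "frown"]
print(round(cohens_kappa(rater1, rater2), 2))  # prints 0.67
```

Unlike raw percent agreement (5 of 6 here, about .83), kappa discounts the matches two raters would produce just by chance, which is why it is preferred for categorical codes.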

25
Q

How is internal reliability assessed?

A

Often by asking the same question worded in multiple ways; the items are then summed to create a single composite score.

26
Q

How is internal reliability statistically measured?

A

Researchers compute correlations between every item and every other item. The average inter-item correlation (AIC) is the mean of all these correlations; an AIC between .15 and .50 usually means the items go well together. Researchers may also use Cronbach’s alpha, which combines the AIC and the number of items and is interpreted on the same scale as r (above .80 is considered strong).
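
As an illustration (not from the chapter), the standardised form of Cronbach's alpha can be computed from just the AIC and the item count, using the standard formula alpha = k*AIC / (1 + (k-1)*AIC); the AIC values below are made up.

```python
def cronbach_alpha_standardized(aic, n_items):
    """Standardised Cronbach's alpha from the average inter-item
    correlation (AIC) and the number of items."""
    return (n_items * aic) / (1 + (n_items - 1) * aic)

# With an AIC of .30 (inside the .15-.50 range) and 5 items:
print(round(cronbach_alpha_standardized(0.30, 5), 2))   # prints 0.68
# Adding more items with the same AIC pushes alpha higher:
print(round(cronbach_alpha_standardized(0.30, 20), 2))  # prints 0.9
```

The second call shows why alpha depends on both ingredients: the same modest AIC yields a much stronger alpha once enough items are combined.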

27
Q

When is construct validity very important?

A

When a construct is not directly observable such as happiness or IQ

28
Q

What are three different kinds of validity?

A
  1. Face validity (subjectively considered a plausible operationalisation)
  2. Content validity (measure must capture all parts of a defined construct)
  3. Criterion validity (the measure is associated with a concrete behavioural outcome that it should be associated with)
29
Q

How can criterion validity be measured?

A

Through scatterplots and r: the measure and the behavioural outcome should correlate strongly for the measure to have criterion validity. It is especially useful for self-report measures, as it indicates how well people’s scores correspond to their actual behaviour.

30
Q

How can a known-groups paradigm be used to measure criterion validity?

A

A known-groups paradigm allows researchers to see whether scores on the measure can discriminate between two or more groups whose behaviour is already known. It can also be used to validate self-report measures.

31
Q

What validities measure whether there is a meaningful pattern of similarities and differences among related measures?

A

Convergent and discriminant validity

32
Q

An example of convergent and discriminant validity is…

A

Whether a self-report measure correlates more strongly with measures of similar constructs than with measures of dissimilar constructs

33
Q

What is the difference between convergent and discriminant validity?

A

Convergent validity asks whether a measure of a construct correlates strongly with other measures of the same or similar constructs, while discriminant validity asks whether it correlates weakly with measures of different constructs. Discriminant validity is especially important for “near neighbours”: constructs that are similar but fundamentally different, such as similar symptoms belonging to different diagnoses.

34
Q

How are reliability and validity related?

A

They are not the same: a measure may be less valid than it is reliable, but it cannot be more valid than it is reliable. Reliability is necessary, but not sufficient, for validity.
