LESSON 3 Flashcards

1
Q

what are random errors?

A

Random errors – chance fluctuations in our measurements. They obscure results

2
Q

what are constant/systematic errors?

A

Constant/systematic errors – a bias which influences measurements consistently. They create bias in results.
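A minimal Python sketch (an illustration, not part of the original cards) contrasting the two error types with simulated measurements of a known true value; the noise level, the +5 bias and the sample size are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)
true_value = 100.0

# Random error: zero-mean noise that fluctuates from measurement to measurement.
random_only = true_value + rng.normal(loc=0.0, scale=5.0, size=1000)

# Systematic error: a constant bias (here +5) added to every measurement.
biased = true_value + 5.0 + rng.normal(loc=0.0, scale=5.0, size=1000)

# Averaging many measurements cancels the random error but not the constant bias.
print(f"random error only:    mean = {random_only.mean():.2f}")  # ~100, results obscured but unbiased
print(f"with systematic bias: mean = {biased.mean():.2f}")       # ~105, results biased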

3
Q

what are the main 5 threats to internal validity?

A

selection, history, maturation, instrumentation, reactivity

4
Q

IV threats: what is selection?

A

Bias from the selection/assignment of ppt to the different conditions of the IV
Ppt assigned to different levels of the IV may differ systematically from other ppt, and these differences (rather than the IV) may systematically influence the measurement of the DV
This is a key issue for quasi-experiments

5
Q

IV threats: what is history?

A

Uncontrolled events that occur between testing occasions which may influence the DV (other than the IV)

6
Q

IV threats: what’s maturation?

A

Changes in the characteristics of ppt between the test occasions, e.g. ppt getting older between conditions (i.e. in a longitudinal study), or, when studying memory, ppt memory deteriorating between testing occasions

7
Q

IV threats: what is instrumentation?

A

Changes in sensitivity/reliability of measuring instruments during the study

8
Q

IV threats: what is reactivity?

A

Ppt awareness that they are being observed may influence their behaviour
(Demand characteristics – when ppt believe they are expected to act in a certain way; experimenter bias – when experimenters expect to see something and this influences their behaviour)
How to counter reactivity – single-/double-blind procedures

9
Q

what is the difference between reliability and validity?

A

Reliability = consistency
 Tested by repeating measurements/the study
Validity = truthfulness
 Tested by operationalising variables and using controlled experiments

10
Q

what are the main 4 ways researchers measure reliability?

A

test-retest reliability, inter-rater reliability, parallel forms reliability, internal consistency
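A minimal Python sketch (an illustration, not part of the original cards) of how the first two of these might be quantified with a Pearson correlation; all scores below are made up:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 8 ppt on two testing occasions (test-retest reliability).
time1 = np.array([12, 15, 9, 20, 14, 18, 11, 16])
time2 = np.array([13, 14, 10, 19, 15, 17, 12, 15])
r_retest, _ = pearsonr(time1, time2)
print(f"test-retest reliability: r = {r_retest:.2f}")

# Hypothetical ratings of the same 8 ppt by two observers (inter-rater reliability).
rater_a = np.array([3, 5, 2, 4, 4, 5, 3, 2])
rater_b = np.array([3, 4, 2, 5, 4, 5, 3, 3])
r_raters, _ = pearsonr(rater_a, rater_b)
print(f"inter-rater reliability: r = {r_raters:.2f}")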

11
Q

measuring reliability: what are parallel forms reliability measurements?

A

If we administer different versions of our measure (e.g. different types of IQ test) to the same ppt, would we get the same results?
Different versions can be useful to help eliminate memory effects as the questions are different

12
Q

measuring reliability: what is internal consistency?

A

Determines whether all items/questions (e.g. in a questionnaire) are measuring the same thing/construct
This can be assessed through split-half reliability (the questionnaire items are split into two halves, e.g. even questions vs odd questions, and ppt's scores on the two halves are compared – the halves should produce similar results)
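A minimal Python sketch (an illustration, not part of the original cards) of a split-half check on a hypothetical 6-item questionnaire, splitting odd vs even items and applying the Spearman-Brown correction for the shortened test length:

import numpy as np

# Rows = ppt, columns = questionnaire items (made-up Likert-style responses).
items = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 5, 4],
    [1, 2, 1, 2, 1, 1],
    [4, 4, 3, 4, 4, 4],
])

odd_half = items[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_half = items[:, 1::2].sum(axis=1)  # items 2, 4, 6

r_half = np.corrcoef(odd_half, even_half)[0, 1]
spearman_brown = 2 * r_half / (1 + r_half)  # corrects for halving the test
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")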

13
Q

what are the 4 types of validity?

A

face validity, content validity, criterion validity, construct validity

14
Q

what is face validity?

A

Is the test measuring what it is supposed to measure at face value?
e.g. do the questions on a test reflect the knowledge ppt should have learnt?

15
Q

what is content validity?

A

Does it measure the construct fully?
e.g. does the test cover all expected knowledge and not just part of it?

16
Q

what is criterion validity?

A

Does the measure give results which are in agreement with other measures of the same thing?
HOW TO MEASURE THIS:
Concurrent validity – comparison of the new test with an established test
Predictive validity – does the test predict outcomes on another variable?

17
Q

what is construct validity?

A

Is the construct we are trying to measure valid, and does it actually exist?
The validity of a construct is supported by accumulated research over time
HOW TO MEASURE THIS:
Convergent validity – the measure should correlate with tests of the same and related constructs (e.g. satisfaction and contentment measures should relate to measures of happiness)
Discriminant validity – the measure shouldn't correlate with tests of different/unrelated constructs (e.g. measurements of sadness shouldn't correlate with measures of happiness)
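A minimal Python sketch (an illustration, not part of the original cards) of convergent and discriminant checks via correlations; the happiness and satisfaction scores are simulated, and the unrelated "shoe size" variable is an arbitrary stand-in for an unrelated construct:

import numpy as np

rng = np.random.default_rng(1)
n = 200

happiness = rng.normal(size=n)
# Related construct: simulated so that satisfaction overlaps strongly with happiness.
satisfaction = 0.8 * happiness + 0.6 * rng.normal(size=n)
# Unrelated construct: generated independently of happiness.
shoe_size = rng.normal(size=n)

r_convergent = np.corrcoef(happiness, satisfaction)[0, 1]
r_discriminant = np.corrcoef(happiness, shoe_size)[0, 1]
print(f"convergent validity (should be high): r = {r_convergent:.2f}")
print(f"discriminant validity (should be ~0): r = {r_discriminant:.2f}")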

18
Q

what is the difference between something being ‘sufficient’ and something being ‘necessary’? (true causation)

A
  • Sufficiency and necessity – criteria a variable must meet in order to make claims about causality
  • Sufficient: y is adequate to cause x
  • Necessary: y must be present to cause x
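A toy Python sketch (an illustration, not part of the original cards) of how these definitions could be checked against hypothetical observations, keeping the card's naming where y is the candidate cause and x the effect:

# Each record is (y_present, x_occurred) for one hypothetical observation.
observations = [
    (True, True),
    (True, True),
    (False, False),
    (True, False),   # y present but x did not occur -> y is not sufficient
]

# Sufficient: whenever y is present, x occurs.
sufficient = all(x for y, x in observations if y)
# Necessary: x never occurs unless y is present.
necessary = all(y for y, x in observations if x)

print(f"y sufficient for x: {sufficient}")  # False for this toy data
print(f"y necessary for x:  {necessary}")   # True for this toy data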
19
Q

for true causation to be established the variable must be…

A

both necessary and sufficient to cause a change

20
Q

what is cluster sampling?

A

Researcher samples an entire group/cluster from the population of interest
- Often done for practical/efficiency reasons
- Generalisability issues, as the cluster may not accurately represent the entire population

21
Q

what is snowball sampling?

A

Recruit a small number of ppt and then use those initial contacts to recruit further ppt
Biased sample (you tend to get a certain kind of person), but it is useful when looking for specific or difficult-to-access populations

22
Q

what are the two main concerns for external validity?

A
  • POPULATION VALIDITY – is the sample representative?
  • ECOLOGICAL VALIDITY – does the behaviour measured reflect naturally occurring behaviour?
23
Q

what are factors that must be considered when deciding the sample size?

A

Design (between or within design, number of conditions – more conditions = more ppt needed)
Response rate (ppt may drop out, not all may contribute)
Heterogeneity of population (is a small sample size enough, or does it need to be larger in order to be representative?)
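A minimal Python sketch (an illustration, not part of the original cards) of how these factors might feed into a sample-size decision, using statsmodels' power analysis for a two-condition between-subjects design; the effect size, alpha, power and drop-out rate are assumed values:

from statsmodels.stats.power import TTestIndPower

expected_effect = 0.5   # assumed medium effect size (Cohen's d)
alpha = 0.05            # conventional significance level
power = 0.80            # conventional target power

# Solve for the number of ppt needed per condition (between-subjects t-test).
n_per_group = TTestIndPower().solve_power(effect_size=expected_effect, alpha=alpha, power=power)

dropout_rate = 0.15     # assumed 15% of ppt drop out / fail to contribute
n_recruit = n_per_group / (1 - dropout_rate)
print(f"needed per condition: {n_per_group:.0f}; recruit ~{n_recruit:.0f} to allow for drop-out")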

24
Q

what problems are associated with repeated measures designs relative to independent group designs?

A

There is an increased likelihood of fatigue effects – ppt may become fatigued by the second occasion of participation
There is an increased likelihood of reactivity – ppt are more likely to gain awareness by taking part in two conditions

25
Q

give an example of conceptual replication

A

Using previously reported methods, collecting a new dataset but recruiting a younger sample, and generating an operationalised hypothesis to perform empirical tests on

26
Q

how can a researcher ensure that observers' ratings have stayed consistent over time?

A

test-retest reliability

27
Q

factorial designs always:
- contain at least…
- the IVs always have at least…

A

contain at least 2 IVs
the IVs always have at least 2 levels

28
Q

what is an operational definition?

A

A variable which has been operationalised by a researcher in order to measure it, e.g. frustration measured by bite marks on a pencil

29
Q

what is the principle of induction?

A

Causation - if A is often observed with B it is probable that on the next occasion A is observed, B will be too

30
Q

what is a key issue with the principle of induction?

A

we can't be certain that we have considered every single instance of a phenomenon, or the full range of possible conditions, which we would need in order to rely on induction

31
Q

what is the difference between replication and reproduction in psychological testing?

A

Replication - obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data
Reproduction - obtaining consistent results using the same input data, computational steps, methods, code, and conditions of analysis

32
Q

what is the difference between direct and conceptual replications in psychological testing?

A

Direct replications - attempt to confirm the original findings using the same methods
Conceptual replications - attempt to confirm the original theoretical ideas by repeating the study across different conditions