RESEARCH METHODS (YEAR 2) Flashcards

1
Q

Define content analysis

A

Observational study which enables indirect study of behaviour by examining visual, written or verbal communication material (media such as books or TV)
Quantitative, qualitative or both

2
Q

What does the researcher have to decide in sampling method for a content analysis?

A
  • Books - look at every page or just, e.g., every fifth page?
  • Comparing books - does the researcher select books randomly or by certain characteristics (e.g. all biographies or all romantic fiction)?
  • Adverts - time sampling or event sampling?
3
Q

Define coding in content analysis

A

Placing quantitative or qualitative data in categories (behavioural categories)
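To make coding concrete, here is a minimal sketch (hypothetical categories and coded instances, Python standard library only) of how instances placed into behavioural categories can be tallied to give quantitative data:

```python
from collections import Counter

# Hypothetical behavioural categories decided in advance
CATEGORIES = ["physical aggression", "verbal aggression", "prosocial act"]

# Hypothetical record of what was coded across the sampled material
coded_instances = [
    "physical aggression", "prosocial act", "verbal aggression",
    "physical aggression", "prosocial act", "prosocial act",
]

# Quantitative representation: count the instances in each category
tallies = Counter(coded_instances)
for category in CATEGORIES:
    print(f"{category}: {tallies[category]}")
```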

4
Q

How may data from a content analysis be represented?

A
  • Quantitative - count instances of a behaviour
  • Qualitative - describe examples in each category
5
Q

Define thematic analysis

A

A technique used when analysing qualitative data
Themes or categories are identified + data is organised according to these themes

6
Q

Outline what is involved with thematic analysis

A
  • Qualitative data is difficult to summarise —> it is summarised by identifying repeated themes in the material
  1. Read + reread the data, trying to understand the meaning communicated + the perspective of the participants | NO NOTES at this stage
  2. Break the data into meaningful units - small bits of text which can independently convey meaning
  3. Assign a label or code to each unit (INITIAL CATEGORIES) - each unit may be given 1+ code/label
  4. Combine simple codes into larger categories/themes + then instances can be counted or examples provided
  5. CHECK OVER CATEGORIES by collecting new data + applying the categories to it (a minimal coding sketch follows this card)
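A rough illustration of steps 2-4, with made-up units, codes and themes (all names below are hypothetical), showing how simple codes might be combined into larger themes and then counted:

```python
from collections import defaultdict

# Hypothetical meaningful units of text, each already assigned one or more codes (step 3)
coded_units = [
    ("I never know who to ask for help",     ["isolation"]),
    ("My tutor checks in every week",        ["support", "routine"]),
    ("Deadlines all arrive at once",         ["workload"]),
    ("Friends on the course keep me going",  ["support"]),
]

# Hypothetical mapping from simple codes to larger themes (step 4)
code_to_theme = {
    "isolation": "social factors",
    "support":   "social factors",
    "routine":   "structure",
    "workload":  "structure",
}

# Combine codes into themes, keeping example units and a count for each theme
themes = defaultdict(list)
for unit, codes in coded_units:
    for code in codes:
        themes[code_to_theme[code]].append(unit)

for theme, examples in themes.items():
    print(f"{theme}: {len(examples)} instance(s), e.g. {examples[0]!r}")
```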
7
Q

Define case study

A

Research method that involves the in-depth study of a single individual, institution or event

8
Q

Outline what is involved in a case study

A
  • Uses information from a range of sources (e.g. person concerned, family + friends)
  • Interviews, observed while engaging in family life, IQ/personality tests or questionnaires to produce psychological data about the target person or group
  • May use experimental method to test what the target person/group can/can’t do
  • Usually LONGITUDINAL
  • Qualitative - but quantitative is possible (e.g. scores from psychological tests)
  • IDIOGRAPHIC APPROACH
9
Q

Give an example of a case study

A
  • Phineas Gage
  • In 1848, he was working on the construction of an American railway when a tamping iron was driven through his skull
    ^— survived + could function fairly normally
  • HOWEVER affected personality - friends said he was no longer the same man
  • Case was important in development of brain surgery to remove tumours because it showed some parts of the brain could be removed w/o fatal effect
10
Q

Define reliability

A

Consistency of findings from an investigation - the consistency of measurements
Would expect any measurement to produce the same/CONSISTENT data if taken on successive occasions

11
Q

State two ways of assessing reliability

A
  • Test re-test
  • Inter-observer
12
Q

Explain the test re-test method

A

The same test/questionnaire is given to the same person (or people) on one or more later occasions
If results = same/similar: reliable (see the sketch below)
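A minimal sketch of the check, with made-up questionnaire scores, using Python's standard library (statistics.correlation, Python 3.10+). A strong positive correlation between the two occasions (conventionally around +0.80 or above) is taken as evidence of reliability:

```python
from statistics import correlation

# Hypothetical scores from the same 6 participants on two occasions
first_occasion  = [12, 18, 9, 22, 15, 17]
second_occasion = [13, 17, 10, 21, 14, 18]

# Pearson's correlation coefficient between the two administrations
r = correlation(first_occasion, second_occasion)
print(f"test-retest correlation: r = {r:.2f}")

# Conventional rule of thumb: r of about +0.80 or above suggests reliability
print("reliable" if r >= 0.80 else "reliability in doubt")
```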

13
Q

Explain inter-observer reliability

A
  • 2+ observers independently observe + record the same person, then compare their data (see the sketch below)

E.g. Ainsworth’s Strange Situation
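A minimal sketch of one way the comparison might be done, using hypothetical tallies and simple percentage agreement (correlating the two observers' scores is another common approach):

```python
# Hypothetical records: the behavioural category each observer ticked
# for the same 8 observation intervals
observer_a = ["proximity", "crying", "exploring", "crying",
              "proximity", "exploring", "exploring", "crying"]
observer_b = ["proximity", "crying", "exploring", "proximity",
              "proximity", "exploring", "exploring", "crying"]

# Proportion of intervals on which the two observers agree
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
agreement_rate = agreements / len(observer_a)
print(f"inter-observer agreement: {agreement_rate:.0%}")  # 88% for this data
```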

14
Q

State 4 ways of improving reliability

A
  • Questionnaires
  • Interviews
  • Experiments
  • Observations
15
Q

Explain questionnaires in improving reliability

A
  • Questionnaire that produces low test-retest reliability may require some items to be deselected or rewritten
  • Researcher may replace open questions with closed questions which are less ambiguous
16
Q

Explain interviews in improving reliability

A
  • To ensure reliability in an interview, use the same interviewer each time
  • If not, all interviewers must be trained (e.g. to avoid questions that are leading or ambiguous)
17
Q

Explain experiments in improving reliability

A
  • Lab experiments often described as ‘reliable’ due to strict control over many aspects of procedure
    ^— e.g. instructions participants receive and the conditions in which they are tested (STANDARDISATION)
18
Q

Explain observations in improving reliability

A
  • Behavioural categories should be easily measurable + less vague (e.g. ‘pushing’ rather than ‘aggression’) + should not overlap (e.g. ‘hugging’ and ‘cuddling’)
19
Q

Define validity

A

The extent to which an observed effect is genuine

20
Q

State 2 ways of assessing validity

A
  • Face validity
  • Concurrent validity
21
Q

Explain ‘face validity’ in assessing validity

A
  • The extent to which test items look like what the test claims to measure
  • ONLY REQUIRES INTUITION

E.g. whether the questions on a stress questionnaire are obviously related to stress

22
Q

Explain ‘concurrent validity’ in assessing validity

A
  • Means of establishing validity by comparing an existing test or questionnaire with the one you are interested in
    ^— participants given both measures at the same time + then scores are compared

IF SCORES ARE SIMILAR/CLOSELY MATCH: VALID (see the sketch below)
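A minimal sketch of the comparison, with made-up scores and the commonly quoted rule of thumb that the correlation with the established measure should exceed about +0.80:

```python
from statistics import correlation

# Hypothetical scores from participants who completed both the new
# questionnaire and an established one at the same time
new_test         = [30, 42, 25, 38, 33, 47]
established_test = [28, 45, 24, 36, 35, 46]

r = correlation(new_test, established_test)
print(f"concurrent validity: r = {r:.2f}")

# Close agreement (conventionally r above about +0.80) suggests the new
# measure is valid; a weak correlation would suggest it is not
print("valid" if r > 0.80 else "validity in doubt")
```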

23
Q

State 4 ways of improving validity

A
  • Questionnaires
  • Interviews
  • Experiments
  • Observations
24
Q

Explain questionnaires in improving validity

A
  • Lie scales control for effects of social desirability bias - respondents can also be assured that data is confidential

Lie scale: a question is asked, then rephrased and asked again later on
^— different answers indicate a lie, while similar answers indicate the truth (see the sketch below)
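A minimal sketch (hypothetical question pairs and answers) of how responses to a question and its rephrased version might be checked for consistency:

```python
# Hypothetical pairs: answer to the original question vs. the rephrased
# version asked later in the questionnaire (both on the same 1-5 scale)
answer_pairs = [
    (4, 4),   # consistent
    (5, 2),   # large discrepancy - suggests the respondent is not being truthful
    (3, 4),   # small difference - treated as consistent here
]

# Flag respondents whose answers to the two versions differ too much
TOLERANCE = 1  # assumed allowance for minor wording effects
for original, rephrased in answer_pairs:
    consistent = abs(original - rephrased) <= TOLERANCE
    print(f"{original} vs {rephrased}: {'truthful' if consistent else 'possible lie'}")
```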

25
Q

Explain interviews in improving validity

A
  • Ensure interviewer builds a rapport w/ interviewee so they are confident to answer honestly
26
Q

Explain experiments in improving validity

A
  • Control group means researchers are more confident that changes in the DV are due to manipulating the IV
    ^— minimises investigator effects
27
Q

Explain observations in improving validity

A
  • Behavioural categories should be well defined, thoroughly operationalised + not ambiguous or overlapping
28
Q

Define ecological validity

A

The ability to generalise a research effect beyond the particular setting in which it is demonstrated to other settings

29
Q

Define mundane realism

A

Refers to how a study mirrors the real world
^— the research environment is realistic to the degree that the experiences encountered in it would also occur in the real world

30
Q

Define temporal validity

A
  • The ability to generalise a research effect beyond the particular time period of the study
31
Q

What is internal validity?

A
  • Concerns what occurs within a study

E.g. demand characteristics, investigator effects, confounding variables, social desirability bias, poorly operationalised behavioural categories

32
Q

What is external validity?

A
  • Concerns factors outside the study - whether findings can be generalised to other settings and time periods

E.g. temporal + ecological validity

33
Q

Explain why temporal validity may be a problem in research

A
  • Research findings can become outdated + only generalisable to the time period of the study
    ^— when this happens, the study lacks temporal validity
34
Q

What are the two types of test when choosing a statistical test?

A
  • Test of difference
  • Test of association or correlation
35
Q

How is a test of difference identified when choosing a statistical test?

A
  • The study is an experiment
  • Words used: ‘experiment’, ‘difference’
  • Groups sorted into ‘group a’ and ‘group b’
36
Q

How is a test of association or correlation identified when choosing a statistical test?

A
  • It is stated to be a ‘correlation’
  • Words used: ‘correlation’, ‘association’, ‘relationship’, ‘co-variables’
37
Q

What are the two types of design when choosing a statistical test?

A
  • Unrelated design
  • Related design
38
Q

How is an unrelated design identified when choosing a statistical test?

A
  • The experimental design used in the experiment is ‘independent groups’
39
Q

How is a related design identified when choosing a statistical test?

A
  • The experimental design used in the experiment is ‘repeated measures’ or ‘matched pairs’
40
Q

What are the three types of data when choosing a statistical test?

A
  • Nominal data
  • Ordinal data
  • Interval data
41
Q

How is nominal data identified when choosing a statistical test?

A
  • Data gathered in the way of categories
    ^— e.g. counting no. of boys and girls in the year (‘male’ and ‘female’ categories)
42
Q

How is ordinal data identified when choosing a statistical test?

A
  • Data gathered through a scale (e.g. from questionnaire)
    ^— e.g. asking everyone in a class how much they like psychology on a scale of 1-10
43
Q

How is interval data identified when choosing a statistical test?

A
  • Data measured in standard scientific units with equal intervals
    ^— e.g. kg, seconds, height in cm (see the chooser sketch below)
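Putting cards 34-43 together, a minimal sketch of the choice, assuming the standard table of tests used alongside these criteria (Chi-squared, Sign test, Mann-Whitney, Wilcoxon, unrelated/related t-test, Spearman's rho, Pearson's r):

```python
def choose_statistical_test(purpose: str, design: str, data: str) -> str:
    """Pick a test from (difference | correlation), (related | unrelated),
    (nominal | ordinal | interval), following the standard choice table."""
    if purpose == "correlation":
        # Design is not relevant here: co-variables come from the same participants
        return {"nominal": "Chi-squared",
                "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[data]
    if design == "unrelated":          # independent groups
        return {"nominal": "Chi-squared",
                "ordinal": "Mann-Whitney",
                "interval": "Unrelated t-test"}[data]
    # Related design: repeated measures or matched pairs
    return {"nominal": "Sign test",
            "ordinal": "Wilcoxon",
            "interval": "Related t-test"}[data]

# Example: an experiment using repeated measures with rating-scale (ordinal) data
print(choose_statistical_test("difference", "related", "ordinal"))  # Wilcoxon
```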
44
Q

Give a method of determining if the calculated value needs to be higher or lower than the critical value for each statistical test

A

Rule of R
- Test name contains an R = calculated value must be GREATER than or equal to the critical value for significance
- Test name doesn’t contain an R = calculated value must be LESS than or equal to the critical value (see the sketch below)
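A minimal sketch of applying the rule, treating "contains an R" as "the test's name contains the letter r" (Spearman's rho, Pearson's r, the related and unrelated t-tests, Chi-squared), so the Sign test, Mann-Whitney and Wilcoxon go the other way:

```python
def is_significant(test_name: str, calculated: float, critical: float) -> bool:
    """Rule of R: if the test name contains the letter 'r', the calculated
    value must be greater than or equal to the critical value; otherwise
    it must be less than or equal to the critical value."""
    if "r" in test_name.lower():
        return calculated >= critical
    return calculated <= critical

print(is_significant("Spearman's rho", calculated=0.72, critical=0.65))  # True
print(is_significant("Mann-Whitney",   calculated=30,   critical=23))    # False
```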

45
Q

What is a Type I error?

A
  • A ‘false positive’
  • When the alternative hypothesis is mistakenly accepted rather than the null hypothesis
46
Q

What is a Type II error?

A
  • A ‘false negative’
  • When the null hypothesis is mistakenly accepted rather than the alternative hypothesis (both error types are shown in the sketch below)
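To keep the two errors straight, a minimal sketch pairing the decision made with the true state of affairs (the function and argument names are illustrative only):

```python
def classify_outcome(null_is_actually_true: bool, null_was_rejected: bool) -> str:
    """Label the outcome of a significance decision."""
    if null_was_rejected and null_is_actually_true:
        return "Type I error (false positive): alternative hypothesis accepted wrongly"
    if not null_was_rejected and not null_is_actually_true:
        return "Type II error (false negative): null hypothesis accepted wrongly"
    return "correct decision"

print(classify_outcome(null_is_actually_true=True,  null_was_rejected=True))
print(classify_outcome(null_is_actually_true=False, null_was_rejected=False))
```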