RM A2: L1-6 Flashcards

1
Q

What is a content analysis?

A
  • research method used to study and analyse the content of communication, like text, images and media
  • goal is to understand patterns, themes or messages within the content
2
Q

How is content analysis carried out on large pieces of data?

A
  • use of a coding system of predetermined categories that can be applied to the content
  • a pilot study is often used to test the categories to ensure they are separate and do not overlap
  • coding could be counting the number of times a word/behaviour appears (see the sketch below)
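A minimal sketch of coding as counting, not taken from the cards themselves: the text, category names and keyword lists below are invented for illustration, and the word matching is deliberately crude.

```python
# Minimal sketch: tally how often each predetermined category's
# keywords appear in a piece of text (invented text and categories).

text = "the child smiled, then shouted, smiled again and laughed"

categories = {
    "positive affect": ["smiled", "laughed"],
    "aggression": ["shouted", "hit"],
}

tallies = {name: 0 for name in categories}
for word in text.replace(",", "").split():
    for name, keywords in categories.items():
        if word in keywords:
            tallies[name] += 1

print(tallies)  # {'positive affect': 3, 'aggression': 1}
```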
3
Q

What is thematic analysis?

A
  • more focused form of content analysis working with qualitative data
  • used to identify, analyse and interpret key themes or patterns in the data
4
Q

Content analysis +ve:

A
  • produces reliable data
  • if it were repeated in the future, results would be similar/consistent
    = produces quantitative data
    = allows trends and patterns in the data to be identified
  • less time consuming than other research methods, like interviews, when collecting data
    = strong external validity as the data already exist in the real world, so high mundane realism
  • ethical issues like confidentiality are avoided as the data is already in the public domain
5
Q

Content analysis -ve:

A
  • not very scientific or objective
  • can be subjective based on the themes used
    = can be invalid; are the themes really measuring the effect of the IV on the DV?
  • data collected needs to be contextualised
    = adds complexity and subjectivity
    = e.g. sleep behaviour in a lab is a different context to sleep at home
  • possible observer bias, but this can be reduced by checking inter-rater reliability
  • possible interpretative bias, as the researcher may pay extra attention to certain things while ignoring others
6
Q

What are the stages of a content analysis?

A
  1. sampling
    - decide how behaviour/material should be sampled
    - time or event sampling?
  2. record data
    - e.g. record in a table or on video
  3. analyse/categorise data
    - summarise data quantitatively or qualitatively?
  4. tally up amounts
7
Q

What is a case study?

A
  • detailed study into the life of a person
  • covers their background in great detail
  • looks at past and present behaviour of the individual to build a case history
  • provides qualitative data
  • usually focuses on a small number of people, as only a few people tend to show a rare behaviour
  • aims to be scientific in its approach
8
Q

What is a longitudinal study?

A
  • when the case study takes place over a long period of time
  • person/group is tracked over a period of time to look for changes that might occur
9
Q

Examples of case studies:

A
  • case study of HM from memory
  • Little Hans for psychodynamic approach
10
Q

Case studies +ve:

A
  • detailed, so able to gain in-depth insight
    = forms a basis for future research
  • studying unusual behaviours lets us infer things about typical human behaviour
    = allows the study of situations that would be unethical/difficult to investigate directly
11
Q

Case studies -ve:

A
  • not generalisable to wider population as data gathered from small group
    = prone to various biases, e.g. social desirability bias from the unique individual and interpretive bias from the researcher
  • retrospective studies may rely on memory which could be inaccurate
    = time consuming and difficult to replicate
12
Q

What is reliability?

A
  • how consistent the findings from an investigation are
13
Q

What is internal reliability?

A
  • describes how consistent the test is within itself
  • whether the different parts of a test or study consistently assess the same thing
  • measuring instrument gives the same results on different occasions
14
Q

How is internal reliability assessed?

A

split half method
- randomly select half of the questions and put them in one form
- put the rest in another
- both halves of the test should be completed separately but produce similar results
- scores on the two halves are correlated; coefficient should be ≥0.8 (see the sketch below)
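A minimal sketch of how the split-half coefficient might be computed; the six-item test, participant scores and random split below are all invented for illustration.

```python
# Minimal sketch of the split-half method. Each row of `scores` is
# one participant's answers to a six-item test (invented data).
import random
from statistics import correlation  # Python 3.10+

scores = [
    [4, 5, 3, 4, 5, 4],
    [2, 1, 2, 2, 1, 3],
    [5, 4, 5, 5, 4, 4],
    [3, 3, 2, 3, 3, 2],
]

# Randomly split the six items into two halves.
items = list(range(6))
random.shuffle(items)
half_a, half_b = items[:3], items[3:]

# Total each participant's score on each half, then correlate.
totals_a = [sum(p[i] for i in half_a) for p in scores]
totals_b = [sum(p[i] for i in half_b) for p in scores]

r = correlation(totals_a, totals_b)
print(f"split-half r = {r:.2f}")  # internally reliable if r >= 0.8
```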

15
Q

What is intra researcher reliability?

A
  • examines the consistency of the individual researcher's behaviour during research
  • achieved if the researcher behaves consistently during research
16
Q

What is external reliability?

A
  • when consistent results are produced regardless of when the investigation is done or who it is done by and with
  • findings should be consistent over time or with different groups
17
Q

How is external reliability assessed?

A
  • test retest method
  • inter observer/rater reliability
  • pilot study
18
Q

How is reliability improved?

A
  • inter observer/researcher reliability
  • adjusting interview questions, where interviews are used
  • standardisation of instructions
  • researcher training
  • rigorous operationalisation, so concepts are less open to interpretation
19
Q

What is the test retest method?

A
  • researcher administers same test on same person on different occasions
  • results should have correlation coefficient of ≥0.8
  • sufficient time between test and retest so participants cannot recall their answers
  • but not too long, as their attitudes may change (see the sketch below)
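A minimal sketch of the test-retest check using Pearson's correlation; the scores below are invented for illustration.

```python
# Minimal sketch of the test-retest method: the same participants'
# scores on two occasions, correlated with Pearson's r (invented data).
from statistics import correlation  # Python 3.10+

test   = [12, 18, 9, 15, 20, 11]
retest = [13, 17, 10, 14, 19, 12]

r = correlation(test, retest)
print(f"test-retest r = {r:.2f}")  # reliable if r >= 0.8
```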
20
Q

What is inter observer reliability?

A
  • extent to which there is agreement between 2 or more observers involved in observing the study
  • reduces subjectivity bias
  • may be checked in a pilot study or at the end of the study (see the sketch below)
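A minimal sketch of one simple way to check inter-observer reliability, percentage agreement between two observers' codes for the same intervals; the codes are invented, and correlating the observers' tallies (coefficient ≥0.8) is another common approach.

```python
# Minimal sketch: percentage agreement between two observers who
# coded the same six observation intervals (invented codes).
observer_1 = ["play", "aggression", "play", "idle", "play", "idle"]
observer_2 = ["play", "aggression", "idle", "idle", "play", "idle"]

agreements = sum(a == b for a, b in zip(observer_1, observer_2))
percent = 100 * agreements / len(observer_1)
print(f"agreement = {percent:.0f}%")  # 5/6 intervals agree, ~83%
```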
21
Q

What is a pilot study?

A
  • conducting small trial run of the study before main research
  • ensures procedures and resources are of good standard
  • helps minimise human error
22
Q

How can reliability be improved in self-reports (questionnaires)?

A
  • if test-retest or inter-rater reliability is low, items of the survey can be deselected or rewritten
  • replace open questions, which are open to misinterpretation, with closed questions that are less ambiguous and clearer
23
Q

How can reliability be improved in self-reports (interviews)?

A
  • best way to ensure reliability is to use same interviewer each time
  • if not then all interviewers must be properly trained
  • all interviews should be structured in a certain manner
  • structured interviews so interviewer behaviour more controlled by fixed questions
    = if more than one interviewer is used, check inter-researcher reliability
24
Q

How can reliability be improved in observations?

A
  • making sure the behavioural categories used have been properly operationalised, so they are measurable and self-evident
  • categories should not overlap
  • if not, this can lead to inconsistent records
    = check inter-observer reliability
  • observers may need further training
25
Q

How can reliability be improved in experiments?

A
  • standardisation of instructions
  • procedures should be identical for each participant
26
Q

What is validity?

A
  • refers to the extent to which the results of a research study are legitimate, i.e. genuinely measure what they claim to measure
27
Q

What is internal validity?

A
  • whether the outcomes observed in an experiment are due to the manipulation of the IV and not another factor
28
Q

What is internal validity affected by?

A
  • investigator effects
  • demand characteristics
  • participant variables
  • confounding variables
  • social desirability bias
  • lack of operationalisation
29
Q

How is internal validity assessed?

A
  • concurrent validity
  • face validity
30
Q

What is concurrent validity?

A
  • extent to which a psychological measure compares to a similar existing measure
  • results obtained should match or be similar to the results of the established version of this test
31
Q

What is face validity?

A
  • when a measure is scrutinised to determine whether it appears to measure what it is supposed to
  • can be done by simply looking at it or by passing it to an expert to check
32
Q

How to improve internal validity?

A
  • reduce investigator effects, e.g. use 2 researchers
  • reduce demand characteristics, e.g. double/single blind procedure
  • tackle confounding variables, e.g. pilot study, test-retest
33
Q

What is external validity?

A
  • relates to factors outside the investigation
  • how well the results gained from the research can be generalised to other settings, people and times
34
Q

What are the types of external validity?

A
  • ecological validity
  • temporal validity
  • population validity
35
Q

What is ecological validity?

A
  • extent to which findings can be generalised to other situations and settings
36
Q

What is temporal validity?

A
  • findings are true over a period of time
  • generalisability to other historical times and eras
37
Q

What is population validity?

A
  • generalisability to different populations of various ages, genders and cultures etc.
38
Q

How can external validity be assessed?

A
  • meta-analysis: comparing findings from different studies testing the same hypothesis
  • consider the environment of the test; a lab is not a natural setting
  • assess how the DV was measured; the method of measurement and the task given can have effects
  • assess whether participants were acting naturally; ensure demand characteristics are kept to a minimum
39
Q

How to improve external validity?

A
  • reduce demand characteristics, e.g. double/single blind procedure
  • more natural setting like field experiment rather than lab
40
Q

How is validity improved in experimental research?

A
  • control group, ensure IV is affecting DV
  • standardise procedures
  • single/double blind
41
Q

How is validity improved in questionnaires?

A
  • incorporate lie scale to diminish social desirability bias
  • ensure anonymity so participants are not wary or inclined to be dishonest
  • avoiding leading questions
  • closed, direct questions to reduce ambiguity
42
Q

How is validity improved in observations?

A
  • high ecological validity by minimal intervention by the researcher
  • covert observations by observer
  • precise behavioural categories, no overlapping or ambiguity
43
Q

How is validity improved in qualitative methods?

A
  • interpretative validity: the extent to which the researcher's interpretation matches those of their participants
    = improved through coherence between researcher reports and direct quotes from participants
  • triangulation: use of different sources of evidence
44
Q

What is a paradigm?

A
  • a set of shared ideas and assumptions within a scientific discipline
  • model or set of rules that people follow because they believe it works well