Paper 2 - Topic 3: Research Methods Flashcards

1
Q

define an aim

A

a general statement of what the researcher wants to investigate, and the purpose of it

e.g. to investigate whether … has an effect on …

2
Q

define a hypothesis

A

a predictive statement that states the relationship between the variables being studied

e.g. there will be a difference between…

3
Q

define operationalisation

A

clearly defining variables in a way that they can be easily measured

4
Q

define extraneous variable

A

a nuisance variable that does not vary systematically with the IV

  • random error that doesn’t affect everyone in the same way
  • makes it harder to detect an effect, as it ‘muddies’ the results
5
Q

define a confounding variable

A

a form of extraneous variable that varies systematically with the IV, so it impacts the entire data set

  • may confound all results, as its influence (rather than the IV) may explain the change in the DV
6
Q

what are the experimental methods

A

aka types of experiment

lab
field
natural
quasi

7
Q

define a quasi experiment

A

IV is a naturally occurring biological difference (an existing difference between people), not manipulated

  • there can be no change in the IV
  • measures the effect of the naturally occurring IV on the DV
  • can be field or lab
8
Q

define a natural experiment

A

IV is a naturally occurring event, not manipulated

  • someone or something caused the IV to vary (not the researcher)
  • measures the effect of the naturally occurring IV on the DV
  • can be field or lab
9
Q

define a field experiment

A

carried out in a natural setting

- IV manipulation is under less control

10
Q

define a lab experiment

A
  • carried out under controlled conditions

- researcher manipulates IV to see effect on the DV

11
Q

less common drawbacks of natural experiment

A
  • cause and effect between the IV and DV is harder to establish, as manipulation of the IV is under less control
  • ethical issues - Ps can’t consent, and their privacy is invaded
12
Q

less common drawback of quasi and natural experiments

A
  • P’s can’t be randomly allocated to conditions
  • —> so they may experience confounding variables
    (e.g. everyone who has been in a car crash may have a higher trauma level than the control group)
  • the IV isn’t deliberately changed, so we can’t say that the IV caused the observed change in the DV
13
Q

define standardisation

A
  • keeping procedures in a research study the same
  • all participants treated the same - (so they have the same experience)
  • makes the study replicable (easy to repeat accurately)
  • removes experimenter bias
14
Q

define counterbalancing

A

half of the Ps do the first condition followed by the second, and the other half do the second condition first and the first condition second
- controls for order effects
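
A minimal sketch in Python (hypothetical participant IDs, not from the source) of how the two condition orders could be assigned:

```python
# Hypothetical sample of participants already recruited for the study.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

half = len(participants) // 2
orders = {}
for p in participants[:half]:
    orders[p] = ["condition A", "condition B"]   # first half: A then B
for p in participants[half:]:
    orders[p] = ["condition B", "condition A"]   # second half: B then A

print(orders)   # order effects should now balance out across the sample
```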

15
Q

define random allocation

A

each participant has an equal chance of being in each group/condition

  • controls for participant variables
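
A minimal sketch in Python (hypothetical IDs; uses the standard `random` module) of giving every participant an equal chance of ending up in either condition:

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]   # hypothetical participant IDs
random.shuffle(participants)                          # chance alone decides the ordering

half = len(participants) // 2
groups = {"experimental": participants[:half],
          "control": participants[half:]}
print(groups)   # each P had an equal chance of being in each group
```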
16
Q

define participant variables

A

individual characteristics that may influence how a participant behaves

17
Q

randomisation

A
  • use of chance wherever possible to reduce bias or experimenter influence (conscious or unconscious)
18
Q

what variables do double blind and single blind procedures control

A

double blind: demand characteristics and experimenter bias

single blind: demand characteristics

19
Q

recall the 8 features of science and the mnemonic

A

PROPH(F)ET

  • paradigms
  • replicability
  • objectivity
  • paradigm shift
  • hypothesis testing
  • falsifiability
  • empirical method
  • theory construction
20
Q

define objectivity

A

the ability to keep a critical distance from one’s own thoughts and biases

  • lab studies, with the most control, tend to be the most objective
  • forms the basis of the empirical method
21
Q

define empirical method

A

scientific process of gathering evidence through direct observation and experience

22
Q

define falsifiability

give example of an unfalsifiable theory

A

theories admit the possibility of being proven false, through research studies

  • despite not being “proven”, the strongest theories have survived attempts to falsify them
  • Popper suggested the key scientific criterion is falsifiability
    e.g. Freud’s Oedipus complex is unfalsifiable
23
Q

define replicability

what does it help assess

example of study

A

extent to which the research procedures can be repeated in the exact same way, generating the same findings

  • helps assess validity: repeating the study across different cultures and situations shows the extent to which findings can be generalised
  • Ainsworth’s Strange Situation- lab, instructions, behavioural categories
24
Q

define a theory

  • describe how a theory is constructed
A
  • a set of general laws that explain certain behaviours
  • this will be constructed based on systematic gathering of evidence through empirical method, and can be strengthened by scientific hypothesis testing
25
Q

define hypothesis testing

A

statements, derived from scientific theories, that can be tested systematically and objectively

  • the only way a theory can be falsified (using the null hypothesis)
26
Q

define a paradigm

A

a paradigm is a set of shared beliefs and assumptions in science

  • psychology lacks a universally accepted paradigm
27
Q

define a paradigm shift

A

a scientific revolution occurs, as a result of contradictory research that questions the established paradigm

  • other researchers start to question the paradigm, and eventually there is too much evidence against it to ignore, leading to a new paradigm
28
Q

define deduction

A

process of deriving new hypotheses from an existing theory

29
Q

define a case study

features of typical case study

A

a detailed, in-depth investigation and analysis of an individual, group or event

  • qualitative data
  • longitudinal
  • gathers data from multiple sources (e.g. also from friends and family of the individual)
30
Q

pros and cons of case study

A

pros
• rich, in-depth data
• can contribute to understanding of typical functioning (research on HM revealed the two separate LTM & STM stores)
• a contradictory case can generate hypotheses for further nomothetic research (whole theories may be revised)

cons
• such cases rarely occur, so findings are hard to generalise
• ethical issues (e.g. patient HM had to consent to being questioned repeatedly over 10 years, as he didn’t remember the sessions)
• the researcher interprets the qualitative data and selects which data to use (bias)
—> also, data from family and friends may be affected by memory decay

31
Q

define content analysis

and the aim

A

a type of observational research, where P’s behaviour is indirectly studied using communications they’ve produced

the aim is to systematically summarise the Ps’ communication by splitting it into coding units, so conclusions can be drawn

  • usually converts qualitative data to quantitative data
  • communications (e.g. texts, emails, TV, film)
32
Q

describe the steps of content analysis

A
  • gather and observe/read through the communication
  • the researcher identifies coding units (similar to behavioural categories)
  • the communication is analysed by applying the coding units to the text, and the number of times the coding unit appears is counted
  • data is then summarised quantitatively and so conclusions can be drawn
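
A minimal sketch in Python (hypothetical coding units and text, not from the source) of the counting step:

```python
# Hypothetical coding units identified by the researcher, applied to one communication.
coding_units = ["happy", "sad", "angry"]
communication = "I was happy at first, then sad, then happy again."

# Count how many times each coding unit appears in the text (case-insensitive).
counts = {unit: communication.lower().count(unit) for unit in coding_units}
print(counts)   # {'happy': 2, 'sad': 1, 'angry': 0}
```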
33
Q

define thematic analysis

A

a form of content analysis that uses a qualitative method of analysing the data: emergent themes are identified within the communication in order to summarise it

34
Q

describe steps of thematic analysis

A
  • form of content analysis but the summary is qualitative
  • identify emergent themes (recurring ideas) from the communication
  • more descriptive than coding units (e.g. ‘stereotyping’ is a theme; ‘a woman being told to go to the kitchen’ is a coding unit)
  • these themes may be further developed into broader categories
  • a new set of communications is then analysed to check whether they fit the themes
35
Q

pros and cons of content analysis

A

pros
• material is often public so don’t need consent
• flexible as can produce both quantitative and qualitative data

cons
• Ps are indirectly studied, so the communications they produce are analysed out of the context in which they occurred
• content analysis can suffer from a lack of objectivity, as researchers interpret the communication themselves

36
Q

acronym to remember the second column (related column) in the table for choosing statistical tests

A

S - sign test
W - Wilcoxon
R - related t-test

37
Q

hint to remember all of the first column (unrelated data) from the table for choosing inferential tests for significance

A

all have U in them

chi sqUare
mann whitney U
Unrelated t

38
Q

the three factors affecting which inferential test to use

A
  • data? (level of measurement)
  • difference? (testing for a difference or a correlation)
  • design (independent groups or matched pairs/ repeated measures —> unrelated or related)
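
A minimal sketch in Python (my own illustrative encoding of the standard choosing-a-test table) showing how the three questions pick out a test:

```python
def choose_test(level, testing, related):
    """level: 'nominal' | 'ordinal' | 'interval'
    testing: 'difference' | 'correlation'
    related: True for repeated measures / matched pairs, False for independent groups."""
    if testing == "correlation":
        return {"nominal": "chi-square", "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[level]
    if related:
        return {"nominal": "sign test", "ordinal": "Wilcoxon",
                "interval": "related t-test"}[level]
    return {"nominal": "chi-square", "ordinal": "Mann-Whitney U",
            "interval": "unrelated t-test"}[level]

print(choose_test("ordinal", "difference", related=True))    # Wilcoxon
print(choose_test("nominal", "difference", related=False))   # chi-square
```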
39
Q

define a parametric test

A

a more robust test, that may be able to identify significance that other tests can’t

MUST BE…

  • interval data
  • P’s must be drawn from a normally distributed population
  • the variance between P’s in each group must be similar
40
Q

observed/ calculated value

A

is the value that is produced by the statistical test

41
Q

critical value

A

value that is gathered from the calculations table for the specific test

  • the cut off point between accepting and rejecting the null hypothesis
42
Q

how do you know whether the observed value should be ≥ or ≤ the critical value, for test to be significant

A

“gReater rule”

if test has an R in it, the observed/ calculated value should be GREATER than or equal to the critical value

e.g. unRelated t
- Related t
- chi- squaRe
- peaRsons R
- spearman’s Rho

all should have an observed value ≥ critical value to be significant

(sign test, wilcoxon and mann whitney u must have observed value ≤ critical value, to be significant)
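
A minimal sketch in Python (illustrative values only) of applying the rule:

```python
# Tests with an R in their name need observed >= critical to be significant;
# the sign test, Wilcoxon and Mann-Whitney U need observed <= critical.
R_TESTS = {"unrelated t", "related t", "chi-square", "pearson's r", "spearman's rho"}

def is_significant(test, observed, critical):
    if test in R_TESTS:
        return observed >= critical
    return observed <= critical

print(is_significant("spearman's rho", observed=0.62, critical=0.56))   # True (illustrative values)
print(is_significant("sign test", observed=3, critical=2))              # False (illustrative values)
```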

43
Q

define nominal data

A
  • presented in form of categories

- is discrete and non-continuous

44
Q

define ordinal data

A
  • presented in orders or ranked
  • no equal intervals between units of data
  • lacks precision, as what someone considers a “4” to be is subjective
  • data is converted into ranks (1st, 2nd, 3rd) for statistical tests as raw scores are not accurate enough
45
Q

define interval data

A
  • continuous data
  • units of equal, precisely defined sizes (often public measurement scales used - e.g. time, temperature)
  • the most sophisticated and precise data, hence its use in parametric tests
46
Q

experimental design(s) of related data

A

matched pairs

repeated measures

47
Q

experimental design(s) of unrelated data

A

independent groups

48
Q

type 1 and 2 error

A

type 1 - false positive (concluded there was a significant effect when there wasn’t one)

type 2 - false negative (concluded there was no significant effect when there was one; significance level too strict)

49
Q

steps to complete sign test

A
  • find difference between two scores (+ - 0 )
  • select lowest number of + or - as ‘s’ observed value (same for Wilcoxon test)
  • calculate N (no. of participants - 0’s)
  • use hypothesis, probability and N value to find critical value
  • s must be ≤ critical value, to be significant
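
A minimal sketch in Python (hypothetical scores) of the first three steps; the critical value is still looked up in the table:

```python
# Hypothetical before/after scores for the same participants.
before = [5, 7, 4, 6, 8, 5]
after  = [7, 9, 4, 8, 7, 6]

signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # differences of 0 are dropped

s = min(signs.count("+"), signs.count("-"))   # observed value: the less frequent sign
n = len(signs)                                # N = number of participants minus the 0s
print(s, n)   # s must be <= the critical value (from the table) to be significant
```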
50
Q

perfect conclusion template for a statistical test

using sign test for example

A
  • observed value ‘s’ of 1.4 was ≤ critical value of 1.6 for N value of 10 at a probability of 5% for a one tailed test
  • therefore, we can accept the alternative hypothesis showing that ‘the happiness score of toffees increases when Rafa is out, rather than when he is in’
51
Q

what is the order of all sections of a scientific research report

A
abstract 
introduction
method
results
discussion
referencing
52
Q

describe the abstract section of a scientific report

A
  • short summary of the study
  • includes all major elements: aims, hypothesis, method, results, discussion
  • written last, at start of report
53
Q

describe the introduction section of a scientific report

A

• large section of writing

  • outlines relevant theories, concepts and other research- & how they relate to this study
  • states aims and hypotheses
54
Q

describe the method section of a scientific report

A

✰ section explaining how the experiment is carried out, split into:

  • design - experimental design (e.g. IG, MP, RM) ; experimental method (overt, naturalistic); IV & DV; and validity and reliability issues
  • participants - sampling technique, who is studied (biological and demographic), how many P’s, target population
  • apparatus/ materials needed
  • procedure - step by step instructions of how it was carried out, include briefing and debrief to P’s
  • ethics - DRIPC, how this was addressed
55
Q

describe the results section of a scientific report

A

✰ summary of key findings, split into :

• descriptive statistics
- uses tables, graphs and measures of central tendency & dispersion

• inferential statistics
- test chosen, calculated and critical values, significance level, if it was significant, which hypotheses accepted

56
Q

describe the discussion section of a scientific report

A

✰ large piece of writing where researcher summarises and interprets the findings verbally and the implication of them

includes:
- relationship to previous research in introduction

  • limitations of research- consider methodology and suggestions for improvement
  • wider implications of research- real world applications and the contribution of research to current theories
  • suggestions for future research
57
Q

describe the referencing section of a scientific report

A

full details of any source material mentioned in the report

58
Q

describe how to do a book reference

A

surname, first initial. (year published). title of book (in italics). place of publication: publisher

e.g. Copland, S. (1994). The chronicles of being sus. California: Puffin Books

59
Q

how to write a journal reference

A

author, date, article title, journal name (italics), volume (issue), page numbers

e.g.
Copland, S. (1994). Effects of being sus on your ball knowledge. 11(12), 231-237

60
Q

brief description of an appendix (not on illuminate scientific report, but also in there)

A
  • contains any raw data, questionnaires, debriefs, consent forms, calculations
  • evidence that doesn’t fit in the main body of the report
61
Q

outline what’s in a consent form

A
  • aim
  • what they will do, and for how long
  • right to withdraw and confidentiality
  • ask for questions
  • place to sign & add date
62
Q

outline what’s in a debrief

A
  • aims
  • discuss what went on in all conditions and any deception
  • findings
  • right to withdraw
  • remind confidentiality
  • where they can find more info
  • any questions?
63
Q

outline what’s in ‘instructions’

A

• step by step of everything P has to do

64
Q

all different types of validity

A
  • internal
  • external
  • ecological
  • concurrent
  • face
  • temporal
65
Q

define concurrent validity

A

extent to which findings have a correlation with the results from well-recognised studies with established validity

66
Q

define temporal validity

A

extent to which findings can be generalised to other historical contexts/eras

67
Q

define ecological validity

A

extent to which findings can be generalised to real life, outside of the research setting

68
Q

define face validity

A

extent to which, on the surface, a study looks like it measures what it set out to measure

69
Q

define internal validity

A

extent to which a study measures what it set out to measure

i.e. is the observed effect on the DV due to the manipulation of the IV

70
Q

define external validity

and examples of it

A
the extent to which findings can be generalised beyond the research setting

  • examples of external validity include ecological and temporal validity
71
Q

define validity

A

whether the observed effect of a study is genuine and accurate across a number of measures

(e.g. across historical contexts, compared to well-recognised studies, measuring what set out to measure)

72
Q

how to improve validity in

  • questionnaires
  • interviews
  • experiments
  • observations
A

• questionnaires

  • incorporate redundant questions to create a ‘lie scale’ (account for social desirability bias)
  • anonymity
  • remove ambiguous questions

• interviews and case studies

  • a structured interview reduces investigator effects, but may reduce rapport, making answers less accurate
  • triangulate data
  • gain respondent validity by checking you understood the P correctly, and use quotes in findings (increases interpretive validity)

• experiments

  • control group
  • pilot study to expose extraneous variables
  • change experimental design to reduce order effects or effect of participant variables
  • standardise procedure
  • counterbalancing, double blind, randomisation

• observations

  • familiarise observers with the behavioural categories (BC) so nothing is missed
  • operationalise the BC, so it is clear what you’re looking for
  • use covert or non-participant observation
73
Q

define demand characteristics

A
  • a type of extraneous variable where Ps think they have guessed the aims of the research and therefore act differently (they either help or hinder the experimenter in finding what they want)
74
Q

define a pilot study

A

a small-scale trial run of the actual study, completed before the real, full-scale research

75
Q

why use pilot studies

A
  • can identify extraneous variables, that can be controlled for the real study
  • can help improve reliability (test-retest)
  • modify any flaws with procedure or design (reduce cost from messing up large scale)
  • can allow training of observers
  • can adapt or remove ambiguous or confusing questions in questionnaire or interview
76
Q

define peer review

A

assessment of research, done by other psychologists in a similar field, who provide an unbiased opinion of a study to ensure it is high enough quality for publication

77
Q

describe the aims of peer review

A
  • allocate research funding as people (and funding organisations) may award funding for a research idea they support
  • ensure only high quality, useful studies are published
  • suggest amendments, improvements or withdrawal before publication
78
Q

process of peer review

A
  • research is sent to an anonymous peer to objectively review all aspects of the written investigation
  • they look for:
    • clear and professional methods & design
    • validity
    • originality (not copied) and significance in the field
    • results - the statistics chosen and the conclusions drawn
79
Q

weaknesses of peer review

A

• burying ground-breaking research

  • may slow down the rate of change
  • if research contradicts the paradigm or mainstream research, it may be buried or resisted

•publication bias

  • editor preferences may give false view of current psychology
  • some publishers only want to publish positive news or headline grabbing research to boost the popularity of their journal (may ignore valuable research)

•anonymity
- the peers reviewing stay unidentified
- researchers competing for funding may be over critical
(some publishers now reveal who the reviewers are afterwards, to combat this)
- reviewers may also resist findings that challenge their previous research

80
Q

define reliability

A

how consistent a study’s data is, and the extent to which it would produce similar results if the study was repeated

81
Q

two ways of assessing reliability

A

• inter-observer reliability
(inter-rater for forms like content analysis)

• test-retest reliability

82
Q

define inter-observer reliability

A

the extent to which there is an agreement between two or more observers observing the same behaviour, using the same behavioural categories

83
Q

define test-retest reliability

A

measuring the results of the same P in a test or questionnaire, on different occasions, and comparing the scores for a correlation

84
Q

describe how to carry out inter-observer reliability

A
  • complete observation again with two or more observers watching the same observation and using same behavioural categories
  • compare the results of the different observations using a spearman’s rho test (0.8+ for a strong correlation)
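
A minimal sketch in Python (hypothetical tallies; assumes `scipy` is installed) of comparing two observers’ records:

```python
from scipy.stats import spearmanr

# Hypothetical tallies per behavioural category from two observers
# watching the same observation.
observer_1 = [12, 7, 3, 9, 5]
observer_2 = [11, 8, 2, 10, 4]

rho, p = spearmanr(observer_1, observer_2)
print(rho)   # 0.8 or above is usually taken as good inter-observer reliability
```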
85
Q

describe how to carry out test-retest reliability

A
  • administer same test or questionnaire to the same P on different occasions
  • not too soon - prevent recall of answers
  • not too long - prevent the views or ability being tested changing
  • use a correlation to compare the results, (0.8+ for a strong correlation)
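
A minimal sketch in Python (hypothetical questionnaire scores; assumes `scipy` is installed); Pearson’s r is one reasonable choice of correlation here:

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same Ps taking the same questionnaire on two occasions.
first_attempt  = [34, 28, 40, 22, 31]
second_attempt = [33, 30, 39, 24, 30]

r, p = pearsonr(first_attempt, second_attempt)
print(r)   # 0.8 or above suggests good test-retest reliability
```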
86
Q

how to improve reliability in

  • questionnaires
  • interviews
  • experiments
  • observations
A

• questionnaires

  • closed questions
  • clear, unambiguous questions

• interviews and case studies

  • same researcher - limits leading or ambiguous questions
  • structured interviews

• experiments
- standardised procedure and instructions

• observations
- operationalise and familiarise/train observers with behavioural categories
- two or more observers and compare results

87
Q

two ways of assessing validity

A

• ‘eyeball’ test to measure face validity of the test or measure
OR
- pass it to an expert to measure face validity

AND

• compare with a well-recognised test with established validity to create a correlation coefficient, measuring concurrent validity (close agreement is +0.8 or above)