Paper 2- Topic 3 Research Methods Flashcards
define an aim
a general statement of what the researcher wants to investigate, and the purpose of it
e.g. to investigate whether…..has an effect on……
define a hypothesis
a predictive statement that states the relationship between the variables being studied
e.g. there will be a difference between…
define operationalisation
clearly defining variables in a way that they can be easily measured
define extraneous variable
a nuisance variable that does not vary systematically with the IV
- random error that doesn’t affect everyone in the same way
- makes it harder to detect results, as “muddies results”
define a confounding variable
a form of extraneous variable that varies systematically with the IV, as it impacts the entire data set
- may confound all results, as this influence may explain results of DV
what is experimental method
aka types of experiment
lab
field
natural
quasi
define a quasi experiment
IV is based on an existing difference between people (e.g. age, gender), not manipulated
- there can be no change in the IV
- measuring the effect of the naturally occurring IV on the DV
- can be field or lab
define a natural experiment
IV is a naturally occurring event, not manipulated
- someone or something caused the IV to vary (not the researcher)
- measuring the effect of the naturally occurring IV on the DV
- can be field or lab
define a field experiment
carried out in a natural setting
- IV manipulation is under less control
define a lab experiment
- carried out under controlled conditions
- researcher manipulates IV to see effect on the DV
less common drawbacks of natural experiment
- cause and effect IV on DV is harder to establish as manipulation of IV is under less control
- ethical issues- P’s can’t consent, and their privacy is invaded
less common drawback of quasi and natural experiments
- P’s can’t be randomly allocated to conditions
- —> may experience confounding variables
(e.g. all those who have been in a car crash may have a higher trauma level than the control group)
- IV isn't deliberately changed, so can't say that the IV caused the observed change in the DV
define standardisation
- keeping procedures in a research study the same
- all participants treated the same - (so they have the same experience)
- makes the study replicable and easy to complete again accurately
- removes experimenter bias
define counterbalancing
where half of P’s do the first condition first followed by the second, and the other half do the second condition first and the first condition second
- control for order effects
define random allocation
each participant has an equal chance of being in each group/condition
- control for participant variables
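A minimal Python sketch of random allocation, assuming two conditions and an even split (the participant labels and function name are illustrative, not from any real study):

```python
import random

def randomly_allocate(participants, conditions=("A", "B")):
    """Shuffle participants, then deal them evenly across conditions.

    Shuffling first means every participant has an equal chance of
    landing in each condition, controlling for participant variables.
    """
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_allocate(["P1", "P2", "P3", "P4", "P5", "P6"])
```

Dealing round-robin after the shuffle keeps group sizes equal while preserving the randomness of who ends up where.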
define participant variables
individual characteristics that may influence how a participant behaves
randomisation
- use of chance wherever possible to reduce bias or experimenter influence (conscious or unconscious)
what variables do double blind and single blind procedures control
double blind: demand characteristics and experimenter bias
single blind: demand characteristics
recall the 8 features of science and the mnemonic
PROPH(F)ET
- paradigms
- replicability
- objectivity
- paradigm shift
- hypothesis testing
- falsifiability
- empirical method
- theory construction
define objectivity
ability to keep a critical distance, from own thoughts and bias
- lab studies with most control, tend to be most objective
- forms basis to empirical method
define empirical method
scientific process of gathering evidence through direct observation and experience
define falsifiability
give example of an unfalsifiable theory
theories admit the possibility of being proven false, through research studies
- despite not being “proven”, the strongest theories have survived attempts to falsify them
- Popper suggested the key scientific criterion is falsifiability
e.g. Freud's Oedipus complex is unfalsifiable
define replicability
what does it help assess
example of study
extent to which the research procedures can be repeated in the exact same way, generating the same findings
- assesses validity, as research repeated over different cultures and situations shows the extent to which findings can be generalised
- Ainsworth’s Strange Situation- lab, instructions, behavioural categories
define a theory
- describe their construction
- a set of general laws that explain certain behaviours
- this will be constructed based on systematic gathering of evidence through empirical method, and can be strengthened by scientific hypothesis testing
define hypothesis testing
statements, derived from scientific theories, that can be tested systematically and objectively
- the only way a theory can be falsified (using the null hypothesis)
define a paradigm
a paradigm is a set of shared beliefs and assumptions in science
- psychology lacks a universally accepted paradigm
define a paradigm shift
a scientific revolution occurs, as a result of contradictory research that questions the established paradigm
- other researchers start to question paradigm and there becomes too much evidence against paradigm, to ignore, leading to a new paradigm
define deduction
process of deriving new hypotheses from an existing theory
define a case study
features of typical case study
a detailed, in depth investigation and analysis, of an individual, group or event
- qualitative data
- longitudinal
- gather data from multiple sources (friends, family of individual also)
pros and cons of case study
pros
• rich, in depth data
• can contribute to understanding of typical functioning (HM research discovered the two separate LTM & STM stores)
• can generate hypotheses for further nomothetic research being done, based on contradictory case (whole theories may be revised)
cons
• rarely occur, so hardly generalisable
• ethical issues (e.g. patient HM could not give fully valid consent, as he didn't remember consenting across 10 years of daily questioning)
• researcher interprets the qualitative data and selects which data to use (bias)
—> also data from family and friends may have experienced memory decay
define content analysis
and the aim
a type of observational research, where P’s behaviour is indirectly studied using communications they’ve produced
aim is to systematically summarise the P’s form of communication and split into coding units, so conclusions can be drawn
- usually qualitative to quantitative
- communications (e.g. texts, emails, TV, film)
describe the steps of content analysis
- gather and observe/read through the communication
- the researcher identifies coding units (similar to behavioural categories)
- the communication is analysed by applying the coding units to the text, and the number of times the coding unit appears is counted
- data is then summarised quantitatively and so conclusions can be drawn
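The counting step above can be sketched in Python. The coding units and the advert transcript below are hypothetical, purely to show how tallies are produced:

```python
import re
from collections import Counter

# Hypothetical coding units (similar to behavioural categories)
coding_units = {"kitchen", "career", "stereotype"}

# Hypothetical communication being analysed
text = """The advert showed a woman in the kitchen while the man
discussed his career. Another stereotype appeared later,
again placing the woman in the kitchen."""

# Split the communication into words, then count how many times
# each coding unit appears
words = re.findall(r"[a-z']+", text.lower())
tallies = Counter(w for w in words if w in coding_units)
```

The resulting tallies are the quantitative summary from which conclusions are drawn (here, `kitchen` is counted twice).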
define thematic analysis
a form of content analysis, which uses qualitative method of analysing the data that involves identifying emergent themes within the communication used, in order to summarise it
describe steps of thematic analysis
- form of content analysis but the summary is qualitative
- identify emergent themes (recurring ideas) from the communication
- more descriptive than coding units (e.g. stereotyping is a theme; a woman being told to go to the kitchen is a coding unit)
- these themes may be further developed into broader categories
- a new set of communication will be used to see if they fit in the themes
pros and cons of content analysis
pros
• material is often public so don’t need consent
• flexible as can produce both quantitative and qualitative data
cons
• P's are studied indirectly, so the communications they produce are analysed outside the context in which they occurred
• content analysis may suffer from a lack of objectivity, as researchers interpret the communication themselves
acronym to remember the second column (related column) in the table for choosing statistical tests
S
W
R
sign
wilcoxon
related T
hint to remember all of the first column (unrelated data) from the table for choosing inferential tests for significance
all have U in them
chi sqUare
mann whitney U
Unrelated t
the three factors affecting which inferential test to use
- data? (level of measurement)
- difference? (testing for a difference or a correlation)
- design (independent groups or matched pairs/ repeated measures —> unrelated or related)
define a parametric test
a more robust test, that may be able to identify significance that other tests can’t
MUST BE…
- interval data
- P’s must be drawn from a normally distributed population
- the variance between P’s in each group must be similar
observed/ calculated value
is the value that is produced by the statistical test
critical value
value that is gathered from the calculations table for the specific test
- the cut off point between accepting and rejecting the null hypothesis
how do you know whether the observed value should be ≥ or ≤ the critical value, for test to be significant
“gReater rule”
if test has an R in it, the observed/ calculated value should be GREATER than or equal to the critical value
e.g. - unRelated t
- Related t
- chi-squaRe
- PeaRson's R
- SpeaRman's Rho
all should have an observed value ≥ critical value to be significant
(sign test, wilcoxon and mann whitney u must have observed value ≤ critical value, to be significant)
define nominal data
- presented in form of categories
- is discrete and non-continuous
define ordinal data
- presented in orders or ranked
- no equal intervals between units of data
- lacks precision, as it is subjective what one person means by a "4" compared to another
- data is converted into ranks (1st, 2nd, 3rd) for statistical tests, as the raw scores are not accurate enough
define interval data
- continuous data
- units of equal, precisely defined sizes (often public scales of measurement used - e.g. time, temperature)
- the most sophisticated, precise data - hence its use in parametric tests
experimental design(s) of related data
matched pairs
repeated measures
experimental design(s) of unrelated data
independent groups
type 1 and 2 error
type 1 - false positive (concluded there was a significant effect when there wasn't one; significance level too lenient)
type 2 - false negative (missed a real effect; significance level too strict)
steps to complete sign test
- find difference between two scores (+ - 0 )
- select lowest number of + or - as ‘s’ observed value (same for Wilcoxon test)
- calculate N (no. of participants - 0’s)
- use hypothesis, probability and N value to find critical value
- s must be ≤ critical value, to be significant
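The steps above can be sketched in Python. The before/after happiness scores are hypothetical, and looking up the critical value from the table is left as the final manual step:

```python
def sign_test(condition_a, condition_b):
    """Return (s, n) for the sign test.

    s = the smaller of the counts of + and - signs (observed value)
    n = number of participants with a non-zero difference
    """
    diffs = [b - a for a, b in zip(condition_a, condition_b)]
    plus = sum(1 for d in diffs if d > 0)
    minus = sum(1 for d in diffs if d < 0)
    n = plus + minus          # participants scoring 0 difference are dropped
    s = min(plus, minus)      # observed value
    return s, n

# Hypothetical happiness scores before and after a change
before = [5, 3, 6, 2, 4, 5, 7, 3]
after  = [7, 4, 6, 5, 6, 4, 8, 6]
s, n = sign_test(before, after)
# diffs are +2, +1, 0, +3, +2, -1, +1, +3 → plus=6, minus=1 → s=1, n=7
```

You would then compare `s` against the critical value for `n` at the chosen probability: `s` must be ≤ the critical value for significance.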
perfect conclusion template for a statistical
using sign test for example
- observed value 's' of 1 was ≤ the critical value of 1 for an N value of 10 at a probability of 5% for a one-tailed test
- therefore, we can accept the alternative hypothesis showing that 'the happiness score of toffees increases when Rafa is out, rather than when he is in'
what is the order of all sections of a scientific research report
abstract introduction method results discussion referencing
describe the abstract section of a scientific report
- short summary of the study
- includes all major elements: aims, hypothesis, method, results, discussion
- written last, at start of report
describe the introduction section of a scientific report
• large section of writing
- outlines relevant theories, concepts and other research- & how they relate to this study
- state aims and hypotheses
describe the method section of a scientific report
✰ section explaining how the experiment was carried out, split into:
- design - experimental design (e.g. IG, MP, RM); experimental method (e.g. lab, field); IV & DV; and validity and reliability issues
- participants - sampling technique, who is studied (biological and demographic), how many P’s, target population
- apparatus/ materials needed
- procedure - step by step instructions of how it was carried out, include briefing and debrief to P’s
- ethics - DRIPC, how this was addressed
describe the results section of a scientific report
✰ summary of key findings, split into :
• descriptive statistics
- uses tables, graphs and measures of central tendency & dispersion
• inferential statistics
- test chosen, calculated and critical values, significance level, if it was significant, which hypotheses accepted
describe the discussion section of a scientific report
✰ large piece of writing where researcher summarises and interprets the findings verbally and the implication of them
includes:
- relationship to previous research in introduction
- limitations of research- consider methodology and suggestions for improvement
- wider implications of research- real world applications and the contribution of research to current theories
- suggestions for future research
describe the referencing section of a scientific report
full details of any source material mentioned in the report
describe how to do a book reference
surname, first initial (year published), title of book (italics), place of publication: publisher
e.g. Copland, S (1994), 𝘛𝘩𝘦 𝘤𝘩𝘳𝘰𝘯𝘪𝘤𝘭𝘦𝘴 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘴𝘶𝘴, California: Puffin Books
how to write a journal reference
author, date, article title, journal name (italics), volume (issue), page numbers
e.g.
Copland, S (1994) 𝘌𝘧𝘧𝘦𝘤𝘵𝘴 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘴𝘶𝘴 𝘰𝘯 𝘺𝘰𝘶𝘳 𝘣𝘢𝘭𝘭 𝘬𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦, 11 (12), 231-237
brief description of an appendix (not on illuminate scientific report, but also in there)
- contains any raw data, questionnaires, debriefs, consent forms, calculations
- evidence that don’t fit in the main body of report
outline what’s in a consent form
- aim
- what they will do, and for how long
- right to withdraw and confidentiality
- ask for questions
- place to sign & add date
outline what’s in a debrief
- aims
- discuss what went on in all conditions and any deception
- findings
- right to withdraw
- remind confidentiality
- where they can find more info
- any questions?
outline what’s in ‘instructions’
• step by step of everything P has to do
all different types of validity
- internal
- external
- ecological
- concurrent
- face
- temporal
define concurrent validity
extent to which findings have a correlation with the results from well-recognised studies with established validity
define temporal validity
extent to which findings can be generalised to other historical contexts/eras
define ecological validity
extent to which findings can be generalised to real life, outside of the research setting
define face validity
extent to which, on the surface, a study looks like it measures what it set out to measure
define internal validity
extent to which a study measures what it set out to measure
- i.e. is the observed effect on the DV due to the manipulation of the IV
define external validity
and examples of it
extent to which findings can be generalised beyond the specific research setting
- examples of external validity include ecological and temporal validity
define validity
whether the observed effect of a study is genuine and accurate across a number of measures
(e.g. across historical contexts, compared to well-recognised studies, measuring what set out to measure)
how to improve validity in
- questionnaires
- interviews
- experiments
- observations
• questionnaires
- incorporate redundant questions to create a ‘lie scale’ (account for social desirability bias)
- anonymity
- remove ambiguous questions
• interviews and case studies
- structured interviews reduce investigator effects, but reduce rapport and so answers may be less accurate
- triangulate data
- gain respondent validity by checking you understood the p correctly and use quotes in findings (increase interpretive validity)
• experiments
- control group
- pilot study to expose extraneous variables
- change experimental design to reduce order effects or effect of participant variables
- standardise procedure
- counterbalancing, double blind, randomisation
• observations
- familiarise with BC so don’t miss anything
- operationalise BC, so it is clear what you’re looking for
- use covert or non participant
define demand characteristics
- type of extraneous variable where P’s think they may have guessed the aims of the research and therefore act in a different way. (they either help or hinder the experimenter finding what they want)
define a pilot study
a small-scale trial run of the actual study, completed before the real full-scale research
why use pilot studies
- can identify extraneous variables, that can be controlled for the real study
- can help improve reliability (test-retest)
- modify any flaws with procedure or design (reduce cost from messing up large scale)
- can allow training of observers
- can adapt or remove ambiguous or confusing questions in questionnaire or interview
define peer review
assessment of research, done by other psychologists in a similar field, who provide an unbiased opinion of a study to ensure it is high enough quality for publication
describe the aims of peer review
- allocate research funding as people (and funding organisations) may award funding for a research idea they support
- ensure only high-quality, useful studies are published
- suggest amendments, improvements or withdrawal before publishment
process of peer review
- research is sent to an anonymous peer to objectively review all aspects of the written investigation
- they look for:
• clear and professional methods & design
• validity
• originality (not copied) and significance in the field
• results - the statistics chosen and the conclusions drawn
weaknesses of peer review
• bury ground-breaking research
- may slow down the rate of change
- also, if research contradicts the paradigm or mainstream research, it may be buried or resisted
•publication bias
- editor preferences may give false view of current psychology
- some publishers only want to publish positive news or headline grabbing research to boost the popularity of their journal (may ignore valuable research)
•anonymity
- the peers reviewing stay unidentified
- researchers competing for funding may be over critical
(some publishers now reveal afterwards who reviewed, to combat this)
- reviewers may also resist findings that challenge their previous research
define reliability
how consistent a study’s data is, and the extent to which it would produce similar results if the study was repeated
two ways of assessing reliability
• inter-observer reliability
(inter-rater for forms like content analysis)
• test-retest reliability
define inter-observer reliability
the extent to which there is an agreement between two or more observers observing the same behaviour, using the same behavioural categories
define test-retest reliability
measuring the results of the same P in a test or questionnaire, on different occasions, and comparing the scores for a correlation
describe how to carry out inter-observer reliability
- complete observation again with two or more observers watching the same observation and using same behavioural categories
- compare the results of the different observations using a spearman’s rho test (0.8+ for a strong correlation)
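A minimal Python sketch of comparing two observers' tallies with Spearman's rho (the tally data is hypothetical; a library such as SciPy would normally be used, but the hand-rolled version shows the rank-then-correlate logic):

```python
def rank(values):
    """Assign average ranks (1 = smallest), sharing ranks across ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average position of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Pearson correlation applied to the ranks of x and y."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

obs1 = [12, 8, 15, 10, 7]   # observer 1's tallies per category (hypothetical)
obs2 = [11, 9, 14, 10, 6]   # observer 2's tallies per category (hypothetical)
rho = spearman_rho(obs1, obs2)
```

A rho of 0.8 or above would suggest acceptable inter-observer reliability; here the two observers rank every category identically, so rho comes out at 1.0.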
describe how to carry out test-retest reliability
- administer same test or questionnaire to the same P on different occasions
- not too soon - prevent recall of answers
- not too long - prevent the views or ability being tested changing
- use a correlation to compare the results, (0.8+ for a strong correlation)
how to improve reliability in
- questionnaires
- interviews
- experiments
- observations
• questionnaires
- closed questions
- clear, unambiguous questions
• interviews and case studies
- same researcher - limits leading or ambiguous questions
- structured interviews
• experiments
- standardised procedure and instructions
• observations
- operationalise and familiarise/train observers with behavioural categories
- two or more observers and compare results
two ways of assessing validity
• ‘eyeball’ test to measure face validity of the test or measure
OR
- pass to expert to measure face validity
AND
• compare with a well-recognised test with established validity to create a correlation coefficient, measuring concurrent validity (close agreement is +0.8 or above)