PAPER 2- TOPIC 3 RESEARCH METHODS ✅ Flashcards
define an aim
a general statement of what the researcher wants to investigate, and the purpose of it
e.g. to investigate whether…..has an effect on……
define a hypothesis
a testable, predictive statement of the relationship between the variables being investigated in the study
e.g. there will be a difference between…
- must be operationalised
- directional or non-directional
define operationalisation
clearly defining variables so that they can be easily measured
define extraneous variable
a nuisance variable that does not vary systematically with the IV
- a random error that does not affect everyone in the same way
- makes it harder to detect an effect, as it 'muddies' the results
define a confounding variable
a form of extraneous variable that varies systematically with the IV, so it impacts the entire data set
- may confound all results, as its influence (rather than the IV's) may explain changes in the DV
recall the 8 features of science and the mnemonic
PROPH(F)ET
- paradigms
- replicability
- objectivity
- paradigm shift
- hypothesis testing
- falsifiability
- empirical method
- theory construction
define objectivity
give example
the ability to keep a critical distance from one's own thoughts and biases
- forms the basis of the empirical method
- lab studies with most control, tend to be most objective
- —- e.g. Milgram, Asch
define empirical method
give example
scientific process of gathering evidence through direct observation and sensory experience
- e.g. experimental method (—> Milgram), observational method (—> Ainsworth SS)
define falsifiability
give example of an unfalsifiable theory
theories admit the possibility of being proven false, through research studies
- despite not being “proven”, the strongest theories have survived attempts to falsify them
e.g. Freud's Oedipus complex is unfalsifiable
define replicability
what does it help assess
example
extent to which the research procedures can be repeated in the exact same way, generating the same findings
- helps assess validity: procedures are repeated across different cultures and situations, to see the extent to which findings can be generalised
(e.g. Ainsworth SS - behavioural categories, standardised procedure)
define a theory
- describe their construction
- a set of general laws that explain certain behaviours
- this will be constructed based on systematic gathering of evidence through empirical method, and can be strengthened by scientific hypothesis testing
define hypothesis testing
•••example
statements, derived from scientific theories, that can be tested systematically and objectively
- the only way a theory can be falsified (using the null hypothesis)
••• e.g. 'has STM got more than one store?' —> led to the WMM
define a paradigm
a paradigm is a set of shared beliefs and assumptions in science
- psychology lacks a universally accepted paradigm
define a paradigm shift
•••example
- a significant change in the dominant theory within a scientific discipline, causing a scientific revolution
- —> occurs as a result of contradictory research that questions the established paradigm
- other researchers start to question the paradigm, until there is too much evidence against it to ignore, leading to a new paradigm
•••idea of brain’s function as holistic —> idea of localisation of function
define deduction
process of deriving new hypotheses from an existing theory
define a case study
features of typical case study
a detailed, in-depth investigation and analysis of an individual, group or event
- qualitative data
- longitudinal
- gathers data from multiple sources (e.g. friends and family of the individual)
pros and cons of case study
pros
• rich, in depth data
• can contribute to understanding of typical functioning (research on patient HM revealed the separate LTM & STM stores)
• can generate hypotheses for further nomothetic research, based on a contradictory case (whole theories may be revised)
cons
• cases rarely occur, so findings are hardly generalisable
• ethical issues (e.g. patient HM consented to being questioned repeatedly over 10 years, but could not remember the sessions from day to day, so his consent was arguably not fully informed)
• the researcher interprets the qualitative data and selects which data to use (bias)
—> also, accounts from family and friends may have suffered memory decay
define content analysis
and the aim
a type of observational research, where P's behaviour is studied indirectly using communications they've produced
aim is to systematically summarise the P's form of communication, splitting it into coding units to be counted (quantitative) or analysed as themes (qualitative)
- usually converts qualitative data to quantitative
- communications (e.g. texts, emails, TV, film)
describe the steps of content analysis
- gather and observe/read through the communication
- the researcher identifies coding units, in order to categorise the information
- the communication is analysed by applying the coding units to the text, and the number of times each coding unit appears is counted
- the data is then summarised quantitatively, so conclusions can be drawn (see the sketch below)
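To make the coding/counting step concrete, here is a minimal Python sketch (not part of the spec); the coding units and the sample communication are invented for illustration.

```python
# Minimal sketch of the quantitative coding step of a content analysis.
# The coding units and the sample communication below are hypothetical.
from collections import Counter

coding_units = {
    "aggression": ["fight", "hit", "shout"],
    "affection": ["hug", "kiss", "smile"],
}

communication = "She gave him a hug and a smile, then they began to shout and fight."

def count_coding_units(text, units):
    """Tally how many times each coding unit's keywords appear in the text."""
    words = text.lower().replace(",", " ").replace(".", " ").split()
    return Counter({unit: sum(words.count(k) for k in keywords)
                    for unit, keywords in units.items()})

print(count_coding_units(communication, coding_units))
# Counter({'aggression': 2, 'affection': 2})
```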
define thematic analysis
a form of content analysis that uses a qualitative method of analysing the data, identifying emergent themes within the communication in order to summarise it
describe steps of thematic analysis
- form of content analysis
- identify emergent themes (recurring ideas) from the communication
- —-> themes are more descriptive than coding units
(e.g. 'stereotyping' is a theme; 'a woman being told to go to the kitchen' is a coding unit)
- these themes may be further developed into broader categories, to try and cover most aspects of the communication
- a new set of communication may be used to test the validity of the themes
- qualitative summary is then written up, using quotes from communication
pros and cons of content analysis
pros
• high reliability, as it follows systematic procedures
- material is often public, so consent isn't needed, and secondary data is cheap to use
- flexible as can produce both quantitative and qualitative data
cons
• very time consuming to manually code the data and identify coding units or recurrent themes
- P's are studied indirectly, so the communications they produce are analysed out of the context in which they occurred
- content analysis can suffer from a lack of objectivity, as researchers interpret the communication themselves —> human error when interpreting more complex communications
acronym to remember the second column (related column) in the table for choosing statistical tests
S
W
R
sign
wilcoxon
related T
hint to remember all of the first column (unrelated data) from the table for choosing inferential tests for significance
all have U in them
chi sqUare
mann whitney U
Unrelated t
the three factors affecting which inferential test to use
- data? (level of measurement)
- difference? (testing for a difference or a correlation)
- design? (independent groups —> unrelated; matched pairs/repeated measures —> related)
(see the decision sketch below)
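As an illustration only, the three questions can be written as a small Python decision function mirroring the table; the function name is made up.

```python
# Sketch of the three-question decision procedure for picking a test:
# 1) level of measurement, 2) difference or correlation, 3) related design?
def choose_test(level, looking_for, related):
    if looking_for == "correlation":
        return {"nominal": "chi-square", "ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[level]
    if related:  # repeated measures or matched pairs
        return {"nominal": "sign test", "ordinal": "Wilcoxon",
                "interval": "related t-test"}[level]
    return {"nominal": "chi-square", "ordinal": "Mann-Whitney U",
            "interval": "unrelated t-test"}[level]  # independent groups

print(choose_test("ordinal", "difference", related=True))   # Wilcoxon
print(choose_test("nominal", "difference", related=False))  # chi-square
```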
define a parametric test
a more robust test, that may be able to identify significance that other tests can’t
MUST BE…
- interval data
- P’s must be drawn from a normally distributed population
- the variance of P's in each group must be similar (homogeneity of variance) - see the sketch below
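A sketch of how these assumptions might be checked in Python with SciPy (the data values are invented):

```python
# Checking parametric assumptions: Shapiro-Wilk for approximate normality,
# Levene for similar variances. p > 0.05 means the assumption isn't violated.
from scipy import stats

group_a = [12.1, 13.4, 11.8, 14.0, 12.9, 13.2, 12.5]  # interval data
group_b = [10.9, 11.5, 12.2, 10.4, 11.8, 11.1, 12.0]

print("normality A:", stats.shapiro(group_a).pvalue)
print("normality B:", stats.shapiro(group_b).pvalue)
print("similar variance:", stats.levene(group_a, group_b).pvalue)

# if the assumptions hold, a parametric test (here unrelated t) can be used
print("unrelated t:", stats.ttest_ind(group_a, group_b))
```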
observed/ calculated value
is the value that is produced by the statistical test
critical value
value that is gathered from the calculations table for the specific test
- the cut off point between accepting and rejecting the null hypothesis
how do you know whether the observed value should be ≥ or ≤ the critical value, for test to be significant
“gReater rule”
if test has an R in it, the observed/ calculated value should be GREATER than or equal to the critical value
e.g. unRelated t
- Related t
- chi-squaRe
- peaRson's r
- speaRman's Rho
all should have an observed value ≥ critical value to be significant
(sign test, Wilcoxon and Mann-Whitney U must have an observed value ≤ critical value to be significant - see the helper sketch below)
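The rule is simple enough to encode in a tiny (hypothetical) helper:

```python
# "gReater rule": tests with an R in the name need observed >= critical;
# the sign test, Wilcoxon and Mann-Whitney U need observed <= critical.
def is_significant(test_name, observed, critical):
    if "r" in test_name.lower():
        return observed >= critical
    return observed <= critical

print(is_significant("related t", 2.9, 2.2))  # True  (2.9 >= 2.2)
print(is_significant("wilcoxon", 30, 25))     # False (needs 30 <= 25)
```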
define nominal data
- presented in form of categories
- is discrete and non-continuous
define ordinal data
- presented in rank order
- no equal intervals between units of data
- lacks precision, as it is subjective what someone considers a '4' to be
- data is converted into ranks (1st, 2nd, 3rd) for statistical tests, as raw scores are not accurate enough
define interval data
- continuous data
- units of equal, precisely defined size (public scales of measurement are often used - e.g. time, temperature)
- the most sophisticated, precise data - hence its use in parametric tests
experimental design(s) of related data
matched pairs
repeated measures
experimental design(s) of unrelated data
independent groups
type 1 and 2 error
type 1 - false positive (said there was a significant effect when there wasn't one - significance level too lenient)
type 2 - false negative (said there was no significant effect when there was one - significance level too strict)
(see the simulation sketch below)
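A quick simulation sketch (values invented) showing why p < 0.05 means a roughly 5% type 1 error rate when the null hypothesis is true:

```python
# Draw two samples from the SAME population, so the null hypothesis is true;
# a t-test at p < 0.05 should still flag a "difference" about 5% of the time.
import random
from scipy import stats

random.seed(1)
trials, false_positives = 2000, 0
for _ in range(trials):
    a = [random.gauss(0, 1) for _ in range(20)]
    b = [random.gauss(0, 1) for _ in range(20)]
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1  # a type 1 error

print(false_positives / trials)  # roughly 0.05
```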
steps to complete sign test
- find the difference between the two scores for each P (+, - or 0)
- select the lowest total of + or - as 's', the observed value (same for the Wilcoxon test)
- calculate N (no. of participants minus any 0s)
- use the hypothesis (one/two-tailed), probability and N value to find the critical value
- s must be ≤ the critical value to be significant (see the sketch below)
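The steps translate directly into Python; this sketch uses invented before/after scores, and assumes a critical value of 1 (N = 10, one-tailed, p = 0.05 in a standard sign-test table):

```python
# Sign test following the steps above. '+' = score increased, '-' = decreased;
# 0 differences are dropped before N is counted.
def sign_test(before, after, critical):
    signs = [("+" if a > b else "-") for b, a in zip(before, after) if a != b]
    s = min(signs.count("+"), signs.count("-"))  # observed value
    n = len(signs)                               # participants minus the 0s
    print(f"s = {s}, N = {n}")
    return s <= critical                         # significant if s <= critical

before = [3, 5, 4, 6, 2, 5, 4, 3, 6, 4, 5]  # invented scores, 11 P's
after  = [5, 6, 6, 6, 4, 7, 5, 5, 7, 6, 6]  # one tie, so N = 10
print("significant:", sign_test(before, after, critical=1))
```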
perfect conclusion template for a statistical test
using sign test for example
- the observed value 's' of 1 was ≤ the critical value of 1 for an N value of 10 at a probability of 5% for a one-tailed test
- therefore, we can accept the alternative hypothesis showing that 'the happiness score of toffees increases when Rafa is out, rather than when he is in'
what is the order of all sections of a scientific research report
abstract, introduction, method, results, discussion, referencing
describe the abstract section of a scientific report
- short summary of the study
- includes all major elements: aims, hypothesis, method, results, discussion
- written last, at start of report
describe the introduction section of a scientific report
• large section of writing
- outlines relevant theories, concepts and other research- & how they relate to this study
- states the aims and hypotheses
describe the method section of a scientific report
✰ section explaining how the experiment is carried out, split into:
- design - experimental design (e.g. IG, MP, RM); method used (e.g. overt or naturalistic observation); IV & DV; and validity and reliability issues
- participants - sampling technique, who is studied (biological and demographic), how many P’s, target population
- apparatus/ materials needed
- procedure - step by step instructions of how it was carried out, include briefing and debrief to P’s
- ethics - DRIPC, how this was addressed
describe the results section of a scientific report
✰ summary of key findings, split into :
• descriptive statistics
- uses tables, graphs and measures of central tendency & dispersion
• inferential statistics
- test chosen, calculated and critical values, significance level, if it was significant, which hypotheses accepted
••••• if qualitative data was gathered, results are likely to be in the form of categories or themes
describe the discussion section of a scientific report
✰ large piece of writing where the researcher summarises and verbally interprets the findings, and their implications
includes:
- relationship to previous research in introduction
- limitations of research- consider methodology and suggestions for improvement
- wider implications of research- real world applications and the contribution of research to current theories
- suggestions for future research
describe the referencing section of a scientific report
the full details of any source material mentioned in the report, are cited
describe how to do a book reference
surname, first initial (year published), title of book (italics), place of publication: publisher
e.g. Copland, S (1994), 𝘛𝘩𝘦 𝘤𝘩𝘳𝘰𝘯𝘪𝘤𝘭𝘦𝘴 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘴𝘶𝘴, California: Puffin Books (see the sketch below)
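A tiny hypothetical Python helper that reproduces this format (function name and arguments invented):

```python
# Formats a book reference: surname, initial (year), title, place: publisher.
def book_reference(surname, initial, year, title, place, publisher):
    return f"{surname}, {initial} ({year}), {title}, {place}: {publisher}"

print(book_reference("Copland", "S", 1994,
                     "The chronicles of being sus", "California", "Puffin Books"))
# Copland, S (1994), The chronicles of being sus, California: Puffin Books
```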
how to write a journal reference
author, date, article title, journal name (italics), volume (issue), page numbers
e.g.
Copland, S (1994) Effects of being sus on your ball knowledge, 𝘛𝘩𝘦 𝘜𝘭𝘵𝘪𝘮𝘢𝘵𝘦 𝘉𝘢𝘭𝘭 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘵𝘦𝘴𝘵 , 11 (12), 231-237
brief description of an appendix (not on illuminate scientific report, but also in there)
- contains any raw data, questionnaires, debriefs, consent forms, calculations
- evidence that don’t fit in the main body of report
outline what’s in a consent form
- aim
- what they will do, and for how long
- right to withdraw and confidentiality
- ask for questions
- place to sign & add date
outline what’s in a debrief
- aims
- discuss what went on in all conditions and any deception
- findings
- right to withdraw
- remind confidentiality
- where they can find more info
- any questions?
outline what’s in ‘instructions’
• step by step of everything P has to do
define validity
whether the observed effect of a study is genuinely due to the manipulation of the IV (measuring what they set out to measure) and can be accurately generalised beyond the research setting
(e.g. across historical contexts, compared to well-recognised studies, measuring what set out to measure)
all different types of validity
- internal
- external
- ecological
- concurrent
- face
- temporal
define concurrent validity
extent to which findings have a correlation with the results from well-recognised studies with established validity
define temporal validity
extent to which findings can be generalised to other historical contexts/eras
define ecological validity
extent to which findings can be generalised to real life, outside of the research setting
define face validity
extent to which, on the surface, a study looks like it measures what it set out to measure
define internal validity
extent to which a study measures what it set out to measure
- is the observed effect on the DV due to the manipulation of the IV?
define external validity
the extent to which the findings reflect the real world - in terms of the population (population validity), the environment (ecological validity) and the time era (temporal validity)
how to improve validity in
- questionnaires
- interviews
- experiments
- observations
• questionnaires
- incorporate redundant questions to create a ‘lie scale’ (account for social desirability bias)
- anonymity
- remove ambiguous questions
• interviews and case studies
- a structured interview reduces investigator effects, but reduces rapport, so answers may be less accurate
- triangulate data
- gain respondent validity by checking you understood the P correctly, and use quotes in findings (increases interpretive validity)
• experiments
- control group
- pilot study to expose extraneous variables
- change experimental design to reduce order effects or effect of participant variables
- standardise procedure
- counterbalancing, double blind, randomisation
• observations
- familiarise observers with the behavioural categories (BC) so nothing is missed
- operationalise the BC, so it is clear what you're looking for
- use covert or non participant
define a pilot study
a small-scale trial run of the actual study, completed before the real full-scale research
why use pilot studies
- can identify extraneous variables, that can be controlled for the real study
- can help improve reliability (test-retest)
- modify any flaws with procedure or design (reduce cost from messing up large scale)
- can allow training of observers
- can adapt or remove ambiguous or confusing questions in questionnaire or interview
- can identify areas where further randomisation, counterbalancing, standardisation etc… can be used, to limit any observed order effects, bias, investigator effect or demand characteristics
define peer review
assessment of research, done by other psychologists in a similar field, who provide an unbiased opinion of a study to ensure it is high enough quality for publication
describe the aims of peer review
- allocate research funding as people (and funding organisations) may award funding for a research idea they support
- ensure only high-quality, useful studies are published
- suggest amendments, improvements or withdrawal before publication
process of peer review
- research is sent to an anonymous peer, who objectively reviews all aspects of the written investigation
- they look for:
• clear and professional methods & design
• validity
• originality (not copied) and significance of the research in that field of psychology
• results - the statistics chosen and the conclusions