PAPER 2- TOPIC 3 RESEARCH METHODS ✅ Flashcards
define an aim
a general statement of what the researcher wants to investigate, and the purpose of it
e.g. to investigate whether…..has an effect on……
define a hypothesis
a testable, predictive statement of the relationship between the variables being investigated in the study
e.g. there will be a difference between…
- must be operationalised
- directional or non directional
define operationalisation
clearly defining variables in a way that they can be easily measured
define extraneous variable
a nuisance variable that does not vary systematically with the IV
- random error that doesn’t affect everyone in the same way
- makes it harder to detect an effect, as it “muddies” the results
define a confounding variable
a form of extraneous variable that varies systematically with the IV, so it impacts the whole data set
- may confound the results, as its influence (rather than the IV) may explain the change in the DV
recall the 8 features of science and the mnemonic
PROPH(F)ET
- paradigms
- replicability
- objectivity
- paradigm shift
- hypothesis testing
- falsifiability
- empirical method
- theory construction
define objectivity
give example
the ability to keep a critical distance from one’s own thoughts and biases
- forms basis to empirical method
- lab studies with most control, tend to be most objective
- —- e.g. Milgram, Asch
define empirical method
give example
the scientific process of gathering evidence through direct observation and sensory experience
- e.g. experimental method, observational method
- —> Milgram (experimental); Ainsworth’s Strange Situation (observational)
define falsifiability
give example of an unfalsifiable theory
theories must admit the possibility of being proven false through research studies
- despite not being “proven”, the strongest theories have survived attempts to falsify them
e.g. Freud’s Oedipus complex is unfalsifiable
define replicability
what does it help assess
example
extent to which the research procedures can be repeated in the exact same way, generating the same findings
- helps assess validity: repeating the study across different cultures and situations shows the extent to which findings can be generalised
(e.g. Ainsworth’s Strange Situation - behavioural categories, standardised procedure)
define a theory
- describe their construction
- a set of general laws that explain certain behaviours
- this will be constructed based on systematic gathering of evidence through empirical method, and can be strengthened by scientific hypothesis testing
define hypothesis testing
•••example
statements, derived from scientific theories, that can be tested systematically and objectively
- the only way a theory can be falsified (by testing the null hypothesis)
••• e.g. “does STM have more than one store?” —> led to the WMM
define a paradigm
a paradigm is a set of shared beliefs and assumptions in science
- psychology lacks a universally accepted paradigm
define a paradigm shift
•••example
- a significant change in the dominant theory within a scientific discipline, causing a scientific revolution
- —> occurs as a result of contradictory research that questions the established paradigm
- other researchers start to question the paradigm, and eventually there is too much evidence against it to ignore, leading to a new paradigm
•••idea of brain’s function as holistic —> idea of localisation of function
define deduction
process of deriving new hypotheses from an existing theory
define a case study
features of typical case study
a detailed, in-depth investigation and analysis of an individual, group or event
- qualitative data
- longitudinal
- gather data from multiple sources (friends, family of individual also)
pros and cons of case study
pros
• rich, in depth data
• can contribute to understanding of typical functioning (research on patient HM revealed the separate LTM & STM stores)
• a contradictory case can generate hypotheses for further nomothetic research (whole theories may be revised)
cons
• such cases rarely occur, so findings are hard to generalise
• ethical issues (e.g. patient HM had to consent repeatedly to being questioned, as he couldn’t remember the sessions from day to day over 10 years)
• researcher interprets the qualitative data and selects which data to use (bias)
—> also data from family and friends may be affected by memory decay
define content analysis
and the aim
a type of observational research where P’s behaviour is studied indirectly, through the communications they have produced
the aim is to systematically summarise the P’s communication by splitting it into coding units to be counted (quantitative) or analysed as themes (qualitative)
- usually converts qualitative data into quantitative data
- communications (e.g. texts, emails, TV, film)
describe the steps of content analysis
- gather and observe/read through the communication
- the researcher identifies coding units, in order to categorise the information
- the communication is analysed by applying the coding units to the text, and the number of times the coding unit appears is counted
- data is then summarised quantitatively and so conclusions can be drawn
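A minimal sketch of the counting step above, in Python; the text and coding units are invented for illustration and are not from the flashcards:

```python
# Hypothetical illustration: tally how often each coding unit appears in a communication.
text = "the match was great, great crowd, terrible referee, great atmosphere"
coding_units = ["great", "terrible"]  # invented coding units

counts = {unit: text.lower().count(unit) for unit in coding_units}
print(counts)  # {'great': 3, 'terrible': 1} -> quantitative summary to draw conclusions from
```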
define thematic analysis
a form of content analysis that uses a qualitative method of analysis, identifying emergent themes within the communication in order to summarise it
describe steps of thematic analysis
- form of content analysis
- identify emergent themes (recurring ideas) from the communication
- —-> themes are more descriptive than coding units
(e.g. “stereotyping” is a theme; “a woman being told to go to the kitchen” is a coding unit)
- these themes may be further developed into broader categories, to try to cover most aspects of the communication
- a new set of communication may be used to test the validity of the themes
- qualitative summary is then written up, using quotes from communication
pros and cons of content analysis
pros
• high reliability, as follow systematic procedures
- material is often public so don’t need consent & cheap to use secondary data
- flexible as can produce both quantitative and qualitative data
cons
• very time consuming to manually code the data and identify coding units or recurrent themes
- P’s are studied indirectly, so the communications they produce are analysed out of the context in which they occurred
- content analyses can lack objectivity, as researchers interpret the communication themselves —> risk of human error when interpreting more complex communications
acronym to remember the second column (related column) in the table for choosing statistical tests
S
W
R
sign
wilcoxon
related T
hint to remember all of the first column (unrelated data) from the table for choosing inferential tests for significance
all have U in them
chi sqUare
mann whitney U
Unrelated t
the three factors affecting which inferential test to use
- data? (level of measurement)
- difference? (testing for a difference or a correlation)
- design (independent groups or matched pairs/ repeated measures —> unrelated or related)
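These three factors map onto the standard test-choice table; a minimal Python sketch of that lookup (my own illustration, not part of the flashcards):

```python
# Hypothetical lookup of the statistics table: level of measurement x aim x design.
TEST_TABLE = {
    ("nominal",  "difference",  "unrelated"): "chi-square",
    ("nominal",  "difference",  "related"):   "sign test",
    ("nominal",  "correlation", None):        "chi-square",
    ("ordinal",  "difference",  "unrelated"): "Mann-Whitney U",
    ("ordinal",  "difference",  "related"):   "Wilcoxon",
    ("ordinal",  "correlation", None):        "Spearman's rho",
    ("interval", "difference",  "unrelated"): "unrelated t-test",
    ("interval", "difference",  "related"):   "related t-test",
    ("interval", "correlation", None):        "Pearson's r",
}

def choose_test(level, aim, design=None):
    # design only matters when testing for a difference
    return TEST_TABLE[(level, aim, design if aim == "difference" else None)]

print(choose_test("ordinal", "difference", "related"))  # Wilcoxon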
define a parametric test
a more robust test, that may be able to identify significance that other tests can’t
MUST BE…
- interval data
- P’s must be drawn from a normally distributed population
- the variance between P’s in each group must be similar
observed/ calculated value
is the value that is produced by the statistical test
critical value
value obtained from the critical values table for the specific test
- the cut off point between accepting and rejecting the null hypothesis
how do you know whether the observed value should be ≥ or ≤ the critical value, for test to be significant
“gReater rule”
if test has an R in it, the observed/ calculated value should be GREATER than or equal to the critical value
e.g. unRelated t
- Related t
- chi-squaRe
- peaRson’s r
- spearman’s Rho
all should have an observed value ≥ critical value to be significant
(sign test, wilcoxon and mann whitney u must have observed value ≤ critical value, to be significant)
define nominal data
- presented in form of categories
- is discrete and non-continuous
define ordinal data
- presented in orders or ranked
- no equal intervals between units of data
- lacks precision, as what one person rates as a “4” is subjective
- data is converted into ranks (1st, 2nd, 3rd) for statistical tests, as the raw scores are not precise enough
define interval data
- continuous data
- units of equal, precisely defined sizes (often public measurement scales used - e.g. time, temperature)
- the most sophisticated, precise form of data - hence its use in parametric tests
experimental design(s) of related data
matched pairs
repeated measures
experimental design(s) of unrelated data
independent groups
type 1 and 2 error
type 1 - false positive (said there was a significant effect when there wasn’t ONE)
type 2 - false negative (said there was no effect when there was one - TOO strict)
steps to complete sign test
- find difference between two scores (+ - 0 )
- select lowest number of + or - as ‘s’ observed value (same for Wilcoxon test)
- calculate N (no. of participants - 0’s)
- use hypothesis, probability and N value to find critical value
- s must be ≤ critical value, to be significant
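A minimal sketch of those sign-test steps with made-up before/after scores (purely illustrative; ‘s’ is then compared against the critical value from the table for N):

```python
before = [5, 7, 6, 4, 8, 6, 7, 5]   # invented scores, condition A
after  = [7, 8, 6, 6, 9, 5, 8, 7]   # invented scores, condition B

signs = []
for b, a in zip(before, after):
    diff = a - b
    if diff > 0:
        signs.append("+")
    elif diff < 0:
        signs.append("-")
    # differences of 0 are dropped

s = min(signs.count("+"), signs.count("-"))  # observed value 's' = the less frequent sign
n = len(signs)                               # N = number of P's minus the 0 differences
print(s, n)  # significant if s <= the critical value for this N
```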
perfect conclusion template for a statistical test
using sign test for example
- the observed value ‘s’ of 1.4 was ≤ the critical value of 1.6 for an N value of 10, at a probability of 5% for a one-tailed test
- therefore, we can accept the alternative hypothesis that ‘the happiness score of toffees increases when Rafa is out, rather than when he is in’
what is the order of all sections of a scientific research report
abstract, introduction, method, results, discussion, referencing
describe the abstract section of a scientific report
- short summary of the study
- includes all major elements: aims, hypothesis, method, results, discussion
- written last, but appears at the start of the report
describe the introduction section of a scientific report
• large section of writing
- outlines relevant theories, concepts and other research, and how they relate to this study
- states the aims and hypotheses
describe the method section of a scientific report
✰ section explaining how the experiment was carried out, split into:
- design - experimental design (e.g. IG, MP, RM); research method or observation type (e.g. overt, naturalistic); IV & DV; and validity and reliability issues
- participants - sampling technique, who is studied (biological and demographic), how many P’s, target population
- apparatus/ materials needed
- procedure - step by step instructions of how it was carried out, include briefing and debrief to P’s
- ethics - DRIPC, how this was addressed
describe the results section of a scientific report
✰ summary of key findings, split into :
• descriptive statistics
- uses tables, graphs and measures of central tendency & dispersion
• inferential statistics
- test chosen, calculated and critical values, significance level, if it was significant, which hypotheses accepted
••••• if gathered qualitative data, likely to be in the form of categories or themes
describe the discussion section of a scientific report
✰ large piece of writing where researcher summarises and interprets the findings verbally and the implication of them
includes:
- relationship to previous research in introduction
- limitations of research- consider methodology and suggestions for improvement
- wider implications of research- real world applications and the contribution of research to current theories
- suggestions for future research
describe the referencing section of a scientific report
the full details of any source material mentioned in the report, are cited
describe how to do a book reference
surname, first initial (year published), title of book (italics), place of publication. publisher
e.g. Copland, S (1994), 𝘛𝘩𝘦 𝘤𝘩𝘳𝘰𝘯𝘪𝘤𝘭𝘦𝘴 𝘰𝘧 𝘣𝘦𝘪𝘯𝘨 𝘴𝘶𝘴, California, Puffin books
how to write a journal reference
author, date, article title, journal name (italics), volume (issue), page numbers
e.g.
Copland, S (1994) Effects of being sus on your ball knowledge, 𝘛𝘩𝘦 𝘜𝘭𝘵𝘪𝘮𝘢𝘵𝘦 𝘉𝘢𝘭𝘭 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘵𝘦𝘴𝘵 , 11 (12), 231-237
brief description of an appendix (not listed in the Illuminate scientific report sections, but also included in a report)
- contains any raw data, questionnaires, debriefs, consent forms, calculations
- evidence that doesn’t fit in the main body of the report
outline what’s in a consent form
- aim
- what they will do, and for how long
- right to withdraw and confidentiality
- ask for questions
- place to sign & add date
outline what’s in a debrief
- aims
- discuss what went on in all conditions and any deception
- findings
- right to withdraw
- remind confidentiality
- where they can find more info
- any questions?
outline what’s in ‘instructions’
• step by step of everything P has to do
define validity
whether the observed effect in a study is genuinely due to the manipulation of the IV (i.e. it measures what it set out to measure) and whether findings can be accurately generalised beyond the research setting
(e.g. across historical contexts, compared to well-recognised studies, measuring what set out to measure)
all different types of validity
- internal
- external
- ecological
- concurrent
- face
- temporal
define concurrent validity
extent to which findings have a correlation with the results from well-recognised studies with established validity
define temporal validity
extent to which findings can be generalised to other historical contexts/eras
define ecological validity
extent to which findings can be generalised to real life, outside of the research setting
define face validity
extent to which, on the surface, a study looks like it measures what it set out to measure
define internal validity
extent to which a study measures what it set out to measure
i.e. is the observed effect on the DV due to the manipulation of the IV?
define external validity
the extent to which the findings reflect the real world: the people (population validity), the environment (ecological validity) and the time era (temporal validity)
how to improve validity in
- questionnaires
- interviews
- experiments
- observations
• questionnaires
- incorporate redundant questions to create a ‘lie scale’ (account for social desirability bias)
- anonymity
- remove ambiguous questions
• interviews and case studies
- a structured interview reduces investigator effects, but reduces rapport, so answers may be less accurate
- triangulate data
- gain respondent validity by checking you have understood the P correctly, and use quotes in the findings (increases interpretive validity)
• experiments
- control group
- pilot study to expose extraneous variables
- change experimental design to reduce order effects or effect of participant variables
- standardise procedure
- counterbalancing, double blind, randomisation
• observations
- familiarise with BC so don’t miss anything
- operationalise BC, so it is clear what you’re looking for
- use covert or non participant
define a pilot study
a small-scale trial run of the actual study, completed before the real full-scale research
why use pilot studies
- can identify extraneous variables, that can be controlled for the real study
- can help improve reliability (test-retest)
- modify any flaws with the procedure or design (reduces the cost of errors at full scale)
- can allow training of observers
- can adapt or remove ambiguous or confusing questions in questionnaire or interview
- can identify areas where further randomisation, counterbalancing, standardisation etc… can be used, to limit any observed order effects, bias, investigator effect or demand characteristics
define peer review
assessment of research, done by other psychologists in a similar field, who provide an unbiased opinion of a study to ensure it is high enough quality for publication
describe the aims of peer review
- allocate research funding as people (and funding organisations) may award funding for a research idea they support
- ensure only high-quality, useful studies are published
- suggest amendments, improvements or withdrawal before publication
process of peer review
- research is sent to an anonymous peer to objectively review all aspects of the written investigation
- they look for:
• clear and professional methods & design
• validity
• originality (not copied) and significance of the research in that field of psychology
• results - the statistics chosen and the conclusions
weaknesses of peer review
• may bury ground-breaking research
- may slow down the rate of change
- also, if research contradicts the current paradigm or mainstream research, it may be buried or resisted
•publication bias
- editor preferences may give false view of current psychology
- some publishers only want to publish positive news or headline grabbing research to boost the popularity of their journal (may ignore valuable research)
•anonymity
- the peers reviewing stay unidentified
- researchers competing for funding may be over critical
(some publishers now reveal afterwards who reviewed the work, to combat this)
- reviewers may also resist findings that challenge their previous research
define reliability
how consistent a study’s data is, and the extent to which it would produce similar results if the study was repeated
two ways of assessing reliability
• inter-observer reliability
(inter-rater for forms like content analysis)
• test-retest reliability
define inter-observer reliability
the extent to which there is an agreement between two or more observers observing the same behaviour, using the same behavioural categories
define test-retest reliability
the extent to which there is a correlation between the scores of the same P’s on a test or questionnaire, completed on different occasions
describe how to carry out inter-observer reliability
- complete observation again with two or more observers watching the same observation and using same behavioural categories
- compare the results of the different observations using a spearman’s rho test (0.8+ for a strong correlation)
describe how to carry out test-retest reliability
- administer same test or questionnaire to the same P on different occasions
- not too soon - prevent recall of answers
- not too long - prevent the views or ability being tested changing
- use a correlation to compare the results, (0.8+ for a strong correlation)
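A minimal sketch of both reliability checks, with invented scores and assuming scipy is available:

```python
from scipy.stats import spearmanr, pearsonr

# inter-observer reliability: two observers' tallies per behavioural category (invented)
observer_a = [12, 8, 15, 7, 10, 14]
observer_b = [11, 9, 14, 7, 12, 15]
rho, _ = spearmanr(observer_a, observer_b)
print(round(rho, 2))  # 0.8+ suggests good agreement between observers

# test-retest reliability: the same P's questionnaire scores on two occasions (invented)
test_1 = [34, 28, 45, 39, 30]
test_2 = [36, 27, 44, 41, 31]
r, _ = pearsonr(test_1, test_2)
print(round(r, 2))    # 0.8+ suggests the measure is consistent over time
```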
how to improve reliability in
- questionnaires
- interviews
- experiments
- observations
• questionnaires
- closed questions
- clear, unambiguous questions
• interviews and case studies
- same researcher - limits leading or ambiguous questions
- structured interviews
• experiments
- standardised procedure and instructions
• observations
- operationalise and familiarise/train observers with behavioural categories
- two or more observers and compare results
two ways of assessing validity
• ‘eyeball’ test to measure face validity of the test or measure
OR
- pass to expert to measure face validity
AND
• compare with a well-recognised test with established validity to produce a correlation coefficient, measuring concurrent validity (close agreement if +0.8 or above)
define imposed etic
when we assume a technique that works in one cultural context will work in another
define a meta analysis
combination of research and findings from several studies on the same topic
- findings are weighted according to their sample sizes
what is experimental method
aka types of experiment
lab
field
natural
quasi
define a quasi experiment
example
IV is a naturally occurring difference that already exists between people (e.g. a biological difference such as age or gender)
- —> not manipulated or changed by the researcher
- measures the effect of this existing IV on the DV
- can be field or lab
Bahrick- studied duration of LTM of those within 15 years of graduation, and those within 50 years of graduation
define a natural experiment
example
IV is a naturally occurring event, not manipulated
- someone or something caused the IV to vary (not the researcher)
- measures the effect of the naturally occurring IV on the DV
- can be field or lab
Rutter’s Romanian Orphans study
define a field experiment
example
carried out in a natural, everyday setting (the usual environment the P’s are in)
- IV manipulation is under less control
Piliavin- studied how people react differently when they see someone collapse on the train, due to drinking or due to injury
define a lab experiment
example
- carried out under controlled conditions
- researcher manipulates IV to see effect on the DV
Milgram
strengths of lab
- control of variables
- —> easier to replicate
- —> can establish cause and effect between IV and DV, as variables are precisely manipulated (high internal validity)
- —> more accurate results
weaknesses of lab
- tasks may be artificial or trivial - low mundane realism
- P’s aware they are being studied- DC (low external validity)
- artificial environment- may behave differently - hard to generalise to real world (low external validity)
strengths of field
- higher mundane realism in natural environment
- P’s unaware being studied (no DC)
- higher external validity
weaknesses of field
- ethical issues- P’s can’t consent, and their privacy is invaded
- less control over variables - less valid, harder to replicate and harder to establish cause and effect between IV and DV
strengths of natural
- opportunity for research that would not be possible, for practical and ethical reasons
- high external validity, as study real world scenarios and problems
drawbacks of natural experiment
- IV isn’t deliberately changed, so we can’t say the IV caused the observed change in the DV (can’t claim cause and effect)
- events rarely occur, so hard to generalise findings
- P’s can’t be randomly allocated to conditions
- —> may experience confounding variables
strengths of quasi
completed in a field or lab setting
- if in field: P’s unaware being studied (no DC), mundane realism
- if in lab: control over variables, replication
drawback of quasi
- P’s can’t be randomly allocated to conditions
- —> may experience confounding variables
(e.g. all those who have been in a car crash may have a higher trauma level than the control group)
- IV isn’t deliberately changed, so can’t say the IV caused the observed change in the DV (can’t claim cause and effect)
define mundane realism
results are representative of everyday life
define experimental designs
different ways P’s can be arranged into experimental conditions
- RM
- IG
- MP
define repeated measures
one group of P’s experience both conditions of the experiment
- mean of condition A compared to mean of condition B
define independent groups
- two separate groups experience two different conditions of the experiment
- performance of each group is then compared
define matched pairs
- there are a separate group of P’s for each condition of IV
- but each P is matched to one other, based on certain shared characteristics relevant to the study
e.g. P’s complete IQ tests prior to the actual study, and the no.1 score is matched with the no.2 score, and so on
- each P does one condition and their scores are compared directly against their partner’s
strengths and weaknesses of repeated measures
+++++++
• no effect of individual variables
• more economical - only need one group of P’s (less time/£ on recruitment)
———-
• order effects (boredom, fatigue, practice), also the 1st condition may affect the 2nd condition (e.g. effects of coffee in C1 may continue into C2 where they have water)
• may realise what the study is aiming to find, therefore DC may be present
strengths and weaknesses of independent groups
+++++++
• reduced order effects
• less chance of realising aim of study - no DC
———-
• individual variables
• less economical- twice as many P’s needed
strengths and weaknesses of matched pairs
+++++++
• accounts for individual variables
• reduced order effects
• unlikely to realise aim of study
———-
• more expensive and time consuming to find suitable matching P’s, also may need a pre-test
• although matched, individual variables will still be present
define standardisation
• what does it remove
- keeping procedures in a research study the same
- all participants treated the same - (so they have the same experience)
• removes experimenter bias
also makes the study replicable and easy to complete again accurately
define counterbalancing
where half of P’s do the first condition first followed by the second, and the other half do the second condition first and the first condition second
- control for order effects
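A minimal sketch of that split in Python (participant labels invented for illustration):

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6"]  # invented labels
random.shuffle(participants)                          # randomly allocate P's to an order

half = len(participants) // 2
group_ab = participants[:half]  # do condition A first, then B
group_ba = participants[half:]  # do condition B first, then A
print(group_ab, group_ba)       # order effects should balance out across the two halves
```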
define random allocation
each participant has an equal chance of being in each group/condition
- control for participant variables
define participant variables
individual characteristics that may influence how a participant behaves
define randomisation
- use of chance wherever possible to reduce bias or investigator effects (conscious or unconscious)
what variables do double blind and single blind procedures control
double blind: demand characteristics and experimenter bias
single blind: demand characteristics
define random sample
- all members of target pop have an equal chance of being selected
METHOD
- assign every name in the sampling frame a number, input them into a computer, and randomly generate n numbers; the P’s these numbers correspond to form the sample
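A minimal sketch of that method (the names are hypothetical):

```python
import random

sampling_frame = ["Amy", "Ben", "Cara", "Dev", "Ena", "Finn", "Gia", "Hal"]  # invented frame
sample = random.sample(sampling_frame, 3)  # every member has an equal chance of selection
print(sample)
```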
strengths and weaknesses of random sample
+++++++
• no bias, as everyone has an equal chance (confounding and extraneous variables are equally divided across groups)
———-
• may still be unrepresentative, as it’s random
• hard to obtain complete list of target pop
define systematic sample
every nth member of population is selected
METHOD
- organise the sampling frame, e.g. alphabetically
- begin from randomly generated number and select every nth member after until sample complete
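A minimal sketch of that method (names and the choice of n = 3 are purely illustrative):

```python
import random

sampling_frame = sorted(["Amy", "Ben", "Cara", "Dev", "Ena", "Finn", "Gia", "Hal", "Ivy"])  # invented
n = 3                                  # take every 3rd member (illustrative)
start = random.randrange(n)            # randomly generated starting point
sample = sampling_frame[start::n]      # every nth member from the start
print(sample)
```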
strengths and weaknesses of systematic sample
+++++++
• unbiased and objective: once n is selected, the investigator has no influence over who is chosen
———-
• may still be unrepresentative
• time consuming and unrealistic to obtain full sampling frame
define stratified sample
composition of sample is weighted to reflect the proportion of people in certain subgroups in the target population (e.g. race, religion, what football team support)
METHOD
- identify strata in population, work out the % of population the strata contains, use same % representations in sample
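A minimal sketch of the proportion step (strata names and figures invented):

```python
population = {"Team A fans": 500, "Team B fans": 300, "Neither": 200}  # invented strata
sample_size = 50

total = sum(population.values())
per_stratum = {stratum: round(count / total * sample_size)
               for stratum, count in population.items()}
print(per_stratum)  # {'Team A fans': 25, 'Team B fans': 15, 'Neither': 10}
# P's are then selected at random from within each stratum
```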
strengths and weaknesses of stratified sample
+++++++
• highly representative of target population - can generalise findings
• no bias
———-
• very time consuming and costly
• hard to determine which variables to split the population by
define opportunity sampling
selecting the first people that are willing and able to take part in the study
- whoever is available and around at the time the study is conducted
strengths and weaknesses of opportunity sample
+++++++
• convenient - time and low cost (don’t need full sampling frame)
• more likely to be people you know, so easier to conduct study
———-
• researcher may only go up to/ ask a certain type of person (only ‘approachable’ people) (researcher bias)
• unrepresentative, as only from one specific area
define volunteer sampling
researcher advertises the study (or even ask people to raise their hand) and P’s select themselves to take part
strengths and weaknesses of volunteer sample
+++++++
• minimum input from researcher, P’s come to you
• all of the P’s who put themselves forward will be willing to participate (all engaged too)
———-
• attract a certain personality of those who want to help, or please the investigator
• cost of advertising
• risk not enough people willing to take part
weakness of all sampling methods
- selected P’s may refuse to take part, so the sample ends up more like an opportunity sample
define population
define sample
- a group of people that the researcher is interested in studying
- a small subset selected from the target population to take part in the study, through a particular sampling method (assumes to be representative of target population)
define generalisation as an implication of sampling
- if sample is representative, the findings from the study can be applied to the wider target population
define bias as an implication of sampling
when certain subgroups of a target population are under or over represented within the sample
list observational techniques
- naturalistic
- controlled
- covert
- overt
- participant
- non participant
- structured
- unstructured
define naturalistic and controlled observation
• naturalistic
- completed in field setting, where target behaviours would normally occur
- investigator doesn’t interfere with setting or variables
• controlled
- completed in lab setting under controlled conditions
- investigator controls extraneous and confounding variables, and manipulates the IV to observe its effect
define covert and overt observation
covert (covert operation- under cover)
- participants are NOT aware they are being observed
- behaviour must be in public and happening anyway (to be ethical)
overt
- participants ARE aware they are being observed
define participant and non participant observation
participant
- observer involves themself in the group of people they are observing
- sometimes only possible if involve (e.g. to see how factory workers are treated, join them)
non participant
- observer remains separate from group they are observing, and don’t interfere
strengths and weaknesses of overt and covert observations
OVERT
+++++++
• can obtain informed consent
———-
• more likely DC
• more susceptible to the Hawthorne effect (P’s act differently as they know they’re being watched)
COVERT
+++++++
• unlikely for DC - increasing internal validity
• less susceptible to the Hawthorne effect (acting differently because they know they’re being watched)
———-
• can’t obtain informed consent, so are observing people without them knowing
strengths and weaknesses of naturalistic and controlled observation
NATURALISTIC
+++++++
• higher mundane realism, external validity
• P’s more likely to act naturally
———-
• harder to judge patterns of behaviour, as can’t control extraneous or confounding variables
• harder to replicate
CONTROLLED
+++++++
• easier to replicate
• more control over variables, higher chance of cause and effect
———-
• DC
• artificial environment - less mundane realism
• Hawthorne effect
strengths and weaknesses of participant and non participant observation
PARTICIPANT
+++++++
• may be necessary (e.g. to see how factory workers are treated, join them)
• can understand reason behind behaviour as gain idea of emotions
• can build rapport with P’s and gain more insight and detail
———-
• observer may go native (too invested in experiment, they lose track of aims)
• may miss important observations when involved
• may lose objectivity if adopt the “local lifestyle” (identity too strongly with P’s)
NON-PARTICIPANT
+++++++
• may be necessary when studying certain social groups (e.g. 50 year old man can’t blend in and join a group of 15 year olds boys)
• no risk of observers going native
• objective
———-
• can’t gain an understanding of why people are behaving as they are, and may miss further insight
strengths and weakness of using observations, in general
pros
- give actual insight into behaviour, people don’t always act how they say they do
cons
- observer bias - depends on how the observer interprets the situation and behaviour
- can’t demonstrate cause and effect, as variables aren’t closely manipulated and there may be confounding variables
- can’t understand why people are behaving the way they are
define structured and unstructured observation
structured
- uses pre-determined behavioural categories to record the frequency that these target behaviours occur
- —-> used in larger studies where there is too much going on
unstructured
- observe all relevant behaviour with no standardised checklist of behavioural categories
- —> used in small scale observation with few P’s
strengths and weaknesses of structured and unstructured observation
STRUCTURED
+++++++
• produce quantitative data- easier to analyse and draw conclusions
• inter rater reliability increased - all observers looking for same behaviours
———
• not as detailed insight
• may miss important behaviours
UNSTRUCTURED
++++++
• rich in data
• understand reason behind behaviour
———-
• hard to replicate, as interpretations of observer important
• so much information - time consuming to analyse and make conclusions
• observer bias - e.g. may only record behaviours that stand out
define behavioural categories
standardised checklist of operationalised target behaviours, that have been broken up to be more measurable and observable
- only in structured observations
strength and weakness of behavioural categories
+++
•replicable
• know what to look out for
• allow for inter-observer reliability
———-
• some categories may be wasted, or empty
• must be clear, operationalised and non-overlapping, otherwise open to interpretation
• some behaviours may be missed if there is no category for them
sampling methods of observation
- continuous recording - record all instances of target behaviour (unstructured)
- event sampling - tallying the number of times particular target behaviours occur
- time sampling - record behaviour that occurs within a pre-established time frame, at set intervals
strength and weakness of event sampling
+++
• easier to complete
• focus on chosen behaviours, so don’t miss anything
———-
• if a lot happens at the same time, it may be hard to keep up
• may have some wasted categories
strength and weakness of time sampling
+
•time efficient
———-
• may miss behaviour that occurs in between the sampled intervals
define self report techniques
- any method where the P reports their own thoughts, feelings or behaviour (e.g. questionnaires, interviews)
define interview
a live conversation where an interviewer asks the interviewee questions to assess their individual thoughts and experiences
describe 3 types of interview
• structured
- standardised, pre-determined questions asked in a fixed order
• semi-structured
- list of questions to ask in advance, but can ask follow up questions
• unstructured
- no set questions in any fixed order
- only a general aim to discuss a certain topic
- interviewee is encouraged by the interviewer to elaborate on their answers
- like a conversation
strengths and weaknesses of structured interview
+ easier to analyse
+ replicable
+ interviewer requires less skill and training
- answers may be restricted by questions
- may increase social desirability bias, as P’s don’t have to justify their answers
strengths and weaknesses of unstructured interview
+ more detailed insight, can understand reason behind
+ responses tend to be more honest, as P’s have to be able to justify them
- hard to analyse and make general conclusions, as lots of different questions and answers
- different questions may be interpreted differently by different P’s
- interviewer bias may affect what questions are asked
define questionnaire
a set of written questions that a respondent answers to assess their individual thoughts, experiences and behaviour
common issues with questionnaires
………. what do they lead to
- overuse of jargon - using specialist topic vocabulary and assuming the respondent knows more about the topic than they actually do
- emotive language - the author’s attitude comes through in the emotional tone of the words used
- leading questions - the phrasing of the question leads the respondent to answer in a particular way
- double barrelled questions - contains two Qs in one
- double negative - two forms of negative in same Q
…. all decrease clarity, increase confusion and misinterpretation
describe open questions
closed questions
open questions are Qs that allow the respondent to answer however they wish, with no set answers to choose from
closed questions are Qs with a fixed number of responses
strengths of questionnaires
weaknesses of questionnaires
+ cost effective
+ wide geographical use
+ convenient (don’t need researcher present)
+ can generate large volumes of data easily
+ easily analyse and compare data
- may be subject to social desirability bias
- response bias (e.g. acquiescence bias, or responding without fully reading the Q)
define ethical issues in psychological research
• the problems that arise from how P’s in the study are treated
- these exist when there is conflict between the aim of the experiment to produce valid research data, and the rights or safety of P’s
- a code of ethics and conduct is created by the British Psychological Society (BPS)
describe all the ethical issues
and acronym
DRIPC
Deception- deliberately withholding information and misleading P’s at any time
Right to withdraw- the rights of P’s to be able to withdraw themselves and their data from the study at any time
Informed consent- making sure P’s know the aims of the research, the procedure, their rights and how their data will be used
Protection from harm- idea that P’s emotional and physical health is top priority and should be protected
Confidentiality- the right of P’s to have their data kept from the public, or shared only anonymously to protect their identity
how to deal with informed consent
….if impractical to get consent form:
• consent form - a form that a P signs to confirm they know the aims of the experiment and what they will have to do
if impractical to get consent form:
• retrospective consent - ask P’s for consent after participation, during debrief
• prior general consent - P’s give their permission to take part in a number of studies, including one that will involve deception
• presumptive consent - ask people of a similar group to P’s if they would consent to the study
how to deal with deception and protection from harm
- debrief - after study, explain true aims of experiment and any information that was withheld from them
- reminded they can withdraw themselves and their data at any time
- provide support through counselling or therapy if P’s need it
- re-assure that their behaviour was normal/typical
how to deal with confidentiality
• usually don’t record any personal data to maintain anonymity
—> often use initials or numbers to describe P’s
•during briefing and debrief, remind P’s their data is private and won’t be shared
define quantitative data
numerical data that involves statistics
- easy to analyse
- lack detail
define qualitative data
non numerical data expressed in words
- hard to analyse —> hard to identify patterns
- relies on interpretation —> bias
- rich in detail —> captures true feelings —> greater external validity
define primary data
examples
1 strength
data gathered first hand by the researcher, particularly for the research project
- gathered through observations, interviews, questionnaires
• can be designed to collect exactly the data needed for the investigation
define secondary data
examples
1 weakness
data collected by someone else, other than the researcher, that already exists before the research project
e.g books, journals, websites, government statistics
• may be outdated
define a variable
independent
dependent
any element of an investigation, that can change or vary
- the variable that is manipulated, so that the effect on the DV can be measured
- the variable that is measured by the researcher
define demand characteristics
- type of extraneous variable
- P’s think they may have guessed the aims of the research and therefore act in a different way
- (they may either help or hinder the experimenter in finding what they want)
define investigator effects
define participant reactivity
- the unwanted influence of the investigator’s actions on the behaviour of P’s
- the differences in behaviour of P’s as they try to adjust to the study environment
extraneous/confounding variables that result from participant reactivity
- demand characteristics - P’s act different as they think they know the aims of study
- hawthorne effect - P’s act different as they know someone is watching
- social desirability bias - P’s alter their behaviour to be seen as more socially acceptable
extraneous/confounding variables that result from investigator effects
• experimenter bias - the experimenter affects the results through their interpretation, body language, facial expressions or gender (e.g. male researchers may behave differently towards female P’s)
• interviewer bias - ways in which the interviewer influences the response of the P
—->nodding, leading questions, interpretations
• greenspoon effect - interviewer affects the way the P responds in interview by making affirmative noises
define situational variables
examples
aspects of the environment and situation a P is in, that might affect their behaviour
e.g. light, temp, noise
two examples of how psychological research has affected the economy
- Bowlby’s theory of monotropy, and further research into role of father
- development of treatments for disorders
describe the financial implications on the economy of Bowlby’s theory of monotropy, and further research into role of father
- Bowlby suggested only mothers can form the monotropic attachment bond with their baby
- this pressured mothers to stay at home (due to the law of accumulated separation and the law of continuity), meaning the father had to work even if he earned less
- recent research into the role of the father (e.g. Frodi found fathers and mothers show the same physiological responses to video clips of babies crying)
- —> has allowed more flexible working arrangements (fathers can stay at home, and mothers can work if they earn more)
- —> because it shows both parents are suitable to provide the necessary emotional support
- can maximise income and contribute positively to economy
describe the financial implications on the economy on development of treatments for mental disorders
- creates a more healthy workforce, contributing to labour
- workers can manage their symptoms and return to work efficiently
- also, the cost of mass-producing drugs is outweighed by the £15 million cost of absence from work to the economy
ways to get consent in deception study
- presumptive consent
- —> ask a similar group of people how they would feel about taking part; if they agree, it is assumed the real P’s would too
- prior general consent
- —-> asking P’s to give permission to take part in a number of different studies, including one that will involve deception