research methods Flashcards
define aim
description of what you are researching and why
define hypothesis
states the relationship between the variables and predicts the results
define directional hypothesis
states the expected direction of the difference or correlation, based on previous research
define non directional hypothesis
predicts there will be a difference in results but the direction is unknown as there is no previous research
define null hypothesis
predicts there will be no difference
define IV
variable we change
define DV
variable we measure
define operationalisation of variables and how to do it
clearly defining the variables and stating how they will be measured by adding values and units
define extraneous variables
variables other than the IV that may affect the DV if not controlled, and that do not relate to the IV
define demand characteristics
clues that allow participant to guess the aim and changes their behaviour to help or sabotage the experiment
define social desirability
when the participant tries to please the researcher or make themselves look better
define the hawthorne effect
when people are interested so they show a more positive response which leads to artificially high results
define investigator/experimenter effects
experimenter unconsciously conveys to participant how they should behave
examples of investigator effects
tone, accent, body language, leading questions
define situational variables
aspects of environment that may affect the participants behaviour
examples of situational variables
temperature, noise, authenticity of experiment
define participant variables
the ways each participant varies and how this affects their results
examples of participant variables
trauma, mood, intelligence, anxiety, gender, culture
how can you control extraneous variables
single blind design
double blind design
experimental realism
randomisation
standardisation
controls
define single blind design
participant is not aware of the research aims
define double blind design
participant and experimenter are unaware of aim and hypothesis
define experimental realism
researcher makes the task so engaging that the participant forgets they are being observed
define randomisation
randomly allocating tasks and roles to avoid bias
define standardisation
experience of experiment is kept almost identical
define confounding variable
variable other than IV that had a direct effect on the DV and is related to IV
define pilot studies
small scale practice investigations that help identify potential problems (including floor and ceiling effects) before doing the real experiment, so money and time are saved
define validity
the extent to which a study measures what it intends to measure
define internal validity
whether the effects observed are due to the IV and not another factor
define mundane realism
how realistic the task is
define external validity
how well the findings generalise to other people, places and times
define ecological validity
the extent to which the results reflect real life
define population validity
how well the sample can be used to generalise to represent the population as a whole
define temporal validity
the extent to which the findings are valid when we consider differences in time progressions
define face validity
the test/questionnaire looks like it measures what it intends to
define concurrent validity
whether the results can be compared to another existing, well established test which measures the same thing and follows the same correlation
how to improve validity
- control group- compare results with the experimental group to see if IV changes
- covert observations- participant doesn’t know they are being watched so they are natural
- questionnaires- keeping them anonymous so they are more truthful
- qualitative methods- interviews have high ecological validity as they represent humans more accurately
- standardise procedures and instructions
- single blind or double blind
- assure results are anonymous so participants are truthful
- incorporate a lie scale to assess the consistency of responses
- triangulation- use of different sources of evidence
define experimental design
researcher has to decide how they will use their participants
define a repeated measures experiment
same group of participants takes part in all conditions
advantages of repeated measures experiment
- no participant variables
- fewer participants so more economical
disadvantages of repeated measures experiment
- order effects
- demand characteristics
define order effects and what are the different types
the effects of doing the same task more than once
boredom, fatigue, practice effects
solutions to repeated measures experiment
counterbalancing and randomisation
define counterbalancing
when two groups do the tasks in a different order to cancel out order effects
define independent group design
different groups perform only one condition
advantages of independent group design
- no practice effects
- reduces demand characteristics
disadvantages of independent group design
- needs more participants
- participant variables between groups
solutions to independent group design
random allocation, as each participant has an equal chance of being in either group, which helps avoid an imbalance of participant variables between groups
define matched pairs design
pair up participants on a certain quality that is believed to affect the performance on the DV and their results are compared
advantages of matched pairs design
- participant variables reduced
- no order effects
disadvantages of matched pairs design
- larger number of participants needed
- difficult to match on characteristics like personality
- difficult to know which variables are relevant
solutions to matched pair design
pilot study to help choose which variables are most important to match on
define ceiling effect
task is too easy so all the scores are high
define floor effect
task is too difficult so all scores are low
define construct validity
extent to which a test captures a specific construct or trait and it overlaps with some other aspects of validity
define experiment
IV that is changed so that the effect on DV can be observed and aims to establish cause and effect relationship
define laboratory experiment
takes place in a carefully controlled lab where the IV is manipulated by the experimenter so the DV can be measured
pros of laboratory experiment
- extraneous variables are closely controlled so increases internal validity
- easily repeated as it is controlled so increases reliability
- shows cause and effect relationship
cons of laboratory experiment
- artificial nature so lacks ecological validity
- know they are tested so may lead to demand characteristics
- lacks mundane realism
define field experiment
conducted in natural setting where the IV is still manipulated so the DV can be measured
pros of field experiment
- higher mundane realism
- naturalistic so high ecological validity
- demand characteristics are less likely
cons of field experiment
- harder to control extraneous variables
- ethical issues as the participants don’t know they are being studied
- harder to replicate
- IV may be operationalised in a way that lacks mundane realism
define natural experiment
IV naturally occurs, and would take place even without the research taking place, and DV is then measured
pros of natural experiment
- high external validity
- provides opportunities for research that would otherwise be impossible to replicate
- reduced demand characteristics
cons of natural experiment
- less control over extraneous variables
- very unlikely to be able to replicate
- random allocation of participants is not possible, so there may be bias and participant variables
define quasi experiment
participants are automatically assigned to a condition depending on their characteristics or features that don’t change
pros of quasi experiment
- controlled experiments so can be replicated
- high ecological validity as you can compare to real life
cons of quasi experiment
- cannot randomly allocate so more chance of extraneous variables
- demand characteristics as they may become more aware
- DV may be measured artificially, reducing ecological validity
define sampling
choosing a group of people to represent the target population
define target population
population to which the researcher would like to generalise their results
define opportunity sample
using people who are available at the time of testing
define random sample
each member of the population has an equal chance of being selected like names in a hat or random generator
define stratified sample
subgroups are identified and participants are chosen at random from each group in proportion to target population
define volunteer sample
participants put themselves forward to take part
define systematic sample
using a system to pick a pattern of participants e.g every nth time
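The rule-based sampling methods above can be sketched in Python; this is a minimal illustration with a hypothetical population of 100 people (the subgroup names are invented):

```python
import random

population = [f"person_{i}" for i in range(100)]  # hypothetical target population

# random sample: every member has an equal chance (the "names in a hat" method)
random_sample = random.sample(population, 10)

# systematic sample: pick every nth member (here n = 10)
systematic_sample = population[::10]

# stratified sample: choose at random from each subgroup in proportion to the population
strata = {"left_handed": population[:10], "right_handed": population[10:]}
stratified_sample = []
for subgroup in strata.values():
    k = round(len(subgroup) / len(population) * 10)  # share of a 10-person sample
    stratified_sample += random.sample(subgroup, k)

print(len(random_sample), len(systematic_sample), len(stratified_sample))  # 10 10 10
```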
pros of opportunity sampling
- quick and cheap
- can have face to face ethical debriefings
cons of opportunity sampling
- researcher bias, as they choose who they want
- depends on who happens to be available, so the sample may be unrepresentative
pros of random sampling
- avoids bias
- aims to be fair and representative
cons of random sampling
- impossible to have all names of target population
- doesn’t guarantee full representation
- time consuming
pros of stratified sampling
- highly representative so has population validity
cons of stratified sampling
- time consuming and difficult to gather
pros of volunteer sampling
- give their informed consent
- will be interested and less likely to withdraw
- large number may apply so it gives more accurate results and in depth analysis
- helpful to find people who can be seen as atypical
cons of volunteer sampling
- biased as it is not representative of the whole population
- hawthorne effect
- demand characteristics
pros of systematic sampling
- normally representative
cons systematic sampling
- may not be able to identify all members of the population
- unexpected bias that has a pattern
- starting point and deciding on list type may be biased
what are the ethical issues
- informed consent
- deception
- right to withdraw
- protection from harm
- privacy and confidentiality
features of informed consent
- making participants aware of the aims of the research, procedures, risks, rights and what their data will be used for
- use consent forms
- under 16s need parental consent
- consent cannot be given by those under the influence
define presumptive consent
similar group of people are told the details of the study and asked if it is acceptable, and their answer will presume the answer of the actual participants
define prior general consent
participants give their permission to be deceived but not knowing how
define retrospective consent
asking participants after they have taken part whether their data can be used
limitations of informed consent
- invalidate purpose of study
- participants do not know fully what they are getting into
- demand characteristics
features of deception
- BPS only allows when there is scientific justification and no alternative procedure
- full debrief after to discuss concerns
- cost-benefit analysis
limitations of deception
- cost- benefit decisions are flawed
- debriefing can’t turn back time
- distrust in psychologists
features of right to withdraw
- enticed by financial incentives
- fully informed consent, so they know what they are getting into and are less likely to withdraw
- volunteer samples as people are more eager and won’t withdraw
limitations of right to withdraw
- time consuming
- guilty to withdraw
- economic pressure because they are getting paid
features of protection from harm
- physical or psychological
- should be in same state after the experiment as they were before
- no greater harm than what they would experience in every day life
- offer therapy and counselling at the end
- stop the study immediately if the participant is harmed too much
limitations of protection from harm
- harm may not be apparent or obvious yet
- don’t always know what will be harmful beforehand
features of privacy and confidentiality
- protected under data protection act
- using code names and anonymity
- deleting unnecessary data
limitations of privacy and confidentiality
researchers may still be able to identify participants from a limited amount of information
define naturalistic observation
behaviour in natural situation or environment without any intervention
advantage of naturalistic observation
high external and ecological validity
disadvantages of naturalistic observation
- less control over extraneous variables
- replication is difficult
define controlled observations
some variables are controlled by the researcher and participants are likely aware that they are being studied
advantages of controlled observation
- more control over extraneous variables
- easy replication
disadvantage of controlled observation
- low ecological validity and mundane realism
- demand characteristics
define overt observation
participants are aware they are being observed, but observers may try to be as unobtrusive as possible
advantages of overt observation
inform participants and ask for consent
disadvantage of overt observation
- demand characteristics
- social desirability bias
define covert observation
participants are unaware they are being observed
advantages of covert observation
- higher ecological validity
- natural behaviour so high internal validity
disadvantages of covert observation
ethical concerns because they cannot give consent
define non participant observation
observer is merely watching or listening to the behaviour of others and not interacting with them
advantage of non participant observation
observer effects less likely
disadvantage of non participant observation
less insightful
define participant observation
observer is part of the group being observed
advantage of participant observation
more insightful
disadvantage of participant observation
- demand characteristics
- lose objectivity
advantages of observations
- high validity
- captures spontaneous and unexpected behaviour
disadvantages of observation
-observer bias
- only observable behaviour can be recorded
- hard to replicate
define inter-observer reliability
the extent to which several observers coding the same behaviour produce codings that agree with each other
features of inter observer reliability
- should agree beforehand the behavioural categories and their interpretations of them
- carry out observations at the same time but in different places
- total number of agreements / number of observations
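The agreement formula above, as a minimal Python sketch with hypothetical codings from two observers:

```python
# two observers' codings of the same 10 behaviours (hypothetical data)
observer_a = ["aggressive", "play", "play", "aggressive", "play",
              "rest", "play", "aggressive", "rest", "play"]
observer_b = ["aggressive", "play", "rest", "aggressive", "play",
              "rest", "play", "aggressive", "rest", "rest"]

# inter-observer reliability = total agreements / total observations
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)
print(reliability)  # 8 agreements out of 10 observations -> 0.8
```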
define unstructured observation
all relevant behaviour is recorded and no system is used
evaluation of unstructured observation
+ greater insight
- observer bias, unnecessary behaviours noted (time wasting)
define event sampling
counting the number of times a certain behaviour occurs
evaluate event sampling
+ focuses on an event, find averages
- can’t note abnormal behaviours, no indication of when it happened
define time sampling
recording behaviour at preset intervals of time
evaluate time sampling
+ frequencies within observation, more objective
- may miss something, demand characteristics, observer bias, social desirability bias
define structured observations
use various systems to record behaviour
evaluate structured observations
+ smaller risk of observer bias
- less insight, as only frequencies of behaviour are noted, so interesting behaviours may go unrecorded
define behavioural categories and criteria of them
target behaviour is operationalised so it’s more reliable and measurable
- objective- no inferences have to be made
- cover all possible behaviours and no waste basket
- criteria shouldn’t overlap
define self report techniques and why they are useful
participants give information about themselves, including their experiences, beliefs and feelings
types of closed questions
likert scale- indicate agreement from strongly agree to strongly disagree
ranked scale- from 1 to 10
semantic differential scale- indicate where they fall between two extremes
multiple choice- choose from a set of options
advantages of closed questions
easy to analyse
disadvantages of closed questions
- forced to pick an option that doesn’t represent them
- waste baskets
advantages of open questions
- more detail and can expand on answers
- allow for unexpected answers
disadvantages of open questions
- worry of confidentiality
- produce qualitative data, which is harder to analyse
advantages of questionnaires
- can be distributed to large numbers cheaply and quickly
- may be more willing to participate
disadvantages of questionnaires
- not accessible to all (literate)
- social desirability bias
- leading questions so response bias
- takes a long time to design
- participant bias
- sample not representative
- acquiescence bias (tendency to agree with things)
criteria for questionnaire design
- easily analysed so more likely closed questions
- free from bias and leading questions
- should be clear and avoid using double negatives
- make language understandable for all
- contain filler questions to reduce demand characteristics
- sequence questions sensibly
- avoid double barrelled questions with more than one answer
- pilot study
define correlation
relationship and strength between two variables
define positive correlation
as one co-variable increases, the other increases too
define negative correlation
as one co-variable increases, the other decreases
define no correlation
no relationship between the two variables
define intervening variables
a variable that has not been studied but may explain the relationship between the co-variables
define continuous variable
variable that can take on any value within a certain range and not categorised
explain correlation coefficients
- between -1 and +1
- show strength of the co-variables
- coefficients above 0.8 have a strong correlation and are reliable and valid
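A correlation coefficient can be computed from raw co-variable scores; a minimal Python sketch with hypothetical data (the sleep/score values are invented):

```python
from statistics import mean, pstdev

# hypothetical co-variables: hours of sleep and test score for 5 participants
sleep = [4, 5, 6, 7, 8]
score = [50, 55, 62, 70, 78]

# Pearson's r = covariance / (sd of x * sd of y), using population statistics
mx, my = mean(sleep), mean(score)
cov = sum((x - mx) * (y - my) for x, y in zip(sleep, score)) / len(sleep)
r = cov / (pstdev(sleep) * pstdev(score))
print(round(r, 3))  # close to +1: a strong positive correlation
```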
define quantitative data
data in the form of numbers
strengths of quantitative data
- reliable
- can be analysed statistically
- easy to compare and analyse
weaknesses of quantitative data
- lacks detail
- may oversimplify
define qualitative data
data in the form of words
strengths of qualitative data
-detailed
weaknesses of qualitative data
- subjective
- unreliable
- hard to compare
- time consuming
- researcher bias
define triangulation
use of a mixture of qualitative and quantitative data
define primary data
information observed and collected directly from first hand experience, including designing and carrying out the study
strengths of primary data
- more reliable
- can cater to your research
weaknesses of primary data
- time consuming and costly
- may not have access to the groups and data you need
- ethical considerations
define secondary data
information that was collected from other studies
benefits of secondary data
- use data from bigger samples
- access to information you wouldn’t be able to reach
- meta- analysis
- quicker
- objective and detached
weaknesses of secondary data
- may not be exactly what you are researching
- may not understand research in detail
- takes time to analyse
- outdated
- unreliable
define meta analysis
analyse results from loads of different studies and come up with general conclusions
benefits of meta analysis
- help to identify trends
- increase sample size and reliability of findings
weaknesses of meta analysis
- publication bias like file drawer problem where researcher intentionally does not publish all the data
- some research may contradict each other
what are the measures of central tendency and how do you work them out
mean- add all number together and divide by how many values there are
median- putting the numbers in order and finding the middle number
mode- most common number
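The three measures, computed with Python's statistics module on a hypothetical set of scores containing one extreme value (which pulls the mean up but not the median or mode):

```python
from statistics import mean, median, mode

scores = [3, 5, 5, 6, 7, 7, 7, 40]  # hypothetical test scores; 40 is an outlier

print(mean(scores))    # 10.0 -> skewed upwards by the extreme value 40
print(median(scores))  # 6.5  -> middle of the ordered scores, unaffected by 40
print(mode(scores))    # 7    -> most common score
```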
evaluation of mean
+considers all data, used for further calculations
- can be skewed by extreme values and make it unrepresentative, can give unrealistically precise values that don’t work for discrete data
evaluation of median
+ will not be affected by extreme values
- may not be representative, little further use
evaluation of mode
+ will not be affected by extreme values, makes more sense when presenting discrete values, easy to use
- does not use all data, may have more than one mode, little further use
what are the dispersion techniques and how to work them out
range- highest minus lowest
standard deviation- spread around the mean
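Both dispersion measures in a minimal Python sketch, comparing a hypothetical consistent group with a more spread-out one:

```python
from statistics import pstdev

group_a = [10, 10, 11, 11, 10]  # consistent scores (hypothetical)
group_b = [2, 18, 5, 20, 7]     # spread-out scores (hypothetical)

# range: highest minus lowest
range_a = max(group_a) - min(group_a)
range_b = max(group_b) - min(group_b)
print(range_a, range_b)  # 1 18

# standard deviation: spread of scores around the mean
print(round(pstdev(group_a), 2), round(pstdev(group_b), 2))  # 0.49 7.23
```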
evaluation of range
+ can see consistency, easy to calculate
- affected by extreme values, gives no indication of the distribution, does not account for numbers in the middle
evaluation of standard deviation
+ precise measure where all values are taken into account
- difficult to calculate, affected by extreme values
define longitudinal studies
studies conducted over a long period of time to observe long term effects within the same individuals
evaluation of longitudinal studies
+ in depth, reduces participant variables
- extraneous variables, people might drop out
define cross sectional studies
group of participants are compared to another group at the same point in time
evaluation of cross sectional studies
+ efficient, more control over experiment
- participant variables
define cross cultural studies
compare behaviours in different cultures
how to display quantitive data
- table
- line graph
- histogram
- bar chart
- scattergram
- pie chart
features of tables
- clearly present data and show any patterns
- raw data to show scores before analysis
features of line graphs
- can show more than one set of data
- continuous data in list form
- independent on x and dependent on y
- join each point up
- see trends over time
features of histograms
- continuous scale
- uses class intervals
- columns touch each other
- frequencies of scores
features of bar charts
- non- continuous data
- columns do not touch
features of scattergrams
- show relationship and correlation between two variables
- continuous data on both axes
- draw line of best fit
features of pie chart
- sectors of a circle to show proportion
define content analysis
quantifying qualitative data through the use of coding units
features of content analysis
-indirect form of observation as you analyse artefacts people have produced
- put into categories or typologies, quotations and summaries
what sampling method in content analysis
systematic sampling, e.g. analysing every nth item of content
how to carry out coding in content analysis
- watch/read the sample and identify potential categories which have emerged
- compare categories/ coding unit with another psychologist and use the ones they have agreed upon
- give examples of the categories that they would be looking for and operationalise
- carry out the content analysis separately, counting the number of examples that fall into each category
- compare examples to look for agreement
define thematic analysis
recurring themes identified during coding and are described further in greater detail, perhaps by conducting further analysis
define test-retest reliability
conduct the content analysis and then recode them at a later date and compare the two sets of data
evaluation of content analysis
+ easy to perform, non-invasive and ethical, high ecological validity, easily repeated and reliable
- observer bias, subjective, non-descriptive, cultural bias, may not have ecological validity compared to real life, choice of content can be biased
define case study
in depth investigations of a single person, group of people or event
features of a case study
- represents thoughts, emotions, experiences and abilities
- longitudinal- follow over an extended period of time
- qualitative data like interviews, observations and questionnaires
- examples like HM and KF
evaluation of case studies
+ rich detail, help construct theories, help study the unusual
- hard to generalise, ethical issues like confidentiality and psychological harm, objectivity, past records may be biased or incomplete, hard to establish cause and effect
how to assess validity
face validity or concurrent validity
define reliability
measure of consistency
how to assess reliability
- test-retest reliability- administering the same test or questionnaire to the same person on different occasions, typically with a 2 week gap
- inter-observer reliability- compare the codings of two or more observers of the same behaviour
how to improve reliability
- questionnaires can be rewritten so they aren’t ambiguous
- use same interviewer
- properly trained interviewers
- no leading or ambiguous questions
- structured interviews
- operationalised behavioural categories that do not overlap
define a normal distribution curve
symmetrical pattern of data that creates a bell shaped curve; all measures of central tendency are the same or similar and sit in the middle
what is a positively skewed distribution
data is concentrated to the left with a long tail to the right (floor effect) and the mean is much higher
what is a negatively skewed distribution
data is concentrated to the right with a long tail to the left (ceiling effect) and the mean is much lower
what is nominal data
data is in separate categories
what is ordinal data
data is ordered in some way but the scores do not use standardised scales
what is interval data
data measured using equal intervals with no true zero, so values can go below zero
what is ratio data
data measured using equal intervals with a true zero, so values cannot go below zero
define statistical testing
provides a way of determining whether a hypothesis should be accepted or rejected
factors affecting choice of statistical test
levels of measurement- nominal, ordinal, interval or ratio
type of test- difference or correlation
design- related (repeated measures/matched pairs) or unrelated (independent groups)
test for unrelated, nominal data
chi squared
test for related, nominal data
sign test
test for unrelated, ordinal data
mann whitney
test for related, ordinal data
wilcoxon
test for correlated, ordinal data
spearmans
test for unrelated, interval data
unrelated t test
test for related, interval data
related t test
test for correlated, interval
pearson’s
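The test-choice rules above can be collected into a simple lookup table; an illustrative sketch (the dictionary is made up for this deck, not a standard library):

```python
# keys: (level of measurement, design)
STAT_TEST = {
    ("nominal", "unrelated"): "chi squared",
    ("nominal", "related"): "sign test",
    ("ordinal", "unrelated"): "mann whitney",
    ("ordinal", "related"): "wilcoxon",
    ("ordinal", "correlation"): "spearman's",
    ("interval", "unrelated"): "unrelated t test",
    ("interval", "related"): "related t test",
    ("interval", "correlation"): "pearson's",
}

print(STAT_TEST[("ordinal", "related")])       # wilcoxon
print(STAT_TEST[("interval", "correlation")])  # pearson's
```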
how to conduct a sign test
- state the hypothesis
- find out if each participants score increased, decreased or stayed the same
- find s value- the number of participants with the less frequent sign
- find n value- the number of participants whose results changed (ignoring ties)
- check results in statistical table to find the critical value
- state a conclusion
how do you know if your results are significant or not
s value must be equal to or less than critical value for it to be significant and reject the null
otherwise, it is not significant and you accept the null
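The sign test steps and the significance decision, sketched in Python with hypothetical before/after scores (the critical value of 0 for N = 6 at p ≤ 0.05, two-tailed, is taken from a standard sign-test table):

```python
# hypothetical before/after scores for 8 participants (related design)
before = [5, 3, 6, 4, 7, 5, 6, 4]
after  = [7, 4, 6, 6, 8, 5, 7, 6]

signs = []
for b, a in zip(before, after):
    if a > b:
        signs.append("+")
    elif a < b:
        signs.append("-")
    # ties (no change) are dropped, so they do not count towards N

n = len(signs)                                # participants whose score changed
s = min(signs.count("+"), signs.count("-"))   # frequency of the less common sign

critical = 0                 # critical value for N = 6 at p <= 0.05 (two-tailed)
significant = s <= critical  # s must be <= critical value to reject the null
print(n, s, significant)     # 6 0 True
```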
what is peer review
a way of assessing the scientific credibility of a research paper by other psychologists who work in a similar field
ways of conducting a peer review
single blind- names of reviewers not revealed
double blind- both reviewers and researchers are anonymous
open- both reviewers and researchers are known to each other
purpose of peer review
- allocation of research funding
- publication of research in journals and books
- assess research rating of a university
evaluation of peer review strengths
+ check validity of research and determine how important it is
+ anonymity means reviewers can be honest
evaluation of peer review weaknesses
- appropriate experts may not conduct the review
- may be biased towards prestigious researchers
- anonymity may mean some are too harsh or critical
- potential for research to be stolen- affect social relationships
- publication bias where only positive results are published
- can be misleading as once published it is in the public domain, even if it is wrong, like MMR link to autism
- prevents progress of new ideas as radical ideas are often overlooked
- takes a long time and may become out of date
what does p<=0.05 mean
5% likelihood of the results occurring by chance, so you can be 95% certain your results are significant
what is a type 1 error
when the experimental hypothesis is accepted but the results were actually chance findings, so the null should have been accepted; a false positive (usually due to too lenient a significance level)
what is type 2 error
when the null hypothesis is accepted even though there was a real difference; a false negative (usually due to too stringent a significance level)
what to remember when completing other statistical tests
- independent groups designs have two N values
- N is number of participants
- for correlation tests, ignore the sign but focus on the magnitude
- for t-tests and pearson’s, degrees of freedom (df) = N - 2
- for chi squared, degrees of freedom (df) = (number of rows - 1) x (number of columns - 1)
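The degrees-of-freedom rules as tiny helper functions (illustrative names, not a standard API):

```python
def df_t_test_or_pearson(n):
    # for t-tests and Pearson's r: df = N - 2
    return n - 2

def df_chi_squared(rows, columns):
    # for chi squared: df = (rows - 1) * (columns - 1)
    return (rows - 1) * (columns - 1)

print(df_t_test_or_pearson(20))  # 18 -> 20 participants in a t-test
print(df_chi_squared(3, 2))      # 2  -> a 3x2 contingency table
```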
how are the different methods of central tendency and dispersion useful when designing a study
- mean- cannot be used with nominal data
- median- appropriate for ordinal data
- mode- only method that can be used for nominal data
- range- useful for ordinal data
- SD- best used with mean to describe interval/ratio data that is normally distributed
tips when designing a study
- stick to what they ask you to do
- use bullet points as subheadings
- correct terminology
- justify why you have chosen to do what you suggest
- suggestions must be well thought out, practical and ethical
- link everything to study
- plan
sections of a psychology report
- title
- abstract
- introduction
- method
- results
- discussion
- references
- appendices
what is a title in a psychology report
short clear description of study
what is in the abstract of a report
- summary of study including aims, hypothesis, method and results
- allows reader to determine if the report is worth reading
what is in the introduction of a report
- review of previous research that is relevant
- start general and become more specific (funnel)
- end with stating aims and hypothesis
what’s in the method of a report
- detailed descriptions of what the research did
- in enough detail that it could be replicated
- design, participants, material, procedures, ethics
what is in the results of a study
- what the research found
- descriptive statistics- graphs and measures of central tendency and dispersion
- inferential statistics with significance levels and justification with hypothesis rejected or accepted
what is in the discussion of a study
- interpret the results
- relationship to previous research
- strengths and weaknesses of methodology
- implications for theories and real world application
- suggestions for future research
- contribution to research in the current field
how to write references for a journal
last name, first initial., middle initial. (year). title of journal article. name of journal, volume number (issue number), page numbers.
how to write a reference for a printed book
last name, first initial., middle initial. (year). book title. place of publication: publisher
how to write a reference for a book online
last name, first initial., middle initial. (year). book title. retrieved from URL
how to write a reference for a website
last name, first initial., middle initial. (year, month date published). article title. retrieved from URL
what is in the appendix of a study
supporting material like raw data
features of science
- empirical method- method of gaining knowledge through direct observation or testing rather than unfounded beliefs
- objectivity- empirical data should not be affected by bias
- replicability- ability to repeat research to check the validity of results
- theory construction- explanation or theories must be constructed to make sense of facts
- hypothesis testing- validity is tested to see if results are significant
- falsifiability- prove a hypothesis wrong in order to be sure of results
define theory
collection of general principles that explain observations and facts
define inductive research
begins with research question which helps form hypotheses and theory
define deductive research
research is theory driven which guides data collection
define a paradigm and paradigm shift
-shared set of assumptions about the subject matter of a discipline
- paradigm shift is when a new minority idea is accepted
- Kuhn argues something is a science if it has a paradigm