Year 2 research methods Flashcards
What must a psychology report include?
Title, abstract, introduction, method, results, discussion, references and appendices
What is the title?
Must be concise and clear, giving the reader an idea of the investigation's concerns
what is the abstract?
a brief summary of the investigation; written last but placed at the front
what must be included in the abstract?
a one-sentence summary
description of participants and sampling technique
description of procedure
description of results
conclusion
what is the introduction?
a literature review of the general area of investigation
relevant theories, concepts and studies
logical progression - becomes more specific until the aim and hypothesis are presented
What is the method?
Four sub-sections:
design and overview
participants and investigators
apparatus
procedure
What is included in the design and overview?
research method used and why
research design used and justified
state the IV and DV
any other variables controlled
how you dealt with ethical issues
what is included in participants and investigators
who the participants were and how they were selected (sampling procedure)
the investigator
number of participants
how they were allocated into groups
what is included in the apparatus and materials
a list of everything used
what is included in the procedure?
exactly what you did from start to finish
any pilot studies
refer to appendices for standardised instructions, debrief, copies of materials
a verbatim account of everything said to participants
what is included in the results?
summarise key findings
summary table
fully labelled graph
do not include raw data
what is included in inferential statistics ?
were the results statistically significant?
is it possible to reject null hypothesis?
statistical test must be fully justified
what is included in the discussion ?
state findings in psychological terms, relating to aims and hypothesis
state whether findings support those of your background study
compare findings to existing research
what is included in the discussion of limitations and modifications ?
critical look at the research
strengths of the study
weaknesses of the research
confounding variables that could have affected the results
suggested modifications
what is included in the discussion: implications and suggestions for further research?
suggest ideas for further research
real world applications and implications
What is included in references ?
background study
any internet resources
in alphabetical order
what is included in the appendices ?
raw data
calculations
stimulus materials
standardised instructions
debrief
what are descriptive statistics?
cannot tell us whether the results are significant or not
measures of central tendency
measures of dispersion
graphs and tables
what are inferential statistics ?
refers to the use of statistical tests which tell us whether the difference or relationship found is statistically significant
helps decide which hypothesis to accept and which to reject
what is probability ?
how likely it is that the results are due to chance factors
what is the level of probability used by psychologists?
5%
0.05
what is a null hypothesis ?
any difference between the two conditions is caused by chance
what is a type one error?
occurs when a null hypothesis is rejected when it is in fact true
overly optimistic
more likely to occur when a less stringent level of significance is applied
what is a type two error?
occurs when a null hypothesis is retained when it is in fact false
pessimistic
more likely to occur when a more stringent level of significance is applied
When do you use a Chi squared test?
when it is a test of difference
unrelated data
nominal data
when do you use a sign test?
test of difference
related data
nominal data
when do you use a chi squared test (2)
test of correlation/ relationship
nominal
when do you use Mann-Whitney test?
test of difference
unrelated data
ordinal
When do you use a Wilcoxon test?
test of difference
related data
ordinal
when do you use Spearman’s Rho test?
test of correlation
ordinal
When do you use an unrelated t-test?
test of difference
unrelated data
interval
when do you use a related t-test?
test of difference
related data
interval
when do you use Pearson’s r test?
test of correlation
interval
What are the parametric tests?
unrelated t-test, related t-test, Pearson’s r test
what are non-parametric tests?
chi-squared
sign test
Mann Whitney
Wilcoxon
Spearman’s Rho
What is nominal data?
data in separate categories
data is discrete
What is ordinal data?
the data is ordered in some way
does not have equal measurements between scores
What is interval data?
data is measured using units of equal measurement
How to do a sign test
Convert the data into nominal data: subtract one condition's score from the other and record the sign (+ or -) for each participant, dropping any ties
count the + signs and the - signs
the total of the less frequent sign is the calculated value S; N is the number of participants left after dropping ties
compare the calculated value with the critical value: S must be equal to or lower than the critical value for the result to be significant
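The steps above can be sketched in Python on made-up scores. Instead of looking S up in a printed critical-value table, this sketch computes the exact two-tailed binomial probability that S corresponds to, which is the same check the table performs:

```python
from math import comb

# Sign test on made-up example scores (illustrative data only).
cond_a = [5, 7, 3, 8, 6, 4, 7, 5, 6, 3, 8]
cond_b = [7, 9, 2, 9, 8, 4, 9, 8, 8, 5, 9]

# Step 1: convert to nominal data - record the sign of each
# difference; participants with tied scores are dropped
signs = [("+" if b > a else "-") for a, b in zip(cond_a, cond_b) if a != b]

# Step 2: count the + signs and the - signs
plus, minus = signs.count("+"), signs.count("-")

# Step 3: S = frequency of the less frequent sign;
# N = participants remaining after ties are dropped
s, n = min(plus, minus), len(signs)

# Step 4: exact two-tailed probability of a result at least this
# extreme under the null hypothesis (equivalent to the table check)
p = 2 * sum(comb(n, k) for k in range(s + 1)) / 2 ** n

print(s, n, round(p, 3))  # 1 10 0.021 -> p < 0.05, significant
```

Here nine participants scored higher in condition B, one scored lower, and one tied, so S = 1 with N = 10; the probability of a split that lopsided by chance is about 0.021, below the 0.05 level.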
How to find the critical value using chi squared test
Degrees of freedom (df)= (number of rows-1) x (number of columns -1)
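The formula above is a one-liner; the sketch below pairs it with the standard chi-squared critical values at p = 0.05 (taken from ordinary chi-squared tables) to show how the two cards fit together:

```python
# Degrees of freedom for a chi-squared contingency table,
# using the (rows - 1) x (columns - 1) formula above.
def chi_sq_df(rows, cols):
    return (rows - 1) * (cols - 1)

# Standard critical values at p = 0.05 from chi-squared tables
CRITICAL_05 = {1: 3.84, 2: 5.99, 3: 7.81, 4: 9.49}

# A 2x2 table (e.g. condition x outcome) has df = 1, so the
# calculated chi-squared value must exceed 3.84 to be significant
df = chi_sq_df(2, 2)
print(df, CRITICAL_05[df])  # 1 3.84
```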
What is content analysis?
a method used to analyse qualitative data
researcher observes indirectly through visual, written or verbal material
What is coding?
involves placing quantitative and qualitative data into categories
What is thematic analysis?
a theme in content analysis refers to any idea that is recurrent
what is the process of content analysis?
- make design decisions
- read artefacts in an unbiased way
- break data into meaningful units
- review all data , record
- combine simple codes into larger themes
strengths of content analysis
high ecological validity
can be replicated easily
limitations of content analysis
observer bias reduces internal validity
culture biased
Strengths of case studies
in-depth, as they are often longitudinal
unique - unusual behaviour can be learnt about
ethics- can study something which wouldn’t usually be ethical
weaknesses of case studies
ethics- consent
generalisation
What is a histogram used for
interval/ continuous data
bars touch as data is continuous
What is reliability?
how consistent the findings from an investigation are
how do you assess the reliability of an experiment?
replication - same conditions but with different participants
how do you improve reliability of an experiment
strict control of variables
standardised instructions
how do you assess reliability of an observation or content analysis?
inter-rater reliability - measuring the consistency of scoring between raters; high reliability is indicated by a significant positive correlation
how do you improve reliability of an observation or content analysis?
observers should be trained thoroughly
operationalisation of behavioural categories
raters should have the opportunity to discuss problems
how do you asses the reliability of a test/ questionnaire/ interview?
test-retest method - the same participants are given the same test on two occasions
a significant positive correlation between the two sets of scores indicates reliability
how do you improve reliability of a test/ questionnaire/ interview?
revising questions that may be unclear
rephrasing instructions for clarity
revising test procedures
do a pilot study before
what is validity ?
whether an experiment produces a genuine effect and measures what it intends to measure
what is face validity?
whether an experiment appears to measure what it claims to measure
what is concurrent validity?
the extent to which a psychological measure relates to an existing, established measure
what are types of internal validity ?
face validity
concurrent validity
what is ecological validity ?
the extent to which findings from a research study can be generalised to other settings and situations
what is temporal validity?
the extent to which findings can be generalised to other historical times and eras
what are examples of external validity?
ecological validity
temporal validity
how do you assess internal validity?
threats- demand characteristics, investigator effects and confounding variables.
face - an intuitive check that the measure appears to test what it claims
concurrent - comparison of the results with those previously achieved on an established measure
how to improve internal validity
using double blind procedure
change design features
how to assess ecological validity
replication of study in other settings
consider the mundane realism of the tasks used
how to improve ecological validity
realistic settings so findings can be generalised
test in different settings to refine the methodology
improve the mundane realism of the tasks
how to assess temporal validity
replication of study over time and comparing research findings
how to improve validity of experiments
use a control group
standardise procedures
use single and double blind procedures
how to improve the validity of questionnaires
lie scale
tell ps they will remain anonymous
how to improve validity of observations
use covert observation
use more specific behavioural categories
how to improve validity of qualitative methods
improve interpretative validity by including direct quotes from participants
triangulation - looking for extra evidence from other sources
what is the definition of science
a means of acquiring knowledge through systematic and objective investigations
what are the 7 features of science
objectivity
replicability
theory construction
hypothesis testing
empirical method
falsifiability
paradigms