research methods year 2 Flashcards
what are the five features of science?
paradigm and paradigm shifts:
Kuhn (1962) - what distinguishes a science from a non-scientific discipline is a shared set of assumptions and methods - a paradigm
a paradigm shift occurs when a critique of a pre-existing theory gathers pace and popularity and there is too much contradictory evidence to ignore, so the old paradigm is replaced by a new one
theory construction and hypothesis testing:
theory construction is the process of developing an explanation for the causes of behaviour by systematically gathering evidence and then organising it into a coherent account
hypothesis testing is a key feature of a theory - a hypothesis is a statement which can be tested, and only in this way can a theory be falsified
falsifiability:
Popper (1934) - argued a key part of a scientific theory is its falsifiability, its capacity to be proved untrue
replicability:
the extent to which scientific procedures and findings can be repeated by other researchers - this has a role in determining validity and generalisability
objectivity and empirical method:
objectivity is the idea that all sources of personal bias should be minimised so they do not distort the research process - 'critical distance' must be kept
empirical method- scientific approaches that are based on gathering evidence through direct observation and experience
outline validity
1 mark :
validity is the extent to which an observed behaviour is genuine
it includes internal validity which is the extent to which researchers have measured what they wanted to measure.
also includes external validity, which is the extent to which generalisations can be made beyond the research setting to different settings/people/time periods (ecological, population and temporal)
Through the control of variables, internal validity can be achieved.
What are confounding variables?
What are extraneous variables?
confounding variables are variables that vary systematically with the IV, so researchers cannot tell whether a change in the DV was caused by the IV or the confounding variable
extraneous variables are any variables other than the IV that may affect the DV if not controlled (can be participant - age/IQ - or situational)
External validity can be affected by MUNDANE REALISM - what is this?
how much the study reflects the reality of real life (how realistic it is)
How could psychologists assess validity?
face validity:
a basic form of validity
a measure is scrutinised to determine whether it appears to measure what it is supposed to measure (by checking it yourself or passing it to an expert to check)
concurrent validity:
the extent to which a psychological measure relates to an existing/similar measure. (often checked during a pilot study - see whether results from the new measure match those of an established measure whose results have already been published - look for a strong correlation coefficient)
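The correlation check above can be sketched in code. This is a minimal illustration, not a real study: the scores and the conventional +0.8 threshold are assumptions for the example.

```python
# Sketch: assessing concurrent validity by correlating scores on a new
# questionnaire with scores on an established measure of the same thing.
# All data are invented; the +0.8 rule of thumb is a common convention.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# hypothetical pilot-study scores: same participants, both measures
new_measure = [12, 18, 9, 22, 15, 20]
established = [14, 19, 10, 21, 13, 22]

r = pearson_r(new_measure, established)
print(round(r, 2))
# a strong positive correlation (conventionally r of +0.8 or above)
# would suggest good concurrent validity
```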
How could researchers improve validity of the following types of research?
Experimental Research
Questionnaires
Observations
Qualitative methods
Experimental Research
-use control group
-standardise procedures
-single/double blind trials
Questionnaires
-use a lie scale
-assure respondents of anonymity
Observations
-covert observation (behaviour more authentic)
-behavioural categories (shouldn’t be broad, overlapping or ambiguous)
Qualitative methods
-case studies/interviews (better able to reflect participants' reality) (use direct quotes to reduce the researcher's interpretation and improve interpretative validity)
-triangulation (use a number of different sources: family interviews, diaries)
How do psychologists assess/measure reliability?
Test-retest:
administering same test/questionnaire to same person/people on different occasions. If reliable, same/very similar results will be produced each time
Inter-observer reliability:
checking there is an agreement between two or more observers involved in observation of behaviour. (may involve pilot study to check observers are applying behavioural categories in the same way)
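The inter-observer check above can be shown as a short sketch. The behavioural categories, codings and the +0.8 rule of thumb are all illustrative assumptions.

```python
# Sketch: a simple inter-observer reliability check. Two observers code
# the same footage using the same behavioural categories; we count how
# often their codings agree. Categories and data are invented.

observer_a = ["hit", "push", "play", "play", "hit", "share", "play", "push"]
observer_b = ["hit", "push", "play", "hit", "hit", "share", "play", "push"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)

print(reliability)  # 7 of the 8 codings match
# a common rule of thumb is that agreement of +0.8 or above
# indicates good inter-observer reliability
```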
define reliability
how consistent a measuring device is (including psychological tests and observations which assess behaviour)
How would researchers improve reliability of the following types of research ?
questionnaires
interviews
experiments
observations
questionnaires:
-‘deselect’ some items / rewrite them
-replace open ended Qs with fixed choice
interviews:
-structured interview
-same interviewer each time
experiments:
standardise procedures so they are consistent each time
observations:
-ensure behavioural categories have been properly operationalised (categories don't overlap)
-observers take further training/discuss behavioural categories together
outline content analysis
content analysis is observing people indirectly through material produced (books/adverts/diaries)
can be coding or thematic
coding:
1) read sample of material
2) create behaviour checklist
3) read entire material
4) tally each time behaviour seen
5) calculate total (quantitative data)
thematic:
1) read entire material
2) note recurring themes
3) can branch themes together into overarching themes
4) write up using direct quotes to illustrate theme (qualitative data)
evaluate content analysis
+ can circumvent many ethical issues as much of the material is already in the public domain
+flexible > can be quantitative or qualitative
- people are studied indirectly, out of the context in which the material was produced, so researchers may misconstrue what the speaker/writer meant and portray something they did not intend
outline case studies
case studies are an in depth investigation , description and analysis of individual/group/institution/event.
usually produces qualitative data.
researchers may construct a case history using interviews/observations/questionnaires/experimental testing.
tends to be longitudinal.
may involve gathering additional info from friends/family.
evaluate case studies
+ rich and detailed > can analyse both typical and atypical behaviour. For example, the atypical case of HM informed understanding of typical memory processing.
+ may generate hypotheses , or revise entire theory
- generalisation may be difficult due to small sample size
-final report based on subjective selection/interpretation of the researcher
-family/friends input may be inaccurate , recall may lack validity (memory decay)
outline reporting psychological investigations
ABSTRACT
short summary with main elements (hypothesis, methods, results/conclusions)
INTRODUCTION
literature review of past research conducted in the general area, including aims and hypotheses of the current investigation.
METHOD
design:
type of design and justification
sample:
how many pps/biographical and demographic info/sampling method/target population
apparatus/materials:
detail of any assessment instruments used, and any other relevant material
procedure:
list of everything that happens, including briefing, standardised instructions and debriefing.
ethics:
how these were addressed in study.
RESULTS:
summary of key findings from investigation
descriptive stats> tables/graphs/measure of central tendency and dispersion
inferential stats> reference to choice of statistical test/calculated and critical values/level of significance/final outcome (which hypothesis was rejected)
any raw data/calculations in appendix, not main body of report
DISCUSSION
evaluating outcome
summary of findings in verbal form
relationship of results to previous research
limitations of study (address how to solve in future)
wider , real world implications of research
REFERENCING
author>date>title>journal name (italics)>volume>page numbers
What are inferential tests used for?
inferential tests are used to test whether, at a given probability, the results gathered may have occurred by chance. psychologists usually want to be 95% certain that their results are significant.
they are generally happy with a probability of less than 0.05 (p < 0.05) that results have occurred by chance.
the critical value is the number a test statistic must reach for a null hypothesis to be rejected.
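The calculated-versus-critical-value logic can be sketched using the sign test. The scores are invented, and the critical value below is an illustrative table value (for this test, the calculated S must be equal to or less than the critical value).

```python
# Sketch: sign test logic. Compare before/after scores, count the signs
# of the differences, take the smaller count as the calculated value S,
# then compare S to a critical value from statistical tables.
# Data and the table value are illustrative assumptions.

before = [5, 7, 4, 6, 8, 5, 7, 6, 4, 6]
after  = [7, 8, 6, 6, 9, 8, 6, 8, 6, 8]

diffs = [a - b for b, a in zip(before, after) if a != b]  # drop ties
pluses = sum(1 for d in diffs if d > 0)
minuses = sum(1 for d in diffs if d < 0)
s_calculated = min(pluses, minuses)  # sign test statistic S

critical_value = 1  # illustrative table value: N = 9, p < 0.05, two-tailed

# for the sign test, S must be EQUAL TO OR LESS THAN the critical
# value for the result to be significant
if s_calculated <= critical_value:
    print("significant: reject the null hypothesis")
else:
    print("not significant: retain the null hypothesis")
```

Note that the direction of the comparison depends on the test: for some tests (e.g. chi-squared) the calculated value must be greater than or equal to the critical value instead.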