Research methods Flashcards
aim def
- a statement outlining why the psychologist is doing the research
- e.g “the aim is to discover/find out/see/investigate…”
hypothesis def
- a precise and testable statement of the relationship between two variables
- e.g “daycare makes children more aggressive than staying at home with a parent”
directional vs non directional hypothesis
- non directional predicts a difference in results without stating the direction of the difference
- directional does state the direction of the predicted difference (e.g. which condition will score higher)
independent variable def
the variable that the researcher manipulates; it isn’t changed by other variables
dependent variable def
the variable that is measured; it depends on the independent variable
extraneous variable def
any variable, other than the independent variable, that could affect the dependent variable if not controlled
what is a control condition
a baseline for comparison
confounding variable def
a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.
(like another independent variable)
how is a hypothesis operationalised
the variables are defined in precisely measurable terms: how the independent variable will be manipulated and how the dependent variable will be measured.
E.g. not “intelligence” but “score on an IQ test”; not “memory” but “number of items recalled”
what is an experiment
where an experimenter holds all variables constant whilst systematically manipulating one.
what are the 4 types of experiment
lab, field, natural and quasi
lab experiment features
- take place in a controlled environment
- researcher controls the independent variable
- participants are randomly allocated to the conditions of the independent variable
field experiment features
- researcher controls the independent variable
- participants are randomly allocated to the conditions of the independent variable
- take place in a real world setting
natural experiments features
- independent variable is not controlled/ manipulated by the experimenter, but varies naturally
quasi experiment features
- IV is based on existing difference between people (e.g. age, gender, ethnicity, presence of mental disorder).
- No one has manipulated this - it simply exists and cannot be changed.
why is random allocation important
evenly distributes participant characteristics across the conditions of the experiment
what is standardisation
using exactly the same formalised procedures/ instructions for all participants in a study
why is standardisation important
so that non-standardised changes in procedure do not act as extraneous variables
what is randomisation
the use of chance methods to control for the effects of bias when designing materials and deciding the order of experimental conditions
(E.g Allocating participants to tasks, selecting samples of participants, and so on, should be left to chance as far as possible, to reduce the investigator’s influence on a study.)
why is randomisation important
controls investigator effects
what are demand characteristics
cues from the researcher/ situation that may reveal the purpose of the investigation to participants. this may change the participants’ behaviour
what are investigator effects
the researcher’s unconscious biases (in designing the investigation or interacting with participants) that may influence the results
what is a single blind experiment
where only the researcher doing the study knows which treatment/ intervention the participant is receiving. reduces demand characteristics
what is a double blind experiment
where neither the participant nor the experimenter knows who is receiving a particular treatment. reduces demand characteristics and investigator effects
what are the 3 types of experimental design
independent groups, repeated measures and matched pairs
what are independent groups
- participants are allocated to different groups, where each group represents one experimental condition.
- each participant experiences only one condition, so they are less likely to work out what is being measured
what are repeated measures
all participants take part in all conditions of the experiment
what are matched pairs
- pairs of participants are first matched on some variables that may affect the dependant variable
- one member of the pair is assigned to condition a and the other to condition b
what are the sampling methods
random, systematic, stratified, opportunity and volunteer
what is random sampling
every member of the target population has an equal chance of being selected
what is systematic sampling
selecting every nth person from a list of the target population (e.g. every 5th, 7th or 10th person)
what is stratified sampling
dividing the population into subgroups and sampling from each in proportion to its size in society
e.g. 60% males, 40% females in the sample, if this reflects the target population
what is opportunity sampling
selecting people who are most easily available at the time of the study
what is volunteer sampling
inviting people to take part
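The sampling methods above can be sketched in code. This is an illustrative sketch only - the `population` list and the 60/40 split are made-up example data:

```python
import random

population = list(range(1, 101))  # a hypothetical target population of 100 people

# Random sampling: every member has an equal chance of being selected
random_sample = random.sample(population, 10)

# Systematic sampling: select every nth person from the list (here every 10th)
systematic_sample = population[::10]

# Stratified sampling: sample each subgroup in proportion to its size
# (hypothetical 60% / 40% split, as in the flashcard example)
males = population[:60]      # 60% of the population
females = population[60:]    # 40% of the population
stratified_sample = random.sample(males, 6) + random.sample(females, 4)
```

Opportunity and volunteer sampling have no chance element, so there is nothing to code: the researcher simply takes whoever is available or whoever comes forward.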
Quantitative data
-numbers
-easier to draw conclusions but less detail
Qualitative data
-words
-richer info: more detail + more valid
- difficult to analyse
Primary data
Collected specifically for the purpose of the experiment. Comes first hand from participants.
Secondary data
Data that has been collected before the current research by someone else. It is used in meta-analysis (when a number of studies collected for a similar purpose are pooled together and a conclusion is drawn)
What is publication bias
- A bias on what is published based on the direction/ strength of study findings.
- It may lead some researchers to manipulate their results to ensure statistically significant results. One example of this is resorting to data dredging, or running statistical tests on a set of data until something statistically significant happens.
Strengths + weaknesses of correlational analysis
Strengths: easy starting point to suggest what should be researched further
Weakness: doesn’t show cause and effect. Can be misinterpreted
Pro/con of mean
Pro: included all data, so is representative
Con: can be distorted by anomalies
Pro/con of median
Pro: not distorted by anomalies
Con: doesn’t include all data
Pro/ con of mode
Pro: can be used for categorical data
Con: doesn’t include all data
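The pros and cons above can be seen with a quick sketch using Python’s standard library and a made-up data set containing one anomaly:

```python
import statistics

scores = [4, 5, 5, 6, 7, 30]  # hypothetical scores; 30 is an anomaly

mean = statistics.mean(scores)      # 9.5  - uses all data, dragged up by the anomaly
median = statistics.median(scores)  # 5.5  - middle value, not distorted by the anomaly
mode = statistics.mode(scores)      # 5    - most frequent value; also works for categories
```

Note how the anomaly pulls the mean well above the typical score, while the median and mode stay representative of most of the data.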
What is standard deviation
Tells how far the scores are scattered around the mean (the mean is the measure of central tendency used). Precise, but is distorted by anomalies as it uses all data.
Pros/ cons of range
Pros: easy to calculate + quick overview
Cons: can be distorted by anomalies + imprecise
What is dispersion and what are the 2 ways of measuring dispersion
- the degree to which a set of scores deviate from the mean
- Range and standard deviation
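A minimal sketch of both measures of dispersion, using a made-up set of scores:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores; mean is 5

# Range: quick overview, but sensitive to anomalies at either extreme
score_range = max(scores) - min(scores)  # 7

# Standard deviation: how far scores are scattered around the mean
sd = statistics.pstdev(scores)  # 2.0 (population SD; use stdev() for a sample)
```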
Co-variables def
Co-variables are variables that are used when looking at correlations.
Correlation def
Correlation is a statistical measure that expresses the extent to which two variables are related
Correlation coefficient def
A statistical measure of the strength of a linear relationship between two variables
Positive correlation
An increase in one variable is associated with an increase in the other, OR a decrease in one variable is associated with a decrease in the other
Negative correlation
An increase in one variable is associated with a decrease in the other, OR a decrease in one variable is associated with an increase in the other
Zero correlation def
A change in one variable has no relationship with change in the other
Descriptive statistics def
A set of methods used to summarise and describe the main features of a dataset
Inferential statistic def
Inferential statistics are ways of analyzing data using statistical tests that allow the researcher to make conclusions about whether a hypothesis was supported by the results.
Validity def
Extent to which an observed effect is genuine - does it measure what it was supposed to measure and can it be generalised beyond the research setting within which it was found?
Internal validity def
Extent to which the observed effect is due to manipulation of IV and not other factors.
Face validity def
Basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure - for instance does a test of anxiety look like it measures anxiety. This is content-related (suitable content).
Concurrent validity def
The extent to which a psychological measure relates to an existing similar measure. This is criterion related (relationship to other measures).
e.g. A new test of intelligence would have concurrent validity if the correlation between it and the Wechsler IQ test was strongly positive.
External validity
External validity is the validity of applying the conclusions of a scientific study outside the context of that study. In other words, it is the extent to which the results of a study can be generalised to and across other situations, people, stimuli, and times.
Ecological validity
A type of external validity: The extent to which findings from a research study can be generalised to other settings and situations.
Population validity
A type of external validity: Refers to whether you can reasonably generalise the findings from a research study to a larger group of people (the population).
How can we check for validity?
- Face validity - This can be determined by simply ‘eyeballing’ the measuring instrument. For example, a questionnaire used to measure anxiety should have questions relating to anxiety symptoms. If it does, then it has face validity. This is a quick, informal and easy way to check for validity. This is a way of checking the validity of the content.
- Concurrent validity - If a new intelligence test is being created, the findings of this test should be close to the findings in an intelligence test which is well-established. Close agreements between the two sets of data would indicate the new test has high concurrent validity. Careful not to confuse this with reliability. In this case we are checking the validity of the criterion (the way something is judged). In order to check our judgement (IQ score) is correct, we compare this with the score of a test already known to be accurate for many years. This is looking for accuracy of judgement.
Reliability def
Refers to how consistent the findings from an investigation or measuring device are. A measuring device is reliable if it produces consistent findings every time.
Internal reliability
Refers to the degree of internal consistency among the items on a test. For high internal reliability, items are consistent with one another and measure the same thing.
Split-half method def
A way to assess internal reliability. Data collected is split randomly in half and compared. This is done to see if results taken from each part of the measure are similar.
External reliability def
The extent to which a measure is consistent when assessed over time (test-retest) or when assessed by different individuals (interrater reliability).
Test-retest method def
A way to assess the extent to which results on a test are consistent over time. This is done by assessing the same individual on two different occasions and correlating the scores - the correlation coefficient must be +.80 or more for data to be judged reliable. This provides an estimate of the stability of the test.
Inter-rater reliability def
The extent to which there is an agreement between two or more independent evaluators. This is measured by correlating the rating scores - the correlation coefficient must be +.80 or more for data to be judged reliable.
What are the ways of assessing reliability
Test-retest, inter-observer
What is External reliability
- The consistency of a measure from one use to another. For example, if a participant took an IQ test once a year, and then took the same test a year later and got a similar score, external reliability would be high
- this can be measured through test re-test or through inter-rater
What is Test-retest and Inter-observer reliability
- TEST-RETEST:
- a method of assessing the reliability of a questionnaire, test or interview, by assessing the same person on two separate occasions.
- the two sets of results are then correlated to see if they are similar; if the correlation shows a strong positive relationship (0.8 or over), the test is reliable
- INTER-OBSERVER RELIABILITY:
- looks at the agreement between two or more observers involved in observations of a behaviour
- this involves a small pilot study of the observation to check that the observers apply the behaviour categories in the same way
- the two observers watch the same event and record it individually. They will then share results.
- this is measured by correlating the observations of the two or more observers. If the results show a strong positive relationship (over 0.8), the test is seen as reliable
What is validity
the extent to which an observed effect from a psychological test/observation/experiment etc. is genuine
Describe the types of validity
- Internal validity; does it measure what it was supposed to measure? (Are the effects observed due to the manipulation of the IV and not some other factor)
- External validity; can it be generalised beyond the research setting within which it was found (is it an everyday activity in an everyday setting)
- Temporal validity (a type of external validity); do findings hold true over time?
- Population validity (a type of external validity); is the sample representative of the wider population?
Describe the two ways of assessing validity
- Face validity; whether the test appears (at face value) to measure what it claims to. This can be done by simply looking at the measuring instrument.
- Concurrent validity; demonstrated when the results obtained are very close to, or match, those obtained on another recognised and established test. Close agreement (+0.8) indicates that the new test has high concurrent validity
How to improve validity in an experiment
- control group
- standardise procedures (minimises participant reactivity and investigator effects)
- single-blind and double blind
How to improve validity in a questionnaire
- guarantee data is anonymous
- include a lie scale (two similar questions assessing same thing, to test response consistency)
How to improve validity in an observation
- naturalistic setting
- covert
How to improve validity in qualitative research
- assess whether the researcher’s interpretations match the participants’ reality; use direct quotes from the participant
- triangulation: use a number of different sources of evidence: interviews with family, friends, personal diaries, observations
What are the factors that need to be considered when choosing a statistical test
- DESIGN- Experimental design or related/ unrelated data
- DATA - Level of measurement
- DIFFERENCE - is the experiment looking to find a difference between two conditions or a correlation between two variables
Choosing a statistical test table
- Nominal data: Chi-squared (difference, unrelated design), sign test (difference, related design), Chi-squared (association)
- Ordinal data: Mann-Whitney (difference, unrelated), Wilcoxon (difference, related), Spearman’s rho (correlation)
- Interval data: unrelated t-test, related t-test, Pearson’s r (correlation)
Nominal data def
Data that can be placed into separate categories. It usually involves counting the frequency of behaviours. E.g hair colour, shoe size etc.
Temporal validity def
A type of external validity: the extent to which findings from a research study can be generalised to other historical times and eras
Type 1 vs type 2 errors
TYPE 1:
- ‘false positives’; the researcher claims they have found a significant difference when there is not one.
- this means they wrongly accept the experimental hypothesis and reject the null hypothesis
- this is most likely to happen when the significance level is too lenient (e.g 10%)
TYPE 2:
- ‘false negatives’; the researcher claims that there has been no significant difference but there has been.
- this means they wrongly reject the experimental hypothesis and accept the null.
- this is most likely to happen when the significance level they use is too strict (e.g 1%)
How to use statistical tables step by step
To work out the correct critical value:
- One or two tailed test? Was the hypothesis directional or non directional? The level of probability doubles when two tailed tests are used
- The number of participants - this is usually the N value. Sometimes this is shown as degrees of freedom, which is the number of participants minus 1.
- Also the level of significance (p value) - usually accepted at 0.05 (5%)
What is a paradigm
- A shared set of beliefs; something we don’t seem to have in psychology, but we do have in science
- paradigm shifts are shifts in such beliefs; this is a key feature of science
What are the stages of a paradigm shift
- A group of researchers begin questioning the accepted paradigm with evidence
- This critique gathers popularity among the scientific community
- Eventually, there is too much contradictory evidence to ignore
- There is a paradigm shift; a new paradigm
What is a theory?
- a set of general laws or principles that have the ability to explain particular events and behaviours
- theory construction occurs through gathering evidence via direct observation
What is the role of hypothesis testing
An essential component of theory is that it can be scientifically tested. Theories should suggest a number of possible hypotheses
What is falsifiability?
- Karl Popper suggested that this is a major feature of science
- it is the principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.
- genuine scientific theories should hold themselves up for hypothesis testing; he argued that even when a theory had been tested successfully many times, it was not necessarily true. It has just not been proven false…YET. This is the falsification theory.
Why is the psychodynamic approach unfalsifiable
- the approach argues that our behaviour and feelings are affected by unconscious motives and that behaviour/ feelings as adults are rooted in our childhood experiences.
- how is it possible to scientifically study concepts like the unconscious mind? How accurate are the childhood traumas ‘revealed’ during therapy?
- It can be argued that the approach is not falsifiable as theories can’t be empirically investigated.
What is replicability?
- the extent to which scientific procedures and findings can be repeated by other researchers
What is the relationship between replicability and validity
- if findings can be replicated across different contexts and circumstances, they are more likely to be valid; findings that cannot be replicated may lack validity
What is objectivity?
- When all sources of personal bias are minimised so as not to distort or influence the research process.
- in psychology, lab experiments have the greatest objectivity
How is objectivity linked to the empirical method?
- the empirical method involves gathering evidence through direct observation and experience; this requires objectivity, as the data must be collected without the researcher’s expectations or biases distorting it
Does psychology have a paradigm?
- arguably not: psychology has several competing approaches rather than a single shared set of beliefs, so Kuhn would describe it as a pre-science
What are descriptive statistics?
Descriptive statistics analyse data to help describe, show or summarise it in a meaningful way. Examples are measures of central tendency and measures of dispersion.
What are inferential statistics?
- the use of statistical tests which tell researchers whether the differences/ relationships they have found in the co-variables are statistically significant or not.
- this helps decide which hypothesis to accept and which to reject. The correlation coefficient is calculated using a statistical test
Null hypothesis vs experimental hypothesis
- the null hypothesis states there is no difference/ relationship between the variables; the experimental (alternative) hypothesis predicts a difference/ relationship
What is a significance level?
the level of probability (usually 0.05) at which the researcher accepts that results are unlikely to be due to chance, and so the selected hypothesis can be accepted
In an independent groups design study, what must be included in the debrief?
Why do we use statistical tests?
to determine the likelihood that the difference/relationship they have found has occurred due to chance.
What are the components needed in experimental design
How are observational studies watched/recorded
What is a correlation coefficient
Psychologists use a statistic called a correlation coefficient to measure the strength of a correlation (the relationship between two or more variables). A correlation coefficient can range between -1.0 (perfect negative) and +1.0 (perfect positive).