Research Methods Flashcards
Aim [definition]:
A statement of what the researcher intends to find out in a research study.
For example: Investigating the effect of caffeine on memory
Debriefing [definition]:
A post-research interview designed to inform participants of the true nature of the study and to restore them to the state they were in at the start of the study
How is debriefing useful? [2]:
- It is a means of dealing with ethical issues
- Can be used to get feedback on the procedures of the study
Independent variable [definition]:
The variable that the researcher deliberately manipulates (changes) in an experiment
Dependent variable [definition]:
The variable measured by the researcher; any change in it should be caused by the independent variable
Control variable [definition]:
A variable that is kept constant so that it cannot affect the dependent variable
Confounding Variable [definition]:
A variable in the study that is not the IV but varies systematically with the IV, so it may explain the change in the DV
Extraneous variables [3]:
- Do NOT vary systematically with the IV
- They do not act as an alternative IV but instead have an effect on the DV
- They are nuisance variables
Internal validity [definition]:
The degree to which an observed effect was due to the experimental manipulation rather than other factors such as confounding/extraneous variables
External validity [definition]:
The degree to which a research finding can be generalised to other settings (ecological validity)
Validity vs Reliability:
Reliability = consistency of a measure Validity= accuracy of a measure
Confederate [2]:
An individual in a study who has been instructed how to behave, by the researcher
- e.g. the ‘learner’ in Milgram’s obedience study
Directional hypothesis [2]:
- States the direction of the predicted difference between two conditions
- example: Women will have higher scores than men will on Hudson’s self-esteem scale
Non-directional hypothesis [2]:
- Predicts simply that there is a difference between conditions of the iv
- example: There will be a difference between men’s scores and women’s scores on Hudson’s self-esteem scale
Pilot study [definition]:
- A small-scale trial run of a study to test any aspects of the design, to make improvements before the final study
When do psychologists use a directional hypothesis?
When past research suggests that the findings will go in a particular direction
When is a non-directional hypothesis used?
When there is no past research on the topic studied or past research is contradictory
What are 3 types of experimental design?
- Repeated measure design
- Independent measure design
- Matched pairs design
Repeated measures design [3]:
ALL participants experience ALL levels of the IV
+ Participant variables are reduced since it’s the same person in every condition
+ Fewer people are needed as they take part in all conditions
Limitations of repeated measure design [2]:
- Order effects, e.g. getting tired. Can be reduced by counterbalancing
- Participants may guess the aim of the experiment and behave a certain way, e.g. purposely doing worse in the second condition. Can be avoided by using a cover story
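The counterbalancing mentioned above can be sketched in code. A minimal illustration (condition labels "A" and "B" are hypothetical placeholders): half the participants complete A then B, the other half B then A, so order effects cancel out across the sample.

```python
# Minimal sketch of ABBA counterbalancing for a repeated measures design.
# "A" and "B" are hypothetical condition labels.
def counterbalance(participants):
    """Alternate the order of conditions so order effects cancel across the group."""
    orders = {}
    for i, person in enumerate(participants):
        orders[person] = ["A", "B"] if i % 2 == 0 else ["B", "A"]
    return orders

print(counterbalance(["P1", "P2", "P3", "P4"]))
# → {'P1': ['A', 'B'], 'P2': ['B', 'A'], 'P3': ['A', 'B'], 'P4': ['B', 'A']}
```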
Independent measure design [2]:
Participants are placed in separate groups and only experience one level of the IV each
+ Avoids order effects
Limitations of independent measure design [2]:
- Participant variables, e.g. different abilities or characteristics (can be reduced by randomly allocating participants to conditions)
- Needs more participants than repeated measure
Matched pairs design [3]:
Participants are matched by key characteristics or abilities, related to the study
+ Reduces participant variables
+ Reduces order effects
Limitations of matched pairs design [3]:
- If one participant drops out you lose 2 PPs’ data
- Very time-consuming trying to find closely matched pairs
- Impossible to match people exactly
Lab experiments [2]:
- Conducted in an environment controlled by researcher
- Researcher manipulates the IV
lab experiment examples [2]:
- Milgram’s experiment on obedience
- Bandura’s Bobo doll study
Strengths of lab experiments [2]:
- It is easier to replicate. This is because standard procedure is being used
- They allow for precise control of extraneous and independent variables
Weakness of lab experiments:
- The artificiality of the setting may produce unnatural behavior that does not reflect real life (low ecological validity)
Field experiments [3]:
- Conducted in the participant’s everyday setting
- Researcher manipulates the IV, but in a real-life setting (extraneous variables are much harder to control)
- example: Hofling’s hospital study on obedience (involves medicine cabinet used by nurses in hospital and tested nurses)
Strengths of field studies [2]:
- Behavior in a field experiment is more likely to reflect real life because of its natural setting
- There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied (in covert experiments)
Weakness of field experiments [2]:
- There is less control over extraneous variables that might bias the results.
- This makes it difficult for another researcher to replicate the study in exactly the same way.
Natural experiments [3]:
- Conducted in everyday life
- Researcher does NOT manipulate the IV because it occurs naturally
- Hodges and Tizard’s attachment research (1989) compared the development of children who had been adopted to children who spent their lives with their biological families
Strengths of natural experiments [3]:
- Behavior in a natural experiment is more likely to reflect real life because of its natural setting
- There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied
- Can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g. researching stress
Weaknesses of natural experiments [2]:
- They may be more expensive and time consuming than lab experiments
- There is no control over extraneous variables that might bias the results, which makes the study difficult to replicate
Quasi experiments [3]:
+ Can be done in a controlled environment
- The IV is not manipulated; it is a pre-existing difference between people
- Sheridan and King (1972) tested obedience in men and women by having them give a puppy shocks of increasing strength. Male obedience was 54% and female was 100%
Strengths of quasi experiment:
Allows comparisons between different types of people
Weaknesses of quasi experiments [2]:
- Participants may be aware they are being studied, creating demand characteristics
- The dependent variable may be an artificial task reducing mundane realism
Mundane realism [definition]:
The degree to which the procedures in an experiment are similar to events that occur in the real world
Single blind design:
Participant is not aware of the research aims and/or which condition of the IV they are in
Double blind design [2]:
- Both participant and researcher are unaware of condition of IV or aim
- The person conducting the experiment is less likely to give away the aim of the experiment
Experimental realism:
If the researcher makes an experimental task sufficiently engaging, the participant pays attention to the task and not the fact that they are being observed
Generalisation [definition]:
Applying the findings of a particular study to the population
Opportunity sample [3]:
People who are the most convenient or available are recruited
+ Easiest method because you can just use the first suitable participants
- Biased sample because it draws on only a small part of the population
Random sample [3]:
Uses random methods like picking names out of a hat
+ Unbiased/ all members of target population have an equal chance of getting chosen
- Time consuming (needs to have a list of all population members)
Stratified sample [3]:
Strata (subgroups) within a population are identified, then participants are selected from each stratum in proportion to its size
+ More representative of the population than other samples
- Very time consuming
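Proportional selection from strata can be sketched as follows. This is a hedged illustration, not a standard library routine: `stratified_sample` and the subgroup names in the usage are hypothetical, and each stratum contributes participants in proportion to its share of the population.

```python
import random

def stratified_sample(strata, total_n, seed=None):
    """strata: dict mapping subgroup name -> list of members.
    Randomly samples from each subgroup in proportion to its size."""
    rng = random.Random(seed)
    population_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        k = round(total_n * len(members) / population_size)
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical population: 60 under-30s, 40 over-30s; a sample of 10
# should contain roughly 6 from the first stratum and 4 from the second.
strata = {"under_30": list(range(60)), "over_30": list(range(100, 140))}
print(len(stratified_sample(strata, 10, seed=0)))  # 10
```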
Systematic sample [3]:
A predetermined system is used to select participants
+ Unbiased as it uses an objective system
- Not truly random
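The "predetermined system" is usually every kth member of a sampling frame. A minimal sketch (the function name is hypothetical):

```python
def systematic_sample(population, n):
    """Select every kth member of the sampling frame,
    where k = population size // desired sample size."""
    k = len(population) // n
    return population[::k][:n]

# From a frame of 20, every 4th member gives a sample of 5.
print(systematic_sample(list(range(20)), 5))  # [0, 4, 8, 12, 16]
```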
Volunteer sample [3]:
The study is advertised in a newspaper, on a noticeboard or on the internet, and people volunteer to take part
+ Gives access to variety of participants which can make the sample more representative
- Sample is biased because participants are more highly motivated to be helpful
Random techniques [3]:
- Random number table
- Random number generator
- Lottery method (pulling names out of a hat)
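The lottery method maps directly onto Python’s `random.sample`, which gives every member of the frame an equal chance of selection. A small sketch (the function name and seed are illustrative only):

```python
import random

def lottery_sample(population, n, seed=None):
    """Draw n names 'out of a hat': every member has an equal
    chance of selection and no one is picked twice."""
    rng = random.Random(seed)  # seeded here only for reproducibility
    return rng.sample(population, n)

chosen = lottery_sample(list(range(100)), 10, seed=1)
print(len(chosen))  # 10
```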
Ethical issues [6]:
- Deception
- Informed consent
- Privacy
- Confidentiality
- Protection from harm
- Right to withdraw
Deception- Participant POV [3]:
- It’s unethical.
- The researcher should not deceive anyone without good cause.
- Deception prevents informed consent
Deception- researcher’s POV [2]:
- Can be necessary otherwise participants may alter behaviour
- Can be dealt with by debriefing participant when study is completed
Informed consent- Participant POV [2]:
- They should be told what they will be required to do in the study so that they know what they are agreeing to
- It is a basic human right
Informed consent- Researcher POV [2]:
- Means revealing true aims of the study
- Can get presumptive consent
Right to withdraw- Participant POV [3]:
- It is an important right
- Allows patient to leave if uncomfortable
- The right to withdraw may be compromised if payment was used as an incentive
Right to withdraw- Researcher POV [3]:
- Can lead to a biased sample if people leave
- They lose money if the person was paid and withdrew
- Researcher has to inform the participant of this right before the study
Protection from harm- Participant POV [2]:
- Nothing should happen to them during a study that causes harm
- It is acceptable if the harm is no greater than what the subject would experience in ordinary life
Protection from harm- Researcher POV [3]:
- Some of the more important questions in psychology involve a degree of distress to participants
- It is difficult to guarantee protection from harm
- Harm is acceptable if the outcome is more beneficial than the harm
Confidentiality- Participant POV [2]:
- The data protection act makes confidentiality a legal right
- It is only acceptable for personal data to be recorded if the data is not made available in a form that identifies participants
Confidentiality- Researcher’s POV [3]:
- Can be difficult because the researcher wishes to publish the findings
- A researcher can guarantee anonymity but it may still be obvious who the subjects were
- Researchers should not record the names of participants
Privacy- Participant POV:
- People do not expect to be observed in certain situations
Privacy- Researcher POV [2]:
- It may be difficult to avoid invasion of privacy when studying participants in public
- Do not study anyone without informed consent unless in a public place and displaying public behaviour
BPS ethical guideline strengths and weaknesses [3]:
+ The guidelines are quite clear
- They’re vague
- The guidelines absolve the individual of responsibility, because researchers can justify their research by claiming they followed the guidelines
Controlled observation [definition]:
A form of investigation in which behaviour is observed but under conditions where certain variables have been organised by the researcher
Covert observations [definitions]:
Observing people without their knowledge.
Knowing that behaviour is being observed is likely to alter the participant’s behaviour
Inter-observer reliability [definition]:
The extent to which there is agreement between 2 or more observers involved in observations of a behaviour
Naturalistic observation [definition]:
An observation carried out in an everyday setting, in which the investigator does not interfere in anyway but merely observes the behaviours in question
Non-participant observation [definition]:
The observer is separate from the people being observed
Overt observation [definition]:
Observational studies where participants are aware they are being observed
Participant observation [definition]:
Observations made by someone who is also participating in the activity being observed, which may affect their objectivity
Naturalistic observation evaluation [2]:
+ Gives a realistic picture of spontaneous behaviour (high ecological validity)
- There is little control over everything else that is happening (something unknown may cause the behaviour being observed)
Controlled observation evaluation [2]:
+ Observer can focus on particular aspects of behaviour
- control comes at the cost of the environment (artificial feeling)
Covert observation evaluation [2]:
+ Behaviour is more natural
- Participants cannot give consent
Overt observation [-]:
- Participants are aware they are being watched and may behave unnaturally
Participant Observation evaluation [3]:
+ May provide insight into behaviour from the ‘inside’
- Likely to be overt and so have participant awareness issues
- Might be biased
Non-participant observation evaluation [2]:
+ Observers are likely to be more objective because they are not part of the group being observed
- More likely to be covert, and so there are ethical issues
Event sampling [definition]:
An observational technique in which a count is kept of the number of times a certain behaviour occurs
Time sampling [definition]:
An observational technique in which the observer records behaviours in a given timeframe
Structured interview [definition]:
Any interview with predetermined questions
Unstructured interview [definition]:
The interview starts with some general aims and possibly some questions, and lets the interviewee’s answers guide subsequent questions
Questionnaire evaluation [3]:
+ Can reach large numbers of people easily (large sample)
+ Respondents may be more willing to give personal information in a questionnaire than an interview
- Can only be completed by literate people, so the sample is biased
Structured interview evaluation [4]:
+ Can be easily repeated because questions are standardised
+ Easier to analyse than unstructured interview
- Low reliability: different interviewers may behave differently
- Interviewer bias
Unstructured interview evaluation [3]:
+ more detailed information than in structured
- Require interviewers with more skill than structured
- In-depth questions may lack objectivity compared to predetermined ones
Correlation [definition]:
A relationship between two variables
Correlations [3]:
- Participant provides data for both variables
- In a correlation design, there are no independent or dependent variables, but co-variables
- We only use a correlation when testing the relationship between 2 variables
Structured observation [definition]:
A researcher uses various systems to organise observation such as behavioural categories and sampling procedures
What happens in unstructured observations?
The researcher records all relevant behaviour but has no system
Features of structured observations [2]:
- Behavioural categories
- Time/event sampling
Rules of behavioural categories [3]:
- Categories should be objective
- Cover all possible component behaviours
- Categories should be mutually exclusive
What are the self report techniques [3]:
- Structured interview
- Unstructured interview
- Questionnaire
Rules of writing a questionnaire [3]:
- Questions must be clear
- Biased questions can lead a participant to give a particular answer
- Questions need to be written so that answers are easy to analyse
What to add in a questionnaire [2]:
- Filler questions to distract participant from true aim
- Easier questions first
Meta analysis [definition]:
When a researcher looks at findings from a number of different studies and produces a statistic to represent the overall effect
Review [definition]:
A consideration of a number of studies that have investigated the same topic in order to reach a general conclusion about a particular hypothesis
Content analysis [definition]:
A type of observational study where behaviour is observed INDIRECTLY in written or verbal materials (interviews, questionnaires)
Effect size [definition]:
A measure of the strength of the relationship between two variables
Meta analysis strengths [2]:
+ Increases validity of conclusion as they are based on wider sample of participants
+ Groups of studies on the same topic often contradict one another; meta-analysis helps us reach an overall conclusion using statistics
Limitations of meta analysis [2]:
- Experimental designs in different studies may vary so research will never be truly comparable
- Putting them all together to calculate the effect size may not be appropriate
Why is the mean the most sensitive measure of central tendency? *
It takes account of the exact distance between all the values of data
When is a scattergram used in psychology?
When displaying a correlation between two co-variables
When is a line graph used in psychology?
With continuous data
When is a histogram used in psychology?
- When displaying the frequency of continuous data
- Cannot be used with data in categories
When is a bar chart used in psychology? [2]:
- When data is not continuous
- Can be used with categorical/nominal data
When is a table used in psychology?
when displaying raw data
Skewed distribution [definition]: *
A distribution in which there are a number of extreme values on one side, producing a long tail
Positive skewed distribution =
Most scores cluster on the left, with a long tail of high scores to the right
Negative skewed distribution =
Most scores cluster on the right, with a long tail of low scores to the left
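A quick numerical check of skew (with made-up scores): in a positive skew the long tail of high scores pulls the mean above the median.

```python
from statistics import mean, median

# Hypothetical positively skewed scores: most cluster low, a few are very high.
positively_skewed = [1, 2, 2, 3, 3, 4, 10, 15]
print(mean(positively_skewed))    # 5
print(median(positively_skewed))  # 3 — the tail drags the mean above the median
```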
Quantitative data =
Data in numerical form
Qualitative data =
cannot be quantified
Quantitative data evaluation [2]:
+ Easy to analyse using descriptive statistics or statistical tests
- Data may oversimplify reality
Qualitative data evaluation [2]:
+ Provides richer and detailed information about people’s experiences
- Complexity makes it more difficult to analyse/summarise and draw conclusions from
Primary data evaluation [2]:
+ researcher has control of the data and how it is collected
- Lengthy and expensive process
Secondary data [definition]:
Information used in research that was collected by someone else
Secondary data evaluation [2]:
+ It is simpler and cheaper to access someone else’s data
- Data may not exactly fit the needs of the study
When is a sign test used? [3]:
- Paired or related data
- Repeated measure design
- matched pair design
How to sign test:
S = the number of times the less frequent sign occurs among the non-zero differences; compare S with the critical value for N (the number of non-zero differences)
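The sign test calculation can be sketched in a few lines. This is an illustrative worked example with made-up before/after scores: differences of zero are dropped, and S is the count of the less frequent sign.

```python
def sign_test(before, after):
    """Return (S, N): S = count of the less frequent sign,
    N = number of non-zero differences (zero differences are dropped)."""
    diffs = [b - a for b, a in zip(before, after) if b != a]
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    return min(pos, neg), len(diffs)

# Hypothetical paired scores (e.g. repeated measures design):
before = [5, 7, 6, 4, 8, 6, 7]
after  = [6, 9, 6, 5, 7, 8, 9]
s, n = sign_test(before, after)
print(s, n)  # 1 6 — compare S = 1 with the critical value for N = 6
```

The result is significant only if S is less than or equal to the critical value from a sign test table.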
Nominal data [definition]:
Named data which can be separated into discrete categories which do not overlap
Ordinal data [definition]:
Data which is placed into some kind of order or scale
Interval data [definition]:
Data which comes in the form of a numerical value where the difference between points is standardised and meaningful
What are the types of data? [4]:
- Nominal data
- Ordinal data
- Interval data
- Ratio
Peer review evaluation [3]:
+ Helps maintain the quality and accuracy of published research
- Reviewers may be biased against findings that contradict their own views
- Cannot reliably detect fraudulent data
Content analysis [definition]:
A type of observational study where behaviour is observed indirectly in visual or verbal material
Thematic analysis [explanation]:
Themes or categories are identified and data is organised into these themes
Content analysis [3]:
- Researcher has to pick whether they are using a time or event sample
- Then they have to pick behavioural categories to tally
- Data can then be analysed quantitatively or qualitatively
Example of quantitative content analysis [4]:
- Anthony Manstead & Caroline McCulloch
- Interested in the way men and women were presented in tv ads
- observed 170 ads over one week
- Focused on adult figure & recorded frequency of desired behaviour in a table
Thematic analysis [3]:
- qualitative content analysis
- quali data summarised by identifying repeated themes
- Very lengthy process because everything is heavily analysed
Strengths of content analysis [2]:
+ Has high ecological validity because it is based on observations of what people actually do
+ Content analysis can be replicated because the sources can be accessed by others
Weaknesses of content analysis [2]:
- Observer bias reduces the OBJECTIVITY and VALIDITY of findings because different observers may interpret behavioural categories differently
- Likely to be culturally biased because the observer judges behaviours by their own standards
What are the intentions of thematic analysis? [3]:
- to impose some kind of order on data
- summarise the data to reduce its volume
- ensure that ‘order’ is representative
Case study [2]:
- detailed study of an individual
- provide a rich record of human experience
Case study example [3]:
- Henry Molaison: his hippocampus was removed to treat epileptic seizures, leaving him unable to form new memories
- Little Hans
- Phineas Gage: iron rod through brain
Case studies strengths [2]:
+ ideographic- in-depth data provides new insights
+ Allow us to investigate rare instances of human behaviour e.g Romanian orphanages
Case studies weaknesses [2]:
- Difficult to generalise/ apply to population
- ethical issues like confidentiality and informed consent, protection from harm
Inter-observer reliability [definition]:
The extent to which there is an agreement between two or more observers in an experiment
Test-retest reliability [definition]:
same test or interview is given to same participant on diff occasion to see if they get the same result
How to assess reliability [3]:
- Have 2 or more observers making separate records and then compare
- inter-observer- reliability is the extent they agree
- calculated using correlation coefficients
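The correlation-coefficient step can be made concrete. A hedged sketch with hypothetical tallies from two observers: Pearson’s r computed from scratch, where a value near +1 indicates high inter-observer reliability.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two observers' tallies of the
    same behaviour; r near +1 means the observers largely agree."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical tallies of a behavioural category by two observers:
observer_1 = [3, 5, 2, 8, 6]
observer_2 = [4, 5, 2, 7, 6]
print(round(pearson_r(observer_1, observer_2), 2))  # high agreement
```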
Improving inter-observer reliability [2]:
- Clearer behavioural categories (may not have been clear before)
- Observers may need more practice using the categories
Improving reliability [2]:
- Reduce ambiguity of items in tests
- Standardise procedure
Concurrent validity [definition]:
Establishing validity by comparing an existing test with the one you are interested in
Ecological validity [definition]:
ability to generalise research effect beyond the research setting
Face validity [definition]:
the extent which test items look like what the test claims to measure
Mundane realism [definition]:
How a study mirrors the real world/ is it realistic?
Temporal validity [definition]:
whether research can be generalised beyond the time period of the study
Validity [definition]:
whether an observed effect is a genuine one
How to improve validity [2]:
- Face validity: write clearer, better questions
- Internal/external validity: use a better research design
What are the features of science? [5]:
- Empirical methods
- Objectivity
- Replicability
- Theory construction
- Hypothesis testing
What are empirical methods?
When information is gained through observation or experimentation rather than unfounded beliefs
Theory construction [2]:
- Facts alone are meaningless; theories/explanations must be made to make the facts make sense
- Can be done through hypothesis testing
Falsifiablility [definition]:
The possibility that a statement or hypothesis can be proven wrong
Type 1 error [definition]:
When a researcher rejects a null hypothesis that’s true
Type 2 error [definition]:
When a researcher accepts a null hypothesis that is not true
When is p ≤ 0.01 used?
When a researcher is replicating another study because results need to be more certain
What is a parametric test? [3]:
- A test used with data at the interval or ratio level of measurement
- Data must be drawn from a population with a normal distribution
- Both samples must have equal variances
Non- parametric tests of difference [4]:
- Wilcoxon test
- Mann-whitney
- Sign test
- Chi-square
Parametric tests of difference [2]:
- Related t test
- Unrelated t test
Tests of correlation [2]:
- Spearman’s Rho (non-parametric)
- Pearson’s R (parametric)
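The choice among the tests listed in the cards above follows from three questions: difference or correlation? related or unrelated design? level of measurement? A hedged sketch encoding that decision table (the function name is hypothetical):

```python
def choose_test(purpose, related, level):
    """purpose: 'difference' or 'correlation'; related: True for
    repeated measures/matched pairs; level: 'nominal'/'ordinal'/'interval'."""
    if purpose == "correlation":
        return "Pearson's R" if level == "interval" else "Spearman's Rho"
    if level == "nominal":
        return "Sign test" if related else "Chi-square"
    if level == "ordinal":
        return "Wilcoxon" if related else "Mann-Whitney"
    return "Related t-test" if related else "Unrelated t-test"

print(choose_test("difference", False, "ordinal"))  # Mann-Whitney
```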
Wilcoxon test [3]:
- Hypothesis states a difference
- Related data (repeated measure/ matched pairs)
- Ordinal data
Wilcoxon significance [2]:
- Calculated value of ‘T’ must be ≤ the critical value to be significant
- If not significant we accept the null
Mann-Whitney [3]:
- Hypothesis states a difference
- Unrelated data (independent measure)
- Ordinal data
Mann-Whitney significance [2]:
- Calculated value of ‘U’ must be ≤ the critical value to be significant
- If not significant we accept the null
Related t- test [3]:
- Hypothesis states a difference
- Related data (repeated measure/ matched pairs)
- Interval data
Related t-test significance [2]:
- Calculated value of ‘t’ must be ≥ the critical value to be significant
- If not significant we accept the null
Unrelated t-test [3]:
- Hypothesis states a difference in data
- Unrelated data (independent measure)
- Interval data
Unrelated t-test significance [2]:
- Calculated value of ‘t’ must be ≥ the critical value to be significant
- If not significant we accept the null
Spearman’s Rho [3]:
- Hypothesis states a correlation
- Related data (repeated measure/ matched pairs)
- Ordinal data
Spearman’s Rho significance [2]:
- Calculated value of ‘rho’ must be ≥ the critical value to be significant
- If not significant we accept the null
Pearson’s R [3]:
- hypothesis states a correlation
- Related data (repeated measure/ matched pair)
- Interval data
Pearson’s R significance [2]:
- Calculated value of ‘r’ must be ≥ the critical value to be significant
- If not significant we accept the null
Chi square [3]:
- Hypothesis states a difference/ association
- Unrelated/ independent data
- Nominal data
Chi square significance [2]:
- Calculated value of ‘x2’ must be ≥ the critical value to be significant
- If not significant we accept the null
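The chi-square statistic itself can be computed from a table of observed frequencies. An illustrative sketch with a made-up 2×2 table (rows and columns are hypothetical categories): expected counts come from row and column totals, and the statistic sums the squared deviations.

```python
def chi_square(observed):
    """Chi-square statistic for a table of observed frequencies (nominal data)."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    x2 = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            x2 += (obs - expected) ** 2 / expected
    return x2

# Hypothetical 2x2 table, e.g. rows = gender, columns = helped / did not help.
print(round(chi_square([[10, 20], [20, 10]]), 2))  # 6.67
```

The calculated x2 is then compared with the critical value for the appropriate degrees of freedom.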
Sign test [3]:
- Hypothesis states a difference
- Related data (repeated measure/ matched pairs)
- Nominal data
Sign test significance [2]:
- Calculated value of ‘S’ must be ≤ the critical value to be significant
- If not significant we accept the null
One-tailed test [definition]:
Form of test used with a directional hypothesis
Two-tailed test [definition]:
Form of test used with a non-directional hypothesis
Degrees of freedom [definition]:
The number of values that are free to vary; used together with the significance level to look up the critical value in statistical tables
levels of measurement =
nominal, ordinal, interval, ratio
Report structure [6]:
- abstract
- intro
- Method
- results
- discussion
- references
What is an abstract?
A summary of the study, e.g. aims, hypothesis, method
what measure of central data is used for nominal data?
Mode
what measure of central data is used for ordinal data?
Median
what measure of central data is used for interval/ratio?
Mean
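The three cards above can be checked with Python’s `statistics` module (the example values are made up): mode for categories, median for ranked data, mean for interval/ratio data.

```python
from statistics import mean, median, mode

nominal  = ["red", "blue", "red", "green"]  # categories -> mode
ordinal  = [1, 2, 2, 3, 5]                  # ranked scores -> median
interval = [10.0, 12.5, 11.0, 13.5]         # equal units -> mean

print(mode(nominal))    # red
print(median(ordinal))  # 2
print(mean(interval))   # 11.75
```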
What is the order of sensitivity (most to least sensitive) for the measures of central tendency?
- Mean
- Median
- Mode