Research methods - A2 Flashcards
What is reliability and how can it be achieved?
- Reliability refers to consistency: a study is reliable if it has been set up so that the effect of the IV on the DV can be measured in the same way each time:
- If the procedure is replicated, the study should show similar results
- Replicating a study and finding similar results shows that the measure is consistent
If a study is completed using a standardised procedure and obtains the same results, it is said to be reliable
Why are lab experiments the most reliable method in psychology?
- Take place in neutral space under controlled conditions
- Follow a standardised procedure
- Use random allocation to conditions
- Tend to use a control group as comparison to the experimental group
- Generate quantitative data which is easy to compare and analyse
Why are natural and field experiments not as reliable research methods?
- Field experiments: generate quantitative data & implement an IV but are subject to an array of extraneous variables over which the researcher has no control
- Natural experiments: generate quantitative data but they use a naturally-occurring IV over which the researcher has no control
What is the difference between internal and external reliability?
- Internal reliability: The extent to which a measure is consistent within itself
- External reliability: The extent to which a test or measure is consistent over time
How is reliability assessed? What type of reliability does each method measure?
The reliability of a questionnaire can be assessed using two methods:
- Test-retest method measures external reliability: same ppts are given the same questionnaire at separate time intervals (e.g. a 6-month gap between testing sessions)
- If the same result is found per ppt then external reliability is established
- Split-half method measures internal reliability: researcher splits the test in half & analyses the responses given to the 1st half of the questionnaire compared to the 2nd half of the questionnaire
- If similar responses are given in both halves then internal reliability is established
What is inter-observer reliability and what does it reduce?
- Inter-observer reliability is the level of consistency between 2 or more trained observers when they conduct the same observation
- Reduces chance of researcher bias interfering with observation
How is inter-observer reliability assessed?
- All observers must agree on the behaviour categories + how they are going to record them before the observation begins
- The observation is conducted separately by each observer to avoid conformity
After the observational period:
- The observers compare the 2 independent data sets (usually a tally chart)
- They then test the correlation between the 2 sets
- Strong positive correlation between the sets = good inter-observer reliability & that behaviour categories are reliable
What is validity?
- Validity focuses on accuracy: whether the study accurately measures what it sets out to measure
- It also concerns the extent to which the findings can be generalised to the wider population and beyond the research setting
What is internal validity?
- Internal validity: measures whether the results are due to the manipulation of the IV and not confounding variables
- Internal validity can be improved by reducing investigator effects & demand characteristics
What is external validity?
- External validity: measures whether the results can be generalised beyond the research setting, looking at:
- Ecological validity: whether it can be generalised to other settings
- Population validity: whether it can be generalised to other people
- Temporal validity: whether it can be generalised over time
- External validity can be improved by setting research/experiments in naturalistic environments
How is validity assessed?
- Predictive validity: assesses how well a test predicts future behaviour
- Temporal validity: assesses how valid it remains over time
- Concurrent validity: assesses validity through correlation, comparing scores on the new measure with scores from an existing measure known to be valid
- Face validity: assesses whether a measure appears, on the surface, to be measuring what it claims to, i.e. to what extent does the item look like what the test measures
What are case studies and how is data collected on them?
- Case studies are detailed and in-depth investigations of a small group or an individual
- They allow researchers to examine individuals in great depth
- Data is often collected through interviews or observations, generating mostly qualitative data
- Most case studies tend to be longitudinal, i.e. the ppt's experience/progress is tracked & measured over time
What are the strengths of case studies?
- Holistic approach - the whole individual & their experiences are considered
- Allows researchers to study unique behaviours & experiences which would be unethical or impossible to manipulate in controlled conditions
- Case studies provide rich, in-depth data which is high in explanatory power
- Case studies may generate hypotheses for future study, and even one solitary, contradictory instance may lead to the revision of an entire theory
What are the weaknesses of case studies?
- Results are not generalisable or representative due to (usually) only one person being the focus of the study
- Researcher may be biased in their interpretation of the information
- Often case studies rely on their ppts having a good memory which means that information/details can be missed which would impact the validity of the findings
What are the different statistical tests and what is the acronym to remember them?
Carrots Should Come Mashed With Swede Under Roast Potatoes
- Chi-squared (Carrots)
- Sign Test (Should)
- Chi-squared (Come)
- Mann-Whitney (Mashed)
- Wilcoxon (With)
- Spearman's Rho (Swede)
- Unrelated t-Test (Under)
- Related t-Test (Roast)
- Pearson's r (Potatoes)
The mnemonic follows the test table row by row (nominal, ordinal, interval), which is why Chi-squared appears twice: as a test of difference for unrelated nominal data and as a test of association for nominal data
What factors determine the choice of statistical test?
- Is it a test of difference or association (correlation)?
- Is the design related (repeated measures, matched pairs) or unrelated (independent groups)?
- Is the data nominal, ordinal, or interval?
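The three questions above can be summarised as a lookup, sketched here as a hypothetical helper function (`choose_test` is not a real library function, just an illustration of the decision table):

```python
def choose_test(purpose, design, data):
    """Return the statistical test for a given combination.

    purpose: "difference" or "association"
    design:  "related" or "unrelated" (ignored for association)
    data:    "nominal", "ordinal" or "interval"
    """
    if purpose == "association":
        return {"nominal": "Chi-squared",
                "ordinal": "Spearman's Rho",
                "interval": "Pearson's r"}[data]
    if design == "unrelated":  # independent groups
        return {"nominal": "Chi-squared",
                "ordinal": "Mann-Whitney",
                "interval": "Unrelated t-Test"}[data]
    # related design: repeated measures or matched pairs
    return {"nominal": "Sign Test",
            "ordinal": "Wilcoxon",
            "interval": "Related t-Test"}[data]

print(choose_test("difference", "related", "ordinal"))  # Wilcoxon
```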
What is content analysis?
A research technique that enables the indirect study of behaviour by examining the communications that people produce (e.g. in emails, TV, film and other media)
What is thematic analysis?
- A qualitative approach to analysis that involves identifying implicit or explicit ideas within the data
- Themes will often emerge once the data has been coded
What is coding (research methods)?
The stage of content analysis in which the data being studied is put into categories (eg. words, sentences, phrases, etc.)
What are the stages of conducting a content analysis?
- The researcher chooses the research question
- They select a sample of pre-existing qualitative research e.g. interview transcripts, diaries, video recordings
- The researcher will decide on the coding of the categories/coding units
- The researcher works through the data creating a tally which shows the categories/codes that are most common in the qualitative data
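The tallying stage above can be sketched with a simple frequency count (the coding units here are invented for illustration):

```python
from collections import Counter

# Hypothetical coding of an interview transcript: each occurrence of a
# pre-agreed coding unit is recorded as it is found in the data
coded_units = ["praise", "criticism", "praise", "question",
               "praise", "criticism", "question", "praise"]

# The tally shows which categories are most common in the
# qualitative data
tally = Counter(coded_units)
print(tally.most_common())  # most frequent coding unit first
```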
How does the researcher test for reliability after conducting a content analysis?
- Test-retest reliability: run the content analysis again on the same sample and compare the results; if they are similar then this shows good test-retest reliability
- Inter-rater reliability: a second rater conducts the content analysis with the same coding categories & data and compares them; if results are similar = good inter-rater reliability
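One simple way to express the inter-rater comparison (a sketch with invented data; in practice a correlation may be used instead) is the proportion of items both raters coded identically:

```python
# Hypothetical category assignments by two raters for the same
# five items of qualitative data
rater_1 = ["praise", "criticism", "praise", "question", "praise"]
rater_2 = ["praise", "criticism", "question", "question", "praise"]

# Percentage agreement: the share of items given the same code
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
print(agreement)  # 0.8
```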
What are the strengths of content analysis?
- Reliability is established as a content analysis is easily replicable
- Allows statistical analysis to be conducted
- Not overly time-consuming compared to thematic analysis of qualitative data
- Complements other research methods and can be used to verify results from other research
What are the weaknesses of content analysis?
- Researcher bias can happen as the researcher has to interpret the data
- May lack validity due to extraneous variables, e.g., diary entries tend to be highly subjective
- The data is purely descriptive - no explanatory power
- Lacks causality as the data was not collected under controlled conditions
- Results can often be flawed due to the over-representation of certain events & using material that is already available - e.g. negative events usually have more coverage than positive - this could skew the data to give an invalid representation of behaviour