Research methods - A2 Flashcards
What is reliability and how can it be achieved?
- Reliability refers to the consistency of a study's procedure and measures:
- If the procedure is replicated, the study should produce similar results
- Replicating a study and finding similar results shows that the measure is consistent
If a study follows a standardised procedure and obtains the same results when repeated, it is said to be reliable
Why are lab experiments the most reliable method in psychology?
- Take place in neutral space under controlled conditions
- Follow a standardised procedure
- Use random allocation to conditions
- Tend to use a control group as comparison to the experimental group
- Generate quantitative data which is easy to compare and analyse
Why are natural and field experiments not as reliable research methods?
- Field experiments: generate quantitative data & implement an IV but are subject to an array of extraneous variables over which the researcher has no control
- Natural experiments: generate quantitative data but they use a naturally-occurring IV over which the researcher has no control
What is the difference between internal and external reliability?
- Internal reliability: The extent to which a measure is consistent with itself
- External reliability: The extent to which a test/measure is consistent over time
How is reliability assessed? What type of reliability does each method measure?
The reliability of a questionnaire can be assessed using two methods:
- Test-retest method measures external reliability: same ppts are given the same questionnaire at separate time intervals (e.g. a 6-month gap between testing sessions)
- If the same result is found per ppt then external reliability is established
- Split-half method measures internal reliability: researcher splits the test in half & analyses the responses given to the 1st half of the questionnaire compared to the 2nd half of the questionnaire
- If similar responses are given in both halves then internal reliability is established
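The split-half check above comes down to correlating two sets of scores. A minimal Python sketch, using made-up participant totals (the scores and the r > 0.8 threshold are illustrative assumptions, not from the source):

```python
# Hypothetical data: each list holds one ppt's total score on half of the questionnaire
first_half = [12, 15, 9, 18, 14, 11]   # ppt totals for the 1st half
second_half = [11, 16, 10, 17, 13, 12]  # same ppts' totals for the 2nd half

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(first_half, second_half)
# A strong positive correlation (a common rule of thumb is r > 0.8)
# suggests the two halves measure consistently, i.e. internal reliability
print(f"split-half correlation: r = {r:.2f}")
```

The same correlation idea applies to the test-retest method, except the two score lists come from the same questionnaire administered at two separate times.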
What is inter-observer reliability and what does it reduce?
- Inter-observer reliability is the level of consistency between 2 or more trained observers when they conduct the same observation
- Reduces chance of researcher bias interfering with observation
How is inter-observer reliability assessed?
- All observers must agree on the behaviour categories + how they are going to record them before the observation begins
- The observation is conducted separately by each observer to avoid conformity
After the observational period:
- The observers compare the 2 independent data sets (usually a tally chart)
- They then test the correlation between the 2 sets
- Strong positive correlation between the sets = good inter-observer reliability & that behaviour categories are reliable
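The comparison step above can be sketched in Python. The tallies below are invented for illustration; Spearman's rho is one common choice of correlation for ranked tally data (this simple version assumes no tied values):

```python
# Hypothetical tallies: each list is one observer's count per behaviour
# category, recorded in the same category order
observer_a = [14, 3, 8, 22, 5]
observer_b = [13, 4, 9, 20, 2]

def spearman_rho(x, y):
    """Spearman's rank correlation (simple version, assumes no tied values)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d_squared) / (n * (n ** 2 - 1))

rho = spearman_rho(observer_a, observer_b)
# A strong positive rho indicates good inter-observer reliability
print(f"inter-observer correlation: rho = {rho:.2f}")
```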
What is validity?
- Validity focuses on accuracy: whether the study accurately measures what it sets out to measure
- To what extent the findings can be generalised to the wider population and beyond the research setting
What is internal validity?
- Internal validity: measures whether the results are due to the manipulation of the IV and not confounding variables
- Internal validity can be improved by reducing investigator effects & demand characteristics
What is external validity?
- External validity: measures whether the results can be generalised beyond the research setting, looking at;
- Ecological validity: whether it can be generalised to other settings
- Population validity: whether it can be generalised to other people
- Temporal validity: whether it can be generalised over time
- External validity can be improved by setting research/experiments in naturalistic environments
How is validity assessed?
- Predictive validity: assesses how well a test predicts future behaviour
- Temporal validity: assesses how valid it remains over time
- Concurrent validity: assesses through correlation, comparing scores with those from an existing measure known to be valid
- Face validity: assesses whether, on the surface, a test appears to measure what it claims to measure
What are case studies and how is data collected on them?
- Case studies are detailed and in-depth investigations of a small group or an individual
- They allow researchers to examine individuals in great depth
- Data is often collected through interviews or observations, generating mostly qualitative data
- Most case studies tend to be longitudinal, i.e. the ppt's experience/progress is tracked & measured over time
What are the strengths of case studies?
- Holistic approach - the whole individual & their experiences are considered
- Allows researchers to study unique behaviours & experiences which would be unethical or impossible to manipulate in controlled conditions
- Case studies provide rich, in-depth data which is high in explanatory power
- Case studies may generate hypotheses for future study, and even one solitary, contradictory instance may lead to the revision of an entire theory
What are the weaknesses of case studies?
- Results are not generalisable or representative due to (usually) only one person being the focus of the study
- Researcher may be biased in their interpretation of the information
- Case studies often rely on ppts having a good memory, meaning that information/details can be missed, which would reduce the validity of the findings
What are the different statistical tests and what is the acronym to remember them?
Carrots Should Come Mashed With Swede Under Roast Potatoes
- Chi-squared (Carrots)
- Sign Test (Should)
- Chi-squared (Come)
- Mann-Whitney (Mashed)
- Wilcoxon (With)
- Spearman's Rho (Swede)
- Unrelated t-Test (Under)
- Related t-Test (Roast)
- Pearson's r (Potatoes)
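The mnemonic maps onto a 3x3 choice table (level of measurement x type of design/analysis). A small Python sketch of that lookup, assuming the standard AQA A-level table:

```python
# Choice table assumed from the standard AQA statistical-test grid:
# rows = level of measurement, columns = design/analysis type
TEST_TABLE = {
    ("nominal", "unrelated"): "Chi-squared",
    ("nominal", "related"): "Sign Test",
    ("nominal", "correlation"): "Chi-squared",
    ("ordinal", "unrelated"): "Mann-Whitney",
    ("ordinal", "related"): "Wilcoxon",
    ("ordinal", "correlation"): "Spearman's Rho",
    ("interval", "unrelated"): "Unrelated t-Test",
    ("interval", "related"): "Related t-Test",
    ("interval", "correlation"): "Pearson's r",
}

def choose_test(level, design):
    """Return the appropriate statistical test for the data level and design."""
    return TEST_TABLE[(level, design)]

print(choose_test("ordinal", "related"))  # Wilcoxon
```

Note that Chi-squared appears twice (for both difference and association with nominal data), which is why the mnemonic has nine words.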