Research methods - A2 Flashcards

1
Q

What is reliability and how can it be achieved?

A
  • Reliability refers to consistency: the study is set up so that the effect of the IV on the DV can be checked by repeating the procedure
  • If the procedure is replicated, the study should show similar results
  • Replicating a study and finding similar results shows that the measure is consistent
  • If a study is completed using a standardised procedure and obtains the same results each time, it is said to be reliable
2
Q

Why are lab experiments the most reliable method in psychology?

A
  • Take place in a neutral setting under controlled conditions
  • Follow a standardised procedure
  • Use random allocation to conditions
  • Tend to use a control group as a comparison to the experimental group
  • Generate quantitative data which is easy to compare and analyse
3
Q

Why are natural and field experiments not as reliable research methods?

A
  • Field experiments: generate quantitative data & implement an IV but are subject to an array of extraneous variables over which the researcher has no control
  • Natural experiments: generate quantitative data but they use a naturally-occurring IV over which the researcher has no control
4
Q

What is the difference between internal and external reliability?

A
  • Internal reliability: the extent to which a measure is consistent within itself
  • External reliability: the extent to which a test/measure produces consistent results over time
5
Q

How is reliability assessed? What type of reliability does each method measure?

A

The reliability of a questionnaire can be assessed using two methods:

  • Test-retest method measures external reliability: the same ppts are given the same questionnaire at separate time intervals (e.g. a 6-month gap between testing sessions)
  • If each ppt obtains a similar result on both occasions then external reliability is established
  • Split-half method measures internal reliability: the researcher splits the test in half & compares the responses given to the 1st half of the questionnaire with those given to the 2nd half
  • If similar responses are given in both halves then internal reliability is established (both checks come down to correlating two sets of scores, as in the sketch below)
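
Purely as an illustration (the scores below are invented, not taken from any study), both checks reduce to correlating two sets of scores per participant and looking for a strong positive correlation:

```python
# Illustrative sketch only: assessing questionnaire reliability with correlations.
# All scores are invented example data.
import numpy as np

# Test-retest (external reliability): each ppt's total score at time 1 vs time 2
time1 = np.array([32, 27, 41, 36, 22, 30])
time2 = np.array([30, 28, 40, 35, 24, 31])
test_retest_r = np.corrcoef(time1, time2)[0, 1]

# Split-half (internal reliability): each ppt's score on the 1st half of the
# items vs their score on the 2nd half
first_half = np.array([16, 13, 21, 18, 11, 15])
second_half = np.array([16, 14, 20, 18, 11, 15])
split_half_r = np.corrcoef(first_half, second_half)[0, 1]

# A strong positive correlation (around +0.8 or higher is a common rule of thumb)
# is taken to indicate good reliability
print(f"test-retest r = {test_retest_r:.2f}")
print(f"split-half  r = {split_half_r:.2f}")
```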
6
Q

What is inter-observer reliability and what does it reduce?

A
  • Inter-observer reliability is the level of consistency between 2 or more trained observers when they conduct the same observation
  • Reduces chance of researcher bias interfering with observation
7
Q

How is inter-observer reliability assessed?

A
  • All observers must agree on the behaviour categories + how they are going to record them before the observation begins
  • The observation is conducted separately by each observer to avoid conformity

After the observational period:
- The observers compare their 2 independent data sets (usually tally charts)
- They then test the correlation between the 2 sets (as in the sketch below)
- A strong positive correlation between the sets indicates good inter-observer reliability & that the behaviour categories are reliable
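
A minimal sketch of that comparison, assuming two observers have tallied the same (invented) behaviour categories:

```python
# Illustrative sketch only: correlating two observers' tally charts.
# Behaviour categories and counts are invented.
import numpy as np

tallies = {
    # category         (observer A, observer B)
    "hits":            (12, 11),
    "pushes":          (5, 6),
    "verbal threats":  (9, 9),
    "prosocial acts":  (20, 18),
}

obs_a = np.array([a for a, _ in tallies.values()])
obs_b = np.array([b for _, b in tallies.values()])

r = np.corrcoef(obs_a, obs_b)[0, 1]
print(f"inter-observer correlation r = {r:.2f}")
# A strong positive correlation suggests both observers are applying the
# behaviour categories consistently
```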

8
Q

What is validity?

A
  • Validity focuses on accuracy: whether a study accurately measures what it sets out to measure
  • It also concerns the extent to which the findings can be generalised to the wider population and beyond the research setting
9
Q

What is internal validity?

A
  • Internal validity: measures whether the results are due to the manipulation of the IV and not confounding variables
  • Internal validity can be improved by reducing investigator effects & demand characteristics
10
Q

What is external validity?

A
  • External validity: measures whether the results can be generalised beyond the research setting, looking at:
  • Ecological validity: whether they can be generalised to other settings
  • Population validity: whether they can be generalised to other people
  • Temporal validity: whether they can be generalised over time
  • External validity can be improved by setting research/experiments in naturalistic environments
11
Q

How is validity assessed?

A
  • Predictive validity: assesses how well a test predicts future behaviour
  • Temporal validity: assesses whether the findings remain valid over time
  • Concurrent validity: assesses validity through correlation, correlating scores on the new measure with scores from existing research/tests already known to be valid
  • Face validity: assesses whether something looks like what it claims to measure, i.e. to what extent the items appear to measure what the test is supposed to measure
12
Q

What are case studies and how is data collected on them?

A
  • Case studies are detailed and in-depth investigations of a small group or an individual
  • They allow researchers to examine individuals in great depth
  • Data is often collected through interviews or observations, generating mostly qualitative data
  • Most case studies tend to be longitudinal, i.e. the ppt's experience/progress is tracked & measured over time
13
Q

What are the strengths of case studies?

A
  • Holistic approach - the whole individual & their experiences are considered
  • Allows researchers to study unique behaviours & experiences which would be unethical or impossible to manipulate in controlled conditions
  • Case studies provide rich, in-depth data which is high in explanatory power
  • Case studies may generate hypotheses for future study, and even one solitary, contradictory instance may lead to the revision of an entire theory
14
Q

What are the weaknesses of case studies?

A
  • Results are not generalisable or representative due to (usually) only one person being the focus of the study
  • Researcher may be biased in their interpretation of the information
  • Case studies often rely on their ppts having a good memory, which means that information/details can be missed; this impacts the validity of the findings
15
Q

What are the different statistical tests and what is the acronym to remember them?

A

Carrots Should Come Mashed With Swede Under Roast Potatoes
- Chi-squared
- Sign test
- Mann-Whitney
- Wilcoxon
- Spearman’s Rho
- Unrelated t-test
- Related t-test
- Pearson’s r

16
Q

What factors determine the choice of statistical test?

A
  • Is it a test of difference or association (correlation)?
  • Is the design related (repeated measures, matched pairs) or unrelated (independent groups)?
  • Is the data nominal, ordinal, or interval? (The sketch below maps these three decisions onto the tests)
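
As an informal sketch (my own summary of how these three answers are usually combined, not part of the card), the choice can be written as a lookup table:

```python
# Illustrative helper encoding the standard test-selection table.
def choose_test(purpose: str, design: str, data: str) -> str:
    """purpose: 'difference' or 'association'; design: 'related' or 'unrelated'
    (ignored for association); data: 'nominal', 'ordinal' or 'interval'."""
    table = {
        ("difference", "unrelated", "nominal"):  "Chi-squared",
        ("difference", "unrelated", "ordinal"):  "Mann-Whitney",
        ("difference", "unrelated", "interval"): "Unrelated t-test",
        ("difference", "related", "nominal"):    "Sign test",
        ("difference", "related", "ordinal"):    "Wilcoxon",
        ("difference", "related", "interval"):   "Related t-test",
        ("association", "-", "nominal"):         "Chi-squared",
        ("association", "-", "ordinal"):         "Spearman's rho",
        ("association", "-", "interval"):        "Pearson's r",
    }
    key = (purpose, design if purpose == "difference" else "-", data)
    return table[key]

# e.g. test of difference, repeated-measures design, ordinal data -> Wilcoxon
print(choose_test("difference", "related", "ordinal"))
```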
17
Q

What is content analysis?

A

A research technique that enables the indirect study of behaviour by examining the communications that people produce (e.g. in emails, TV, film and other media)

18
Q

What is thematic analysis?

A

A qualitative approach to analysis that involves identifying implicit or explicit ideas within the data
- Themes will often emerge once the data has been coded

19
Q

What is coding (research methods)?

A

The stage of content analysis in which the data being studied is put into categories (e.g. words, sentences, phrases, etc.)

20
Q

What are the stages of conducting a content analysis?

A
  1. The researcher chooses the research question
  2. They select a sample of pre-existing qualitative material, e.g. interview transcripts, diaries, video recordings
  3. They decide on the coding units/categories
  4. They work through the data, creating a tally which shows the categories/codes that are most common in the qualitative data (see the tallying sketch below)
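
A minimal sketch of what step 4 might look like if the tallying were done programmatically (the coding units and transcripts here are invented for illustration):

```python
# Illustrative sketch only: tallying how often each coding unit appears in a sample.
from collections import Counter

coding_units = ["aggression", "friendship", "humour"]
transcripts = [
    "the characters resolve the argument with humour and friendship",
    "another scene of aggression followed by humour",
]

tally = Counter()
for text in transcripts:
    for unit in coding_units:
        tally[unit] += text.lower().count(unit)

print(tally)  # frequency of each coding unit across the sample
```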
21
Q

How does the researcher test for reliability after conducting a content analysis?

A
  • Test-retest reliability: run the content analysis again on the same sample and compare the results; if they are similar, this shows good test-retest reliability
  • Inter-rater reliability: a second rater conducts the content analysis with the same coding categories & data, and the two sets of results are compared; similar results = good inter-rater reliability (see the agreement sketch below)
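
One simple way that comparison could be made in practice (an illustrative percentage-agreement check with invented codings, not a method prescribed by the card):

```python
# Illustrative sketch only: percentage agreement between two raters who coded
# the same six excerpts. Categories and codings are invented.
rater_1 = ["aggression", "humour", "aggression", "neutral", "humour", "neutral"]
rater_2 = ["aggression", "humour", "neutral",    "neutral", "humour", "neutral"]

agreements = sum(a == b for a, b in zip(rater_1, rater_2))
percent_agreement = 100 * agreements / len(rater_1)
print(f"agreement = {percent_agreement:.0f}%")  # high agreement -> good inter-rater reliability
```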
22
Q

What are the strengths of content analysis?

A
  • Reliability is established as a content analysis is easily replicable
  • Allows statistical analysis to be conducted
  • Not overly time-consuming compared to thematic analysis of qualitative data
  • Complements other research methods and can be used to verify results from other research
23
Q

What are the weaknesses of content analysis?

A
  • Researcher bias can happen as the researcher has to interpret the data
  • May lack validity due to extraneous variables, e.g., diary entries tend to be highly subjective
  • The data is purely descriptive - no explanatory power
  • Lacks causality as the data was not collected under controlled conditions
  • Results can often be flawed due to the over-representation of certain events & using material that is already available - e.g. negative events usually have more coverage than positive - this could skew the data to give an invalid representation of behaviour