Research methods Year 2 Flashcards

1
Q

Definition of a case study

A

An in-depth investigation, description and analysis of a single individual, group, institution or event

2
Q

Definition of content analysis

A

A research technique that enables the indirect study of behaviour by examining the communications that people produce, such as texts, emails, TV and film

3
Q

What are the two stages of content analysis and are they quantitative or qualitative?

A

Coding - quantitative data

Thematic analysis - qualitative data

4
Q

What does coding and thematic analysis entail in content analysis

A

Coding- creating categories and counting how many times each category appears in the material.
Thematic- recurrent themes are identified and described using descriptive words.
Once the researcher is satisfied, the themes are compared against a new set of data to check validity.

5
Q

Case studies Strengths and Limitations

A

Strengths:

  • Offer rich detail and can contribute to our understanding of typical functioning
  • Generate hypotheses for future study

Limitations:

  • Generalising from a single case to the wider population may not be justified.
  • The final report may be based on the researcher's subjective selection and interpretation of the evidence.
6
Q

Content analysis Strengths and Limitations

A

Strengths:
-Avoids many ethical issues because the data is normally already publicly available; high external validity

Limitations:
-Behaviour is studied indirectly, outside the context in which it occurred, and researchers may impose their own interpretations.

7
Q

What is reliability

A

How consistent a measuring device is

8
Q

What are the two ways to test reliability

A
  • Test-retest
  • Inter-observer reliability

9
Q

What is test-retest and inter-observer reliability

A

Test-retest:
-Administering the same test to the same participants on a later occasion. If the test is reliable, the same results should be produced.
Sufficient time should be left between tests so that participants cannot remember their previous answers.

Inter-observer reliability:
-The extent to which there is agreement between 2 or more observers involved in observations of a behaviour.

10
Q

What is the inter observer reliability equation?

A

(Total number of agreements / total number of observations)
If the result is higher than +.80, there is high inter-observer reliability
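The ratio on this card can be sketched in Python (a minimal illustration; the function name and the example figures are my own):

```python
def inter_observer_reliability(agreements, total_observations):
    """Proportion of observations on which the observers agree."""
    ratio = agreements / total_observations
    # A ratio above +.80 is taken as high inter-observer reliability
    return ratio, ratio > 0.80

# Hypothetical example: two observers agree on 42 of 50 observations
ratio, is_high = inter_observer_reliability(42, 50)
# ratio = 0.84, so reliability counts as high
```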

11
Q

Ways to improve questionnaire reliability?

A
  • Use test-retest to check reliability
  • If reliability is low, rewrite unclear or ambiguous questions
  • Replace open questions with closed ones
12
Q

Ways to improve interview reliability?

A
  • Avoid leading or ambiguous questions
  • Use a structured interview
  • Use the same interviewer throughout (so skill level is constant)
13
Q

Ways to improve Observation reliability?

A
  • Operationalise all variables
  • Ensure observed categories do not overlap
  • Give observers further training so they pick up on the same categories/moments
14
Q

Ways to improve experiment reliability?

A
  • Use a consistent procedure
  • Standardise the procedure across all participants

15
Q

What is validity

A

The extent to which an observed effect is genuine; whether a test measures what it claims to measure

16
Q

What are the two types of validity, define them

A

Internal validity- whether the effects observed are due to the manipulation of the IV

External validity- whether what was observed can be applied to real life (high mundane realism)

17
Q

What is temporal validity

A

Whether a theory or finding holds over time, e.g. Freud's concept of penis envy

18
Q

What are the two ways of assessing validity, define them

A

Face validity- whether a test, scale or measure appears 'on face value' to measure what it is supposed to
Concurrent validity- whether the results obtained match those from another well-established test of the same thing

19
Q

Ways to improve experiment validity?

A
  • Have a control group
  • Standardise procedures as much as possible to minimise participant reactivity and investigator effects
  • Use single- and double-blind procedures
20
Q

Ways to improve questionnaire validity?

A

-Incorporate a lie scale within the questions to assess the consistency of a respondent's answers and control for the effects of social desirability bias

21
Q

Ways to improve observations validity?

A

-Ensure behavioural categories are not too broad, overlapping or ambiguous, as these reduce validity

22
Q

Ways to improve qualitative research validity?

A

-Qualitative research tends to have higher ecological validity, but interpretive validity is a risk because the researcher may come to their own conclusions. Using 'direct quotes' and triangulation (drawing on a number of different sources) improves validity

23
Q

What is meant by ecological validity

A

The extent to which findings from a research study can be generalised to other settings and situations.

24
Q

What is a statistical test

A

Used in psychology to determine whether a significant difference or correlation exists between variables

25
Q

What are the stages of choosing a statistical test

A

1) Is it a difference or a correlation?
2) What experimental design is it? Independent groups = unrelated; Matched pairs and repeated measures = related
3) What kind of quantitative data is it: nominal, ordinal or interval?
Nominal = categorical data
Ordinal = data that is ordered in some way
Interval = numerical scales with clear, equal intervals
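The three decisions on this card can be sketched as a small lookup function in Python (a hedged illustration; the function name and the treatment of nominal data as always leading to Chi-squared are my own reading of the card):

```python
def choose_test(purpose, design, data_level):
    """Pick a statistical test from the three decisions on this card.

    purpose:    'difference' or 'correlation'
    design:     'related' (matched/repeated) or 'unrelated' (independent)
    data_level: 'nominal', 'ordinal' or 'interval'
    """
    if data_level == "nominal":
        return "Chi-squared"  # categorical data
    if purpose == "correlation":
        return {"ordinal": "Spearman's rho",
                "interval": "Pearson's r"}[data_level]
    # tests of difference
    return {("unrelated", "ordinal"): "Mann-Whitney",
            ("related", "ordinal"): "Wilcoxon",
            ("unrelated", "interval"): "Unrelated t-test",
            ("related", "interval"): "Related t-test"}[(design, data_level)]

# e.g. an independent-groups design producing ordinal data:
# choose_test("difference", "unrelated", "ordinal") -> "Mann-Whitney"
```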
26
Q

What is a null hypothesis

A

That there is no significant difference between the variables

27
Q

What is the opposite of a null hypothesis

A

Alternative hypothesis

28
Q

What is the usual level of significance in psychology

A

0.05 = 5%

29
Q

Difference between one-tailed and two-tailed test

A

One-tailed - directional
Two-tailed - non-directional

30
Q

What are the 2 types of possible errors

A

Type I: Null hypothesis is rejected when it should be accepted
Type II: Null hypothesis is accepted when it should be rejected
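A Type I error rate can be illustrated by simulation: when the null hypothesis really is true, a test at the 5% level should wrongly reject it in roughly (at most) 5% of studies. A minimal sketch in Python, where the scenario (a fair coin flipped 20 times per study) and the exact binomial p-value helper are my own:

```python
import math
import random

def two_tailed_p(heads, n=20):
    """Exact two-tailed p-value for `heads` out of `n` flips
    under the null hypothesis that the coin is fair."""
    deviation = abs(heads - n / 2)
    return sum(math.comb(n, k) for k in range(n + 1)
               if abs(k - n / 2) >= deviation) / 2 ** n

random.seed(1)
trials = 2000
# Count studies where a true null hypothesis is rejected at p <= .05
type_i_errors = sum(
    two_tailed_p(sum(random.random() < 0.5 for _ in range(20))) <= 0.05
    for _ in range(trials))
# The observed error rate comes out close to, and at most, 5% of trials
```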
31
Q

What are the 7 types of statistical significance tests used in Psychology

A

Mann-Whitney
Wilcoxon
Related and unrelated t-tests
Spearman's rho
Pearson's r
Chi-squared
32
Q

What did Kuhn suggest

A

That what distinguishes scientific disciplines from non-scientific ones is a shared set of assumptions and methods - a paradigm

33
Q

According to Kuhn, how should Psychology be seen

A

As a pre-science

34
Q

What is a paradigm

A

A set of shared assumptions and agreed methods within a scientific discipline

35
Q

What is a paradigm shift

A

The result of a scientific revolution, when there is a significant change in the dominant unifying theory within a scientific discipline
36
Q

What is a theory

A

A general explanation constructed by gathering evidence via direct observation. Hypotheses derived from the theory are then tested to see whether the evidence supports it

37
Q

What is hypothesis testing

A

A key part of theory construction: a hypothesis must be tested to check whether it can be falsified

38
Q

What is falsifiability

A

A theory cannot be considered scientific unless it admits the possibility of being untrue

39
Q

What is the empirical method

A

Scientific approaches that are based on the gathering of evidence through direct observation and experience

40
Q

What did Popper say

A

That falsifiability is central to science: a theory must admit the possibility of being proved false

41
Q

Why is replicability important

A

Because it helps establish validity; repeating a study in different contexts and obtaining the same results allows the theory to be generalised

42
Q

Why is objectivity important when devising a theory

A

Researchers must keep a 'critical distance' and not allow personal opinions or biases to influence the data they collect