research methods part 2 Flashcards

1
Q

define reliability

A

refers to how consistent a measurement device is, internally and externally

2
Q

what are 3 ways of assessing reliability

A
  1. test-retest
  2. inter-observer reliability
  3. measuring reliability (correlation analysis)
3
Q

test-retest

A
  • repeat the test/experiment with the same participants on multiple occasions
  • leave sufficient time between administrations
  • correlate the results to check they are significantly related
4
Q

inter-observer reliability

A
  • use two psychologists to observe the same behavior
  • applies to behavioral categories
  • data should be correlated to test reliability
5
Q

how is reliability measured

A

with a correlation analysis.

the resulting coefficient should be higher than +0.80 for the measure to be considered reliable
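
As an optional illustration, here is a minimal Python sketch of this check applied to test-retest scores. The two score lists are invented example values, and `statistics.correlation` (Python 3.10+) is just one convenient way to obtain Pearson's r.

```python
# Minimal sketch: test-retest reliability via Pearson's correlation.
# The scores below are made-up illustration values.
from statistics import correlation  # Python 3.10+

test_1 = [12, 15, 9, 20, 17, 11, 14, 18]   # first administration
test_2 = [13, 14, 10, 19, 18, 12, 13, 17]  # same participants, later retest

r = correlation(test_1, test_2)  # Pearson's r
print(f"r = {r:.2f}")

# Rule of thumb from this card: the coefficient should exceed +0.80.
print("reliable" if r > 0.80 else "not reliable")
```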

6
Q

how to improve reliability in a questionnaire

A
  • deselect or rewrite questions
  • use closed questions

7
Q

how to improve reliability in interviews

A
  • use the same trained interviewer
  • avoid using ambiguous questions
  • use structured interviews
8
Q

how to improve reliability in observations

A
  • properly operationalized behavioral categories
  • train the observers

9
Q

define validity

A

the extent to which an observed effect is genuine and whether it can be generalized

10
Q

what are the 4 types of validity

A
  1. internal validity
  2. external validity
  3. temporal validity
  4. ecological validity
11
Q

internal validity

A

relates to whether the observed effects are caused by the manipulation of the independent variable rather than by other external factors.
(mainly affected by demand characteristics)

12
Q

external validity

A

relates to factors outside the investigation, e.g. the generalizability of the findings.
ecological validity is one type of external validity

13
Q

temporal validity

A

relates to the issue of whether findings or studies hold true over time

14
Q

ecological validity

A

a type of external validity that relates to whether the findings of a study can be applied to different settings

15
Q

2 ways of assessing validity

A
  1. face validity
  2. concurrent validity

16
Q

face validity

A
  • looks at whether a study/measure appears to measure what it is supposed to
  • done by giving it to an expert or checking whether it seems correct on the face of it
17
Q

concurrent validity

A
  • seeing whether the results are similar to those from previous, established studies/tests
  • the correlation between the two sets of scores should exceed +0.80

18
Q

how can you improve validity in experiments

A
  • use a control group (to check that the IV is affecting the DV)
  • standardized procedures
  • single/double blind procedures (to reduce demand characteristics)
19
Q

how to improve validity in questionnaires

A
  • lie scale (assesses the consistency of responses and controls for social desirability bias)
  • anonymous responses
20
Q

how to improve validity in observations

A

(tend to have high ecological validity)

  • use covert observation
  • make sure the behavioral categories aren’t too broad
21
Q

how to improve validity in qualitative research

A

(qualitative research tends to have high ecological validity; the less interpretation needed between the data and the conclusions, the higher the validity)

  • make sure the researcher and participants perceive the research in the same way
  • use triangulation (compare findings from different methods or sources)
22
Q

define the sign test

A

a statistical test used to analyze the difference in scores between related items (repeated measures). the data used is nominal.
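
As an optional illustration, here is a minimal Python sketch of a sign test on related (before/after) scores. The data are made-up example values; S is the smaller of the plus and minus counts, and the two-tailed p-value comes from a binomial distribution with p = 0.5.

```python
# Minimal sketch of a sign test for related (repeated-measures) data.
# The before/after scores are made-up illustration values.
from math import comb

before = [5, 7, 6, 8, 4, 6, 7, 5, 6, 7]
after  = [6, 8, 6, 9, 5, 5, 8, 7, 7, 8]

# Keep only the sign of each difference; ties (no change) are dropped.
diffs = [a - b for a, b in zip(after, before) if a != b]
pluses = sum(1 for d in diffs if d > 0)
minuses = sum(1 for d in diffs if d < 0)
n = pluses + minuses

# Test statistic S = the smaller of the two counts.
s = min(pluses, minuses)

# Two-tailed p-value: 2 * P(X <= S) under Binomial(n, 0.5), capped at 1.
p_value = min(1.0, 2 * sum(comb(n, k) * 0.5 ** n for k in range(s + 1)))

print(f"S = {s}, n = {n}, p = {p_value:.3f}")  # p < 0.05 -> significant difference
```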

23
Q

define probability

A

looks at the likelihood that a certain event will occur

24
Q

define a peer review

A

the assessment of scientific work by other specialists in the same field, to ensure that any research intended for publication is of high quality.

25
Q

what are the 3 aims of a peer review

A
  1. to allocate research funding
  2. to suggest amendments/improvements
  3. to validate the quality and relevance of research
26
Q

evaluations of peer review

A
  • publication bias
  • burying of groundbreaking research
  • anonymity (both a strength and a weakness)
27
Q

what are the 6 parts of a scientific report

A
  1. abstract
  2. introduction
  3. method
  4. results
  5. discussion
  6. references
28
Q

what should be in the abstract

A
  • a short summary (150 to 200 words long)
  • includes the aim, hypothesis, methods, results and conclusion
29
Q

what should be in the introduction of a scientific report

A
  • a literature review of relevant research, concepts and theories related to the study
30
Q

what should be in the method section of a scientific report

A
  • a detailed section that allows other psychologists to repeat the study and test its replicability
  • includes the experimental design, sample, ethical considerations and materials
31
Q

what should be included in the results section of a scientific report

A
  • descriptive statistics (graphs)
  • inferential statistics
  • no raw data (qualitative results should be analyzed rather than presented raw)
32
Q

what should be included in the discussion section of a scientific report

A
  • results now analyzed verbally (in words rather than numbers)
  • discussed in the context of existing evidence
  • limitations suggested
  • real-world applications suggested
33
Q

referencing

A
  • journal: author, date, article title, journal name (in italics), volume, page number
  • books: author, date, book title, place of publication
  • web references: source, date, title, weblink, date accessed
34
Q

what are the 5 features of a science

A
  1. paradigm / paradigm shift
  2. theory construction and hypothesis testing
  3. falsifiability
  4. replicability
  5. objectivity / empirical methods