WEEK 7) Principles of research critical appraisal Flashcards

1
Q

Why is it important to study research design?

Three main things (hint: minimise and maximise).

A
  • Minimise the potential for bias or unclear interpretation
  • Maximise efficiency of resource use (you don't want wasted resources), e.g. a poorly designed study may lead to conclusions contrary to what is really going on, wasting resources or endorsing a treatment that could be harmful
  • Minimise ethical issues with subjects
2
Q

Distinguish between basic research and applied research.

A

BASIC RESEARCH: you are addressing a question without intending to apply the results of the study to the real world

vs

APPLIED RESEARCH: an attempt to apply the results to real-world problems.

3
Q

Why do we study critical appraisal of research?

A

Applying the scientific method to human studies is not always easy.

We cannot treat "research results" as a black box in which we have uncritical faith.

Practising psychologists are the experts and need to decide for themselves whether a given study is to be believed, e.g. psychologists need to be able to critically evaluate articles and then, perhaps, apply them to real-world therapy.

4
Q

What are the steps of the scientific method for a standard experimental design?

A
Ask a question
Do background research
Construct a hypothesis
Test with an experiment
Analyse results / draw conclusions
Decide whether the hypothesis is true, partially true, or false
Report results
Think and try again (go back to constructing a hypothesis).
5
Q

What is good about the scientific method?

A
  • Conclusions based on quantifiable and reproducible evidence
  • Evidence-based psychology
  • Control of variables other than the one of interest → confidence in attributing the effect
6
Q

What are the steps of the scientific method for a non-experimental design?

A
Have an idea
Do background research
Pose a question
Collect data
Analyse results / draw conclusions
Generate a hypothesis
Report results
Think again and go back to posing a question.
7
Q

What is Random sampling?

A

When every member of the population has an equal probability of selection
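A minimal sketch of this idea in Python (the sampling frame and sample size are made-up assumptions):

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 10,000 people.
population = list(range(10_000))

# Simple random sample of 300: every member has the same probability of
# selection (300 / 10,000), and nobody can be selected twice.
sample = random.sample(population, k=300)

print(len(sample), len(set(sample)))  # 300 unique IDs
```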

8
Q

What is good about random sampling?

A
  • Your sample is then truly representative.
  • And your results should be easily generalisable.

9
Q

If you've used random sampling, can you assume the sample is truly random?

Why is this?

A

In most cases you can't ensure it is completely random. Samples may be technically non-random but arbitrary.

In practice, samples are often obtained for convenience, so there is a higher chance of them being non-representative, e.g. all 300-level psychology students (see the sketch below).

(This applies to surveys as much as to experimental studies.)
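A small, made-up simulation of this point in Python: a convenience sample drawn from one easily reached subgroup can misrepresent the population, while a random sample of the same size usually does not.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: ages of 10,000 adults.
population_ages = [random.randint(18, 80) for _ in range(10_000)]

# Convenience sample: whoever is easiest to reach, modelled here as a young
# subgroup (e.g. 300-level psychology students).
convenience = [age for age in population_ages if age <= 25][:300]

# Simple random sample of the same size.
random_sample = random.sample(population_ages, k=300)

print("population mean :", round(statistics.mean(population_ages), 1))
print("random sample   :", round(statistics.mean(random_sample), 1))   # close to population mean
print("convenience mean:", round(statistics.mean(convenience), 1))     # clearly non-representative
```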

10
Q

What are four key measurement issues?

List them in order of importance.

A

• Most closely match the research question / hypotheses
• Best (psychometric) properties (see the reliability sketch below)
  – Reliability / validity
  – Minimum variance (you want to minimise error variance)
  – Most responsive (to the experimental factor)
• Most feasible
• Least cost
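As a rough sketch of checking one of these psychometric properties, here is a test-retest reliability calculation in Python; the questionnaire scores and the SciPy dependency are assumptions for illustration only.

```python
# Made-up questionnaire scores for the same eight people at two time points.
from scipy.stats import pearsonr

time1 = [12, 15, 9, 20, 18, 14, 11, 16]   # first administration
time2 = [13, 14, 10, 19, 17, 15, 11, 17]  # retest

r, p = pearsonr(time1, time2)
print(f"test-retest reliability r = {r:.2f}")  # closer to 1.0 = more reliable measure
```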

11
Q

This is what critical appraisal is not; tell me what it is, then…

negative dismissal of any piece of research

A

balanced assessment of benefits and strengths.

12
Q

This is what critical appraisal is not; tell me what it is, then…

Assessments of results alone

A

Assessment of the results and the research process.

13
Q

This is what critical appraisal is not; tell me what it is, then…

based entirely on detailed statistical analysis

A

Consideration of both qualitative and quantitative aspects of the research.

14
Q

This is what critical appraisal is not; tell me what it is, then…

to be undertaken by expert researchers /statisticians

A

to be undertaken by all health professionals as part of their work.

15
Q

True or false: good appraisal is inherently retrospective.

A

true

16
Q

What is CONSORT in critical appraisal of research?

A

CONSORT (Consolidated Standards of Reporting Trials): a checklist for assessing the areas of a research report and ensuring that various criteria are met. Authors of randomised trials should address this checklist.

17
Q

What are 10 questions one could ask to help make sense of evidence from randomised controlled trials?

A

1. Did the study ask a clearly defined question?
2. Is it a randomised controlled trial?
3. Were subjects appropriately allocated to groups?
4. Were staff and subjects blinded?
5. Were all subjects who entered the study accounted for?
6. Were all subjects followed up consistently?
7. Was the sample size adequate for power?
8. How are the results presented, and what is the key message?
9. How precise are the results?
10. Were all important outcomes considered?

18
Q

What are some biases in these appraisal guidelines (the 10 questions on randomised controlled trials)?

A
  • Primarily geared to intervention studies: CBT, pharmacotherapy, …
  • Not so much attention is given to other forms of research
  • They seek scientific evidence of potential effects attributable to an intervention
  • RCTs are considered the holy grail
19
Q

Your perspective will affect which aspects of appraisal you give most weight to. You must consider the reasons why you are reading the article. What three reasons could there be?

A

– Research direction

– Clinical practice
(implications for your own clinical practice)

– Non-specific
(your purpose for reading the article will affect the emphasis you place on its parts)

20
Q

Is it bad not to explicitly state your hypothesis in an introduction?

A

No, not necessarily. So long as you have a clear sense of where the report is going after the introduction, it should be fine, even if the aims aren't explicitly stated.

21
Q

It is a randomised controlled trial if…

A

participants are randomly assigned to groups and the trial is double blind: neither the participants nor the researchers know which participants belong to the control group and which to the test group.

(Double blind means both subject and investigator are blinded to treatment allocation. A randomisation and blinding sketch follows below.)
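A minimal Python sketch of random assignment with a blinding step; the participant IDs and code scheme are hypothetical, and real trials use formal allocation-concealment procedures.

```python
import random

random.seed(42)

# Hypothetical participant IDs.
participants = [f"P{i:02d}" for i in range(1, 21)]

# Random assignment: shuffle, then split into two equal arms.
random.shuffle(participants)
treatment, control = participants[:10], participants[10:]

# Blinding sketch: each participant gets a neutral code; the key linking codes
# to arms would be held by a third party until the analysis is complete, so
# neither subjects nor investigators know who received which condition.
codes = random.sample(range(1000, 10000), k=len(participants))
blinded_code = {pid: f"X{code}" for pid, code in zip(participants, codes)}
allocation_key = {pid: ("treatment" if pid in treatment else "control")
                  for pid in participants}

print(len(treatment), len(control))  # 10 and 10
```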

22
Q

Should an article explicitly state that all subjects are accounted for?

A

It should; otherwise it should at least mention dropouts, etc.

The authors should report the completion rate, e.g. if one or two participants dropped out. If this is not reported, we can infer that there may be suspicious missing data.

23
Q

One must also state that the sample size is adequate for power. What did the report in the practice example say?

A

The sample size is specified, which is good ("there were 86 men as participants"), but there is no reference to the adequacy of statistical power (see the power sketch below).
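A hedged sketch of how adequacy of power could have been checked for such a two-group design, assuming roughly 43 participants per group, a medium effect size (d = 0.5), alpha = .05, and the statsmodels package; these assumptions are illustrative, not taken from the article.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power actually achieved with ~43 participants per group for d = 0.5:
power = analysis.power(effect_size=0.5, nobs1=43, alpha=0.05, ratio=1.0)
print(f"achieved power = {power:.2f}")  # below the conventional 0.80 target

# Sample size per group that would be needed for 80% power at d = 0.5:
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"n per group needed = {n_needed:.0f}")
```

On these assumptions, 43 per group falls short of 80% power, which is why the missing power statement matters.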

24
Q

True or false: you will have more power with a repeated-measures design than with a between-subjects design.

A

True.
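A small simulation sketch (hypothetical numbers, assuming NumPy and SciPy) of why this is true: measuring the same people in both conditions removes stable individual differences from the comparison, so the same effect is detected more often.

```python
import numpy as np
from scipy.stats import ttest_rel, ttest_ind

rng = np.random.default_rng(0)
n, effect, reps = 20, 0.5, 2000
hits_within = hits_between = 0

for _ in range(reps):
    # Repeated measures: the same people contribute to both conditions,
    # so stable individual differences cancel out of the comparison.
    person = rng.normal(0, 1, n)
    cond_a = person + rng.normal(0, 0.5, n)
    cond_b = person + effect + rng.normal(0, 0.5, n)
    hits_within += ttest_rel(cond_a, cond_b).pvalue < 0.05

    # Between subjects: different people in each group, so individual
    # differences stay in the error term.
    group_a = rng.normal(0, 1, n) + rng.normal(0, 0.5, n)
    group_b = rng.normal(effect, 1, n) + rng.normal(0, 0.5, n)
    hits_between += ttest_ind(group_a, group_b).pvalue < 0.05

print("repeated-measures power :", hits_within / reps)   # noticeably higher
print("between-subjects power  :", hits_between / reps)
```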

25
Q

How precise are the results in the practice example given?

A
  • No confidence intervals are given, although they could be calculated from the statistics provided
  • SDs are relatively small
  • Cohen's d is reported, which makes comparison of effects across outcomes possible (a worked sketch follows below)
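A sketch of how both a confidence interval and Cohen's d could be computed from summary statistics alone; the means, SDs, and group sizes below are placeholders, not the values from the practice example.

```python
import math
from scipy.stats import t

m1, sd1, n1 = 24.0, 6.0, 43   # group 1 summary statistics (hypothetical)
m2, sd2, n2 = 20.5, 5.5, 43   # group 2 summary statistics (hypothetical)

# Pooled SD, Cohen's d, and a 95% CI for the difference in means.
sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sp
se_diff = sp * math.sqrt(1 / n1 + 1 / n2)
crit = t.ppf(0.975, n1 + n2 - 2)
ci = ((m1 - m2) - crit * se_diff, (m1 - m2) + crit * se_diff)

print(f"Cohen's d = {d:.2f}, 95% CI for the difference = ({ci[0]:.2f}, {ci[1]:.2f})")
```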
26
Q

What could be the effect of having your paper funded by a certain industry that wants a particular outcome?

Was anything missed?

A

The report could be biased.

  • Were the investigators motivated by an altruistic desire to ascertain the truth?
  • Were there unconscious biases?
  • Did the clinical investigators feel free to modify the protocol or analytical approach to satisfy themselves?
  • How independent were the independent biostatisticians?
27
Q

What is impact factor (IF) of research?

A

a measure of the impact articles published in a given journal have in their field on average
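Roughly, the standard two-year impact factor is an average citation rate: citations received this year to the journal's articles from the previous two years, divided by the number of citable items it published in those two years. A toy calculation with made-up numbers:

```python
# Made-up numbers for one journal in 2023:
citations_in_2023_to_2021_22_items = 812
citable_items_published_2021_22 = 140

impact_factor = citations_in_2023_to_2021_22_items / citable_items_published_2021_22
print(round(impact_factor, 2))  # 5.8
```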

28
Q

Does the IF measure quality?

A

– Consider a discipline with a small number of researchers: it cannot compete with psychology for citation counts
– Many citations may just be the squeaky wheel

29
Q

Can the IF mislead?

A

– Some types of articles are more likely to be cited than others → higher IF
– Journals can play games to increase IF

30
Q

What is a quality score?

A

Take appraisal criteria (e.g. CONSORT) and assign a score to each item; sum the scores for an article to obtain an overall score (see the sketch below).
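A minimal sketch of such a scoring scheme, using the 10 RCT questions from earlier as hypothetical checklist items:

```python
# Hypothetical 0/1 scores against the 10 RCT questions from earlier,
# paraphrased as dictionary keys; the weighting (all items equal) is an
# illustrative assumption, not an official scheme.
checklist = {
    "clear_question": 1,
    "randomised": 1,
    "appropriate_allocation": 1,
    "blinded": 0,
    "all_subjects_accounted_for": 0,
    "consistent_follow_up": 1,
    "adequate_power": 0,
    "results_clearly_presented": 1,
    "precision_reported": 1,
    "all_outcomes_considered": 1,
}

quality_score = sum(checklist.values())
print(f"quality score: {quality_score}/{len(checklist)}")
# Note the con from the next card: 7/10 looks like an overall pass even
# though key items (blinding, statistical power) were failed.
```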

31
Q

What are the pros and cons of a quality score?

A

• Pros:
– Quantifies quality
– Enables comparisons across studies
– Regardless of journal IF or quality
• Cons:
– Assumes we all apply the same weight to a given item
– A study could fail certain (key) items but still get an overall pass

(Quality scores are largely discarded as a measure today.)

32
Q

What are three ways a journal can be rated?

A
  • Rated by prestige
  • Rated by reviewer difficulty
  • Rated by impact factor
33
Q

What are the impact factor scores for some common journals?

A

– Am. J. Psychiatry = 5.9
– Journal of Adolescence = 1.2

• By comparison:
– Nature = 27.1
– Lancet = 17.5
– Psychological Bulletin = 9.75
– Australian J. Psychology = 0.64