WEEK 7) Principles of research critical appraisal Flashcards
Why is it important to study research design?
3 main things; hint: minimise and maximise
- Minimise the potential for bias or unclear interpretation
- Maximise efficiency of resource use (we don't want wasted resources), e.g. drawing conclusions contrary to what is really going on, or drawing conclusions about a treatment that could be harmful
- Minimise ethical issues with subjects
Distinguish between basic research and applied research.
BASIC RESEARCH: you address a question but you
don't intend to apply the results of the study to the real world
vs
APPLIED RESEARCH: you attempt to apply the results to real-world problems.
Why do we study critical appraisal of research?
Applying the scientific method to human studies is not always easy.
We cannot treat research results as a black box in which we have uncritical faith.
Practising psychologists are the experts and need to decide for themselves whether a given study is to be believed, e.g. psychologists need to be able to critically evaluate articles and then, where appropriate, apply them in real-world therapy.
What is the scientific method steps for a normal experimental design?
Ask a question → do background research → construct a hypothesis → test with an experiment → analyse results / draw conclusions → decide whether the hypothesis is true, partially true, or false → report results → think and try again (go back to constructing the hypothesis).
What is good about the scientific method?
- Conclusions are based on quantifiable and reproducible evidence
- evidence-based psychology
- Control of variables other than the one of interest → confidence in attribution of the effect
What is the scientific method steps for a non-experimental design?
Have an idea → do background research → pose a question → collect data → analyse results / draw conclusions → generate a hypothesis → report results → think again (go back to posing a question).
What is Random sampling?
When every member of the population has an equal probability of selection
What is good about random sampling ?
- Then your sample is truly representative
- and results should be readily generalisable
If you've used random sampling, can you assume the sample is truly random? Why is this?
In most cases you can't ensure it's completely random; samples may be technically non-random but arbitrary.
In practice, samples are often obtained by convenience, with a higher chance of being non-representative, e.g. all 300-level psych students.
(Applies to surveys as much as experimental studies)
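The contrast between random sampling and convenience sampling can be sketched in Python. The sampling frame and sample sizes below are invented for illustration:

```python
import random

# Hypothetical sampling frame: 1000 people identified by ID
population = list(range(1, 1001))

# Random sampling: every member has an equal probability of selection
rng = random.Random(42)  # seeded only so the example is reproducible
random_sample = rng.sample(population, 50)

# Convenience sampling: whoever is easiest to reach, e.g. the first 50 on the
# list (like "all 300-level psych students") -- technically non-random /
# arbitrary, and more likely to be unrepresentative
convenience_sample = population[:50]
```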
What are four key measurement issues?
They are listed here from highest to lowest importance
• Most closely match research question / hypotheses
• Best (psychometric) properties – Reliability/validity
– Minimum variance (you want to minimise error variance)
– Most responsive (to experimental factor)
• Most feasible
• Least cost
This is what critical appraisal is not, tell me what it is then…
negative dismissal of any piece of research
a balanced assessment of strengths and weaknesses.
This is what critical appraisal is not, tell me what it is then…
Assessments of results alone
an assessment of both the results and the research process
This is what critical appraisal is not, tell me what it is then…
based entirely on detailed statistical analysis
consideration of both qualitative and quantitative aspects of the research
This is what critical appraisal is not, tell me what it is then…
to be undertaken by expert researchers /statisticians
to be undertaken by all health professionals as part of their work.
True or false, good appraisal is inherently retrospective
true
What is CONSORT in critical appraisal of research?
CONSORT (Consolidated Standards of Reporting Trials): a way of assessing the areas of a research report.
It ensures various criteria are met; authors should address
this checklist. It is used for randomised trials.
What are 10 questions one could ask to help you make sense of evidence / randomised controlled trials?
1. Did the study ask a clearly defined question?
2. Is it a randomised controlled trial?
3. Were subjects appropriately allocated to groups?
4. Were staff and subjects blinded?
5. Were all subjects who entered accounted for?
6. Were all subjects followed up consistently?
7. Was the sample size adequate for power?
8. How are the results presented, and what is the key message?
9. How precise are the results?
10. Were all important outcomes considered?
What are some biases in these appraisal guidelines (the 10 q ones on randomised controlled trials).
- Primarily geared to intervention studies; CBT, pharmacotherapy, …
- Not so much attention given to other forms of research
- Seeking scientific evidence of potential effects attributable to intervention
- RCTs considered the holy grail
Your perspective will affect which aspects of appraisal you give most weight to. You must consider the three possible reasons why you are reading the article. What could they be?
– Research direction
– Clinical practice
(implications for your own clinical practice.)
– Non-specific
(your purpose for reading article will affect the emphasis you place on parts. )
Is it bad not to explicitly state your hypothesis in an intro?
No, not necessarily; as long as after the intro you have a clear sense of where the report is going, it should be fine, even if the aims aren't explicitly stated.
It is a randomised controlled trial if
participants are randomly assigned to groups (random assignment) and it is double-blind: neither the participants nor the researchers know which participants belong to the control group and which to the test group.
(Double-blind means both subject and investigator are blinded to treatment allocation.)
Should an article explicitly state that all subjects are accounted for?
It should; otherwise it should at least mention drop-outs etc., and report the completion rate, e.g. whether one or two participants dropped out. If this is not reported, one can infer that the missing data are suspicious.
One must also check that the sample size is adequate for power. What should the report say?
The sample size is specified, which is good ("there were 86 men as participants"), but there is no reference to the adequacy of statistical power.
True or false: you will have more power with a repeated-measures design than with a between-subjects design.
True.
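A rough simulation shows why: in a repeated-measures (paired) design, the correlation between a subject's two scores removes between-subject variability from the comparison, so the same effect is detected more often. The effect size, correlation, and sample size below are made-up illustrative values, and a normal approximation (1.96) stands in for the exact t critical value:

```python
import math
import random
import statistics

def simulate_power(n=30, d=0.5, r=0.7, sims=2000, seed=1):
    """Estimate power of paired vs between-subjects designs by simulation."""
    rng = random.Random(seed)
    z_crit = 1.96  # normal approximation to the t critical value (alpha = .05)
    paired_hits = between_hits = 0
    for _ in range(sims):
        # Repeated measures: each subject scored twice, scores correlated r
        base = [rng.gauss(0, 1) for _ in range(n)]
        pre = [math.sqrt(r) * b + math.sqrt(1 - r) * rng.gauss(0, 1) for b in base]
        post = [math.sqrt(r) * b + math.sqrt(1 - r) * rng.gauss(0, 1) + d for b in base]
        diffs = [po - pr for pr, po in zip(pre, post)]
        t_paired = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
        paired_hits += abs(t_paired) > z_crit
        # Between subjects: two independent groups of n subjects each
        g1 = [rng.gauss(0, 1) for _ in range(n)]
        g2 = [rng.gauss(d, 1) for _ in range(n)]
        pooled_sd = math.sqrt((statistics.variance(g1) + statistics.variance(g2)) / 2)
        t_between = (statistics.mean(g2) - statistics.mean(g1)) / (pooled_sd * math.sqrt(2 / n))
        between_hits += abs(t_between) > z_crit
    return paired_hits / sims, between_hits / sims

paired_power, between_power = simulate_power()
# The paired design rejects the null far more often for the same effect size
```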
How precise are the results in the practice example given?
- No confidence intervals are given, although they could be calculated from the statistics provided
- SDs are relatively small
- Cohen's d is reported, which makes comparison of effects across outcomes possible
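Cohen's d and a confidence interval for a mean difference can be computed from the group statistics a report provides. The group means, SDs, and sample sizes below are invented, and a normal approximation (z = 1.96) is used for the CI rather than the exact t value:

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Standardised mean difference using the pooled SD
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    # ~95% CI for the raw mean difference (normal approximation)
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    diff = m1 - m2
    return diff - z * se, diff + z * se

# Invented example: treatment group (mean 12, SD 4) vs control (mean 10, SD 4)
d = cohens_d(12.0, 4.0, 43, 10.0, 4.0, 43)          # d = 0.5, a "medium" effect
lo, hi = mean_diff_ci(12.0, 4.0, 43, 10.0, 4.0, 43)  # CI around the raw difference of 2
```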
What could be the effect of having your paper funded by an industry that wants a particular outcome? Was anything missed?
The report could be biased. Ask:
- Were the investigators motivated by an altruistic desire to ascertain the truth?
- Were there unconscious biases?
- Did the clinical investigators feel free to modify the protocol or analytical approach to satisfy themselves?
- How independent were the independent biostatisticians?
What is impact factor (IF) of research?
a measure of the impact articles published in a given journal have in their field on average
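The standard two-year impact factor is a simple ratio; the citation and article counts below are invented for illustration:

```python
def impact_factor(citations, citable_items):
    """Two-year impact factor for year Y: citations received in year Y to
    articles published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations / citable_items

# Invented example: 1180 citations in 2024 to 200 articles from 2022-2023
if_2024 = impact_factor(1180, 200)  # 5.9
```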
Does the IF measure quality?
– Consider a discipline with a small number of researchers: it cannot compete with psychology for citations
– Many citations may just reflect the squeaky wheel
Can the IF mislead?
– Some types of articles are more likely to be cited than others → higher IF
– Journals can play games to increase IF
What is a quality score?
Take appraisal criteria (e.g. CONSORT) and assign a score to each. Sum scores for an article to obtain overall score
What are the pros and cons of a quality score?
• Pros:
– Quantifies quality
– Enables comparisons across studies
– Regardless of journal IF or quality
• Cons:
– Assumes we all apply same weight to a given item
– A score could fail certain (key) items but still get an overall pass
(largely discarded scoring measure today).
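A minimal sketch of the summing approach, and of the con that a study can fail key items yet still pass overall. The item names, scores, and pass mark are all hypothetical:

```python
# Hypothetical checklist scores (1 = criterion met, 0 = not met)
checklist = {
    "clear_question": 1,
    "randomised": 0,          # fails a key item...
    "blinded": 0,             # ...and another key item
    "all_accounted_for": 1,
    "adequate_power": 1,
    "outcomes_considered": 1,
}

total = sum(checklist.values())
passed = total >= 4  # arbitrary overall pass mark
# Despite failing randomisation and blinding, total = 4 and the study "passes"
```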
What are three ways a journal can be rated on?
- Rated by prestige
- Rated by reviewer difficulty
- Rated by impact factor
Impact factor scores for common journals are:
– Am. J. Psychiatry = 5.9
– Journal of Adolescence = 1.2
• By comparison:
– Nature = 27.1
– Lancet = 17.5
– Psychological Bulletin = 9.75
– Australian J. Psychology = 0.64