PAS 3 Flashcards

1
Q

What does taking the mean of multiple observations do to uncertainty?

A

Decreases but does not eliminate uncertainty.
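
A quick numerical illustration of this card: the standard error of the mean (s.e.m. = sd / √n) shrinks as the sample size grows, but never reaches zero. A minimal sketch with made-up numbers, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw samples of increasing size from the same population and compute
# the standard error of the mean (s.e.m. = sd / sqrt(n)) for each.
sems = {}
for n in (5, 20, 80):
    sample = rng.normal(loc=10.0, scale=2.0, size=n)
    sems[n] = sample.std(ddof=1) / np.sqrt(n)
    print(n, round(sems[n], 3))
```

The uncertainty decreases roughly as 1/√n, so quadrupling the observations only halves the s.e.m.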

2
Q

What percentage CI does a region of ±1 s.e.m. around the mean give?

A

68%, meaning there is a 68% chance that the real answer is within that interval.

3
Q

What does a region of ±2 s.e.m. around the mean represent?

A

It is a 95% CI
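
The ±1 and ±2 s.e.m. intervals from the two cards above can be computed directly. A minimal sketch with made-up data, assuming NumPy:

```python
import numpy as np

data = np.array([9.8, 10.4, 10.1, 9.6, 10.9, 10.2, 9.9, 10.5, 10.0, 10.3])
mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(len(data))

ci68 = (mean - sem, mean + sem)          # roughly a 68% CI
ci95 = (mean - 2 * sem, mean + 2 * sem)  # roughly a 95% CI
print(ci68, ci95)
```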

4
Q

What is the problem with the s.e.m. for small numbers of observations?

A

It underestimates the MOE.

5
Q

What needs to be done to the s.e.m. error bars for triplicate observations?

A

You need to double the sem error bars to get a 68% CI
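
The rough "double the error bars" rule reflects the t-distribution: for small n, the 95% multiplier on the s.e.m. is much larger than the large-sample value of 1.96. A sketch assuming scipy is available (the values are standard t quantiles, e.g. about 4.30 for n = 3):

```python
from scipy import stats

# A 95% CI is mean ± t * s.e.m., where t comes from the t-distribution
# with n - 1 degrees of freedom; for small n, t is much larger than 1.96.
multipliers = {n: stats.t.ppf(0.975, df=n - 1) for n in (3, 10, 100)}
print(multipliers)
```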

6
Q

What are the 2 types of experiments and what do they mean?

What is paired data

A

Independent measures
Repeated measures: paired data or multiple repeated measures. Paired data typically arises when you make measurements on the same subject (before and after treatment)

7
Q

What is effect size

A

Difference between means of two groups

8
Q

How can uncertainty be estimated?

A

Uncertainty can be estimated roughly by looking at sem error bars. Or more exactly by calculating a 95% CI for the difference in the means

9
Q

What is wrong with the old statistics?

A

a. Based on a wrong conceptual model
i. There is an effect (alternative hypothesis is true) or no effect (null hypothesis is true). Ignores effect size and biological significance.
b. Encourages definite decisions (accept or reject null hypothesis) based on inadequate data.
i. P ≤ 0.05 is only weak evidence for a real effect. P > 0.05 is little or no evidence against a real effect
c. Uses the semantically misleading term “statistically significant”
i. It really means the data is statistically indicative
ii. It does not tell you about the biological significance of the real effect
For paired data, why measure each subject twice, before and after?
This eliminates the uncertainty due to variation between individual subjects.

10
Q

How do you estimate effect size for paired data?

A
Calculate the effect size for each individual
Then take the mean

11
Q

How do you estimate uncertainty for effect size

A

Calculate the 95% CI for the effect using a paired t-test

Do not try comparing sem error bars

12
Q

What is the Bonferroni correction?

A

Multiply the p-value by the number of independent tests. If it is still less than 0.05, then it may be significant.
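
The correction is a one-liner in code. `bonferroni` here is a hypothetical helper name for illustration, not a function from any package:

```python
def bonferroni(p_values):
    """Multiply each p-value by the number of tests, capped at 1.0."""
    m = len(p_values)
    return [min(p * m, 1.0) for p in p_values]

# With three tests, each raw p-value is tripled before comparing to 0.05.
print(bonferroni([0.01, 0.04, 0.30]))
```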

13
Q

Why perform a conservative test?

A

It helps to avoid Type 1 errors (false positives) but may cause a lot of Type 2 errors (false negatives).

14
Q

What does Post hoc mean

A

Tests performed afterwards ("post hoc" means "after this"), i.e. after the main analysis.

15
Q

Example of a post hoc test

A

Tukey's post hoc test. This compares 2 groups at a time. Gives an individual confidence interval and p-value for the comparison. It includes a multiple testing correction.

16
Q

What is ANOVA

A

It is a one way analysis of variance.
• Looks at all of the data in all of the groups together
• Looks at the overall variation (variance) within the groups
• This measures the overall variability of the data
• Then looks at the overall variation (variance) between the groups
• Is the variability within the groups sufficient to explain the variation between groups?
• Technically, what is the probability of getting that large a variance between groups, if there is no real effect, so assuming only random variation (null hypothesis is true)?
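
A one-way ANOVA as described can be run with scipy's `f_oneway`. A sketch with made-up data, where one group clearly differs from the other two:

```python
from scipy import stats

group_a = [5.1, 4.9, 5.6, 5.0, 5.3]
group_b = [5.2, 5.0, 5.4, 4.8, 5.1]
group_c = [6.0, 6.3, 5.9, 6.1, 6.4]

# Tests whether the variance between group means is larger than expected
# from the variance within groups, assuming only random variation.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f_stat, p_value)
```

A small p-value here says only that *something* differs between the groups; it does not say which groups, which is why a post hoc test follows.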

17
Q

Why is the s.e.m. not used for small numbers of observations?

A

It underestimates the uncertainty.

18
Q

What are the types of experiments and give examples

A

Independent measures eg patients vs control or transgenic animals vs control animals
Repeated measures, paired data or multiple repeated measures

19
Q

Why should you not compare s.e.m. error bars for paired data?

A

Because the s.e.m. error bars for each group include the variation between individual subjects, which pairing is designed to eliminate, so they are not a good indicator of the uncertainty in the effect. Instead, calculate the difference for each pair and use a paired t-test to get the 95% CI.

20
Q

What is p hacking

A

Massaging the data or analysis to force p below 0.05. Examples:
If p = 0.052, close to 0.05, changing the experimental conditions to get p < 0.05
Changing the measurement to get p ≤ 0.05
Making all sorts of different comparisons and tests and, if one of them comes out significant, publishing that one and not the others
Publishing significant data and leaving out non-significant data. This is scientific fraud.

21
Q

What is the looped intron structure formed during RNA splicing called?

A

Lariat

22
Q

In an independent groups experiment

A

The effect size is the best estimate of the true answer. The uncertainty (MOE) can be estimated from the s.e.m. error bars. This gives a roughly 80% CI (assuming at least 10 observations in each group and a rough approximation to a normal distribution). A better approach is to use a t-test to calculate the exact 95% CI, and to use half the 95% CI instead of the s.e.m. for the error bars on the graph.
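
The exact 95% CI for the difference in means can be computed from the classic pooled two-sample t formula. A sketch with made-up data, assuming NumPy and scipy:

```python
import numpy as np
from scipy import stats

treated = np.array([12.1, 13.4, 11.8, 12.9, 13.7, 12.5, 13.0, 12.2, 13.1, 12.6])
control = np.array([10.9, 11.5, 10.7, 11.8, 11.2, 10.8, 11.4, 11.0, 11.6, 11.1])

# Effect size: the difference in the means of the two groups.
effect_size = treated.mean() - control.mean()

# Pooled-variance standard error of the difference, then the 95% CI
# from the t-distribution with n1 + n2 - 2 degrees of freedom.
n1, n2 = len(treated), len(control)
sp2 = ((n1 - 1) * treated.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci = (effect_size - t_crit * se_diff, effect_size + t_crit * se_diff)
print(effect_size, ci)
```

Half the width of `ci` is what the card suggests plotting as error bars instead of the s.e.m.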

23
Q

For paired data

A

The sem error bars for each group are not a good indicator. Calculate the difference for each pair of data points. Then plot the mean and 95% CI for these results.

24
Q

Multiple groups and multiple tests

A

Increased likelihood of type 1 errors.
Important to show all tests and comparisons.
If there are a large number of comparisons:
a) use a multiple testing correction, or
b) use ANOVA followed by a post hoc test.

25
Q

Quantitative experiments

A

We want to measure the size of an effect
Random error/ random variation causes uncertainty in the result
Taking the mean of multiple observations decreases, but does not eliminate, uncertainty
How large is the uncertainty (margin of error-MOE)?
Assuming a normal distribution:
S.e.m. is one measure of uncertainty (MOE)
A region ± 1 s.e.m. around the mean is a 68% confidence interval (CI)
There is a 68% chance that the real answer is within that interval
A region ± 2 s.e.m. around the mean is a 95% confidence interval (CI)

26
Q

what does independent measures mean?

A

The 2 groups have no relation whatsoever.

27
Q

What things do you need to assume in order to be able to compare error bars?

A

Comparing error bars will work, assuming
The groups are independent measures, not paired data
There are at least 10 observations
The data is roughly similar to a normal distribution

Comparing error bars gives a rough estimate of the 80% CI

28
Q

What do the upper and lower ends of CI tell you?

A

The upper end of the CI tells you how big the real effect might perhaps be
Possibly bigger and more biologically significant than the difference in the means
The lower end of the CI tells you how small the real effect might perhaps be
Possibly too small to be biologically significant. Possibly zero or even negative.

29
Q

Describe paired data

A

Typical example is when you measure each subject twice, once before and once after treatment
This eliminates uncertainty due to variation between individual subjects
To estimate effect size
Calculate effect size (before to after) for each individual
Then take the mean

30
Q

what does a paired t test tell you?

A

95% confidence interval

31
Q

When is it possible to just use a t-test to get the 95% CI?

A

Given the assumptions that:
There are a fairly small number of different groups (e.g. small groups of positive and negative controls, with a known hypothesis), and
It is clear in advance what the experimental question is (what the key comparison is)
Then it is reasonable to calculate the 95% CI (and a p-value) using a t-test.

Do not use this with a large number of groups, because the p-value will then give an inflated chance of the data looking significant when there is no real effect (type 1 errors).

32
Q

How do you estimate uncertainty for paired data?

A

To estimate uncertainty (1)
Take the list of effect sizes for each individual
Calculate the s.e.m. for this data
Plot error bars of ± 2 s.e.m. to estimate the 95% CI
Works reasonably for >10 observations

Or better (2)

Use a paired t-test to calculate the 95% CI for the effect.
DO NOT try comparing s.e.m. error bars
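
Method (2) above, sketched with made-up before/after data. The 95% CI is computed from the per-subject differences, which is equivalent to what a paired t-test reports; assumes NumPy and scipy:

```python
import numpy as np
from scipy import stats

before = np.array([140, 152, 138, 147, 160, 151, 143, 149])
after = np.array([132, 144, 135, 140, 151, 146, 137, 141])

# Effect size for each individual, then the mean effect.
diffs = before - after
effect = diffs.mean()

# 95% CI for the mean difference (paired t-test CI):
# mean(diffs) ± t * s.e.m.(diffs), with n - 1 degrees of freedom.
sem = diffs.std(ddof=1) / np.sqrt(len(diffs))
t_crit = stats.t.ppf(0.975, df=len(diffs) - 1)
ci = (effect - t_crit * sem, effect + t_crit * sem)
print(effect, ci)
```

Note the s.e.m. is computed on the differences, not on each group separately, which is exactly why comparing per-group s.e.m. error bars is wrong for paired data.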

33
Q

what does p<0.05 mean

A

P < 0.05 tells us the chance of getting data that looks like a real effect, when there is no real effect, and only random variation, is α = 0.05
Which is 5% or 1 in 20
A p-value can make random data look like a real effect: if you do many tests, some results will start to look significant even though they are not. This produces a lot of type 1 errors.
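
The 1-in-20 rate can be checked by simulating experiments where the null hypothesis is true, so every "significant" result is a type 1 error. A sketch assuming NumPy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate 1000 experiments where the null hypothesis is true:
# both groups come from the same distribution, so any "effect" is noise.
false_positives = 0
for _ in range(1000):
    a = rng.normal(0, 1, size=10)
    b = rng.normal(0, 1, size=10)
    if stats.ttest_ind(a, b).pvalue <= 0.05:
        false_positives += 1
print(false_positives)  # roughly 5% of 1000
```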

34
Q

what is the correct way to deal with multiple testing

A

Report all comparisons and statistical tests
Then those reading the paper know whether to be sceptical
Performing multiple tests, and only reporting the ones that are
statistically significant, is an example of “p-hacking”
Use a multiple testing correction
Bonferroni correction- multiply the p-value by the number of independent tests. Is it still less than 0.05?
Conservative test: helps to avoid Type 1 errors (false positives) but may cause a lot of Type 2 errors (false negatives)
Use 1-way analysis of variance (ANOVA)

35
Q

Describe one-way analysis of variance (ANOVA)

A

Looks at all of the data in all of the groups together
Looks at the overall variation (variance) within the groups
This measures the overall variability of the data
Then looks at the overall variation (variance) between the groups
Is the variability within the groups sufficient to explain the variation between groups?

Technically, what is the probability of getting that large a variance between groups, if there is no real effect, so assuming only random variation (null hypothesis is true)?

ANOVA does the same job as a t-test, but handles multiple groups and looks at the data overall, not the individual comparisons.

If the p-value is large, it is quite possible that the differences between groups are just random variation
If the p-value is small, it is less likely that these are random differences

If p ≤ 0.05 (old criterion) or p ≤ 0.005 (new criterion) then the data is “statistically significant”
Meaning that the data is statistically indicative, not necessarily that the effects are biologically significant.

36
Q

What is a problem with ANOVA?

A

If you get a significant result, you want to know which treatment, which groups, show significance, and ANOVA does not tell you that. Therefore we move on to post hoc tests.

37
Q

What are post hoc tests for?

A

If ANOVA gives you a small p-value, it is reasonable to ask which comparisons are responsible for the difference. First graph the data and look at the error bars: does anything look scientifically significant? Then do ANOVA followed by a post hoc test.
A post hoc test is like a t-test, where you take 2 different conditions and compare them, but it takes the ANOVA into account.
Which post hoc test to use depends on which statistical package is available in the lab, e.g. Tukey's post hoc test.
As researchers, we don’t want to know that something is going on, we want to know what is going on
Which groups show an effect/ are different from the others?

FIRST- look at the data. Hopefully it should be obvious.
THEN- confirm with a post hoc test, e.g. Tukey’s post hoc test
This compares two groups at a time (like a t-test)
Gives an individual confidence interval and p-value for the comparison
But includes a multiple testing correction
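
Tukey's post hoc test as described is available in scipy (version 1.8 or later) as `tukey_hsd`. A sketch with made-up data, reusing a design where only the high-dose group differs:

```python
from scipy import stats

vehicle = [5.1, 4.9, 5.6, 5.0, 5.3]
low_dose = [5.2, 5.0, 5.4, 4.8, 5.1]
high_dose = [6.0, 6.3, 5.9, 6.1, 6.4]

# Compares each pair of groups, with a built-in multiple-testing
# correction; result.pvalue is a matrix of corrected p-values.
result = stats.tukey_hsd(vehicle, low_dose, high_dose)
print(result.pvalue)
```

Per-pair confidence intervals are also available via `result.confidence_interval()`.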

38
Q

Why may results not be significant

A

The effect may be too small to be biologically significant; or it could be biologically significant, but there are not enough repeats or too much variation in the data.

39
Q

Type 1 vs type 2 errors

A

a type I error is the rejection of a true null hypothesis (also known as a “false positive” finding), while a type II error is the failure to reject a false null hypothesis (also known as a “false negative” finding).

40
Q

what is MOE

A

Margin of error. The standard error, or the 95% CI, is a margin of error.
The confidence interval is the mean ± 1 MOE.
The MOE is not a limit of error: a 68% or 95% CI is not a 100% CI.
It is a margin of error at a certain level of confidence.
The MOE can be estimated from s.e.m. error bars.

41
Q

what % do sem error bars give?

A

80% confidence interval, as long as there are at least 10 observations and the data is similar to a normal distribution (not very skewed).

42
Q

In an independent groups experiment

A

In an independent groups experiment
The effect size (difference in the means of the two groups) is the best estimate of the “true” answer
The uncertainty (MOE) can be estimated from the s.e.m. error bars
This gives a roughly 80% confidence interval
Assuming at least 10 observations in each group, and a rough approximation to a normal distribution
A better approach is to use a t-test to calculate the exact 95% CI
And to use half the 95% CI instead of s.e.m. on the error bars on the graph

43
Q

For paired data

A

For paired data
The s.e.m. error bars for each group are not a good indicator of uncertainty
Calculate the difference for each pair of data points
Then plot the mean and 95% CI for these results

Or just do a paired t-test to get the 95% CI.

44
Q

For multiple groups and multiple tests

A

Multiple groups and multiple tests
Increased likelihood of type 1 errors: with a large number of tests, some will look like a real effect when in fact there is none (on average, 1 in 20 at p = 0.05).
Important to show all tests and comparisons; otherwise it is p-hacking.
If there are a large number of comparisons
Use a multiple testing correction, or
Use ANOVA followed by a post hoc test

45
Q

what is the aim of statistics to scientists

A

to increase rigour.

46
Q

why are many studies that are statistically significant unable to be reproduced?

A

Because statistical significance at p ≤ 0.05 is a weak criterion.

47
Q

what is the meaning of statistically significant

A

indicative

48
Q

If we know something is statistically significant, how do we know if it is biologically significant also?

A

Look at the data. Is the effect size large enough to be biologically significant?
Look at the error bars or confidence interval. Could the real effect be much bigger or much smaller than the observed effect?
Look at the actual p-value:
p ≈ 0.05 means statistically weak evidence
p = 0.05 to 0.01 means statistically weak to moderate
p < 0.01 means statistically moderate to strong

Has the experiment been repeated?
Has the biological effect been tested in other experiments?
Statistics only tells you about the statistical result, not the science: it does not tell you whether the design or controls were good.
It just tells you the data does not look like random data.
p > 0.05 does not mean the result is wrong, just that the evidence is statistically very weak.

49
Q

Why can results around p = 0.05 be unreliable?

A

Another reason why results around p = 0.05 can be unreliable is a behaviour called p-hacking.
The old statistics presents a cliff-edge effect:
If p = 0.0499 the data is statistically significant
If p = 0.0501 the data is not statistically significant
Researchers want their result to be "significant"
When p is slightly above 0.05, there is an enormous temptation to massage the data. Examples:

Rejecting some data points, because they don’t look right, might be outliers, you had doubts about that experiment

Using multiple measures, and only publishing the measure that gives p≤ 0.05

Only publishing experiments which give p≤ 0.05

50
Q

Why are irreproducible studies less common in laboratory studies than in clinical studies or, for example, experimental psychology?

A

It is possible that the number of irreproducible studies is slightly less in laboratory studies than in clinical studies or, for example, experimental psychology
Clinical studies usually only represent a single experiment
Laboratory studies often, though not always, investigate a hypothesis with several different experiments
If the results of different experiments, with different methods, agree
There is good confidence in the conclusion
Also, laboratory studies regularly include graphs in their papers, so scientists do look at the effect size visually. Hence this is not so much of a problem in lab studies.

51
Q

What are the 2 reasons an effect may not be statistically significant?

A

An effect can be not statistically significant because
The effect is really zero, or at least too small to be biologically significant
There could be a biologically significant effect, but the data is too weak to confirm this
Too much variation
Not enough repeats
When a paper says that an effect was not significant
It usually means that the data was not statistically significant

This has an even greater chance of misunderstanding than “significant”

52
Q

How to evaluate statistics in published work?

A

Look at the data. Is the effect size large enough to be biologically significant?
Look at the error bars or confidence interval. Could the real effect be much bigger or much smaller than the observed effect?
This is especially important for interpreting "not significant". The error bars can show that, even allowing for the uncertainty in the measurement, the effect is still not big enough to be biologically or clinically significant, in which case the result can be rejected. Alternatively, the error bars can show that although the observed effect is small, a large effect is still possible: the effect may be biologically significant, but the data is too weak to draw stronger conclusions.

Look at the actual p value
p = 0.1 to 0.05 is evidence, albeit statistically very weak evidence
Has the experiment been repeated?
Has the biological effect been tested in other experiments?

53
Q

If you get results with p = 0.1 to 0.05, why should you not dismiss them straight away?

A

It means that there is evidence, but very weak evidence. If you repeat the experiment and again get p close to 0.1, together this gives moderate evidence for a real effect. Because the evidence is weak, you should do more experiments, but not rush to say there is no effect just because it is not statistically significant.

54
Q

How do you report results using the old statistics?

A
Always start with the effect size
There was a large increase in …
There was a moderate decrease in …
The level in patients was 47% higher than in controls
Never use the word “significant” without making the meaning clear
 statistically significant
 biologically significant
 clinically significant

Do not describe the “effect” as statistically significant or not statistically significant, it is the data that is statistically significant
To a scientist, the word “effect” means the biological effect
To a statistician, the effect is just the difference between the treated and the control, so when you say “effect” you are just talking about the data
But you are writing for scientists and clinicians, not for statisticians

55
Q

What conclusion would you draw from this phrase: "Drug 3 appeared to have a major effect on blood pressure, but the data did not reach statistical significance"?

A

Looking at the effect size, it seems the drug might have worked, but looking at statistics, the data/evidence is weak. Hence it is worth repeating this experiment.

56
Q

What are the wrong and right questions in stats

A

The wrong question
Is there an effect or no effect?
Is the null hypothesis true or false?

The correct question
How big is the effect?
Is it large enough to be biologically significant?

57
Q

Why is accepting the null hypothesis a lie?

A

A result with p > 0.05 is evidence for a real effect
It is very weak evidence
But it is evidence

58
Q

When the 95% CI is not available, under what assumptions can you use s.e.m. error bars to estimate the CI?

A

When the 95% C.I. is not available, the confidence interval can be estimated by looking at the s.e.m. error bars.

Notes
Only valid if you are comparing independent groups, not paired data
With fewer than 10 observations, the s.e.m. underestimates the uncertainty; with 3 observations you need to double the error bars

59
Q

How do you answer "why is the conclusion inappropriate" questions?

A

Ask: for which answer would the conclusion be appropriate?
Statements can be correct but have no impact on the conclusion; they are not the reason why the conclusion is inappropriate.
For catchphrase questions, look at the catchphrases.
Say what you see.
"The study needs to be repeated before reliable conclusions can be drawn" can be said of anything, so it does not explain why this conclusion is inappropriate.

60
Q

What is the difference between positive and negative control

A

A negative control uses a treatment that is known not to influence results. A positive control uses a treatment that is known to influence results. Often, a positive control is predicted to achieve results similar to your hypothesis.

A negative control is part of a well-designed scientific experiment. The negative control group is a group in which no response is expected. It is the opposite of the positive control, in which a known response is expected

A positive control is a group in an experiment that receives a treatment with a known result, and therefore should show a particular change during the experiment. It is used to control for unknown variables during the experiment and to give the scientist something to compare with the test group.

61
Q

How would you describe the results of a study

A

1 mark for ‘statistically significant or non statistically significant increase/decrease’
1 mark for correctly specifying which groups are being compared
1 mark for including relevant details of what is being tested, specific details relating to the relevant end point, and experimental conditions.

62
Q

List an additional control/ treatment

A

A negative control, not exposed to anything.
A control exposed to the current treatment on the market, not the novel one (an active comparator / positive control). A positive control enables evaluation of whether the model functioned correctly (i.e. did the drug that is supposed to reduce cough have that effect?) or enables comparison of the novel compound's efficacy versus the best currently available treatment.
Control and treated animals given a different challenge agent, to show the effect is not specific to the agent used before.
A negative control: animals treated with nothing but the challenge agent.

63
Q

Would a t-test be an appropriate statistical analysis for multiple groups? Why?

A

No: it is likely to result in type 1 errors (false positives) due to not correcting for multiple comparisons.
A t-test is for 2 groups only.
Use one-way ANOVA for more than 2 groups, then perform a post hoc test.

64
Q

What is research and what is not research

A

Research is a quest for an answer driven by a specific question or idea.
• Characteristics of research:
– It originates with a new question, a new idea or a problem with no acceptable solution
– It requires a clear articulation of a goal
– It follows a specific plan of procedure
– It is bounded by certain critical assumptions
• Research is not just information gathering.
– A person collecting information on a specific subject is not doing research.
• Research is not rearranging of facts.
– A person writing a report on a known subject is not doing research.
• Research is not a sales pitch.
– A new improved product developed after years of "research" is rarely research.

65
Q

Define research

A

OECD, “Any creative systematic activity undertaken in order to increase the stock of knowledge, including knowledge of man, culture and society, and the use of this knowledge to devise new applications.”
The Merriam-Webster Online Dictionary: "studious inquiry or examination; especially: investigation or experimentation aimed at the discovery and interpretation of facts, revision of accepted theories or laws in the light of new facts, or practical application of such new or revised theories or laws"
(last accessed: 05-01-2018)

FEATURES: creative, rigorous (systemic and systematic) leading to new knowledge (and its application and communication), through the use of methods fit for the research question being asked

66
Q

What is a paradigm?

A

a typical example or pattern of something; a pattern or model.
It is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.
A way of seeing
A ways of understanding the world around us
A framework of thought or beliefs through which one’s world or reality is interpreted.

67
Q

Describe a positivist paradigm

A

Phenomena that are ‘out there’ to be discovered, measured and analysed
Objectivity – no apparent intrusion of the researcher into the researched
Reliability and validity
Data is collected and reduced to numbers and statistics
Phenomena are fixed realities that cross cultures
Replicability
Positivism is the term used to describe an approach to the study of society that relies specifically on scientific evidence, such as experiments and statistics, to reveal a true nature of how society operates.

68
Q

Describe the interpretivist paradigm

A

Illumination (not measurement or proof)
Understanding human actions and how humans make sense of phenomena
Reliance on words that reflect the complexity of a phenomenon, and their interpretation
Iterative
Awareness of the influence of context, and of the researcher on the ‘researched’
Acceptance of complexity (reality is not the same everywhere)
Non-replicability

Interpretivism is one form of qualitative methodology. Interpretivism relies upon both the trained researcher and the human subject as the instruments to measure some phenomena, and typically involves both observation and interviews.

69
Q

What are the challenges of the positivist paradigm?

A

The social world is different from the natural world
What does a one-off blood test actually measure, apart from a deviation from a given standard? To be meaningful it has to be interpreted in the context of the person's particular activities over time, so that a norm for that particular person can be established.

70
Q

What are the challenges of the interpretivist paradigm?

A

Researcher’s presence and bias
Lack of generalizability
Poor replication and validity

71
Q

Describe typical qualitative methods

A

Interviews (structured and semi-structured)
Participant observations
Focus groups
Textual analysis (diaries, documents, pictures etc.)
Action research

72
Q

Describe contextual research in biomedical research

A

Contextual research (making sense of patients’ perceptions of/reactions to a given clinical intervention; understanding the social environments in which a clinical innovation should/is taking place; views of potential healthcare users on a new surgical procedure; how a certain healthcare group perceives their work and challenges etc.)

73
Q

Steps of a research project

A
Step 1: Identify the Problem
Step 2: Review the Literature
Step 3: Clarify the Problem
Step 4: Clearly Define Terms and Concepts
Step 5: Define the Population
Step 6: Develop the Instrumentation Plan
Step 7: Collect Data
Step 8: Analyze the Data
74
Q

What is critical analysis

A

Simply = asking questions about information (e.g. published research) in order to evaluate its veracity, meaning, significance, and implications, typically regarding a specific question or subject.

75
Q

Why is it important to develop critical analysis/thinking skills?

A

Critical thinking/analysis is an integral part of a scientist's or clinician's job (as in most other professions).
It will be involved in many assessments during your degree, particularly your SSP1 report and final year project dissertation.

The two major aspects of this that you will need to master in order to succeed are:
Evaluating scientific information and arguments
Constructing and communicating your own arguments

76
Q

How do scientists obtain information?

A

Performing their own experiment
Reading the published research of others.

This necessitates:
Acquiring accurate and reliable information
Interpreting it correctly with regard to a specific problem, i.e. identifying its key aspects in relation to what you are interested in learning.

77
Q

What might potentially cause the information in a scientific journal article to be inaccurate/unreliable?

A
Flaws in methodology/design
Limited/changing understanding of a topic
Flaws in the interpretation of results
Politics
Intellectual dishonesty
78
Q

Can the findings of peer-reviewed research be taken/trusted at face value?

A

Evaluation of scientific questions should not depend on the authority or subjective opinion of the author (although reputation can sometimes be relevant).
It should instead be based on objective interpretation of the available evidence.
Thus, research must ‘show its working’, which anybody else should be able to replicate (i.e. what are the clear steps in the study/argument that resulted in the conclusion?)
Given the evidence and reasoning presented, (in theory) everybody should come to the same conclusion.
There are still debates and controversies in science because there is incomplete understanding; two published articles may say opposite or alternative things, so you need to draw conclusions from both and examine their reasoning.

79
Q

what makes up a scientific argument

A

A question
Evidence
An explanation of the evidence and how it relates to the problem
Key point: it should be objective (it shouldn't depend on the opinion or identity of the author).

80
Q

Describe how to critically evaluate a scientific argument

A

Critical evaluation of such an argument involves interrogating:
The quality, veracity, reliability, and relevance of the evidence (i.e. what info/data is there that is relevant to the problem?)
The appropriateness of the reasoning (i.e. how does that info/data answer the problem?), including the way the author has interpreted the evidence.

81
Q

How do you evaluate evidence?

A

What is the specific claim that is being made? (what evidence would be required to support that claim)
Is the evidence presented sufficient to support that claim? (does it rule out other reasonable explanations?)
Is the source of the evidence reliable?
Is the evidence presented really evidence for what the author claims it is? (experimental models vs. clinical pathology, specific participants vs. the general population)
Are there limitations or specific issues with the evidence presented? (methodological flaws)
Is there any evidence that contradicts the claim?

82
Q

What are the common/potential critiques of biomedical research papers/experiments?

A

Is the agent (drug, protein, tag, antibody) used sufficiently selective?
Are the results observed biologically/clinically meaningful?
What was the rationale for choosing the dose/concentration of chemical agents used?
Does the setting of the experiment adequately replicate the setting to which results have been translated/applied?
What was the rationale for using the n number chosen?
Did the biological intervention (transgenesis, inducing a disease, administering a drug) used result in off target effects?
Are the findings consistent with previous research?

83
Q

Describe effective communication of scientific criticism

A

What was the problem?
Why is it a problem?
What is your justification for asserting that it is a problem? (e.g. evidence and reasoning)
What are the likely effects/implications of the problem for the data, findings, or conclusions of the research? When critiquing scientific research, it is also important to consider the relevance and significance of any issues identified.

84
Q

How do you determine what is relevant to include in a critique? E.g. how do you differentiate minor vs. major issues?

A

The key question to ask is whether the issue identified could potentially result in meaningful differences in the data (i.e. that could reasonably be expected to yield different conclusions).

85
Q

What are the key attributes that determine the quality or effectiveness of scientific writing?

A

Accuracy (is the information ‘correct’)
Precision (is the information sufficiently specific to avoid ambiguity?)
Objectivity (do the arguments avoid subjective value judgments?)
Conciseness and relevance (does the writing avoid unnecessary information?)
Coherency/clarity (can it be understood?)
Appropriate reasoning and evidence used to support arguments

86
Q

Why is the use of personal terms (e.g. ‘I’) avoided when writing scientific arguments?

A

Interpretation of the scientific evidence should not (generally) depend on the identity/values of the author. Therefore the fact that ‘I think…’ is irrelevant. The critical point is that it is the most appropriate interpretation of the available evidence, for the reasons described.

87
Q

In what ways should scientific writing be precise?

A

Scientific writing should be sufficiently precise (in terms of the objectivity and specificity of the language used) to avoid ambiguity.
Specificity = the reader cannot misinterpret what is written
Objectivity = the understanding of the reader will not vary depending on the author’s or reader’s identity/values/opinion

88
Q

How do you determine the most appropriate source to reference when trying to evidence a particular point?

A

The key is to understand what a given source provides evidence of. Is the reference the original source of the information/evidence?

89
Q

Why isn’t research published as just the data (and associated methodology)?

A

It helps the reader to understand (and attempts to convince them of) the significance and implications of the information.

90
Q

Give examples of other things that follow the same structure as scientific articles

A

Fictional stories, movies
Newspaper articles
Common speech
Political/historical narratives, Hegelian dialectics

91
Q

What should be found in introduction, main body, conclusion?

A

Introduction – have you defined the problem/subject of the report? Have you provided sufficient context for the reader to understand what the problem is and why they should care?
Main body – Have you presented and analyzed the available relevant information that addresses the topic of your report?
Discussion/summary/conclusions – have you provided an overall answer to the original problem (or summary of the key information/points)? Have you outlined what the situation is going forward or the implications of your analysis?

92
Q

E.g. When would it be appropriate to cite the following sources:
A review article
A website
A research article from 1983

A

Review article: when summarising a large body of published evidence on a topic.
Website: e.g. patients discussing a drug or disease in an open forum, giving insight into perceptions of the disease among the general population.
Research article from 1983: demonstrates how understanding of a topic has changed over time.

93
Q

What evidence/sources could you use to support the following arguments?
That ________ is an effective treatment for a disease
That ________ (a specific immune cell) is involved in the pathophysiology of a disease
That the number of people worldwide affected by _______ (a disease) is increasing/at its highest level ever/X million people?

A
  1. Clinical trial
  2. What other people have written, i.e. other published articles
  3. Official statistics from a government, or an epidemiological study/official government report