HRSS Flashcards

0
Q

What type of scale is IQ score?

A

Interval scale

1
Q

What type of scale is time?

A

Ratio scale

2
Q

What type of scale is weight?

A

ratio scale

3
Q

what are potential threats to the internal validity of a research project?

A
  • the presence of confounding variables
  • attrition

4
Q

In a meta analysis it is possible to combine data if studies are homogenous in terms of the research question, methods, treatment, and outcome measures. True or False?

A

True

5
Q

In systematic reviews publication bias refers to….

A

negative studies are less likely to be published than positive studies

6
Q

what is a variable?

A

a variable is a measurable quantity which can assume any of a prescribed set of values.

7
Q

definition: quantitative variables…

A

take numeric values

  • discrete variable (assumes only isolated values)
  • continuous variable (it can take any value within a given range)
8
Q

Definition: qualitative variables

A

not measured numerically (categorical)

  • nominal (just labels, no natural order)
  • ordinal (some natural order)
9
Q

Type 1 Error

A

error of rejecting H0 when the H0 is true

10
Q

Type 2 error

A

error of accepting the H0 when the H0 is false

11
Q

Definition of P-value

A

P-value = the probability of obtaining the present test result if the null hypothesis is true
- calculated to see if the results occurred by chance
if the p-value is small (< 0.05), the result is statistically significant

12
Q

non-parametric tests

A

compare medians or mean ranks rather than means

  • suitable for non normal data
  • when the distribution is unknown with a small sample size.
13
Q

Quantitative Research design

7 distinct characteristics

A
  1. attempt to verify theory and be deductive
  2. has a predetermined design structure
  3. uses data derived from score on standard scales
  4. uses probability sampling
  5. maintains the independent role of the researcher
  6. clearly defined structure
  7. employs statistical analysis
14
Q

definition: dependent variable

A

measures for any change resulting from manipulation (eg treatment by the experimenter)
(outcome, response)

15
Q

definition: independent variable

A

is the variable that is manipulated by the experimenter

explanatory, predictor

16
Q

definition: discrete data

A

takes only a finite number of values, eg number of children

17
Q

definition: continuous data

A

all possible values within a given range

eg weight, height

18
Q

nominal scale

A

eg blood type or gender

19
Q

ordinal scale

A

observations that are ranked

eg severity ratings

20
Q

interval scale

A

equal units of measurement (no absolute zero)

eg IQ, temperature

21
Q

Ratio scale

A

like interval but with absolute zero

eg height, weight

22
Q

Types of probability sampling

A
  • random
  • systematic sampling
  • stratified random sampling
  • disproportional sampling
  • cluster sampling
23
Q

Random Sampling

A

(probability sampling)
each member of the population has equal chance of selection
- randomly selected from population and randomly assigned into experimental/control group

24
Q

Systematic Sampling

A

(probability sampling)

- every nth record selected from a list

25
Q

Stratified Random Sampling

A
(probability sampling)
- considered superior to random sampling
- based on characteristics
eg population = 300 (200 F, 100 M)
    sample = 60 (40 F, 20 M)

not always relevant
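The proportional allocation above can be sketched in Python; the helper and the strata counts are hypothetical, for illustration only:

```python
def proportional_strata(population, sample_size):
    """Allocate a sample across strata in proportion to each
    stratum's share of the population (hypothetical helper)."""
    total = sum(population.values())
    return {k: round(sample_size * n / total) for k, n in population.items()}

# population of 300 (200 F, 100 M), sample of 60 -> 40 F and 20 M
sample = proportional_strata({"F": 200, "M": 100}, 60)
```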

26
Q

Disproportional Sampling

A

(probability sampling)
used when groups are unequal in size, causing an inadequate sample size for comparison
eg if 90% female 10% male
still pick 10 female and 10 male

27
Q

Cluster sampling

A

(probability sampling)
- sampling from a series of random units in a population
eg pick states, 10 hospitals in each, 2 physios from each hospital …

28
Q

Types of Non-probability sampling

A
  • convenience
  • quota
  • purposive
  • snowball sampling
29
Q

Convenience Sampling

A

(nonprobability sampling)

- subjects chosen on the basis of their availability

30
Q

Quota Sampling

A

(non-probability sampling)

- researcher guides the sampling process until the participant quota is met

31
Q

Purposive Sampling

A

(non-probability sampling)

- hand picked participants based on criteria

32
Q

Snowball Sampling

A

(non-probability sampling)

  • used when desired characteristics are rare
  • relies on original participants identifying or referring other people
33
Q

experimental study designs

A
True experimental, ie RCT
Quasi-experimental (no random assignment to groups)
  • experimenter has some control over independent variables
  • experimental conditions constructed
  • involve some form of treatment/intervention
  • aim: to show the IV is the cause of changes in the DV by controlling for other possible influences
34
Q

Observational study designs

A
  • observed in their normal state
  • groups that are compared are self selected
  • subjects may be measured and tested but there is no treatment or intervention
  • can be prospective or retrospective
35
Q

Types of RCTs

A
  • Parallel groups (2 groups are studied concurrently)
  • Cross-over design (the order in which treatments are given is randomised, with wash-out periods)
  • Within group comparison (Tx investigated concurrently in same pt, used for Tx that can be given independently to different body parts)
  • Sequential design (parallel groups; the trial continues until one Tx shows a clear benefit or until it is clear there is no benefit)
  • Factorial design (several factors compared at the same time)
36
Q

internal validity

A

for a study to have it, it must clearly demonstrate that a specific intervention or manipulation causes an effect
ie - the effect found is not due to some other factor
threats - chance, bias, confounding

37
Q

External validity

A

how applicable the results are to the target population from which the sample was drawn

38
Q

Literature Reviews

A

can describe previous work, can be a mixture of evidence and opinions

39
Q

Systematic reviews

A

Definition: a formal identification, assessment and synthesis of all primary research evidence to answer a specific research question using reproducible methods

40
Q

Meta-analyses

A

quantitative summary of results of several studies

41
Q

Why do systematic reviews?

A
  • pools large amounts of information from multiple individual studies
  • clarifies the status of existing research to inform decision making
42
Q

Advantages and Disadvantages of Systematic reviews

A

Advantages:

  • improves the ability to study consistency of results and findings
  • when results conflict, SRs provide an overall picture of what is happening
  • if sample sizes are small, some SRs can pool data, increasing power to detect effects

Disadvantages:
- improved power to detect effects can also magnify effects of bias

43
Q

Steps in conducting a systematic review

A
  1. Define the research question
  2. Create plan
  3. Search the literature for potential eligible studies
  4. Apply eligibility criteria to select studies
  5. Assess the risk of bias of selected studies
  6. Extract data from the selected studies
  7. Synthesise the data
  8. Interpret and report the results
  9. Update and review in the future
44
Q

Two types of synthesis

A
  1. Meta-analysis - statistical synthesis of data from individual studies (depends on hetro/homogeneity)
  2. Narrative synthesis - synthesis of key information from individual studies using descriptive narrative rather than statistics
45
Q

Dichotomous Outcomes: Odds Ratio

A

A ratio of the odds of an event occurring, calculated by dividing the odds (of a specific event) in the treated group by the odds of that event in the control group.
Odds ratios measure the strength of association between variables.
OR = 1 suggests no association between the variables under study.
The further the OR is from 1, the stronger the association between the variables.
An association is not significant if the confidence interval for the OR contains 1.
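A minimal Python sketch of the calculation, using hypothetical 2x2 counts; the 95% CI uses the standard log-odds method:

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = events in treated group,  b = non-events in treated group
    c = events in control group,  d = non-events in control group
    Returns the OR and its 95% confidence interval (log method)."""
    or_ = (a / b) / (c / d)                       # odds treated / odds control
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)  # lower 95% limit
    hi = math.exp(math.log(or_) + 1.96 * se_log)  # upper 95% limit
    return or_, lo, hi

# hypothetical counts: 20 events / 10 non-events treated, 10 / 30 control
or_, lo, hi = odds_ratio(20, 10, 10, 30)
significant = not (lo <= 1 <= hi)  # significant only if the CI excludes 1
```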

46
Q

Reliability

A

reliability in measurement refers to the consistent accuracy of measurement

a reliable measure should give the same answer when used to measure the same variable, in the same manner, in the same subject, time after time.

47
Q

Inter-rater reliability

A

agreement between two or more raters

48
Q

Intra- rater reliability

A

consistency of ratings of 1 rater

49
Q

Test-retest definition and considerations

A
- consistency of test score after a predetermined period
considerations:
- learning effects
- carry over effects
- fluctuating characteristics
- environmental variable
- time of day
- motivation level
50
Q

validity

A
in measurement refers to the extent that the tool measures what it is intended to measure for a specific purpose
4 types:
face validity 
content validity 
concurrent validity 
construct validity
51
Q

Face Validity

A
  • validity may be obvious if the measured characteristics are concrete
  • e.g weight, height, range of motion
  • must be able to directly observe
52
Q

Content Validity

A
  • is a measure of how well an instrument measures the content of a particular trait or body of knowledge
  • usually addressed by a panel of experts
  • determine the universe of items related to the construct and select an adequate sample of items for the test
53
Q

Concurrent Validity

A
  • uses another measuring instrument (with known validity) as a criterion to assess whether the new instrument is measuring what it is meant to
  • the two measurements are taken at the same time
  • usually a ‘gold standard’ is used
54
Q

Construct Validity

A
  • aims not only to validate the test but also the theory behind the test
  • not just reliant on a panel of experts (content) but also includes hypothesis testing
55
Q

Other validity considerations…

A
  1. sensitivity
  2. specificity
  3. positive predictive value
  4. negative predictive value
  5. likelihood ratio
56
Q

sensitivity

A
  • measures the proportion of actual positives which are correctly identified
  • the proportion of positives the test identifies: TP/(TP + FN)
57
Q

specificity

A
  • measures the proportion of actual negatives which are correctly identified
  • the proportion of negatives identified by the test:
    TN/(TN + FP)
58
Q

Positive Predictive Value

A

measures the proportion of participants with a positive result who are correctly diagnosed
TP/(TP + FP)
(predictive values depend on the prevalence of the disease)

59
Q

Negative predictive value

A
  • measures the proportion of participants with a negative result who were correctly diagnosed
    TN/(TN + FN)
    (predictive values depend on the prevalence of the disease)
    we use these to predict the chances of someone having the diagnosis
60
Q

Likelihood Ratio

A

is the ratio of the probability of the specific test result in people who do have the disease to the probability in people who do not

  • there are positive and negative likelihood ratios
  • independent of disease prevalence and do not vary in different populations or settings.
  • can find the probability of disease for an individual patient
  • LRs measure the power of a test to change the pre-test into the post-test probability of a disease being present
  • the further the LRs are from 1 the stronger the evidence is for the presence or absence of the disease
  • LR > 1 = the result is associated with the presence of the disease
  • LR < 1 = the result is associated with the absence of the disease

LRs summarise how much more (or less) likely pts with the disease are to have a particular test result than pts without the disease
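The pre-test to post-test conversion above can be sketched in Python; the sensitivity, specificity, and pre-test probability are hypothetical:

```python
sens, spec = 0.9, 0.8       # hypothetical test accuracy
lr_pos = sens / (1 - spec)  # LR for a positive result
lr_neg = (1 - sens) / spec  # LR for a negative result

# pre-test probability -> odds -> post-test odds -> post-test probability
pre_prob = 0.3
pre_odds = pre_prob / (1 - pre_prob)
post_odds = pre_odds * lr_pos            # apply the positive-result LR
post_prob = post_odds / (1 + post_odds)  # higher than pre_prob, since LR > 1
```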

61
Q

Reproducibility, repeatability, Reliability - all mean that the results of a particular test or measure are….

A

identical or closely similar each time it is administered

Reliability of clinicians ratings can be assessed using

  • Kappa (Cohen's) if ratings are on a nominal scale
  • Weighted Kappa if ratings are on an ordinal scale
62
Q

variations may arise because of…

A

because of variations in procedures, observers, or changing conditions of test subjects, a test may not consistently provide the same results when repeated.

- intra-subject
- intra-observer
- inter-observer

63
Q

Reliability in categorical rating

A

when ratings come from a nominal (eg present, absent) or ordinal (eg none, mild, moderate, severe) scale, subjects are assigned to a code that classifies them as belonging to a particular category - here reliability is measured as the extent of agreement

  • Percent Agreement
  • Kappa
64
Q

Percent Agreement

A
  • simplest way to estimate reliability
  • calculate the proportion of observed agreement
  • it doesn't take into account agreement by chance
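A minimal sketch of observed agreement with two hypothetical raters:

```python
# ratings from two hypothetical raters on the same 10 subjects
rater1 = ["present", "present", "absent", "absent", "present",
          "absent", "present", "absent", "absent", "present"]
rater2 = ["present", "absent",  "absent", "absent", "present",
          "absent", "present", "present", "absent", "present"]

agree = sum(a == b for a, b in zip(rater1, rater2))
percent_agreement = agree / len(rater1)  # ignores agreement by chance
```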
65
Q

Bland-Altman Analysis

A

graphical representation of the differences between the two methods plotted against their averages, computing the 95% limits of agreement
- useful to look for any systematic bias, possible outliers, or any relationship between the differences between measures
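The numerical side of the analysis can be sketched as below (the paired measurements are hypothetical); a real analysis would also plot the differences against the averages:

```python
import statistics

def bland_altman(m1, m2):
    """Differences vs. averages for two measurement methods, plus the
    95% limits of agreement (mean difference +/- 1.96 SD of the differences)."""
    diffs = [a - b for a, b in zip(m1, m2)]
    means = [(a + b) / 2 for a, b in zip(m1, m2)]
    bias = statistics.mean(diffs)               # systematic bias between methods
    sd = statistics.stdev(diffs)                # spread of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return means, diffs, bias, loa

# hypothetical paired measurements from two methods
means, diffs, bias, loa = bland_altman([10, 12, 11, 13, 12], [9, 13, 10, 12, 12])
```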

66
Q

Reliability in Continuous Ratings

A

reliability is the ratio of the variability of the true score to the total (true + error) score

the less measurement error, the more reliable the measurement.

- Intraclass Correlation (ICC)
- Standard Error of Measurement (SEM)
67
Q

Intraclass Correlation (ICC)

A

for continuous ratings, assesses reliability by computing the ratio of between-subject variability to total variability
- within-subject variance represents measurement error

  • ICCs are highly dependent upon sample heterogeneity; the greater the heterogeneity, the greater the magnitude of the ICC
  • The same instrument can be judged ‘reliable’ or ‘unreliable’ depending on the population in which it is assessed.
  • help practitioners to know whether the instrument is able to discriminate between patients in the sample
  • ICCs are highly dependent on the variation in the sample
  • ICC alone is not sufficient for analysis of reliability and should be complemented by other statistics, eg Bland-Altman analysis, SEM
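The dependence on sample heterogeneity can be illustrated with the conceptual form of the ICC; the variance components below are hypothetical:

```python
def icc_from_components(var_between, var_within):
    """Conceptual form of the ICC: between-subject variance divided by
    total (between + within) variance; within-subject variance is
    treated as measurement error."""
    return var_between / (var_between + var_within)

# same measurement error, different sample heterogeneity:
homogeneous = icc_from_components(var_between=4.0, var_within=4.0)
heterogeneous = icc_from_components(var_between=36.0, var_within=4.0)
# the more heterogeneous sample yields the larger ICC
```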
68
Q

Interpretation of ICCs

A
  • values range from 0-1
  • ICC approaches 1 when the within subject variability comes close to 0
  • ICC provides small values when within subject variability is much greater than the between subject variability
  • high value ICC doesn’t always mean high reliability
69
Q

ICC models

A

1. each subject is assessed by a different set of randomly selected raters (ICC 1,1)
- unlikely to be used in clinical practice

2. each subject is assessed by each rater, and the raters are randomly selected (ICC 2,1)
- used if the aim is general application in clinical practice

3. each subject is assessed by each rater, but the raters are the only raters of interest (ICC 3,1)
- used if testing by only a small number of raters (no generalisability)

70
Q

Standard Error of Measurement (SEM)

A
  • a measure of discrepancy between scores and is expressed in the same unit as the original measurement
  • SEM quantifies the precision of individual scores on a test and thereby indicates whether a change in a score is a real change or due to measurement error
  • the smaller the SEM the more reliable the measurements
  • SEM% can be used to compare tests that use different units of measurement
  • SEM is affected by sample heterogeneity
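One common form is SEM = SD × √(1 − reliability coefficient), eg using an ICC; a sketch with hypothetical numbers:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement from the sample SD and a
    reliability coefficient (eg an ICC): SEM = SD * sqrt(1 - r).
    Expressed in the same unit as the original measurement."""
    return sd * math.sqrt(1 - reliability)

# hypothetical: SD of 10 units, ICC of 0.91 -> SEM of 3 units
example_sem = sem(10.0, 0.91)
```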
71
Q

Smallest Real Difference

A

a way to evaluate clinically important changes

  • a low SRD indicates adequate sensitivity to detect a real change
  • SRD is crucial in determining whether the magnitude of the effect is clinically important.
  • when the change is greater than SRD, the change is considered to be true.
  • A change must be larger than the size of measurement error before being considered as important or meaningful
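A common formulation of the smallest real difference at the 95% level is SRD = 1.96 × √2 × SEM; a sketch with a hypothetical SEM and observed change:

```python
import math

def srd(sem):
    """Smallest real difference at the 95% level: 1.96 * sqrt(2) * SEM.
    A change larger than the SRD is considered a true change."""
    return 1.96 * math.sqrt(2) * sem

observed_change = 9.5
threshold = srd(3.0)                   # hypothetical SEM of 3 units
real_change = observed_change > threshold  # change exceeds measurement error
```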
72
Q

Power calculations

A

can be done to find out how many participants would be needed to have a good chance of detecting a significant effect

73
Q

information needed to work out no of patients needed….

A
  • level of statistical significance (usually chosen as 0.05)
  • Power = probability of correctly accepting the alternative hypothesis when it is true
  • minimal clinically relevant effect size for each type of outcome measure
  • Standard Deviation of outcome measure
  • drop out rate
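These inputs plug into a standard approximation for comparing two means; all values below are hypothetical:

```python
import math

def n_per_group(sd, delta, z_alpha=1.96, z_beta=0.84, dropout=0.0):
    """Approximate sample size per group for comparing two means:
    n = 2 * (z_alpha + z_beta)^2 * sd^2 / delta^2, inflated for dropout.
    z_alpha=1.96 -> two-sided alpha of 0.05; z_beta=0.84 -> 80% power."""
    n = 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2
    return math.ceil(n / (1 - dropout))  # inflate for expected dropout

# hypothetical: SD 10, minimal clinically relevant difference 5, 20% dropout
needed = n_per_group(sd=10, delta=5, dropout=0.2)
```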
74
Q

what would you need to review when critically appraising a paper?

A
  • were there specific inclusion and exclusion criteria
  • were the experimental and control groups similar in key outcome measures and characteristics at baseline
  • what was the dropout rate for each group
75
Q

logistic regression is appropriate when the dependent (outcome) variable is….

A

categorical

76
Q

smallest real difference (SRD) is crucial in determining whether…

A

the magnitude of effect is clinically important

77
Q

what is important in understanding the external validity of a clinical trial?

A
  • the patient characteristics, such as age, sex, as well as the type of disability, illness or injury
  • the intervention (what it is, as well as how and where and by whom it was delivered) can be performed in a usual practice setting
  • the outcome measures reflect what is clinically relevant and are measured over a timeframe that is also clinically appropriate.
  • the elimination or reduction of the possibility of bias within the trial design
78
Q

assumptions related to logistic regression…

A
  • linearity in the logit
  • independence of errors
  • reasonable ratio of cases to variables
79
Q

Cohen’s Kappa

A

calculates agreement between observers over and above what might be expected by chance alone
- also called chance-corrected agreement

Kappa (Cohen's) if ratings are on a nominal scale
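A minimal sketch of the kappa calculation for nominal ratings, using hypothetical data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters on a nominal scale:
    kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n       # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(r1) | set(r2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in cats)  # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# hypothetical ratings: 4/6 observed agreement, 0.5 expected by chance
kappa = cohens_kappa(["yes", "yes", "no", "no", "yes", "no"],
                     ["yes", "no", "no", "no", "yes", "yes"])
```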