Chapter 44: Statistics and Patient Safety Flashcards
Which statistical term is defined as the degree of closeness of measurements of a quantity to that quantity’s true value?
Accuracy
Which statistical tool estimates survival in small groups?
Kaplan-Meier estimator
[Wikipedia: The Kaplan–Meier estimator, also known as the product limit estimator, is a non-parametric statistic used to estimate the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment. In other fields, Kaplan–Meier estimators may be used to measure the length of time people remain unemployed after a job loss, the time-to-failure of machine parts, or how long fleshy fruits remain on plants before they are removed by frugivores. The estimator is named after Edward L. Kaplan and Paul Meier, who each submitted similar manuscripts to the Journal of the American Statistical Association. The journal editor, John Tukey, convinced them to combine their work into one paper, which has been cited about 34,000 times since its publication.]
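[The product-limit idea can be sketched in a few lines of Python (an illustrative implementation, not a clinical tool): at each event time, the running survival estimate is multiplied by (1 − deaths / number at risk), and censored patients simply leave the risk set without a step down.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier (product-limit) survival estimate.

    times  -- follow-up time for each subject
    events -- 1 if the event (e.g. death) occurred, 0 if censored
    Returns a list of (time, survival_probability) steps.
    """
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk, surv, curve, i = n, 1.0, [], 0
    while i < n:
        t = times[order[i]]
        deaths = removed = 0
        # Subjects tied at the same time leave the risk set together.
        while i < n and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # product-limit step
            curve.append((t, surv))
        at_risk -= removed
    return curve

# Hypothetical 5-patient cohort: deaths at t=1, 2, 4; censored at t=3, 5.
# Survival steps: ~0.80 after t=1, ~0.60 after t=2, ~0.30 after t=4.
curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Because censored subjects still count in the risk set up to their censoring time, the estimator works well even in small groups with incomplete follow-up.]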
Which program/group seeks to prevent wrong site/procedure/patient errors?
The Joint Commission (formerly the Joint Commission on Accreditation of Healthcare Organizations, JCAHO)
[Pre-op verification of patient, procedure, operative site, and operative side (marking the site if left vs. right or multiple levels; the mark must be visible after the patient is prepped). Time out before incision (verifying patient, procedure, site and side, patient position, and availability of implants or special equipment).]
[Joint Commission: An independent, not-for-profit organization, The Joint Commission accredits and certifies nearly 21,000 health care organizations and programs in the United States. Joint Commission accreditation and certification is recognized nationwide as a symbol of quality that reflects an organization’s commitment to meeting certain performance standards.
Our Mission: To continuously improve health care for the public, in collaboration with other stakeholders, by evaluating health care organizations and inspiring them to excel in providing safe and effective care of the highest quality and value.
Vision Statement: All people always experience the safest, highest quality, best-value health care across all settings.]
Which statistical term is defined as the probability of correctly rejecting the null hypothesis when a true difference exists (equals 1 - probability of a type II error)?
Power
[Likelihood that the conclusion of the test is true. Larger sample size increases power of a test.]
[Wikipedia: The power or sensitivity of a binary hypothesis test is the probability that the test correctly rejects the null hypothesis (H0) when the alternative hypothesis (H1) is true. It can be equivalently thought of as the probability of accepting the alternative hypothesis (H1) when it is true—that is, the ability of a test to detect an effect, if the effect actually exists. That is, power = Pr(reject H0 | H1 is true).
The power of a test sometimes, less formally, refers to the probability of rejecting the null when it is not correct, though this is not the formal definition stated above. The power is in general a function of the possible distributions, often determined by a parameter, under the alternative hypothesis. As the power increases, there are decreasing chances of a Type II error (false negative), which are also referred to as the false negative rate (β) since the power is equal to 1−β, again, under the alternative hypothesis. A similar concept is Type I error, also referred to as the “false positive rate” or the level of a test under the null hypothesis.
Power analysis can be used to calculate the minimum sample size required so that one can be reasonably likely to detect an effect of a given size. For example: “how many times do I need to toss a coin to conclude it is rigged?” Power analysis can also be used to calculate the minimum effect size that is likely to be detected in a study using a given sample size. In addition, the concept of power is used to make comparisons between different statistical testing procedures: for example, between a parametric and a nonparametric test of the same hypothesis.]
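[The sample-size reasoning above can be sketched numerically. The following is a back-of-envelope normal approximation to the two-sample t-test power calculation; the effect size, SD, and the alpha = 0.05 cutoff of 1.96 are illustrative assumptions:

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(delta, sigma, n, z_crit=1.96):
    """Approximate power of a two-sided comparison of two means with n
    subjects per group, true mean difference delta, common SD sigma,
    and alpha = 0.05 (z_crit = 1.96). Normal approximation to the
    t-test, adequate for rough sample-size planning."""
    se = sigma * math.sqrt(2.0 / n)  # standard error of the difference
    return normal_cdf(abs(delta) / se - z_crit)

# Classic rule of thumb: ~64 per group gives ~80% power to detect a
# half-SD difference at alpha = 0.05; larger n pushes power higher.
power_64 = two_sample_power(delta=0.5, sigma=1.0, n=64)
```

Increasing `n` shrinks the standard error, which raises the computed power, matching the note that a larger sample size increases the power of a test.]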
What does p < 0.05 mean?
The observed difference is statistically significant: if no true difference existed between the populations, a difference at least this large would have occurred by chance alone less than 5% of the time
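[One way to make the "by chance alone" idea concrete is a permutation test: repeatedly relabel the pooled data at random and count how often a difference as large as the observed one arises. A minimal illustrative sketch (hypothetical data, fixed seed):

```python
import random

def permutation_p_value(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means.

    The p-value is the fraction of random relabelings of the pooled
    data whose mean difference is at least as extreme as the observed
    one -- a direct reading of 'occurred by chance alone'.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            count += 1
    return count / n_perm
```

With clearly separated hypothetical groups such as [1..5] vs. [6..10], the returned p-value falls well below 0.05; with identical groups it is 1.0.]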
Which type of study/trial compares a population with a disease with a similar population without the disease and looks for the frequency of a risk factor between the groups?
Case-control study
[Must be retrospective.]
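[Because a case-control study samples by disease status rather than by exposure, its standard effect measure is the odds ratio from the 2x2 table, OR = (a*d)/(b*c). A minimal sketch with hypothetical counts:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio from a case-control 2x2 table:
    OR = (exposed cases * unexposed controls)
         / (unexposed cases * exposed controls)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical table: 30/70 cases exposed/unexposed vs. 10/90 controls.
# OR ~ 3.9: cases had roughly 4x the odds of the risk-factor exposure.
ratio = odds_ratio(30, 70, 10, 90)
```
]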
Which type of error is defined as the incorrect rejection of the null hypothesis (falsely assumed there was a difference when no difference exists)?
Type I error
Which program/group seeks to collect outcome data to measure and improve surgical quality in the United States?
National Surgical Quality Improvement Program (NSQIP)
[Outcomes are reported as observed vs. expected ratios.]
[ACS NSQIP: Each hospital assigns a trained Surgical Clinical Reviewer (SCR) to collect preoperative through 30-day postoperative data on randomly assigned patients. The number and types of variables collected will differ from hospital to hospital, depending on the hospital’s size, patient population and quality improvement focus. The ACS provides SCR training, ongoing education opportunities and auditing to ensure data reliability. Data are entered online in a HIPAA-compliant, secure, web-based platform that can be accessed 24 hours a day. A surgeon champion assigned by each hospital leads and oversees program implementation and quality initiatives. Blinded, risk-adjusted information is shared with all hospitals, allowing them to nationally benchmark their complication rates and surgical outcomes. ACS also provides monthly conference calls, best practice guidelines and many other resources to help hospitals target problem areas and improve surgical outcomes.]
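[The observed-vs-expected ratio mentioned above is simple arithmetic once risk-adjusted predictions exist; a sketch with hypothetical numbers (the real NSQIP risk models are far more elaborate):

```python
def oe_ratio(observed_events, predicted_risks):
    """Observed-to-expected (O/E) ratio.

    observed_events -- complications actually recorded
    predicted_risks -- each patient's risk-adjusted predicted
                       probability of a complication; their sum is
                       the expected count.
    O/E > 1 suggests worse-than-expected outcomes; < 1, better.
    """
    expected = sum(predicted_risks)
    return observed_events / expected

# Hypothetical: 8 observed complications among 100 patients whose
# predicted risks sum to 10 -> O/E = 0.8 (better than expected).
ratio = oe_ratio(8, [0.1] * 100)
```
]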
Which type of study/trial combines data from different studies?
Meta-analysis
Which statistical tool compares the means between 2 independent groups?
Student’s t test
[Variable is quantitative.]
[Wikipedia: A t-test is any statistical hypothesis test in which the test statistic follows a Student’s t-distribution under the null hypothesis. It can be used to determine if two sets of data are significantly different from each other.
The most common use is a two-sample location test of the null hypothesis that the means of two populations are equal. All such tests are usually called Student’s t-tests, though strictly speaking that name should only be used if the variances of the two populations are also assumed to be equal.
The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, one from each of the two populations being compared. For example, suppose we are evaluating the effect of a medical treatment, and we enroll 100 subjects into our study, then randomly assign 50 subjects to the treatment group and 50 subjects to the control group. In this case, we have two independent samples and would use the unpaired form of the t-test. The randomization is not essential here – if we contacted 100 people by phone and obtained each person’s age and gender, and then used a two-sample t-test to see whether the mean ages differ by gender, this would also be an independent samples t-test, even though the data is observational.]
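[The pooled-variance (equal-variance) form of the statistic described above can be computed directly; an illustrative sketch using made-up data:

```python
import math
import statistics

def students_t(a, b):
    """Two-sample Student's t statistic (equal variances assumed):
    t = (mean_a - mean_b) / (s_p * sqrt(1/n_a + 1/n_b)),
    with n_a + n_b - 2 degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)    # pooled variance
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Made-up data: equal spread, means 3 vs. 5 -> t = -2.0 on 8 degrees of freedom.
t_stat, df = students_t([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
```

If the two groups' variances cannot be assumed equal, Welch's t-test (separate variances, adjusted degrees of freedom) is used instead.]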
What is the JCAHO definition of a sentinel event?
Unexpected occurrence involving death or serious injury, or the risk thereof
[Hospital undergoes root cause analysis to prevent and minimize future occurrences (eg wrong site surgery).]
Which statistical term is defined as the number of people with a disease in a population at a given point in time (eg the number of patients in the United States with colon cancer)?
Prevalence
[Longstanding disease increases prevalence.]
Which statistical term is defined as the degree to which repeated measurements under unchanged conditions show the same results?
Precision
What is variance?
Spread of data around the mean (the average squared deviation from the mean); its square root is the standard deviation
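[A quick numeric illustration using Python's standard library (hypothetical data; note that `pvariance` divides by n, while `variance` divides by n - 1 for a sample):

```python
import statistics

# Hypothetical measurements
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = statistics.mean(data)       # 5.0
var = statistics.pvariance(data)   # population variance (divide by n): 4.0
sd = statistics.pstdev(data)       # standard deviation = sqrt(variance): 2.0
# For a sample, statistics.variance(data) divides by n - 1 instead (32/7 here).
```
]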
Which statistical tool is used to compare quantitative variables (means) for more than 2 groups?
ANOVA
[Wikipedia: Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as “variation” among and between groups), developed by statistician and evolutionary biologist Ronald Fisher. In the ANOVA setting, the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups. ANOVAs are useful for comparing (testing) three or more means (groups or variables) for statistical significance. It is conceptually similar to multiple two-sample t-tests, but is more conservative (results in less type I error) and is therefore suited to a wide range of practical problems.]
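[The partition of variance described above reduces to a short computation: the between-group and within-group sums of squares, each divided by its degrees of freedom, give the F statistic. An illustrative sketch with hypothetical groups:

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: the observed variance is partitioned
    into a between-group and a within-group component, and F is the
    ratio of their mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Three hypothetical groups with means 2, 3, 4 -> F = 3.0 on (2, 6) df.
f_stat = one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
```

A large F (relative to the F distribution on k - 1 and n - k degrees of freedom) indicates that at least one group mean differs from the others.]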