PART 3 Flashcards

1
Q

Refers to the number of deaths occurring after an operation has been performed.

A

Post-Operative Death Rate

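A minimal sketch of how this rate is typically computed, assuming the usual hospital-statistics convention (deaths following surgery divided by patients operated on, times 100); the function name and figures below are hypothetical, not from the card.

# Post-operative death rate as a percentage (assumed formula:
# deaths following surgery / patients operated on x 100).
def post_operative_death_rate(deaths_after_surgery, patients_operated_on):
    return deaths_after_surgery / patients_operated_on * 100

# Hypothetical example: 4 post-operative deaths among 320 surgical patients.
print(post_operative_death_rate(4, 320))  # 1.25 (%)
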
2
Q

Measures the risk of death from the cause under study (e.g., cancer) in a defined population.

EX: Cancer mortality for Ruston, LA during 2021; breast cancer mortality for Ruston, LA.

The National Center for Health Statistics collects cancer statistics for regions and states and publishes them by age, sex, race, type of cancer, and site.

A

Cancer Mortality Rate

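A small sketch of the cause-specific calculation, assuming the common convention of expressing cancer mortality per 100,000 population; the death and population counts are made up.

# Cause-specific (cancer) mortality rate per 100,000 population (assumed convention).
def cancer_mortality_rate(cancer_deaths, population, per=100_000):
    return cancer_deaths / population * per

# Hypothetical figures for a defined population during one year.
print(cancer_mortality_rate(cancer_deaths=42, population=22_000))  # ~190.9 per 100,000
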
3
Q

Accuracy of the data – does the data measure what we want it to measure?

A

Validity

4
Q

An expert assessment to determine whether the metric measures what it is intended to measure. The most common type of validity and the weakest form.

For example, a researcher may create a questionnaire that aims to measure depression levels in individuals. A colleague may then look over the questions and deem the questionnaire valid purely at face value.

In other words, on its surface the questionnaire seems to be constructed in such a way that it’s a good tool to use to measure depression levels.

A

Face Validity

5
Q

Formally assesses whether all components of the metric are necessary to measure the quality. It evaluates the instrument's ability to measure the targeted construct.

  • More prevalent in research

For example, suppose a professor wants to test the overall knowledge of his students in the subject of elementary statistics. His test would have content validity if:

The test covers every topic of elementary statistics that he taught in the class.

The test does not cover unrelated topics such as history, economics, biology, etc.

A test lacks content validity if it doesn’t cover all aspects of a construct it sets out to measure or if it covers topics that are unrelated to the construct in any way.

A

Content Validity

6
Q

Typically measured for survey and assessment tools

EX: There is no objective, observable entity called “depression” that we can measure directly. But based on existing psychological research and theory, we can measure depression based on a collection of symptoms and indicators, such as low self-confidence and low energy levels.

A

Construct Validity

7
Q

Measure of the agreement of a test or data element with a known gold standard

EX: A job applicant takes a performance test during the interview process. If this test accurately predicts how well the employee will perform on the job, the test is said to have criterion validity.

A graduate student takes the GRE. The GRE has been shown as an effective tool (i.e. it has criterion validity) for predicting how well a student will perform in graduate studies.

A

Criterion Validity

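The card does not name a statistic, but criterion validity is often summarized as the correlation between the test and its gold standard; the sketch below assumes that approach with made-up scores.

import numpy as np

# Hypothetical data: a screening test vs. the criterion it should predict
# (e.g., later job-performance ratings).
test_scores = np.array([55, 62, 70, 78, 85, 91])
criterion = np.array([50, 60, 72, 75, 88, 95])

# Pearson correlation between test and criterion; closer to 1.0 = stronger criterion validity.
print(np.corrcoef(test_scores, criterion)[0, 1])
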
8
Q

Measure of the repeatability or reproducibility of the results of the measurement. Do I get the same measurement repeatedly?

High reliability – Number pops up the same every time
Low reliability – Number is different every time

A

Reliability

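One simple way to see the idea, assuming repeated measurements of the same quantity; smaller spread across repeats means higher reliability. The numbers are invented.

import statistics

# Repeated measurements of the same quantity by the same instrument.
high_reliability = [120, 120, 121, 120, 120]   # "the same every time"
low_reliability = [120, 111, 132, 118, 127]    # "different every time"

# Standard deviation across repeats: small = high reliability, large = low reliability.
print(statistics.stdev(high_reliability))
print(statistics.stdev(low_reliability))
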
9
Q

Measures the reproducibility of a data point between two raters

High inter-rater reliability – Data from the two raters is identical or nearly identical

Low inter-rater reliability – Data from the two raters differs substantially

A

Inter-Rater Reliability

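The card does not specify a statistic; a common choice for categorical ratings is Cohen's kappa, sketched here with hypothetical ratings using scikit-learn's cohen_kappa_score.

from sklearn.metrics import cohen_kappa_score

# Two raters independently coding the same 10 records.
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
rater_2 = ["yes", "yes", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]

# Kappa near 1.0 = high inter-rater reliability; near 0 = agreement no better than chance.
print(cohen_kappa_score(rater_1, rater_2))
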
10
Q

Measures the consistency of the same rater's results over a period of time

High intra-rater reliability – The rater still gets the same results months later

Low intra-rater reliability – The rater gets different results months later

A

Intra-Rater Reliability

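A quick test-retest sketch, assuming the same rater scores the same records at two time points and that the correlation between the two passes is used as the consistency measure; the scores are hypothetical.

import numpy as np

# The same rater scores the same 8 records twice, several months apart.
time_1 = np.array([3, 5, 4, 2, 5, 1, 4, 3])
time_2 = np.array([3, 5, 4, 2, 4, 1, 4, 3])  # nearly identical results

# Test-retest correlation close to 1.0 suggests high intra-rater reliability.
print(np.corrcoef(time_1, time_2)[0, 1])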