Ch. 11 Research Design & Execution Flashcards
Positive controls
any control condition in which a change in the dependent variable is expected; it verifies that the experimental setup can detect an effect when one is present.
e.g. in designing an assay to detect HIV, administering the test to a group of samples known to contain HIV would constitute a positive control.
Negative controls
any control condition in which no change in the dependent variable is expected; it verifies that the experimental setup does not produce an effect in the absence of the intervention.
e.g. in designing an assay to detect HIV, administering the test to a group of samples known NOT to contain HIV would constitute a negative control.
What are two ways we can assess causality in a basic science research experiment?
- utilizing controls (or standards) to demonstrate that the outcome does NOT occur in the absence of the intervention
- demonstrating an “if-then” relationship between the variables: if the intervention is applied, then the outcome occurs.
Type I Error
The incorrect rejection of a true null hypothesis. Also known as a “false positive.”
Type II Error
The failure to reject a false null hypothesis. Also known as a “false negative.”
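A minimal Python sketch (illustration only, not part of the original cards): simulating repeated two-group comparisons shows how both error types arise. The t-statistic helper, the rejection threshold of 2.0, and the effect size of 0.5 are assumptions chosen for the example.

```python
import random
import statistics

def t_statistic(a, b):
    """Welch-style t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def rejection_rate(true_difference, n=30, trials=2000, threshold=2.0):
    """Fraction of simulated experiments in which the null hypothesis is rejected."""
    rejections = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n)]
        treated = [random.gauss(true_difference, 1.0) for _ in range(n)]
        if abs(t_statistic(treated, control)) > threshold:
            rejections += 1
    return rejections / trials

# Null hypothesis is TRUE (no real difference): every rejection is a Type I error.
print("Type I error rate (false positives):", rejection_rate(true_difference=0.0))

# Null hypothesis is FALSE (real difference of 0.5): every non-rejection is a Type II error.
print("Type II error rate (false negatives):", 1 - rejection_rate(true_difference=0.5))
```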
Accuracy
Also called validity, this is the ability of an instrument to measure the true value of a quantity.
e.g. a person who weighs 170 lbs should get a reading of 170 lbs.
Alternatively, a scale that is accurate but imprecise might give readings scattered between 150 and 190 lbs, which still average to the true value of 170 lbs.
Precision
Also called reliability, this is the ability of an instrument to read consistently, or within a narrow range.
e.g. a scale that shows 130 lbs every time it weighs a person who actually weighs 170 lbs would be precise but not accurate.
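A minimal Python sketch (illustration only; the scale readings are simulated with assumed spreads): the mean of repeated readings reflects accuracy, while their spread reflects precision.

```python
import random
import statistics

TRUE_WEIGHT = 170.0  # lbs (assumed unit)

# Accurate but imprecise: readings centered on 170 lbs but widely scattered.
accurate_imprecise = [random.gauss(TRUE_WEIGHT, 10.0) for _ in range(100)]

# Precise but inaccurate: readings tightly clustered, but around 130 lbs.
precise_inaccurate = [random.gauss(130.0, 0.5) for _ in range(100)]

for name, readings in [("accurate but imprecise", accurate_imprecise),
                       ("precise but inaccurate", precise_inaccurate)]:
    bias = statistics.mean(readings) - TRUE_WEIGHT  # accuracy: how far the average is from the truth
    spread = statistics.stdev(readings)             # precision: how much the readings vary
    print(f"{name}: bias = {bias:+.1f} lb, spread = {spread:.1f} lb")
```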
Confounding variables
“third-party” variables (often hidden or unobserved) that are associated with the independent variable and also influence the dependent variable, creating an apparent relationship that is not causal.
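A minimal Python sketch (illustration only; variable names and effect sizes are assumed): a hidden confounder drives both the exposure and the outcome, producing a correlation between them even though neither causes the other.

```python
import random

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The confounder influences both variables; there is no direct causal link between them.
confounder = [random.gauss(0, 1) for _ in range(5000)]
exposure = [c + random.gauss(0, 1) for c in confounder]
outcome = [c + random.gauss(0, 1) for c in confounder]

# The apparent exposure-outcome correlation (about 0.5 here) is due entirely to the confounder.
print("Apparent correlation:", round(correlation(exposure, outcome), 2))
```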