Lectures 1-5 Flashcards
Scientific method:
Observation - hypothesis - experiment - conclusion - scientific theory
Variables:
Controlled, independent, dependent
Experimental controls:
Positive - a treatment known to give the expected result; shows the experiment can detect the result
Negative - a treatment known not to give the result; shows the result doesn't appear when it shouldn't
False results:
Positive - revealed by the negative control; the result appears when it shouldn't
Negative - revealed by the positive control; the expected result fails to appear
Concerns to consider:
Experimenter/subject biases, record of procedure, reproducibility, qualitative vs quantitative data, statistical significance, correlation vs causation
Blind studies - single vs double:
Single blind - the experimenter doesn't know which treatment each subject is receiving; rules out coaching.
Double blind - neither the experimenter nor the subject knows; rules out both coaching and the placebo effect.
Define: sensitivity
The minimum amount of something needed to record a positive result (the detection limit). Ex: a few grains of sand won't register on a bathroom scale.
Define: specificity
A positive result comes only from a truly positive sample. Ex: measuring UV rays - an instrument that responds to all light vs. one that measures only UVA and UVB.
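Worked example (my own sketch, not from the lectures): a toy Python model that treats sensitivity as a detection limit and specificity as responding only to the target signal. The function names, detection limit, and light values are made up for illustration.

```python
# Toy model of sensitivity vs. specificity (illustrative values only).

def scale_reading(true_mass_kg, detection_limit_kg=0.5):
    """A bathroom scale: anything below the detection limit reads as zero (a sensitivity problem)."""
    return true_mass_kg if true_mass_kg >= detection_limit_kg else 0.0

def broad_light_meter(uv, visible):
    """Responds to all light, so visible light gives a 'positive' for UV (poor specificity)."""
    return uv + visible

def uv_meter(uv, visible):
    """Responds only to UV, so a positive reading really means UV is present (good specificity)."""
    return uv

print(scale_reading(0.01))          # 0.0 -> a pinch of sand is below the scale's sensitivity
print(broad_light_meter(0.0, 5.0))  # 5.0 -> reads "positive" even with no UV at all
print(uv_meter(0.0, 5.0))           # 0.0 -> correctly negative when only visible light is present
```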
Define: random error
Error that varies unpredictably from one measurement to the next. Depends on lab technique.
Define: systematic error
Error that is consistently present and shifts every measurement in the same direction. Remember to calibrate!
Define: accuracy
How close a recorded value is to the true value
Define: precision
How reproducible repeated measurements of the same value are
pH meter:
The probe is selectively permeable to hydrogen ions. The signal produced by H+ is compared against standards of known pH. Stored in KCl solution or an acidic buffer.
Random/systematic error in using pH meters:
Random: not cleaning the probe properly (carrying over contaminants), not mixing the solution thoroughly, not reading the instrument properly (not letting the reading settle on a number)
Systematic error: not calibrating properly
Sensitivity and random error:
The more sensitive the instrument, the more random error shows up in its readings.
Accuracy/precision vs random/systematic:
Accuracy is more affected by systematic error; precision is more affected by random error.
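Worked example (my own sketch, not from the lectures): a toy Python simulation tying these ideas together - a constant offset (systematic error) hurts accuracy, while scatter (random error) hurts precision. The true value and error sizes are invented.

```python
import random
import statistics

random.seed(0)
true_value = 7.00  # e.g., the true pH of a buffer

def measure(n, random_sd, systematic_offset):
    """Simulate n readings with Gaussian random error plus a constant systematic offset."""
    return [true_value + systematic_offset + random.gauss(0, random_sd) for _ in range(n)]

noisy    = measure(100, random_sd=0.30, systematic_offset=0.0)   # large random error, no offset
miscalib = measure(100, random_sd=0.01, systematic_offset=0.50)  # tiny random error, uncalibrated meter

# Random error: readings scatter widely (poor precision) but average near the truth.
print(statistics.mean(noisy), statistics.stdev(noisy))
# Systematic error: readings are tightly grouped (good precision) but consistently off (poor accuracy).
print(statistics.mean(miscalib), statistics.stdev(miscalib))
```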