Exam 1 Lecture 3 Flashcards
the method of least squares
used to draw the “best” straight line through experimental data points that contain some scatter
- some points will lie above and some below the line; the fitted equation y=mx+b can then be used to quantify the unknown from its signal
- for most chemical analyses, the response obtained by a given lab procedure must be compared to the responses from known quantities (called standards); in this way the response from an unknown quantity can be interpreted
method of least squares steps
- prepare a calibration curve from known standards
- work in a region where the calibration curve is linear (usually)
method of least squares assumptions
- uncertainty in y values is much greater than uncertainty in x values
- uncertainties of all y values are similar
method of least squares equation
di^2 = (yi - mxi - b)^2 (where di is the vertical deviation of point i from the line y=mx+b)
method of least squares overview
- deviations from the line, di = yi - (mxi + b), are squared and summed
- why squared? direction doesn’t matter, and large deviations are weighted more heavily
- the sum is minimized by changing the slope and y-intercept of the fitted line
- i.e. we solve for the best m and b that give the minimal sum of squared differences
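To make the minimization concrete, here is a minimal Python sketch (the x and y data are invented for illustration, not lecture values) that computes the best-fit m and b from the closed-form least-squares formulas:

```python
# Minimal least-squares fit: find the m and b that minimize sum(di^2),
# where di = yi - (m*xi + b). All data values below are made up.
x = [0.0, 1.0, 2.0, 3.0, 4.0]          # known quantities (e.g., [analyte])
y = [0.04, 1.02, 1.97, 3.05, 3.98]     # measured responses with scatter

n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n

# Closed-form solution of the minimization:
#   m = sum((xi - x_mean)(yi - y_mean)) / sum((xi - x_mean)^2)
#   b = y_mean - m*x_mean
m = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y)) \
    / sum((xi - x_mean) ** 2 for xi in x)
b = y_mean - m * x_mean

print(f"slope m = {m:.4f}, intercept b = {b:.4f}")
```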
calibration curves
shows the response of an analytical method to known quantities of analyte
- standard solutions: contain known concentrations of analyte
- blank solutions: contain all reagents and solvents used in the analysis but contain no deliberately added analyte
steps to constructing calibration curves
- prepare standards in the relevant range
- subtract blank measurements (corrected response)
- graph corrected response vs. [analyte] (get m and b with least squares analysis!)
- unknown analysis: run the sample, subtract the blank (to get y), and solve for x using y=mx+b
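Putting the steps together, a hedged Python sketch of the workflow (the blank response, standard concentrations, raw responses, and the fit_line helper are all hypothetical):

```python
# Calibration-curve workflow sketch: blank-correct, fit, solve for the unknown.
# Every number here is invented for illustration.

def fit_line(x, y):
    """Least-squares slope m and intercept b (same formulas as above)."""
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    m = sum((a - xm) * (c - ym) for a, c in zip(x, y)) / sum((a - xm) ** 2 for a in x)
    return m, ym - m * xm

blank_response = 0.05                  # mean response of the blank solutions
standards = [1.0, 2.0, 4.0, 8.0]       # [analyte] in the standard solutions
responses = [0.26, 0.47, 0.88, 1.73]   # raw instrument responses

# subtract the blank to get the corrected response for each standard
corrected = [r - blank_response for r in responses]

# fit corrected response vs. [analyte] by least squares
m, b = fit_line(standards, corrected)

# run the unknown, subtract the blank, and solve y = mx + b for x
unknown_response = 0.95
y = unknown_response - blank_response
x_unknown = (y - b) / m
print(f"[analyte] in unknown = {x_unknown:.2f}")
```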
linear range (calibration curve)
concentration range over which calibration curve is linear
dynamic range (calibration curve)
concentration range over which there is measurable response
detection limit (lower limit of detection)
smallest quantity of analyte that is significantly different from the blank
quantitation limit (lower limit of quantitation)
smallest quantity of analyte that can be measured with reasonable accuracy
what is the minimum detectable signal, y of dl, defined as?
signal detection limit: y of dl = y of blank + 3s
(where s is the standard deviation of replicate measurements of a low-concentration sample)
limit of detection
the corrected signal, ysample-yblank, is proportional to the sample concentration:
calibration line: ysample - yblank = m * [sample concentration] (where ysample is the signal observed for the sample and m is the slope of the linear calibration curve)
- the minimum detectable concentration is obtained by substituting y of dl for ysample; since y of dl - yblank = 3s, solving the calibration line for concentration gives the detection limit
- detection limit: minimum detectable concentration = 3s/m
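A short Python sketch of these detection-limit formulas (the replicate signals, blank signal, and calibration slope are assumed values, not lecture data):

```python
import statistics

# Assumed replicate measurements of a low-concentration sample
replicates = [0.112, 0.108, 0.115, 0.110, 0.113]
y_blank = 0.050   # assumed mean blank signal
m = 0.021         # assumed slope of the linear calibration curve

s = statistics.stdev(replicates)   # standard deviation s of the replicates

# signal detection limit: y of dl = y of blank + 3s
y_dl = y_blank + 3 * s

# detection limit: minimum detectable concentration = 3s/m
c_dl = 3 * s / m

print(f"y_dl = {y_dl:.4f}, detection limit = {c_dl:.3f}")
```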
reporting limit
the concentration below which regulations say an analyte is “not detected”
- it does not mean that the analyte is not observed; it means the analyte is below the prescribed level
- the reporting limit is set at least 5 to 10 times higher than the detection limit
Matrix effect (Calibration challenges)
change in analytical sensitivity caused by something in the sample other than the analyte