Lab Skills, Sig Figs, and Uncertainties Flashcards
Reporting w/ uncertainty
Round uncertainties to one sig fig and quote the value to the same decimal place (the last sig fig of the value should be in the same place as the uncertainty [the uncertainty determines the # of sig figs in the answer]).
Round % uncertainties to one sig fig.
Final answer should respect the accuracy of the values used.
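A minimal Python sketch of this rounding rule (the function name and numbers are invented for illustration):
```python
import math

def report(value, uncertainty):
    # Round the uncertainty to one sig fig, then round the value to the same decimal place.
    # (Simple sketch: ignores edge cases like 0.096 rounding up to 0.1.)
    place = math.floor(math.log10(abs(uncertainty)))  # decimal place of the uncertainty's leading digit
    return round(value, -place), round(uncertainty, -place)

print(report(12.348, 0.0237))  # -> (12.35, 0.02): value quoted to the same place as the uncertainty
```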
Sig figs (ambiguous [?] cases)
A quoted value is believed to that many places, so quote only as many figures as you actually believe.
ex.
123000: depends on how the measurement was made (you can resolve the ambiguity w/ scientific notation: 1.23000 x 10^5 is quoted to six sig figs [you mean it to be six sig figs] and 1.23 x 10^5 is quoted to three sig figs)
If you meant it to be six sig figs, you can put a decimal point at the end (ex. 123000.)
Multiplying/Dividing and adding/subtracting
When multiplying/dividing, the answer is quoted to the lowest # of sig figs in the problem; when adding/subtracting, the answer is quoted to the fewest digits after the decimal of any value in the problem.
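Worked examples (numbers invented for illustration): 2.5 x 3.42 = 8.55, quoted as 8.6 (two sig figs, the lowest in the problem); 12.11 + 0.3 = 12.41, quoted as 12.4 (one decimal place, the fewest in the problem).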
Evaluation
Need to discuss data in terms of both accuracy and precision (not interchangeable).
If the % error is greater than the % uncertainty, random error from the uncertainty alone cannot explain the difference and some other (systematic) errors must be present; if the % error is smaller than the % uncertainty, the discrepancy can be explained by random error alone (more random than systematic).
Were duplicate readings not taken? Was the instrument not calibrated? Was the sample too small (ex. temp rises/falls rapidly, making it difficult to tell the temp at which it melted)?
NOTE
- pH has no unit
- Glassware usually has the uncertainty written on it, so you can use that (?)
- So a beaker is not great to use for measuring volumes (never?)
- When recording raw data, estimated uncertainties should be indicated for all measurements
- Pay attention to the origin (should the graph include it or not?)
- Understanding the nature of your errors tells you what to say (?)
- Include units throughout?
- If you don’t use precise equipment, you will have to quote answers to a small # of sig figs
- If the answer isn’t final, the uncertainty can be left in % form (convert it back for the final answer)
- Can’t use the three-decimal-place balance for higher masses (just a [our] lab thing?)
Accuracy
Refers to the closeness of a measured value to a standard or known value: use % error to indicate how accurate data values are (absolute value of the difference between the theoretical and experimental amt, divided by the theoretical amt and multiplied by 100).
If there are multiple trials, base the % error on the average.
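A quick Python sketch of the % error calculation (the trial values and accepted value are hypothetical):
```python
def percent_error(theoretical, experimental):
    # |theoretical - experimental| / theoretical * 100
    return abs(theoretical - experimental) / theoretical * 100

trials = [4.92, 5.07, 5.11]          # hypothetical repeat measurements
average = sum(trials) / len(trials)  # with multiple trials, compare the average
print(percent_error(5.25, average))  # ~4.1% error vs. the accepted value of 5.25
```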
Precision
Refers to the closeness of two or more measurements to each other: it is independent of accuracy (data can be precise but not accurate).
There are different measures of precision: equipment precision (comes from the uncertainty in the measured values and is determined by propagating uncertainties) and precision of the final, calculated values (would include equipment uncertainty, but could also include other factors).
You could calculate the range of your data and divide by two, or express that half-range as a percent of your average (depends on the situation).
*Use (max - min)/2: if it is a low percentage of the average, the data is precise (see the sketch below).
Measuring to four decimal places is more precise than measuring to two.
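A small Python sketch of the half-range precision measure described above (same hypothetical trials as the % error example):
```python
def percent_precision(values):
    # Half the range of the repeat trials, expressed as a % of their average.
    half_range = (max(values) - min(values)) / 2
    average = sum(values) / len(values)
    return half_range / average * 100

print(percent_precision([4.92, 5.07, 5.11]))  # ~1.9%: a low percentage suggests precise data
```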
Measurement errors
All measurements always have some uncertainty called the error in the measurement, and measurement errors fall into two categories (understanding the nature of your errors tells you what to say): systematic errors and random errors.
Systematic error
Arises from a problem in the experimental set-up that results in the measured values always deviating from the “true” value in the same direction (measured value consistently too small/large).
Can eliminate by pre-calibrating against a known, trusted standard.
ex. measuring devices out of calibration, poor insulation in calorimetry, etc.
Random error
Errors resulting in the fluctuation of measurements around the avg (there is an equal probability that a measurement is above or below the true value).
Generally result from the precision of a measuring device.
Can be reduced w/ the use of more precise measuring equipment or its effect minimized through repeat measurements so that the random errors cancel out.
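A toy Python simulation (invented numbers) illustrating why averaging repeat measurements reduces the effect of random error:
```python
import random

true_value = 25.0
# Fifty readings, each with random error (normally distributed, sd 0.5, no systematic offset)
readings = [true_value + random.gauss(0, 0.5) for _ in range(50)]
print(readings[0])                    # a single reading can easily be off by ~0.5
print(sum(readings) / len(readings))  # the average typically lands much closer to 25.0
```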
Uncertainty in measuring devices
In a scale measuring device (anything that’s not digital [anything w/ gradations]), uncertainty is equal to the smallest increment divided by two; in a digital measuring device, equal to the smallest increment.
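For example (instrument values invented): a thermometer graduated in 1 °C divisions would carry +/- 0.5 °C, while a digital balance reading to 0.01 g would carry +/- 0.01 g.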
When not confident abt uncertainty in a measurement
Can make uncertainty bigger if you’re not confident—can overlay some logic (ex. parallax problems in reading a burette scale, random fluctuation, difficulties in knowing just when a color change has been completed in a rate experiment/titration, accounting for reflexes when using a timer [may become +/- 0.1 s from +/- 0.01 s]).
Indications of systematic error in data
If all values are similarly higher (or lower) than the theoretical value, this indicates systematic error (random errors are always present but can be reduced w/ repeated readings): ex. forgetting to zero the balance.
Good tables
- Table titles (which say what’s in the table [ex. “Temperature and volume data”]) go above the table, and columns are labeled with units and uncertainties (data in the table has its last significant figure in the same place as the uncertainty); example below
- The independent variable should be in the first column and the table should go downwards (not side-to-side).
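A hypothetical table following these conventions (values invented):

Temperature and volume data
Temperature / °C (+/- 0.5)    Volume / cm^3 (+/- 0.2)
20.0                          48.6
30.0                          50.3
40.0                          52.1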
Good graphs
- Independent on x (and dependent on y)
- Both axes are labeled (w/ units)
- Title (figure titles should go below the graph [a good way to title it is “The Effect of your independent variable on your dependent” or “y vs. x”])
- Draw a best-fit line (straight or curved, with roughly half of the points above and half below) through your data (but don’t connect the points!), and include error bars (see the sketch below)
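A minimal matplotlib sketch (hypothetical data, assuming numpy and matplotlib are available) of a graph following these conventions:
```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: independent variable on x, dependent on y, with uncertainty in y
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # e.g. temperature / °C
y = np.array([48.6, 50.3, 52.1, 53.8, 55.9])   # e.g. volume / cm^3
y_err = 0.2                                    # propagated uncertainty in y

slope, intercept = np.polyfit(x, y, 1)         # straight best-fit line (don't connect the points)

plt.errorbar(x, y, yerr=y_err, fmt='o', capsize=3, label='data')
plt.plot(x, slope * x + intercept, label='best fit')
plt.xlabel('Temperature / °C')
plt.ylabel('Volume / cm^3')
plt.title('The Effect of Temperature on Volume')  # in a lab report the figure caption goes below the figure
plt.legend()
plt.show()
```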