Principles Of Analysis Flashcards
Define the term ‘definitive method’
A method of exceptional scientific accuracy, suitable for the certification of reference materials, e.g. GC-MS.
Define the term ‘reference method’
A method demonstrating small inaccuracy against the definitive method. This is the method against which the routine method is compared, to make sure it is working accurately enough, e.g. the Abell-Kendall method.
Define the term ‘routine method’
A method deemed sufficiently accurate for routine use, checked against the reference method and standard reference materials (SRMs). This is the method used for everyday laboratory work, e.g. an enzymatic method (e.g. Beckman).
In terms of calibration what is the definition of a primary standard?
A substance of known chemical composition and high purity that can be accurately quantified and used for assigning values to materials and calibrating apparatus.
In terms of calibration what is the definition of a standard reference material (SRM)?
Reference material issued by an institute (e.g. NIST), with values certified by a reference method, which establishes traceability.
In terms of calibration what is the definition of a secondary standard?
A commercially produced standard for routine use calibrated against a primary standard or reference material.
In terms of calibration what is the definition of an internal standard?
A substance not normally present in the sample, added to both standards and samples to correct for variation in conditions between different runs, e.g. in HPLC, GC and MS. The internal standard is also used to verify instrument response and retention-time stability.
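A minimal sketch (Python; the peak areas, concentration and single-point calibration are hypothetical, for illustration only) of the analyte/internal-standard ratio calculation used in chromatographic quantitation:

```python
# Hypothetical peak areas -- illustrative values only.
# The same amount of internal standard (IS) is added to both standard and
# sample, so the analyte/IS response ratio corrects for run-to-run variation
# (injection volume, detector drift, extraction losses).
std_conc = 10.0               # known concentration of the calibrator
std_analyte_area = 8000.0     # analyte peak area in the calibrator
std_is_area = 10000.0         # internal standard peak area in the calibrator

sample_analyte_area = 6000.0  # analyte peak area in the patient sample
sample_is_area = 9500.0       # internal standard peak area in the sample

# Response factor from the calibrator: ratio per unit concentration.
response_factor = (std_analyte_area / std_is_area) / std_conc

# Sample concentration from its own ratio and the response factor.
sample_conc = (sample_analyte_area / sample_is_area) / response_factor
print(f"Sample concentration: {sample_conc:.2f}")
```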
Define the term 'calibration material'
A calibration material is:
- prepared from a pure substance
- stable and homogeneous
- of a matrix similar to the assay matrix, e.g. serum
- free of chemical interferences
Define the term 'traceability'
An unbroken chain of comparisons of measurements leading to a reference value
e.g. cholesterol (Beckman enzymatic method):
- Calibration: standards traceable to NIST SRM 909b level 1 (ID-MS), the standard reference material
- Method: certified against the CDC reference method (Abell-Kendall)
CE Mark
A mandatory conformity mark on products sold within Europe, e.g. under European Directive 98/79/EC on in vitro diagnostic medical devices.
What is the difference between verification and validation?
Verification is the confirmation, through provision of objective evidence, that specified requirements have been fulfilled. It is performed only upon introduction of a new commercial assay, in order to verify that the manufacturer's claims are correct.
Validation is the confirmation, through provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. A non-commercially produced method, or a commercially produced method that you are changing in house, must be validated.
VERIFICATION IS FOR A COMMERCIAL METHOD AND VALIDATION IS FOR YOUR OWN METHOD.
They have both shared and distinct characteristics that must be evaluated for quantitative assays.
Both must be evaluated for trueness, accuracy, precision, measurement uncertainty, limit of detection and limit of quantitation. However, only validation need evaluate analytical specificity and sensitivity, measuring intervals, and diagnostic sensitivity and specificity.
For the verification of qualitative assays, only precision, analytical specificity and the detection limit need be evaluated.
What is trueness?
Trueness is the closeness of agreement between the measured and true values; the difference between them is expressed as bias (positive or negative). It is assessed by repeat analysis of multiple levels of certified reference materials, with the results compared against the assigned values; by recovery experiments; by comparison of results from "fresh" EQA material whose values have already been nationally determined; and finally by correlation with a current/accepted method using patient samples (comparability).
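A minimal sketch (Python; the assigned value and repeat results are made-up illustrative numbers) of how bias and recovery could be calculated against a certified reference material:

```python
import statistics

# Hypothetical repeat results (mmol/L) for one level of a certified
# reference material with an assigned value of 5.00 mmol/L.
assigned_value = 5.00
results = [5.12, 5.08, 4.98, 5.15, 5.05, 5.10, 5.02, 5.09]

mean_result = statistics.mean(results)
bias = mean_result - assigned_value           # absolute bias
percent_bias = 100 * bias / assigned_value    # bias relative to assigned value
percent_recovery = 100 * mean_result / assigned_value

print(f"Mean {mean_result:.3f}, bias {bias:+.3f} ({percent_bias:+.1f}%), "
      f"recovery {percent_recovery:.1f}%")
```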
What is accuracy?
Accuracy is the closeness of agreement between a measured quantity value and a true quantity value of a measurand. When applied to a set of test results, it involves a combination of random components and a common systematic error (or bias) component.
Total error = bias ± 2SD (for a 95% confidence interval)
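A minimal sketch (Python; the bias and SD figures are assumed, not from any real assay) of the total error calculation above:

```python
# Hypothetical performance figures from trueness and precision studies.
bias = 0.08   # systematic error (mmol/L)
sd = 0.10     # standard deviation (mmol/L)

# Total error at ~95% confidence: bias plus two standard deviations.
total_error = abs(bias) + 2 * sd
print(f"Total error: {total_error:.2f} mmol/L")
```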
What is precision?
There are two forms of precision: repeatability and intermediate precision.
Repeatability (formerly known as intra-assay or within-batch precision) is measured by analysing the same sample multiple times in one run (a minimum of 20 results obtained from repeat analysis of IQC and patient samples on the same run).
Intermediate precision (formerly known as inter-assay or between-batch precision) is measured by analysing the same sample over consecutive runs (a minimum of 20 results obtained from repeat analysis of IQC and patient samples from runs on consecutive days).
It is expressed as a coefficient of variation (CV), as in the sketch after the notes below:
CV = (SD / M) × 100%
where SD is the standard deviation (SD = √variance) and M is the sample mean.
Notes:
- Use 2 or 3 levels around important cut-off values (an assay performs best, i.e. most precisely, in the middle of its range rather than at the extremes)
- Express as %CV
- Ideal CV <5% and no worse than 10% except at low levels, where you may accept up to 20%
- Check against manufacturer’s values
- Analyte, concentration and technology dependent
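A minimal sketch (Python; the IQC results are invented, and far fewer than the minimum of 20 a real study needs) of the CV calculation for repeatability:

```python
import statistics

# Hypothetical IQC results from a single run (repeatability).
# A real precision study would use a minimum of 20 results.
same_run = [4.9, 5.1, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.1]

def cv_percent(values):
    """Coefficient of variation: (SD / mean) x 100%."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

print(f"Repeatability CV: {cv_percent(same_run):.1f}%")
# Applying the same function to results from runs on consecutive days
# gives the intermediate (between-batch) precision.
```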
What is the measurement of uncertainty?
ISO 15189 (3.17): The uncertainty of measurement is a parameter associated with the result of a measurement, that characterises the dispersion of the values that could be reasonably attributed to the measurand.
The basic parameter of measurement uncertainty is the standard deviation. A result is reported as the best estimate of the "true value" ± the measurement uncertainty (2 × SD from intermediate precision).
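A minimal sketch (Python; the between-run results are invented) of reporting a result with its expanded measurement uncertainty:

```python
import statistics

# Hypothetical repeat results from runs on different days
# (intermediate precision).
between_runs = [5.0, 5.2, 4.9, 5.1, 5.0, 4.8, 5.1, 5.2, 5.0, 4.9]

mean = statistics.mean(between_runs)
u = statistics.stdev(between_runs)   # standard uncertainty = 1 SD
expanded = 2 * u                     # expanded uncertainty (k = 2, ~95%)

print(f"Best estimate of the 'true value': {mean:.2f} +/- {expanded:.2f}")
```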
What is analytical specificity?
A measure of a method's ability to determine only the analyte of interest, i.e. its freedom from cross-reactivity, e.g. an assay that picks up two analytes with similar structures (such as cortisol and prednisolone) rather than just the one of interest.