Principles Of Analysis Flashcards

1
Q

Define the term ‘definitive method’

A

A method of exceptional scientific accuracy, suitable for certifying reference materials, e.g. GC-MS.

2
Q

Define the term ‘reference method’

A

A method demonstrating small, known inaccuracy against the definitive method; the method against which the routine method is compared to confirm it is working accurately enough. E.g. the Abell-Kendall method.

3
Q

Define the term ‘routine method’

A

Method deemed sufficiently accurate for routine use, as judged against the reference method and standard reference materials (SRMs). The method used for everyday laboratory work, e.g. an enzymatic method (e.g. Beckman).

4
Q

In terms of calibration what is the definition of a primary standard?

A

A substance of known chemical composition and high purity that can be accurately quantified and used for assigning values to materials and calibrating apparatus.

5
Q

In terms of calibration what is the definition of a standard reference material (SRM)?

A

Reference material issued by an institute whose values are certified by a reference method which establishes traceability.

6
Q

In terms of calibration what is the definition of a secondary standard?

A

A commercially produced standard for routine use calibrated against a primary standard or reference material.

7
Q

In terms of calibration what is the definition of an internal standard?

A

A substance not normally present in the sample, added to both standards and samples to correct for variation in conditions between different sample runs, e.g. in HPLC, GC or MS. The internal standard is also used to verify instrument response and retention-time stability.

8
Q

Define a calibration material?

A

  • Prepared from a pure substance
  • Stable and homogeneous material
  • Matrix similar to the assay matrix, e.g. serum
  • No chemical interferences

9
Q

Define traceability?

A

An unbroken chain of comparisons of measurements leading back to a reference value.

e.g. cholesterol (Beckman enzymatic method):

  • Calibration: standards traceable to NIST SRM 909b level 1 (ID/MS), the standard reference material
  • Method: certified against the CDC reference method (Abell-Kendall)
  • CE mark: mandatory conformity mark on products within Europe, e.g. under European Directive 98/79/EC on in vitro diagnostic medical devices

10
Q

What is the difference between verification and validation?

A

Verification is the confirmation, through provision of objective evidence, that specified requirements have been fulfilled. It is performed only on introduction of a new assay, in order to verify that the manufacturer's claims are correct.

Validation is the confirmation, through provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled. A non-commercially produced method, or a commercially produced method that you are changing in house, must be validated.

VERIFICATION IS FOR A COMMERCIAL METHOD AND VALIDATION IS FOR YOUR OWN METHOD.

They have both shared and distinct characteristics that must be evaluated for quantitative assays.
Both must be evaluated for trueness, accuracy, precision, measurement uncertainty, detection limit and quantification limit. However, only validation need evaluate analytical specificity and sensitivity, measuring intervals, and diagnostic sensitivity and specificity.

For the verification of qualitative assays, only precision, analytical specificity and the detection limit need be evaluated.

11
Q

What is trueness?

A

Trueness is the difference between the true and the measured value, expressed as bias (positive or negative). It is assessed by repeat analysis of multiple levels of certified reference materials, with the results compared against the assigned values; by recovery experiments; by comparison of results from "fresh" EQA material with nationally determined target values; and finally by correlation with a current/accepted method using patient samples (comparability).

12
Q

What is accuracy?

A

Accuracy is the closeness of agreement between a measured quantity value and a true quantity value of a measurand. When applied to a set of test results, it involves a combination of random components and a common systematic error (or bias) component.

Total Error = bias +/- 2SD for a 95% CI
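A minimal Python sketch of this calculation, using the common form TE = |bias| + 2 × SD; the replicate results and the assigned value of 5.0 mmol/L are hypothetical:

```python
# Total error sketch: TE = |bias| + 2 x SD (~95% coverage).
# Replicates and the assigned value are hypothetical illustrations.
from statistics import mean, stdev

def total_error(results, assigned_value):
    """Return (bias, total error) for replicate results vs an assigned value."""
    bias = mean(results) - assigned_value
    return bias, abs(bias) + 2 * stdev(results)

# e.g. ten cholesterol replicates (mmol/L) against an assigned value of 5.0
replicates = [5.1, 5.0, 5.2, 4.9, 5.1, 5.0, 5.2, 5.1, 4.9, 5.0]
bias, te = total_error(replicates, 5.0)
```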

13
Q

What is precision?

A

There are two forms of precision: repeatability and intermediate precision.
Repeatability (formerly known as intra-assay or within-batch precision) is measured by analysing the same sample multiple times in one run (a minimum of 20 results obtained from repeat analysis of IQC and patient samples on the same run).
Intermediate precision (formerly known as inter-assay or between-batch precision) is measured by analysing the same sample over consecutive runs (a minimum of 20 results obtained from repeat analysis of IQC and patient samples from runs on different days).

It is expressed as a coefficient of variation (CV):
CV = (SD / M) × 100%,
where SD is the standard deviation (SD = √variance) and M is the sample mean.

Notes:

  • Use 2 or 3 levels around important cut-off values (an assay performs best (most precisely) in the middle of its range, rather than at the extremes)
  • Express as %CV
  • Ideal CV <5% and no worse than 10% except at low levels, where you may accept up to 20%
  • Check against manufacturer’s values
  • Analyte, concentration and technology dependent
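The CV calculation above can be sketched in Python; the IQC values are hypothetical:

```python
# CV sketch: (SD / mean) x 100%. The IQC results are made up for illustration.
from statistics import mean, stdev

def cv_percent(results):
    """Coefficient of variation as a percentage."""
    return stdev(results) / mean(results) * 100

iqc = [4.0, 4.1, 3.9, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.0]
cv = cv_percent(iqc)  # comfortably under the ideal 5% for this data
```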
14
Q

What is the measurement of uncertainty?

A

ISO 15189 (3.17): The uncertainty of measurement is a parameter associated with the result of a measurement, that characterises the dispersion of the values that could be reasonably attributed to the measurand.

The basic parameter of measurement uncertainty is the standard deviation:
best estimate of the “true value” ± measurement uncertainty (2 × SD from intermediate precision)
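A minimal sketch of the expanded-uncertainty estimate described above, assuming a coverage factor of 2 and hypothetical between-day IQC results:

```python
# Expanded uncertainty sketch: U = k x SD of intermediate-precision results,
# with k = 2 giving roughly 95% coverage. The data are hypothetical.
from statistics import stdev

def expanded_uncertainty(results, k=2):
    """Expanded measurement uncertainty from replicate results."""
    return k * stdev(results)

between_day = [5.2, 5.0, 5.3, 4.9, 5.1, 5.0, 5.2, 5.1, 4.8, 5.4]
u = expanded_uncertainty(between_day)  # report as: result +/- u
```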

15
Q

What is analytical specificity?

A

Measure of the ability of a method to determine only the analyte of interest, i.e. freedom from cross-reactivity. E.g. an assay that picks up two analytes with similar structures rather than just the one of interest (e.g. cortisol and prednisolone).

16
Q

What is interference?

A

The effect on the analytical measurement of a particular component by a second component.
It can be both analytical and physiological.
Analytical:
Cross-reactivity with other compounds that occurs in the methodology, e.g. an increase in absorbance due to lipaemia; this directly affects the assay.
Physiological:
Factors that affect the concentration of the analyte in vivo, for example drugs, e.g. prolactin is increased by antipsychotics.

17
Q

Define the different limits associated with the sensitivity of an assay (limit of detection, limit of blank and limit of quantification).

A

Limit of Blank (LOB)
Highest measurement result that is likely to be observed (with a stated probability) for a blank sample.

Limit of Detection (LOD)
Lowest amount of analyte in a sample that can be detected with (stated) probability, although perhaps not quantified as an exact value. In terms of the sensitivity, it is a sample blank plus 3SD (may use spiked blanks), but can also be assessed based on signal-to-noise ratio.

Limit of Quantification (LOQ)
Lowest amount of analyte in a sample that can be quantitatively determined with stated acceptable precision and trueness under stated experimental conditions.
Functional sensitivity is the analyte concentration at which the method CV = 20% (or some other pre-determined CV). It is determined by measuring the analyte at increasing concentrations, starting at the LOD, until a CV of 20% (or the predetermined acceptable CV) is reached; alternatively, 3 × LOD can be used.

THE LOB IS THE HIGHEST VALUE EXPECTED FROM A BLANK, THE LOD IS THE LOWEST DETECTABLE CONCENTRATION, AND THE LOQ IS THE LOWEST VALUE RELIABLY MEASURABLE IN A PATIENT SAMPLE.
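The blank-plus-3SD rule for the LOD can be sketched in Python; the blank readings are hypothetical:

```python
# LOD sketch using the simple rule above: mean blank + 3 x SD(blank).
# Blank readings are hypothetical.
from statistics import mean, stdev

def limit_of_detection(blank_results):
    """Lowest detectable concentration from replicate blank measurements."""
    return mean(blank_results) + 3 * stdev(blank_results)

blanks = [0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.03, 0.02, 0.01, 0.02]
lod = limit_of_detection(blanks)
```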

18
Q

What is the measuring interval?

A

The interval between the upper and lower concentration of analyte in the sample for which it has been demonstrated that the method has suitable levels of precision, accuracy and linearity

Contains but is broader than the reference range.

It is the range between the LOQ and the highest concentration studied during verification/ validation.
Determined by linearity, which is an assessment of the agreement between measured results and known standard values over the full range of expected values. Usually performed by comparing results from a series of dilutions of a known standard across the assay range.

19
Q

How are reference ranges determined?

A

By measuring the concentration of a particular analyte in samples from a normal healthy population (must be n > 120), then plotting a frequency distribution. A Gaussian distribution is usually observed (though the data may be skewed, or may not conform at all); the reference range is the central 95% of values, i.e. a 95% interval, hence the mean ± 2 SD.

There are three calculations that can be used to calculate the reference range:

Parametric method (most common, performed on a group of healthy people):
- Assume a Gaussian distribution of the data or of transformed (log) data.
- Determine reference limits (percentiles) as ± 2 SD from the mean.

Non-parametric method (no assumption of a Gaussian distribution; common for reference ranges quoted as, e.g., >5):
- Results ranked and cut-off taken at x% of values in each tail.

Target driven reference range (e.g. cholesterol):

  • The reference range cannot be derived from a “healthy” population, because the “normal” population itself does not have healthy levels
  • Instead it is referenced against JBS2 (Joint British Societies) targets for the treatment of hyperlipidaemia
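The parametric and non-parametric calculations above can be sketched in Python; the percentile cut-offs default to the central 95%, and the simple ranking rule used here is one of several conventions:

```python
# Sketch of the two distribution-based reference-range calculations.
# The ranking convention below is a simple illustration, not a standard.
from statistics import mean, stdev

def parametric_interval(values):
    """Assume a Gaussian distribution: limits at mean +/- 2 SD."""
    m, sd = mean(values), stdev(values)
    return m - 2 * sd, m + 2 * sd

def nonparametric_interval(values, lower_pct=2.5, upper_pct=97.5):
    """Rank the results and cut off at the chosen percentile in each tail."""
    ranked = sorted(values)
    n = len(ranked)
    lo = ranked[max(0, round(n * lower_pct / 100) - 1)]
    hi = ranked[min(n - 1, round(n * upper_pct / 100) - 1)]
    return lo, hi
```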

There are exclusion criteria for selecting a “healthy population” (e.g. obesity); reference ranges can be sub-grouped if the analyte is affected by, for example, gender or age; and reference ranges can be influenced by physiological factors such as time of day/month/year.

Despite this, biological variation will still exist between members of a healthy population used to derive a reference range. Biological variation is introduced by physiological factors, diet, fluid intake and exercise. The total variation can be calculated as:

CV(total) = √(CV²analytical + CV²biological)
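The combined-variation formula is a quadrature sum, e.g.:

```python
# Combine analytical and biological CV in quadrature.
import math

def cv_total(cv_analytical, cv_biological):
    """Total CV = sqrt(CVa^2 + CVb^2); a 3%/4% pair combines to 5%."""
    return math.sqrt(cv_analytical ** 2 + cv_biological ** 2)
```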

20
Q

Define sensitivity and specificity.

A

Sensitivity:
Positivity in disease: the proportion of people who truly have the disease that the assay correctly identifies as positive. Sensitivity = TP/(TP+FN)

Specificity:
Negativity in health: the proportion of people who truly do not have the disease that the assay correctly identifies as negative. Specificity = TN/(TN+FP)

21
Q

Define positive and negative predictive values.

A

Positive and negative predictive values measure the ability of the test to correctly assign an individual to the diseased or non-diseased group, taking the false positives and negatives into account. The PPV is the probability that a positive result is a true positive; the NPV is the probability that a negative result is a true negative.

PPV = TP/(TP+FP)
NPV = TN/(TN+FN)
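These four metrics can be computed together from the 2 × 2 counts; the counts below are made up for illustration:

```python
# Diagnostic performance from a 2x2 table; the example counts are hypothetical.
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),  # positivity in disease
        "specificity": tn / (tn + fp),  # negativity in health
        "ppv": tp / (tp + fp),          # P(disease | positive result)
        "npv": tn / (tn + fn),          # P(no disease | negative result)
    }

m = diagnostic_metrics(tp=80, fp=10, tn=90, fn=20)
```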