Abstract:
|
The concept of analytical method detection capability (MDC) has been adopted for several decades as the basis for determining measurement reporting limits. One outcome has been inadequate control of method performance at low levels. As a result, the frequently poor comparability of such data among laboratories has led to the introduction of progressively higher data reporting limits and the consequent loss or degradation of potentially valuable low-level environmental data. This paper proposes that the statistical process behind this concept of detection has been misunderstood and misapplied. It reviews the application of the 'null' hypothesis, in particular the requirement that this hypothesis be the opposite of what one suspects to be true. The traditional approach applies to the case where an analyte is known to be very probably present (e.g., conventional parameters, nutrients, and major ions). But in other cases (e.g., ultra-trace contaminants in drinking water), we know that the analyte is present at levels below our analytical capability to measure. Therefore, the statistical process and logic must be inverted. This paper affirms, on a statistical basis, the need to report low-level data, and disavows the application of reporting limits at levels any higher than 3 times the method repeatability as estimated by the within-run standard deviation (Sw). It supports the reporting of low-level estimates, and the adoption of four generic reference points for data interpretation: W (between Sw/2 and Sw), CD (= 3 Sw), DL (= 6 Sw), and QL (= 12 Sw).