The Perils of Increased Sensitivity
By Eric Wilhelmsen, Ph.D., C.F.S. | Published May 16, 2023
Source: food-safety.com
I have been conditioned (as have, perhaps, most readers) to view hazards as something to be eliminated. The presence of heavy metals like lead and mercury in food is not desirable. Trihalomethanes are not desirable. Pathogens like Salmonella, Listeria monocytogenes, or the Shiga toxin-producing strains of Escherichia coli are not desirable in food. Analytical science has responded with faster and more sensitive tools. We have many tandem instruments, such as gas chromatography-mass spectrometry (GC-MS) and tandem mass spectrometry (MS/MS). We have powerful molecular techniques and instrumentation like the polymerase chain reaction (PCR). These tools are not as mind-blowing as they are portrayed to be on television and in movies, but we can detect very low levels of most hazards using them. So, what is the problem?
Perhaps an anonymized case from the past will illustrate one of the challenges of sensitivity. In 1986, California passed Proposition 65. This proposition requires businesses to provide warnings to Californians about significant exposures to chemicals that cause cancer, birth defects, or other reproductive harm: a noble objective. Lead, as a heavy metal, is a reproductive toxin. It is present in canned foods. Had the proposition, as written, been applied to canned food, the tolerance would have defaulted to the detection limit. At around this time, however, new tandem instrumentation reduced that detection limit by several orders of magnitude. The detection limit was so low that a special clean room was required to house the instrument, to avoid contamination from the nominally clean laboratory environment. How far below the background level does a detection limit need to be? Fortunately, the detection limit-based tolerance was not applied to canned foods. It was a scary time for the food industry, however.
As another example, we can examine the present-day problem of pathogen control. I will ask readers to accept for the moment that the analytical assessment of pathogen risk is limited by two things: the ability of a sample to represent the lot that was sampled, and the very low incidence rates. As a thought experiment, consider pathogen testing on spinach. In general, there is a zero tolerance for pathogens in foods regulated by FDA, including fresh-cut spinach. A typical pathogen assay calls for enriching 375 grams of spinach and using molecular detection to assess the presence of a pathogen, usually a Shiga toxin-producing E. coli or Salmonella. This enrichment assay is quite sensitive due to the approximately 10⁶-fold enrichment of the pathogen.
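To see why enrichment buys so much sensitivity, note that a million-fold increase corresponds to only about 20 population doublings. The short Python sketch below makes this concrete; the doubling time and incubation length are illustrative assumptions, not figures from the article.

```python
import math

# Illustrative assumptions (not from the article): under favorable
# enrichment conditions, E. coli can double roughly every 20-30 minutes.
doubling_time_min = 25
incubation_hours = 10

doublings = incubation_hours * 60 / doubling_time_min  # 24 doublings
cells_from_one = 2 ** doublings                        # ~1.7e7 cells

print(f"{doublings:.0f} doublings -> {cells_from_one:.1e} cells from one CFU")
print(f"doublings needed for a 1e6-fold increase: {math.log2(1e6):.1f}")  # ~19.9
```

In other words, a single viable cell in the 375-gram sample can plausibly grow during a routine enrichment to populations detectable by molecular methods, which is what makes the assay so sensitive.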
Although the detection of a pathogen consigns the sampled lot to destruction, one must question what this result really means. Given that these pathogens are considered rare but ubiquitous, how many samples would be required to mandate discarding every field of spinach? Does every field contain at least one pathogen? It should be clear that this procedure can confirm the presence of a pathogen in the sample, but the result conveys very little information about the number of pathogenic cells in the tens of thousands of pounds in a field. The problem does not go away even when ten samples are taken from the same field, because the general background is less than 0.01 CFU/pound for most raw produce; this makes sampling and testing an adequate method for assessment or learning, but a poor mitigation tool, as the sketch below illustrates. Distinguishing between finding the rare pathogen and detecting a deviation from the normal background is not practical given the zero tolerance.
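A back-of-the-envelope calculation makes the point concrete. The Python sketch below assumes that pathogen cells are Poisson-distributed at the stated background rate and that the enrichment assay detects any sample containing at least one viable cell; both are simplifying assumptions.

```python
import math

def detection_probability(background_cfu_per_lb: float,
                          sample_grams: float,
                          n_samples: int) -> float:
    """Probability that at least one of n independent samples contains
    one or more CFU, assuming cells are Poisson-distributed at the
    background rate and any contaminated sample tests positive."""
    sample_lb = sample_grams / 453.6           # grams -> pounds
    expected_cfu = background_cfu_per_lb * sample_lb
    p_single = 1 - math.exp(-expected_cfu)     # P(>=1 CFU in one sample)
    return 1 - (1 - p_single) ** n_samples

# The article's numbers: ~0.01 CFU/pound background, 375 g samples
print(detection_probability(0.01, 375, 1))    # ~0.008: one sample almost always misses
print(detection_probability(0.01, 375, 10))   # ~0.08: ten samples still miss >90% of the time
```

At that background, a single 375-gram sample would test positive less than 1 percent of the time, and even ten samples would miss the contamination more than 90 percent of the time, which is why testing at these incidence rates informs but does not mitigate.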
Some will argue that more sensitive methods are needed to drive change. This is a reasonable argument, but it misses the underlying problem of defining the mission before implementing a sampling and testing program. At the risk of over-generalizing, when the mission is to reduce risk, the effort is virtually doomed to fail. Testing safety into a product is a bad plan. When an industry is under pressure, it often falls back on a test-and-release protocol to appease the marketplace, but this is at best a bandage on the problem. However, testing is an important tool to monitor a process and look for deviations.
The first step in such a monitoring program is to understand the background. In the case of a hazard associated with a potential risk, one must understand the background level of the hazard. This background defines the current process capability. The risk associated with this level of hazard may be acceptable or unacceptable. If the level of risk is unacceptable, then the process must be changed; no amount of testing will mitigate the risk. If the level of risk is acceptable, then a testing program must be developed to watch for deviations from this background rate that indicate the process is out of control. This monitoring process will never include enough samples to be an effective test-and-release program where defect rates are very low and there is no tolerance; the arithmetic below suggests why. The study of detected deviations must be viewed as an opportunity to improve the process and prevent future deviations.
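The arithmetic can be run in reverse to show how many samples a credible test-and-release program would need. The sketch below uses the standard acceptance-sampling relation; the roughly 0.8 percent per-sample detection probability is carried over from the spinach example above, and the 95 percent confidence target is an assumption for illustration.

```python
import math

def samples_needed(p_detect_per_sample: float, confidence: float) -> int:
    """Samples required so that contamination present at the assumed
    per-sample detection probability is caught at least once with the
    given confidence: solve 1 - (1 - p)^n >= confidence for n."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_detect_per_sample))

# ~0.8% per-sample detection probability (from the spinach example),
# 95% confidence target (an illustrative assumption):
print(samples_needed(0.008, 0.95))  # -> 373 separate 375 g samples per lot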
In food safety and quality assurance endeavors, control charts are often used to watch for and detect deviations as part of control processes. Rules have been developed to identify when action is warranted, rather than chasing every value above the mean. The simplest rule is that deviations within two or three standard deviations of the mean are to be expected. For hazards like heavy metals or pathogens, detections should be expected when sensitive methods are employed; they become important only when they fall significantly outside the normally observed range. Again, if the observed range is higher than desirable, then the recommendation is to fix the process, because testing-based mitigations are generally futile.
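As a minimal sketch of such a rule (the weekly positive-rate figures below are hypothetical, and the three-standard-deviation limit is the textbook Shewhart default rather than a threshold taken from the article):

```python
import statistics

def flag_deviations(baseline: list[float], observations: list[float],
                    n_sigma: float = 3.0) -> list[tuple[int, float]]:
    """Flag observations more than n_sigma standard deviations above
    the baseline mean: the simplest Shewhart control-chart rule."""
    mean = statistics.mean(baseline)
    limit = mean + n_sigma * statistics.stdev(baseline)
    return [(i, x) for i, x in enumerate(observations) if x > limit]

# Hypothetical weekly fractions of samples testing positive
baseline = [0.010, 0.008, 0.012, 0.009, 0.011, 0.010, 0.009, 0.012]
observations = [0.011, 0.010, 0.031, 0.009]
print(flag_deviations(baseline, observations))  # [(2, 0.031)]: only one week is actionable
```

Routine low-level detections stay inside the control limits and are treated as the expected background; only the week that clearly breaks from the baseline triggers investigation.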
As an observer of the industry, I support U.S. Department of Agriculture (USDA) efforts to examine the entire supply chain to reduce Salmonella contamination in poultry. This reflects an assertion that the background rate is unacceptable. USDA is also pushing toward enumeration and virulence testing rather than just presence/absence testing, and it is resisting calls to invoke a zero tolerance. USDA recently announced a move to a cloth-based aggregated sampling system to improve sample coverage. The beef industry has already largely made this move.
There is an emotional appeal in seeking bigger and more numerous samples. Aggregated sampling can partially fill this need. It is great marketing to have a more sensitive detection tool, but a detection must be several standard deviations from the background to be important. Unfortunately, without first understanding the mission, such efforts will not really reduce risk. Understanding the risk is the key to increasing safety.