By Errol Zeiger, Errol Zeiger Consulting
In the regulatory setting, genetic toxicity testing is used principally to identify potential carcinogens and germ cell mutagens. Substances that are positive in the in vitro tests are considered of greatest concern for inducing cancer or genetic mutations in rodents and, by extension, in humans. These in vitro positives are then tested in rodents to determine whether they are capable of inducing genetic damage in the animal. In vivo genetic toxicity testing is also currently a prerequisite for identifying germ cell mutagens, i.e., those that have the potential to mutate sperm or egg cells, resulting in offspring that either express or merely carry a mutant gene.
The in vitro genetic toxicity assays used internationally for regulatory approval of chemicals are the bacterial (Salmonella; E. coli), mammalian cell mutagenicity (L5178Y mouse lymphoma cells; CHO cells), and/or mammalian cell chromosome damage (L5178Y, CHO, CHL cells, or human lymphocytes) assays. In vivo testing uses primarily the rodent bone marrow cell chromosome aberration or micronucleus assay.
This commentary will focus primarily on the use of genetic toxicity tests for identifying potential carcinogens, and will not address germ cell mutagenicity. In both situations, the apical endpoint, cancer or germ cell mutagenicity, is currently demonstrated and quantified by extensive animal experiments because the genetic toxicity assays and the structure-activity relationship models are not sufficiently accurate predictors of these effects or the dose-responses to support human health and safety decisions.
The identification of germ cell mutagens requires a showing not only that the substance is genetically active, but also that it will reach the gonads, which by definition requires animal testing. This demonstration of the presence of active genotoxin in the gonads, usually male, is part of the definition of a germ cell mutagen. Because no single organ or tissue is at risk in cancer studies, it is not necessary to demonstrate that the genotoxin will reach all potential cancer sites in an animal. This distinguishes the two events for the purposes of this discussion.
Current genetic toxicity testing practices
All current and proposed genetic toxicity testing schemes for identifying carcinogens are based on the premise that in vitro mammalian cell systems are more predictive of in vivo mammalian effects than are microbial cell systems, and in vivo rodent test systems are inherently more predictive of rodent (as a surrogate for human) cancer, than are in vitro mammalian cell systems. This is a logical, sensible, and scientifically defensible starting point. Because cancer is a disease of multiple organs and tissues, and genetic damage is a necessary, but not sufficient, inducer of cancer, a genetic toxin in vitro is presumed to be a genetic toxin in somatic cells in vivo, and a potential carcinogen. This is the major assumption driving genetic toxicity testing, i.e., that a substance capable of producing genetic toxicity is also capable of producing cancer.
This testing philosophy was first proposed in 1979 and incorporated into a tier, or battery, testing approach, and it has been the dominant paradigm since then. Since the tier/battery approach was adopted, there has been widespread use of the in vitro tests and extensive, though lesser, use of the in vivo tests. As a consequence of this usage, there is a wealth of data on the performance of the various in vitro and in vivo tests and their abilities to identify carcinogens and germ cell mutagens. The in vitro tests remain the basis of the carcinogen testing schemes despite the knowledge that they are far from accurate and are ineffective for identifying potential noncarcinogens (i.e., the specificities of the tests currently used are about 50%, which means that they have no predictive value for identifying noncarcinogens).
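The arithmetic behind the claim that ~50% specificity confers no predictive value for noncarcinogens can be sketched with Bayes' rule. The sensitivity and prevalence figures below are illustrative assumptions, not values from the text:

```python
def negative_predictive_value(sensitivity, specificity, prevalence):
    """P(substance is a noncarcinogen | negative in vitro result)."""
    true_negatives = specificity * (1 - prevalence)   # noncarcinogens called negative
    false_negatives = (1 - sensitivity) * prevalence  # carcinogens called negative
    return true_negatives / (true_negatives + false_negatives)

# Assumed for illustration: 60% sensitivity, and half the tested
# substances are in fact rodent carcinogens (prevalence = 0.5).
npv = negative_predictive_value(sensitivity=0.60, specificity=0.50, prevalence=0.50)
print(f"P(noncarcinogen | negative) = {npv:.0%}")  # ~56%, barely above the 50% prior
```

Under these assumptions a negative result shifts the odds hardly at all from the prior, which is what "no predictive value for identifying noncarcinogens" means in practice; other assumed prevalences change the numbers but not the conclusion.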
The high positive predictivity of the in vitro assays for carcinogenicity supports the common industrial practice of "flagging" potential carcinogens solely by in vitro tests and not testing further (i.e., in vivo), preferring instead to place additional effort on substances that were not positive in the in vitro assays. This decision strategy is probably responsible for a large (but unquantifiable) reduction in animal use because it recognizes that these positive substances are highly likely to be rodent carcinogens. Thus, it would be an unnecessary investment of time, money, and resources, including animals, to confirm the expected carcinogenicity of the substance when other candidates that were not positive in vitro are available.
In vivo testing
All national and international test guidelines currently identify the bone marrow chromosome aberration and micronucleus tests as the definitive in vivo assays for genetic toxicity. The induction of DNA strand breaks in rat liver is another assay used, although less often. It was selected primarily because it is performed in the liver, which is a site of concern for cancer, and because it provides a second genetic endpoint.
Data from extensive testing have shown that relatively few substances, carcinogenic or noncarcinogenic, that are genetically active in vitro are also active in vivo, and even fewer substances have been identified that are positive in the in vivo but not the in vitro tests. Indeed, for many years, benzene was the only substance known (outside any confidential data that may have been sequestered in chemical or pharmaceutical company vaults) to be negative in vitro but positive in bone marrow tests and carcinogenic.
Published compilations of data from the US National Toxicology Program (NTP) show that the sensitivity of the bone marrow test for predicting cancer is 20-30%, and that it has a false positive prediction rate for cancer of approximately 30%, equivalent to the false positive rate of the in vitro Salmonella assay. It is well accepted among regulatory authorities that the in vivo bone marrow assays are not very sensitive. This is why, in many current regulatory schemes, a negative result in vivo does not counteract a positive response in vitro, and the substance is still considered a potential carcinogen.
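As a hypothetical illustration of these figures (the cohort sizes are assumed, not taken from the NTP compilations), applying a mid-range 25% sensitivity and a 30% false positive rate to 100 carcinogens and 100 noncarcinogens shows why a negative bone marrow result carries so little weight:

```python
carcinogens, noncarcinogens = 100, 100          # assumed cohort, for illustration only
sensitivity, false_positive_rate = 0.25, 0.30   # mid-range values cited in the text

detected_carcinogens = int(sensitivity * carcinogens)        # 25 true positives
missed_carcinogens = carcinogens - detected_carcinogens      # 75 false negatives
false_positives = int(false_positive_rate * noncarcinogens)  # 30
true_negatives = noncarcinogens - false_positives            # 70

# Among all substances that test negative in the bone marrow assay,
# what fraction are nonetheless carcinogens?
p_carcinogen_given_negative = missed_carcinogens / (missed_carcinogens + true_negatives)
print(f"P(carcinogen | negative in vivo) = {p_carcinogen_given_negative:.0%}")  # ~52%
```

Under these assumptions, a substance that is negative in the bone marrow assay is still about as likely as not to be a carcinogen, which is consistent with regulatory schemes in which a negative in vivo result does not override a positive in vitro one.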
For the purposes of this discussion, it should be noted that the majority of known International Agency for Research on Cancer (IARC) Group 1 human carcinogens have been shown, often retrospectively, to be rodent bone marrow clastogens. However, such a retrospective analysis is not very informative when attempting to predict carcinogenicity from rodent bone marrow data.
The in vivo/in vitro liver unscheduled DNA synthesis (UDS) assay has much less publicly available data for examination, but it is well known among those who use the test for regulatory purposes, and among the regulators to whom the data are submitted, that a positive result in this assay is extremely rare. One senior regulator has been quoted as saying at scientific conferences that he has never seen a positive response in the assay, and that rather than perform it, one could just as easily throw away the money and save the time and bother.
There is no question that rodent tests are needed to definitively identify and characterize carcinogens, or to demonstrate non-carcinogenicity, despite the fact that the animal tests may not have 100% accuracy in identifying human carcinogens and noncarcinogens. However, because of the potential societal burdens of introducing new carcinogens into commerce, especially as food additives, drugs, and biocides, the animal test remains the most reliable indicator of human carcinogenicity, other than retrospective epidemiology studies.
Given the known sensitivities and predictivities for cancer of the currently available (and mandated) in vivo assays, what is gained by their continued, and required, use in carcinogen screening? And why are they being included in the new rounds of regulatory guidance?
If a new test with only 30% positive predictivity for rodent cancer, and an equivalent false positive rate, were offered to regulatory agencies today, would they be willing to adopt – and require – it? Would it make a difference whether it was an in vitro or an in vivo test?
Is it not time we started using the publicly available data to drive the regulatory requirements, rather than enshrine 30-year-old assumptions that have not stood the test of the data?
©2007 Errol Zeiger