The EU’s confused evidence processes for identifying endocrine disruptors

DISCLAIMER: All opinions in this column reflect the views of the author(s), not of EURACTIV.com PLC.


A group of scientists has written to the European Commission to voice concerns about the burden of proof and the confused evidence requirements for identifying and classifying endocrine disruptors under the Plant Protection Products (PPP) and Biocides Regulations.

Paul Whaley is a consultant and researcher at Lancaster University, where he develops systematic review methods for making best use of the best evidence in chemicals policy. He co-signed the letter along with 13 other scientists working on such methods.

I am among a growing number of researchers developing new, more robust and transparent methods for making best use of the best evidence in chemicals policy.

A group of us have been taking a keen interest in the European Commission’s proposals for identifying and classifying endocrine disrupting chemicals (EDCs), in particular whether they deliver a credible process which can reliably identify EDCs and which is in line with what we would consider best scientific practice in aggregating, appraising and integrating existing research on the health hazards and risks posed by chemical substances.

We think the Commission’s current proposals are inadequate, and have said so in an open letter: the processes for identifying endocrine disruptors are under-defined and incoherent, and they entail a burden of proof which will leave EDCs virtually unregulated.

Starting with the positives, however, it is encouraging that the proposal acknowledges the role which systematic review (SR) methods could play in identifying EDCs.

SR methods distinguish themselves from traditional evidence review techniques by insisting on transparent, reproducible methods for finding and aggregating scientific research, and by employing a number of measures to safeguard against cherry-picking and selective interpretation of data. This ensures that the results of an evidence review are as objective and comprehensive a statement of what the evidence says as is reasonably possible.

The hope for SR methods is that they will help resolve many of the seemingly intractable debates about which substances should be restricted and which not, and it is very encouraging that they receive explicit mention in the proposed regulatory text.

The problem is that the criteria, as they stand, come nowhere near achieving this ambition.

For example, while there is a welcome call to assess “all available relevant scientific evidence”, the criteria go on to privilege studies which are “performed according to internationally agreed study protocols” as the evidence on which regulatory identification as an EDC is to be “primarily” based.

This is either unnecessary, because a systematic review will in any case privilege the best-conducted studies, based on empirical evidence of which methods produce the best results, or it introduces bias by forcing reviewers to treat guideline studies as stronger than other studies when they may in fact be weaker.

We are also worried by just how under-defined and ambiguous the evidence review processes are. While there is a lot of text about how research is to be identified and aggregated, it doesn’t seem to amount to very much. Without a clear standard for when accumulating evidence triggers regulatory identification, how can we expect different regulatory stakeholders to consistently identify the same chemicals as EDCs?

We have particular concerns about the requirement for “sufficient” evidence of harm in humans in order to classify an EDC as “known to cause an adverse effect”.

Normally, only animal evidence is required to trigger a regulatory response. This is because identifying the effects of chemical exposures in epidemiological studies of uncontrolled human populations is extremely difficult. Multiple exposures, subtle effects and differences in study populations all militate against proving that exposure to a particular chemical substance has a particular health effect.

These health effects can only really be demonstrated in controlled experimental set-ups, where the experimental subjects and their environment are identical in all ways except for their chemical exposure. Of course, we can’t do controlled trials with potential toxicants on people, so in toxicology we use animal studies as our primary evidence-base.

The problem is that, by requiring de facto that a chemical substance cannot be identified as an EDC without robust human evidence, the Commission’s proposed criteria sideline nearly all of the evidence that would normally be used to identify a problem compound. They favour instead a wait-and-see approach, refusing to take regulatory action against a compound until enough people have been demonstrably harmed for scientists to prove that the chemical poses a problem for human health.

This is both unscientific and unethical, and the criteria simply have to be changed to allow that evidence of harm in animal studies can be sufficient to trigger regulatory identification as an EDC.
