Jacqueline McGlade responded to written questions submitted by EurActiv’s Arthur Neslen.
Industry advocates often talk about the precautionary principle as a well-meaning luxury that Europe cannot afford at a time of recession. Why do you believe they are wrong?
Lessons from history tell us this is incorrect – indeed, precaution is a catalyst for innovation, which could benefit Europe during times of recession. This is particularly true when precaution is supported by smart regulation or well-designed tax changes. It is encouraging to see some corporations have fundamentally embraced sustainable development objectives in their business models and activities in recent years.
Don’t environmental regulations place an unbearable burden on industry that hurts its international competitiveness?
You could ask, should we allow companies to continue to market products which have been shown to harm users, solely in order to maintain their international competitiveness? It is important to consider what we want our economy to do for us. Policy makers’ priority must be maintaining the welfare of citizens, not economic competitiveness at any cost.
From a purely economic competitiveness angle, responding to early warnings can provide immense savings – helping companies avoid becoming locked into paths that must later be abandoned because of the harm they cause, or avoiding expensive compensation once a long history of harm has been proven, as was the case with mercury poisoning in Japan. Companies that respond quickly to early warnings are often front-runners in their industries.
Virtually all reviewed cases in our report show that early warnings about harmful effects were available, but that the prospect of short-term profit generated strong economic incentives for companies to continue with their practices. This incentivised, for example, the most efficient fishing methods, and the sale and use of cheap and effective substances such as benzene, lead in petrol, asbestos, insecticides and growth hormones for meat production. In these cases – and there are many more examples – there were subsequent costs.
How would you respond to industrial critics who argue that opposition to the DDT insecticide under the precautionary principle would cost thousands of lives?
We do not propose a complete ban on DDT. In the final negotiations that led to the Stockholm Convention, an exception was made for DDT with an acceptable purpose for use in disease vector control. Thus, under the Stockholm Convention, countries may continue to use DDT, in the quantity needed, provided that the guidelines and recommendations of the World Health Organization (WHO) and the Stockholm Convention are met, and until locally appropriate and cost-effective alternatives become available for a sustainable transition from DDT.
In DDT-using countries, it is of utmost importance that DDT is used for its acceptable purpose only. Evidence continues to emerge showing that the chemical can be extremely harmful to humans and the environment. Thus it is important that when using such a chemical, the costs and benefits are properly weighed up and all evidence is taken into account.
How precise do you believe that a scientific assessment of risk – or the potential severity of risk – should be before action is taken under the precautionary principle?
The EEA has produced and refined a working definition of the Precautionary Principle that has proved useful in helping to achieve a more common understanding of the principle:
'The precautionary principle provides justification for public policy and other actions in situations of scientific complexity, uncertainty and ignorance, where there may be a need to act in order to avoid, or reduce, potentially serious or irreversible threats to health and/or the environment, using an appropriate strength of scientific evidence, and taking into account the pros and cons of action and inaction and their distribution.'
This definition is explicit in specifying situations of uncertainty, ignorance and risk, as contexts for considering the use of the PP. It is expressed in the affirmative rather than the triple negatives found in, for example, the Rio Declaration. It explicitly acknowledges that the strength of scientific evidence needed to justify public policy actions is determined on a case-specific basis, and only after the plausible pros and cons, including their distribution across groups, regions, and generations, have been assessed.
At what point should a substance or practice be considered ‘guilty until proven innocent?’
The full body of credible evidence should be reviewed, and each case should be considered individually. However, we can say that scientific uncertainty is not a justification for inaction when there is plausible evidence of potentially serious harm.
Where the 'knowledge-to-ignorance ratio' is high – that is, where much is known and little practically relevant ignorance remains – as with, for example, lead, asbestos and mercury, there is little need either for more research or for precautionary measures; in such cases we need to take preventative action.
Where there is little knowledge and a lot of ignorance surrounding the substance or practice, there is a need for both precautionary measures following credible early warnings and for novel research, rather than 'scientific inertia' of excessive research on well-known substances.
Should field tests be a precondition of use of the precautionary principle, or are laboratory tests and models sometimes robust enough for action – and where do you draw the line?
Every case is different. However, historically there has been an over-reliance on the statistical significance of point estimates compared to confidence limits based on multiple sampling. There has also been a bias towards using models that grossly simplify reality rather than using long-term observations and trend data of biological and ecological systems. These approaches have sometimes led to the production of false positives. More importantly, the governance of scientific ignorance and 'unknown unknowns' has been neglected.
How would you characterise the severity of the public health risk that endocrine disrupting chemicals (EDCs) pose?
There is strong evidence of harm from EDCs in some wildlife species and in laboratory studies using rodent models for human health. However, the effects of EDCs on humans may be more difficult to demonstrate, due to the length, cost and methodological difficulties with these types of studies – so wildlife and animal studies may be seen in some cases as an early warning of the dangers.
In the last 10 years, risk assessment and regulatory frameworks for dealing with EDCs have been created and screening procedures have been developed to test chemicals for endocrine disrupting properties. There are still many factors that make the risk assessment process difficult. Chief amongst these is the fact that these chemicals can affect early development of, for example, the brain, reproductive, immune and metabolic systems in detrimental ways that are often invisible until several years or sometimes decades after exposure.
Scientific understanding is further complicated because mixtures of similarly acting EDCs in combination may contribute to an overall effect, whilst each of these chemicals alone may not cause harm. These factors make it hard for scientists to identify thresholds of exposure below which there are no effects.
However, there is a large body of evidence linking chemical exposure to thyroid, immune, reproductive and neurological problems in animals, and many of the same or similar diseases and disorders have been observed to be rising in human populations. Both animals and humans may be exposed to these chemicals in the environment, or via water or the food chain where the chemicals can build up.
What have been the worst mistakes made so far out of a desire to prevent environmental damage that proved misplaced, and what have you learned from them?
In the second volume of Late Lessons, we analyse incidents of 'false positives', where government regulation was undertaken based on precaution but later turned out to be unnecessary. In total, 88 cases were identified as alleged false positives; however, following a detailed analysis, most of them turned out to be either real risks, cases where 'the jury is still out', unregulated alarms, or risk-risk trade-offs, rather than false positives.
The analysis revealed four regulatory false positives: US swine flu, saccharin, food irradiation, and Southern leaf corn blight. Numerous important lessons can be learned from each, although there are few parallels between them in terms of when and why each risk was falsely believed to be real. This is a lesson in itself: each risk is unique, as are the science and politics behind it, and hence a flexible approach is needed, adapted to the nature of the problem. The costs of the false positives identified were mainly economic, although the actions taken to address swine flu in 1976 did lead to some unintended deaths and human suffering, and diverted resources from other potentially serious health risks. Determining the net costs of mistaken regulatory action, however, requires a complete assessment of the impacts of the regulation, including the costs and benefits of using alternative technologies and approaches.
Overall, the analysis shows that fear of false positives is misplaced and should not be a rationale for avoiding precautionary actions where warranted. False positives are few and far between as compared to false negatives and carefully designed precautionary actions can stimulate innovation, even if the risk turns out not to be real or as serious as initially feared. There is a need for new approaches to characterising and preventing complex risks that move debate from the 'problem' sphere to the 'solutions' sphere. By learning from the lessons in this chapter, more effective preventive decisions can be made in the future.
How systemic do you believe is the pressuring of scientists assessing the safety of substances that pose a potential health or environmental risk?
Early warning scientists and others who identify potential impending harm have sometimes been discouraged, lost their positions or suffered other kinds of losses. However, they often bring forth useful and timely knowledge, and therefore need to be encouraged rather than harmed for their efforts. Good public policy suggests that laws should discourage such retaliation in the first place, and justice requires rectification when it occurs.
The Late lessons from early warnings case studies have provided several examples of early warning scientists who were harassed after issuing or publishing their views. Examples include Snow (in relation to his work on cholera); Selikoff (regarding asbestos); Henderson, Byers, Patterson and Needleman (regarding leaded petrol); Hosokawa (regarding mercury); Pusztai and Chapela (regarding GMOs); Schneider (regarding climate change); and several scientists in the French bees story. In addition there are others who wish to remain anonymous.
Other examples from beyond the Late lessons case studies include public servants who have been prevented from speaking out on environment or health issues.
You pinpoint the rapid and wider spread of technologies as a potential cause for alarm. Which of these gives you most concern?
In the report we highlight several technologies where potential risk is multiplied by their rapid spread and development. Examples include genetically modified organisms, mobile phones and nanotechnology, but there are also many others.
These technologies are now taken up more quickly than before, and are often rapidly adopted around the world. This means risks may spread faster and further, outstripping society’s capacity to understand, recognise and respond to these effects in time to avoid harm.