The EU: bigger on big things, smaller on small things?
José Manuel Barroso, the outgoing President of the European Commission, made a pledge before he ended his second term in office: the EU executive would stop spending its time on petty things like regulating olive oil or the amount of water used in toilet flushes, and focus instead on the bigger issues.
“Europe should only act where it can add real value – big on the bigger things, small on the smaller things,” Barroso said in a speech to European leaders in October 2013. “Useless laws weaken necessary laws,” he added, quoting Montesquieu.
One of the tools for putting this approach into practice is the impact assessment (IA) study.
IAs are hardly a new feature in EU policymaking. Originally introduced at the turn of the century under the Commission’s ‘Better Regulation’ programme (since rebranded ‘Smart Regulation’), they are now being followed up as part of the Regulatory Fitness and Performance programme (REFIT), which aims to make EU law simpler, and to reduce regulatory costs on businesses.
IAs are meant to act as a 'reality check' on policymakers by requiring “an integrated analysis of costs and benefits” of proposed legislation and an assessment of the economic, social and environmental effects of every major new policy initiative.
The European Commission is currently reviewing the guidelines on how it carries out IAs and has committed to issue new ones by the end of 2014.
The matter may sound technical but is in fact highly political. Essentially, the guidelines determine who the Commission seeks advice from and how. After going through the IA procedure, draft legislation may be amended or even dropped altogether, in an effort to make the EU leaner and more efficient.
Weighing scientific evidence is a key part of this process, guided by the precautionary principle.
Enshrined in Article 191 of the Treaty on the Functioning of the European Union (TFEU), the precautionary principle holds that policymakers may act to avert anticipated harm to areas such as health or the environment even where the scientific evidence is not yet conclusive.
In preparing its environmental laws, the treaty adds, the Union “shall take account of:
- available scientific and technical data,
- environmental conditions in the various regions of the Union,
- the potential benefits and costs of action or lack of action,
- the economic and social development of the Union as a whole and the balanced development of its regions.”
In practice, however, implementing the precautionary principle has proved controversial, as it touches on politically sensitive topics like the safety evaluation of foodstuffs, pesticides, or genetically modified organisms (GMOs).
In all cases, policymakers are supposed to look at the evidence and evaluate the degree of uncertainty before making any decision.
This is where things get tricky: when is there enough scientific evidence to take a decision? Where to draw the line?
Risk analysis: the work of EU agencies
The job of weighing the science – and therefore the risk – is often delegated to external agencies, like the European Food Safety Authority (EFSA), the European Chemicals Agency (ECHA) or the European Medicines Agency (EMA). There are also numerous expert groups that the European Commission consults on a regular basis to inform its decisions, many of which are composed of scientists.
EFSA, for instance, recently published its assessment of aspartame, the low-calorie sweetener used in drinks and weight-control products, and it regularly issues scientific opinions about the safety of food products.
The vast majority of those opinions are uncontroversial. But the agency has suffered criticism for its ‘revolving door’ practice: experts sitting on panels and in decision-making bodies often have a background in the food industry, or return there after leaving EFSA, leading to conflicts of interest.
In 2012, the chair of EFSA's management board, Diána Bánáti, was forced to resign after it emerged that she had failed to mention she was also a member of ILSI-Europe, a body funded by the food and agro-chemical industry.
To address the issue, EFSA unveiled new rules for its in-house staff, as well as its outside experts, including specified lists of activities that would preclude scientific experts from serving on advisory panels.
Scientists previously employed by industry must now observe a two-year "cooling-off" period before they can sit on EFSA’s scientific panels, for example, and scientists who receive more than 25% of their research funding from industry face further restrictions on the roles they can take on at the authority.
Following the EFSA scandal, the European institutions came to the realisation that common rules were needed to police the 31 agencies currently informing EU policymakers. A common approach was agreed in 2012, followed a year later by a new set of guidelines on the prevention and management of conflicts of interest in decentralised agencies.
The EFSA case illustrates just how contentious science has become as a policy-making tool.
Following an impact assessment, controversial technologies such as GMOs or animal cloning might be approved in Europe as being safe or, on the contrary, banned because of the environmental and health risks they pose.
The stakes are sometimes so high that politicians are tempted to manipulate the evidence collected in IA studies in order to favour one outcome over another.
Anne Glover, the EU’s chief scientific advisor, is no stranger to the issue. Describing her role at the European Commission, Glover recently admitted that she found it difficult to disentangle the evidence gathering process from what she calls the “political imperative” that drives policy.
To back up their policy proposals, Glover said, EU commissioners may sometimes request that specific “evidence” be presented in IA studies.
“Let’s imagine a Commissioner over the weekend thinks, ‘Let’s ban the use of credit cards in the EU because credit cards lead to personal debt’. So that commissioner will come in on Monday morning and say to his or her Director General, ‘Find me the evidence that demonstrates that this is the case.’”
The Commissioner’s staff might resist the idea but in the end, she says, “they will do exactly what they’re asked” and “find the evidence” to show that credit card use leads to personal debt, even though this may not be the case in reality.
“So you can see where this is going,” Glover said: “You’re building up an evidence base which is not really the best.”
Often, the European Commission will also contract external consulting firms to do the research groundwork that feeds into an impact assessment study.
But consulting firms have little incentive to contradict the Commission’s political agenda if they want repeat business, Glover said. Instead, they tend to look for evidence that backs up the Commission’s position, distorting the ‘scientific’ data presented to policymakers.
To be fair, the Commission is not alone in trying to manipulate evidence, Glover said. The same goes for the other two EU institutions – the European Parliament and the EU Council of Ministers, which represents the 28 EU member states.
“What happens at the moment – whether it’s in Commission, Parliament or Council – is that time and time again, if people don’t like what’s being proposed, what they say is that there is something wrong with the evidence. So everybody blames the evidence and nobody is honest about the fact that in many cases, understanding the evidence is the best possible platform to make the logical extension into policy. But they don’t like it so they say ‘We need more evidence’. And of course scientists can always produce more evidence.”
Contested impact assessments: the REACH case
There are countless examples of topics where EU policymakers have bickered over the evidence, including the safety of nanoparticles, the impact of biofuel crops on food prices, or chemical substances with hormone-disrupting effects.
In fact, the battle extends far beyond the EU institutions and spills over to the private sector and non-governmental groups trying to influence policy.
Perhaps the most politicised case to date was the REACH regulation on chemicals, which gave rise to one of the most epic lobbying battles in the EU's history, generating dozens of impact assessment studies before it was eventually adopted in 2006.
The bitter campaign saw chemical companies warn they could be forced to close factories and leave Europe because of the extra costs generated by the EU law. It included widely publicised industry-funded impact assessment studies claiming that REACH would cost billions of euros to implement and cause millions of job losses in Germany.
At one point, EU officials arranged a meeting to try and make sense of 36 different impact assessment studies on REACH, most of them focusing on the legislation’s projected disastrous cost to businesses.
The Commission’s own initial impact study, meanwhile, had sought to balance those costs with the expected benefits of REACH for public health and the environment.
Ironically, the final impact study, which brought together representatives from all sides of the debate – from environmental NGOs to the chemical sector – ended up broadly confirming the Commission’s initial assessment that the costs of REACH would be largely bearable for the chemicals industry.
It was in the aftermath of the REACH battle that the European Commission adopted its impact assessment guidelines, which are now being revised.
The guidelines set minimum quality standards for conducting IAs, covering matters such as data collection and ensuring that all relevant stakeholders are consulted in the process.
“Good quality data – facts as well as figures – are an essential part of any IA. You need them to define the problem and the baseline scenario, and to identify the impacts of alternative options for dealing with the problem. Particular attention needs to be paid to quality and credibility of data,” the Commission states in its IA review document.
As shown by the REACH saga, stakeholders – whether businesses or citizens’ groups – can have a huge influence on the credibility of EU policies and are therefore seen as an essential part of the IA process.
“A poor public consultation process inevitably weakens the Commission's case,” the EU executive notes in the IA review consultation document, reminding officials that consulting those affected by new policies is “a Treaty obligation and a mandatory component of all IAs.”
Based on the data, impact assessments should also set out clear policy options for politicians to choose from, comparing the alternatives in terms of expected benefits and costs.
Overall, the Commission says a good quality IA should always have the following attributes:
- Comprehensive: “IA analysis should be comprehensive, considering relevant economic, social, and environmental impacts of alternative policy solutions.”
- Proportionate: “The scope and depth of the IA should be proportionate to the type of initiative, the importance of the problem, and the magnitude of the expected impacts.”
- Evidence-based: “All Commission proposals should be based on the best available evidence and scientific advice, or a transparent explanation of why some evidence is not available and why it is still considered appropriate to act.”
- Open to stakeholders' views: “Stakeholders' views must be collected on all key issues and reported on in the IA Report. Every effort should be made to ensure that the Commission has sought and considered a wide and balanced range of views. The reasons for disagreeing with dissenting views must be explained.”
- Unbiased: “IA analysis must be objective and balanced. An IA should inform political choices with evidence - not the other way around.”
- Transparent: “The credibility of IA hinges on the transparency with which results are presented, estimations explained, choices justified and limits acknowledged.”
- IAs should also be embedded in the policy cycle, drawing on lessons from the past, and conducted in cooperation with other Commission services to bring different perspectives on a given issue.
To ensure those standards are respected, an Impact Assessment Board (IAB) was appointed in 2006 as “an independent quality checker” for IAs to ensure they are subject to “rigorous scrutiny”. It brings together the heads of all the Commission’s main departments.
But Anne Glover says the board’s composition should be revised to make it less political. In particular, she points out that no scientist sits on the board, saying “There is an opportunity there”.
Politicians retain the last word
Meanwhile, the Commission’s effort to improve its impact assessment process has attracted praise across the board.
In a 2010 report, the European Court of Auditors said that IAs had been "broadly effective" in supporting EU decision-making. Their findings were “helpful”, especially because they presented decision-makers with policy options to choose from, in the auditors' view.
However, they also remarked that the Commission's IAs were not updated to take into account amendments made to legislation during the decision-making process. The European Parliament and Council of Ministers, the two law-making bodies of the EU, rarely carry out such assessments themselves, they noted.
The Parliament took the criticism on board and created a specific department in 2012 to repeat IAs whenever “substantive amendments” are made to legislative proposals. The new Parliament department may even take the initiative in some areas, such as making “in-house appraisals” of IA studies conducted by the Commission. The Council, for its part, agreed to discuss Commission IAs more frequently than in the past, without committing any further.
But no matter how irrefutable the evidence may be, at the end of the day politicians will – and should – always retain the last word, the Commission warned.
“The IA supports and does not replace decision-making – the adoption of a policy proposal is always a political decision that is made only by the College of Commissioners,” the Commission said.
For the EU’s chief scientist, Anne Glover, this is acceptable as long as all parties understand that policy decisions are taken by politicians, not scientists, and that the evidence-gathering process is transparent.
What she suggests is that the Commission creates a special department whose role would be to assess policy proposals against the evidence – “a central service which would be the evidence portal.” The service would take "questions" submitted to it by the Commission directorates and bring together the evidence to substantiate the issue at hand. Once formulated, the evidence base would be sent back to the policymakers who could then look at policy options based on the analysis.
According to Glover, such an open process would compel politicians to justify their decisions – and explain why they may have chosen to disregard scientific evidence along the way.
“If [politicians] choose to develop a policy that’s not based on the evidence, the biggest difference is that they would have to say, ‘I accept the evidence but for other reasons – social, economic, ethical, philosophical – we’re doing this’. And I think that’s quite justifiable because as a scientist I’m not saying that all that matters is evidence.”
“I don’t do policy for science, I do science for policy.”