Project-selection procedures in the EU’s Cohesion policy remain oriented towards outputs and spending rather than driven by expected results, a new report from the European Court of Auditors (ECA) underlined.
Despite the Commission’s intention to focus on results and benefits, assessing the extent to which EU funding actually contributes to overall growth and development in the member states remains difficult, said the report, published on Tuesday (4 September).
Shortcomings in monitoring create further obstacles for assessing the actual benefits of each project.
“Although the set-up of Cohesion policy programmes is now more results-oriented, with a stronger intervention logic and an extensive set of indicators, we conclude overall that project selection is not yet sufficiently results-oriented and monitoring continues to be mostly output-oriented,” said Ladislav Balko, ECA member responsible for the report.
For the Auditors, member states should compare project applications, ask beneficiaries to establish at least one “genuine result indicator for each project” and examine both the expected results and indicators in the context of the applications.
For its part, the Commission, which responded to some of the report’s criticism, said it would define common result indicators for the European Regional Development Fund and improve its performance reporting ahead of an efficient performance review in 2019.
In its reply, the Commission stressed that result indicators are already included in the proposal for a Common Provisions Regulation for the post-2020 programming period. But it underlined that it had a limited say in selecting and monitoring programmes, which is the competence of member states.
Lack of quantified result indicators
According to the report, which looked at 34 projects from seven programmes in the Czech Republic, France, Italy and Finland, project selection still addresses results only superficially, as selection criteria seldom focus on quantified result indicators.
Furthermore, while all 34 projects were aligned with the operational programmes’ specific objectives, the report shows that managing authorities focused on “the overall feasibility and likelihood of the projects to succeed” when assessing applications.
The Auditors suggest that selection criteria should focus both on projects’ alignment with operational programmes’ objectives and on the provision of quantified result indicators to measure the actual impact of the projects in the long term.
Selection on a first-come, first-served basis
The report shows that the majority of the projects were not selected via a ranking procedure but on the basis of who applied first. Consequently, it is impossible to guarantee that the selected projects were actually among the best applicants, underlined the Auditors.
Managing authorities in member states indeed used several ways to promote a call to potential beneficiaries, the report said. But the Auditors advise the authorities to encourage more beneficiaries to submit applications, so that selection procedures include a bigger variety of projects, which should be compared with each other to identify the best ones.
The Commission underlined that the selection of projects is the exclusive responsibility of the managing authorities in member states, who define the selection criteria and apply them when examining the applications for funding.
However, it highlighted that “there is a difference between direct results of projects financed and the results to be achieved at the level of the operational programmes (reflected by the result indicators)”.
As far as the selection procedure of the projects is concerned, “it is more efficient to set quality thresholds and accept all projects that meet this threshold than to apply a method of direct comparison of applications”. The latter could result in a higher number of project applications but may not lead to better overall project quality, the Commission said.
Monitoring procedures still show insufficiencies
For the current 2014-2020 period, the Auditors found that monitoring systems became functional only at a late stage, mainly due to delays in the adoption of the legal framework at European level. Even once that was resolved, weaknesses remained in several IT systems, affecting the quality of the information and the reliability of the data produced for monitoring and reporting.
The report highlights that “until the end of 2016, only a limited amount of data was collected in IT systems. Consequently, the audits necessary to provide assurance on the monitoring systems and the reliability and availability of the data mostly started in 2017.”
Furthermore, according to the report, “monitoring information is still mostly output-oriented: the Commission’s reporting presents progress for outputs alongside spending, but there is only limited information on the actual results achieved”.
As a result, the Commission still lacks a complete overview of the data from the monitoring and IT systems. This gap will also create significant issues for the Commission’s plan to examine the programmes’ achievements in 2019 and allocate the performance reserve, an amount “set aside at the beginning of the period for each programme”.