The European Commission should not hire internally for its lead scientific advisor on artificial intelligence (AI), write Alex Petropoulos and Max Reddel.
Alex Petropoulos is an Advanced AI Analyst at the ICFG, a Brussels-based think tank. Max Reddel is the Advanced AI Director at the ICFG.
The EU AI Office is at a decisive moment. Reports suggest that the Commission is looking to fill the crucial role of Lead Scientific Advisor with an internal hire rather than an external expert, even though the role can only be properly filled by a leading scientist. The Advisor will strongly influence hiring and operational decisions at the Office. Without a well-qualified leader, the Office will struggle to attract the scientific and technical talent needed to deliver on its mission to shape AI governance in Europe.
The Commission could maximise the chances of hiring a qualified candidate by inviting world-class scientists to apply for this key position. Finding an internal official with the necessary credentials to fill the role of lead scientific advisor will be challenging, to say the least.
For one thing, a suitable candidate could earn a six- or seven-figure salary at a leading AI company. For another, it is highly unlikely that the EU institutions already have a suitably qualified AI scientist within their ranks. The UK and US hired former DeepMind and OpenAI researchers, respectively, for equivalent roles. With such expertise already thin on the ground across Europe, the chances of finding it inside an EU institution are slim.
But what makes this role so important? The answer comes down to talent. To build an institution capable of answering important, still-open questions about how to regulate AI and achieve trustworthy models, talent is the number one bottleneck.
One of the most important questions the AI Office will face is whether general-purpose AI models pose systemic risks – a task that will require staff drawn from world-class talent pools in highly competitive fields. The Office will need this expertise at all levels, from scientific advisors to policy officers.
A case in point is that the UK and US AI Safety Institutes (AISIs), tackling similar problems, made acquiring talent with experience working in frontier AI companies one of their key performance indicators. As the famed US Air Force pilot and strategist John Boyd said: “People, ideas, machines – in that order!”
This strategy has shown remarkable success. Less than three months after it was founded, the UK AISI had hired over 50 years' worth of frontier-lab technical expertise, with prominent AI researcher Yoshua Bengio on its advisory board. On a similar timeline, the US AISI hired Paul Christiano as Head of AI Safety.
In stark contrast, more than 100 days into its setup, the EU AI Office has yet to appoint either a Head of Unit for AI Safety or a Lead Scientific Advisor – two crucial positions that will set the tone and direction of the Office in its infancy.
Should the Commission fail to fill these roles with suitable candidates, a wider conversation may need to begin about the future of the AI Office. Its mandate has progressively expanded and now covers regulation and compliance, excellence in AI and robotics, AI for societal good, AI innovation and policy coordination, as well as AI safety. Questions should be raised about the wisdom of bundling innovation, safety, and regulation into one organisation when many of those ambitions conflict.
In fact, if the Commission’s internal rules make it impossible to recruit external scientific experts – as some suspect – that may force the AI Office to focus solely on regulation and forgo its other mandate of “fostering the development and use of trustworthy AI” in Europe. This would pave the way for a bold new institution to pick up that task: a CERN for AI, as European Commission President Ursula von der Leyen proposed in July.
This would help clarify the EU’s approach, but it would not change the fundamentals. If the EU is to take a leading role in shaping the global AI economy, it must invest in the talent and infrastructure needed to build advanced AI models. To echo Boyd: the ambition and ideas are there. We must not neglect the talent.