By Luca Bertuzzi | EURACTIV | 30-05-2022 (updated: 02-06-2022)

[Image caption: Technical standards will determine how the AI Act will apply in practice. Carlos Amarillo/Shutterstock]

The technical standards that will implement the EU's Artificial Intelligence Act will be developed jointly by the three European standardisation bodies, according to a draft standardisation request obtained by EURACTIV.

The European Committee for Standardisation (CEN), the European Committee for Electrotechnical Standardisation (CENELEC), and the European Telecommunications Standards Institute (ETSI) will all be in charge of developing the technical standards for the AI Act.

The three standardisation organisations will have to provide a work programme detailing the timeline and the technical bodies responsible for each standard requested in the flagship AI regulation. They will also have to submit a progress report to the European Commission every six months.

Technical standards will play a key role in the implementation of the AI Act, as companies that apply them will be considered by default in conformity with the EU rules. Standards play such a critical role in bringing down compliance costs that they have been described as the 'real rulemaking' in an influential paper on the EU's AI rulebook.

The annexe to the draft request details the standards that will need to be developed, namely on risk management systems, governance and quality of datasets, record-keeping, transparency and information to users, human oversight, accuracy specifications, quality management including post-market monitoring, and cybersecurity.

Moreover, the bodies will have to define the validation procedures and methodologies for assessing whether AI systems are fit for purpose and meet the European standards. As per the regulation, such conformity assessments could be carried out by the AI provider itself or by a third party.
The European bodies will have to consider the interdependencies between the different requirements and make them explicit when delivering the technical standards.

In addition, they will have to pay special attention to making the standards compatible with the needs of SMEs, which will have to be involved, together with civil society, in the consensus-building exercise.

For Kris Shrishak, a technology fellow at the Irish Council for Civil Liberties (ICCL), involving civil society is a positive development, as standardisation organisations might be ill-equipped to address issues such as bias mitigation.

"This issue goes beyond technical aspects. It depends on the people who are using the AI systems, the context of use and the people who are affected," Shrishak said, though he lamented that it is not clear what would happen if civil society were not sufficiently involved.

Technical standards have become increasingly politicised, with both China and the United States investing massive resources in trying to steer the discussion in international fora according to their strategic interests and those of their companies.

To counter the declining influence of European companies in defining technical standards, the European Commission recently launched a standardisation strategy that, in line with the EU's digital sovereignty agenda, seeks to reduce foreign influence on European standards and promote EU interests more assertively in international standardisation bodies.
Therefore, the standards will have to be aligned with the policy objectives to "respect Union values and strengthen the Union's digital sovereignty, promoting investment and innovation in AI as well as competitiveness and growth of the Union market, while strengthening global cooperation on standardisation in the field of AI that is consistent with Union values and interests".

The reference to the interests of SMEs might also be interpreted in this sense, as the Commission is seeking to scale up European tech companies that are still relatively small compared to their international competitors.

At the last EU-US Trade and Technology Council summit on 16 May, the two sides committed to developing a joint roadmap on evaluation and measurement tools for trustworthy AI and risk management. The roadmap is expected for the next TTC meeting in December.

In this draft version, the standardisation request is valid until 31 August 2025, and the three standardisation organisations would have to submit their joint final report by 31 October 2024. However, the request will only be finalised once the AI Act has been consolidated in interinstitutional negotiations, which are unlikely to start before 2023.

[Edited by Zoran Radosavljevic]