Understanding the Need for Standardization in AI
To foster the adoption of artificial intelligence (AI) technologies, institutions and industries have decided to work together on their standardization. Today, the adoption of AI faces trust issues from stakeholders: without standardization, the technology is still perceived as too risky in many sectors of activity. This observation is shared by legislators who wish to regulate the use of this technology and who need to rely on industry standards to strike the right balance between innovation and citizen protection. Standardization aims to mitigate the risks inherent in AI:
- Safety: AI algorithms can have applications in critical sectors where safety is paramount, such as vehicles, surveillance systems, and scoring applications. Ensuring a minimum level of safety is one of the roles of standards, which will require companies to provide evidence of reliability.
- Bias mitigation: AI systems are commonly subject to problems of bias. Protecting against such problems is necessary and will involve validation steps described in the standards.
- Interoperability: With many tool-specific development environments and processes, compatibility becomes difficult to manage, which can cause serious integration and deployment problems. Standardizing processes will enable more collaboration and make deployments more secure.
- Ethics and legal issues: Ethical and legal concerns also push for standardization, which makes it easier to respond to questions and legal procedures in this area. Another objective of standardization is to pave the way for gaining society's trust.
The European AI Act and Standardization
The European Commission supports a European AI standardization initiative to pave the way for a regulatory framework for this technology: the AI Act. This regulation aims to ensure that AI systems are used in ways that respect human beings, in terms of both safety and ethics. To put it simply, the AI Act is to AI what the GDPR is to data. The AI Act defines several levels of risk based on the specific uses and impacts of AI systems. Controls are then applied according to this classification, and some types of AI are prohibited outright (e.g., AI for manipulating opinions or AI for social credit). To make the AI Act applicable in practice, the European Commission has mandated CEN CENELEC, the European standardization body, through a "Standardization Request" to provide the technical standards on which to base the regulation of AI technologies. To do so, CEN CENELEC identifies, adapts if necessary, and adopts international standards already available or under development by other organizations such as ISO/IEC. Concretely, two possibilities are envisaged for CEN CENELEC concerning AI standards:
- Harmonized standards: These are specific hEN standards that group together texts from several other European standards, which will be merged or supplemented.
- EN standards: These are standards that result from internal developments or from the adoption of ISO standards.
In the case of the AI Standardization Request, CEN CENELEC will produce some harmonized standards and will adopt the ISO standards.
Numalis and ISO/IEC 24029 Suite of Standards
Arnault Ioualalen, CEO of Numalis, is the editor of the ISO/IEC 24029-1 and 24029-2 standards within Working Group 3 on AI reliability. Arnault's role is to lead meetings and coordinate the drafting and editing of these standards. He is therefore actively involved in drafting the texts and is responsible for incorporating the proposals of the other experts. Arnault is also the flag bearer for these standards, participating in their promotion and defending their interests before various institutional and industrial bodies.
- Standard 24029-1: Provides an overview of the assessment of the robustness of neural networks.
- Standard 24029-2: Defines methodologies for assessing the robustness of neural networks using formal methods.
- Standard 24029-3: Covers statistical methods and is already under development.
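To make the idea of robustness assessment more concrete, here is a minimal sketch of a statistical check in the spirit of what ISO/IEC 24029-3 addresses: sampling small perturbations around an input and measuring how often a network's prediction stays the same. The tiny network, the perturbation radius, and the sample count below are illustrative assumptions, not values or procedures prescribed by the standard.

```python
# Illustrative sketch only: an empirical (statistical) local-robustness check.
# The network, epsilon, and sample count are arbitrary assumptions for
# demonstration, not the methodology defined in ISO/IEC 24029.
import numpy as np

rng = np.random.default_rng(0)

# A minimal two-layer network with fixed random weights (stand-in for a real model).
W1 = rng.normal(size=(4, 8))
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 3))
b2 = rng.normal(size=3)

def predict(x: np.ndarray) -> int:
    """Return the predicted class index for input vector x."""
    hidden = np.maximum(x @ W1 + b1, 0.0)   # ReLU layer
    logits = hidden @ W2 + b2
    return int(np.argmax(logits))

def empirical_robustness(x: np.ndarray, epsilon: float = 0.05, samples: int = 1000) -> float:
    """Estimate how often the prediction is unchanged under random
    perturbations drawn uniformly from an L-infinity ball of radius epsilon."""
    reference = predict(x)
    stable = 0
    for _ in range(samples):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + noise) == reference:
            stable += 1
    return stable / samples

x0 = rng.normal(size=4)  # an example input point
print(f"Estimated local robustness around x0: {empirical_robustness(x0):.3f}")
```

Formal methods, the subject of 24029-2, take a different route: rather than estimating stability by sampling, they aim to prove that no perturbation within the chosen bounds can change the prediction.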