Understanding the Need for Standardization in AI
To foster the adoption of artificial intelligence (AI) technologies, institutions and industry have joined forces to standardize them. Today, AI adoption faces a trust problem among stakeholders: without standards, the technology is still perceived as too risky in many sectors. Legislators share this view and wish to regulate the use of the technology; they need to rely on industry standards to strike the right balance between innovation and the protection of citizens.
Standardization aims to mitigate the risks inherent in AI.
- Safety: AI algorithms are deployed in critical sectors where safety is paramount, such as vehicles, surveillance systems, and scoring applications. Ensuring a minimum level of safety is one of the roles of standards, which will require companies to provide evidence of reliability.
- Bias mitigation: AI models are commonly subject to bias. Guarding against it is necessary and will involve validation steps described in the standards, such as checking whether a model's outcomes differ across population groups (see the fairness-check sketch after this list).
- Interoperability: With many vendor-specific development environments and processes, compatibility becomes difficult to manage, which can cause serious problems. Standardizing formats and processes enables more collaboration and makes deployments more dependable (see the model-export sketch after this list).
- Ethics and legal issues: Ethical and legal concerns also drive standardization, as shared standards make it easier to respond to public concerns and to legal proceedings. Another objective of standardization is to pave the way toward earning society's trust.
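
As an illustration of the kind of bias-validation step such standards may require, here is a minimal sketch that computes the demographic parity difference of a binary classifier's decisions on synthetic data. The data, group labels, and the 0.1 tolerance are assumptions chosen for illustration, not values prescribed by any published standard.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Synthetic decisions and group membership (illustrative only).
rng = np.random.default_rng(seed=0)
group = rng.integers(0, 2, size=1_000)        # protected attribute: 0 or 1
y_pred = rng.binomial(1, 0.6 - 0.15 * group)  # group 1 receives fewer positives

dpd = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {dpd:.3f}")

# A hypothetical validation gate: flag the model if the gap exceeds a tolerance.
THRESHOLD = 0.1  # assumed tolerance, not taken from any standard
if dpd > THRESHOLD:
    print("Bias check failed: outcome rates differ too much across groups.")
```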
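On the interoperability side, exchange formats such as ONNX already play the role that standardized processes aim for: a model trained in one environment can be consumed in another. Below is a minimal sketch, assuming PyTorch is installed, that exports a small network to an ONNX file; the architecture, file name, and tensor names are illustrative.

```python
import torch
import torch.nn as nn

# A small illustrative network; any torch.nn.Module would do.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)
model.eval()

# A dummy input fixing the expected tensor shape for the exporter.
dummy_input = torch.randn(1, 4)

# Export to ONNX, a vendor-neutral exchange format: the resulting file can be
# loaded by ONNX Runtime or other compliant tools, regardless of the framework
# the model was trained in.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",  # illustrative file name
    input_names=["features"],
    output_names=["logits"],
)
print("Exported model.onnx")
```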