In July 2019, the Russian Venture Company (RVC) initiated the creation of technical committee TC 164 “Artificial Intelligence” for standardisation. In the six months since its establishment, the committee has consolidated about 120 leading domestic organisations in this area: universities, research organisations, IT companies, consumers of AI technology, and specialised public authorities. In addition, working groups and subcommittees were established to cover the main areas of activity and to build cooperation with international standardisation institutions.
Today, we are shaping the agenda of the technical committee for the medium term. It should take into account both universal services for the standardisation of information systems and technologies and specific services typical of intelligent data processing systems.
The first group of questions covers, first of all, various aspects of unification: the development of standard terms and definitions in the field of AI; standards for the presentation and description of data; information exchange protocols; and the mutual compatibility of intelligent systems, as well as their compatibility with other automated systems.
More specific standards address the opacity of intelligent algorithms, which stems from the lack of explainable decision-making mechanisms. This peculiarity, characteristic above all of AI systems trained on data, significantly hampers assessment of the quality of their work and prediction of system behaviour under real operating conditions. This, in turn, prevents the use of AI technologies for critical tasks, that is, tasks whose incorrect solution can threaten human health and life or cause significant environmental and economic damage. These include, first of all, medical diagnostics and medical decision support, driving unmanned vehicles, operating construction and hazardous industrial equipment, and most AI applications in defence and security.
Barriers to the use of AI systems in these areas can be removed by standardising the requirements for methods of testing critical intelligent systems, as well as by creating a certification system that objectively confirms the compliance of such systems with the established functionality and security requirements.
Another feature of the standardisation of artificial intelligence lies in the sphere of information security. The creation and use of artificial intelligence systems are inextricably connected with the use of big data. In many cases, such data is classified as confidential information, including personal data. Currently, there are no legal ways for the operator to transfer personal data to a third party — the developer of the AI system.
For example, medical data accumulated in a healthcare institution cannot be transferred to an IT company to develop an intelligent system designed to solve a particular medical diagnostic problem. Of course, this holds back progress and demands the development of organisational, technical, and regulatory solutions in the field of guaranteed de-personification of big data, management of permissions to process personal data, etc.
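To make the idea of de-personification concrete, the sketch below pseudonymises directly identifying fields of a medical record with a salted hash before the data leaves the operator. This is a minimal illustration, not any standard's prescribed method: the field names, the salt handling, and the truncation length are all assumptions, and real “guaranteed” de-personification must also account for quasi-identifiers (age, postcode, rare diagnoses) that permit re-identification.

```python
import hashlib


def pseudonymise(record, identifying_fields, salt):
    """Replace directly identifying fields with salted, truncated hashes.

    Illustrative only: a production scheme would also treat
    quasi-identifiers and manage the salt as a protected secret.
    """
    out = dict(record)
    for field in identifying_fields:
        if field in out:
            digest = hashlib.sha256(
                (salt + str(out[field])).encode("utf-8")
            ).hexdigest()
            out[field] = digest[:16]  # stable pseudonym, same input -> same token
    return out


# Hypothetical record; 'diagnosis' is kept intact for model development,
# while direct identifiers are replaced by pseudonyms.
record = {"name": "Ivanov I. I.", "policy_no": "1234567890", "diagnosis": "J45"}
safe = pseudonymise(record, ["name", "policy_no"], salt="clinic-secret")
```

Because the pseudonym is deterministic for a given salt, records belonging to the same patient remain linkable inside one dataset without revealing who the patient is.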
The problem of protecting information processed in AI systems is further exacerbated by the fact that the confidentiality level of information can spontaneously increase during processing. This occurs during data aggregation, extrapolation, and the restoration of initially missing data components. As a result, information about a person collected from open sources may, at some point, acquire the characteristics of personal data, which will then require adequate protective measures. To avoid excessive information protection measures while still guaranteeing compliance with information security requirements in AI systems, standards are needed that establish the procedure for determining the confidentiality level of processed data. Such standards should also provide for the possibility that this level may change while the intelligent system is in operation.
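The escalation effect described above can be sketched as a classification function that is re-evaluated every time attributes are aggregated into a profile. The level names, the choice of quasi-identifiers, and the threshold of two are invented for the example and are not taken from any standard; the point is only that individually harmless open-source attributes can jointly cross into personal data.

```python
# Hypothetical confidentiality levels, ordered from least to most restrictive.
LEVELS = ["public", "internal", "personal-data"]

# Attributes that are harmless alone but identifying in combination
# (an assumption for this sketch, not a normative list).
QUASI_IDENTIFIERS = {"birth_date", "postcode", "gender"}


def confidentiality_level(profile):
    """Re-evaluate the confidentiality level of an aggregated profile."""
    present_quasi = QUASI_IDENTIFIERS & set(profile)
    if "full_name" in profile or len(present_quasi) >= 2:
        # Jointly identifying: the aggregate must now be handled
        # as personal data even if each source was open.
        return "personal-data"
    if profile:
        return "internal"
    return "public"


profile = {"postcode": "101000"}
level_before = confidentiality_level(profile)   # a single attribute
profile["birth_date"] = "1980-01-01"            # aggregation step
level_after = confidentiality_level(profile)    # level has risen
```

In a real system the function would be invoked by the data pipeline at each aggregation, extrapolation, or imputation step, so that protective measures track the current, not the original, level.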
Russian President Vladimir Putin highlighted the importance of work on artificial intelligence standardisation in his annual address to the Federal Assembly. He noted that, given the increasing speed of technological change in the world, “we must create our own technologies and standards in those areas which determine the future. It is primarily about artificial intelligence, genetics, new materials, energy sources, and digital technologies. ... As early as this year, it is necessary to launch a flexible mechanism of experimental legal regimes for the development and implementation of new technologies in Russia, to establish new regulation of big data turnover.”
In accordance with the National Strategy for the Development of Artificial Intelligence for the period until 2030, technical committee TC 164 “Artificial Intelligence” will carry out work on AI standardisation. The strategy provides for the creation of unified systems of standardisation and conformity assessment for technological solutions based on artificial intelligence, the development of international cooperation on standardisation, and the certification of AI-based products.
By: Sergey Garbuk, Chairperson of Technical Committee 164 “Artificial Intelligence”, established on the basis of RVC.