When it comes to artificial intelligence, it is important to keep the big picture in mind, because AI is used not only in medicine but in many other areas as well. We already covered the basics and key terms in the first part of our series: Artificial Intelligence Basics for Medical Technology.

Beyond the terminology, the ongoing standardisation work is complex in its own right, and some of it is not even directly relevant to medical technology. In this article, we would like to give you an overview.

Standards for Artificial Intelligence – let’s start at the top with ISO/IEC

The committee “ISO/IEC JTC 1/SC 42 Artificial intelligence” is a joint committee of ISO and IEC dedicated to developing standards in the field of artificial intelligence. Due to the committee’s set-up, it develops horizontal standards, which apply to all areas in which artificial intelligence (AI) is used.

Horizontal standards apply across several fields and are therefore rather generic. If no more specific standards exist, they represent the so-called “state of the art” and are thus applicable.

Vertical standards, on the other hand, are intended for specific fields of application and may override horizontal standards.

We are already familiar with this from the field of medical technology, where IEC 60601-1 serves as the horizontal standard for all electrical medical devices. Vertical standards then take the form of collateral standards applicable to product groups or fields of application (IEC 60601-1-3, IEC 60601-1-8) or of particular standards (the IEC 60601-2-X series) applicable to individual product types.

Already published standards of the JTC 1/SC 42 Committee on Artificial Intelligence

The following standards have been published by the committee JTC 1/SC 42 so far:

ISO/IEC TR 20547-5:2018 – Information technology – Big data reference architecture – Part 5: Standards roadmap
ISO/IEC TR 20547-2:2018 – Information technology – Big data reference architecture – Part 2: Use cases and derived requirements
ISO/IEC 20546:2019 – Information technology – Big data – Overview and vocabulary
ISO/IEC TR 24028:2020 – Information technology – Artificial intelligence – Overview of trustworthiness in artificial intelligence
ISO/IEC TR 29119-11:2020 – Software testing – Part 11: Guidelines on the testing of AI-based systems
ISO/IEC 20547-3:2020 – Information technology – Big data reference architecture – Part 3: Reference architecture
ISO/IEC TR 24027:2021 – Information technology – Artificial intelligence (AI) – Bias in AI systems and AI-aided decision making
ISO/IEC TR 24029-1:2021 – Artificial intelligence (AI) – Assessment of the robustness of neural networks – Part 1: Overview
ISO/IEC TR 24030:2021 – Information technology – Artificial intelligence (AI) – Use cases
ISO/IEC TR 24372:2021 – Information technology – Artificial intelligence (AI) – Overview of computational approaches for AI systems
ISO/IEC 38507:2022 – Governance of IT – Governance implications of the use of artificial intelligence by organizations
ISO/IEC 22989:2022 – Information technology – Artificial intelligence – Artificial intelligence concepts and terminology
ISO/IEC 23053:2022 – Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

If we look at this development more closely, we see that the first publications were all technical reports or terminology standards. Definitions listed in technical reports are not mandatory, but merely recommendations. This does not apply to definitions in “real” (normative) standards: those definitions are binding and should also find their way into your documentation to make life easier for the auditors.

Where is the “state of the art” for medical technology?

The IMDRF (International Medical Device Regulators Forum) has analysed the terminology standard ISO/IEC 22989 and adopted most of its definitions without significant changes; some were slightly adapted to medical technology. The IMDRF document “Machine Learning-enabled Medical Devices: Key Terms and Definitions” can therefore be regarded as “state of the art” for the medical field, so that the definitions of ISO/IEC 22989 do not have to be used directly.

Currently, the JTC 1/SC 42 committee is working on 25 further standards. The committee consists of around 300 members, who are organised in various groups and are very active in writing the corresponding standards.

In the next two to three years, a large number of new horizontal standards are expected to be published and, due to the lack of vertical standards in medical technology, they will represent the “state of the art”. The standards of JTC 1/SC 42 that have already been published are to be regarded as current and thus constitute, for the time being, a large catalogue of requirements for all AI products.

In addition, some countries are currently actively writing additional lists of requirements and thus setting national requirements.

Impact of horizontal non-medical standards on medical technology

With the publication of these “non-medical” standards, standards writers in the medical field are encouraged to review them and, if necessary, write their own standards where the requirements are not suitable for medical technology. Whether there will be an official statement on the suitability of these standards for medical technology remains open.

As an example of this approach, the AdHoc 6 committee analysed the non-medical standard ISO/IEC TR 29119-11 (Software testing – Part 11: Guidelines on the testing of AI-based systems). The group wanted to create a reference document by citing the applicable chapters. Unfortunately, the requirements in this standard proved too vague and not specific enough to be used for medical technology. The conclusion of AdHoc 6 was to write its own standard in the area of “AI testing”.

Standardisation activities outside the JTC 1/SC 42 Committee

Directly under IEC TC 62 sits the advisory group SNAIG (Software Network and Artificial Intelligence Advisory Group), which advises the TC 62 secretariat on AI and networks. So far, this group has published four reports.

The first report summarised the existing regulatory requirements of the following countries and regions: Brazil, China, the European Union, India, Japan, Korea, Saudi Arabia and Singapore.

In addition, the SNAIG made the following recommendations in the first half of the year:

- Improved cooperation with ISO/TC 215 (the technical committee responsible for, among others, IEC 62304)

- GOST clinical evaluation: the Russian standards committee wanted to write a standard for clinical assessments, but this was discontinued due to political problems.

In the second and third reports, the countries’ regulations were monitored further, and an attempt was made to involve all other relevant standards organisations (ISO, IEEE, AAMI).

In addition, the third report launched a survey on which topics are considered important in the field of artificial intelligence in medical technology. The survey produced the following ranking:

  1. Verification methods
  2. Cybersecurity
  3. Risk management
  4. AI data lifecycle
  5. Data quality assurance
  6. AI software lifecycle
  7. Quality management system for AI
  8. AI transparency
  9. Remote monitoring or control
  10. Privacy

This generated several proposals, and two additional groups were founded at the IEC that are now actively writing standards: PT 8, which will work in the area of clinical validation, and PT 63450, which will write a standard on the “testing of AI systems”.

In total, the SNAIG has made 18 recommendations since its inception and has intensified cooperation with the ISO/TC 210 and ISO/TC 215 committees. ISO/TC 210 writes the quality management standards for medical technology, while ISO/TC 215 covers standards in the field of medical software (mostly together with IEC SC 62 WG 1).

The aim is to avoid writing duplicate standards that might even contradict each other.

At ISO, the counterpart to SNAIG is the AdHoc group, which deals with ISO standards that should be adapted for AI. Some activities are expected here in the near future, but the NWIPs (“New Work Item Proposals”) have not yet been started.

So there is a lot going on around this topic at the moment, so keep your eyes and ears open. We will do the same, so stay tuned for part 3 of our series.

This article was written in collaboration with Dominik Kowalski, IEC member and the “Institute for Intelligent Cyber-Physical Systems (ICPS)” at Heilbronn University.

Please note that all details and listings do not claim to be complete, are without guarantee and are for information purposes only.