In part 2 of our series on artificial intelligence in medical technology, we looked at the topic of “Regulatory requirements”. We mentioned some of the standards on artificial intelligence and the committees involved. On 9 March 2021, the European Commission presented its goals for the digital transformation by 2030. In order to achieve these goals, more and more legislation on AI and digitalisation is being published, which poses a major challenge for research and development teams.

In April 2021, the European Commission published a proposal for a “Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union acts” (Proposal EU AI Act). 
On 14 June 2023, the European Parliament adopted its negotiating position on the AI Act, which formed the basis for the agreement reached at the end of 2023. The EPRS (European Parliamentary Research Service) has published a briefing on this (Artificial intelligence act (europa.eu)).
On 8 December 2023, the negotiators from the EU Parliament and Council reached a provisional agreement on the Artificial Intelligence Act (AI Act). This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while promoting innovation and making Europe a pioneer in this field (Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI).

The new provisions of the AI Regulation will apply to:

  • all providers of AI systems who develop an AI system or have one developed and then place it on the market or put it into service;
  • those who use AI systems under their own responsibility;
  • distributors outside the EU and EU importers;
  • providers and users of AI systems located in a third country if the output generated by these systems is used in the EU.

Categorisation of AI systems according to a risk-based approach

As the use of AI with its specific characteristics (such as complexity, lack of transparency, data dependency, autonomous behaviour) can (but does not have to) endanger the fundamental rights and safety of users, the AI Regulation applies a risk-based approach. The level of legal regulation required depends on the level of risk:

  • an unacceptable risk => prohibited AI practices,
  • a high risk => regulated AI systems with a high risk,
  • a limited risk => transparency and
  • a low or minimal risk => no obligations.

Data source: European Commission

Medical devices with AI are high-risk AI systems

As can be seen in the table, medical devices with AI applications are automatically classified as high-risk systems because they are covered by EU health and safety harmonisation legislation (such as the MDR and IVDR) and are subject to third-party conformity assessment.

All high-risk AI systems will be subject to the following new regulations:

  • Obligation to carry out an ex-ante (prior) conformity assessment:
    • registration of the medical device in an EU-wide database managed by the Commission before it is placed on the market, put into service or used;
    • fulfilment of existing conformity regulations (e.g. MDR or IVDR);
    • a provider’s own conformity assessment (self-assessment) for AI systems not covered by existing EU harmonisation legislation, as proof of compliance with the AI Act, followed by CE marking;
    • conformity assessment by a “notified body for AI systems with biometric identification”.
  • Further demands:
    • Fulfilment of further requirements, in particular with regard to risk management, testing, technical robustness, data training and data management, transparency, human oversight and cyber security
    • Fulfilment of requirements applies to providers, importers, distributors and users of AI systems
    • Providers from outside the EU require an authorised representative in the EU with the obligation to
      • ensure the conformity assessment,
      • establish a post-market surveillance system and
      • initiate corrective measures if required.

Requirements for high-risk AI systems

“High-risk AI-systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art…” (Recital (49))

Title III of the AI Act (Articles 6 to 51) deals with the requirements for high-risk AI systems. Here is an extract:

  • A risk management system must be implemented, documented and maintained throughout the life cycle of the AI system. The principle is similar to the risk management system required by the MDR and IVDR, adapted to AI-specific aspects (Article 9).
  • Data quality must be ensured as the basis of the training, validation and test data sets whenever AI techniques are used to train models with data, with transparency about the purpose and processes of data collection (Article 10).
  • Technical documentation must be prepared before placing on the market and kept up to date; for medical devices, the MDR/IVDR requirements must be taken into account (Article 11).
  • Processes and events must be recorded automatically (“logging”) during the operation of the high-risk AI system. The aim is to ensure traceability of the functioning of the AI system throughout its lifecycle to an extent appropriate to the intended purpose of the system (Article 12).
  • Transparency and comprehensibility must be ensured in the operation and use of the AI system (Article 13).
  • Information must be provided, including comprehensible, complete and compliant digital instructions for use (Article 13).
  • The AI system must be developed with human oversight proportionate to its risks and with an appropriate human-machine interface. Persons entrusted with human oversight must fully understand the capabilities and limitations of the high-risk AI system and be able to properly monitor its operation and interpret its results (Article 14).
  • High-risk AI systems shall achieve an appropriate level of accuracy (accuracy levels and the relevant accuracy metrics shall be specified in the instructions for use), robustness (against errors, faults or inconsistencies within the system or operating environment) and cybersecurity (technical solutions to address AI-specific vulnerabilities shall be appropriate to the circumstances and risks) in relation to their intended purpose and shall function consistently in this respect throughout their lifecycle (Article 15).
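
To make the logging and accuracy requirements more tangible for development teams, the following minimal Python sketch is our own illustration (the AI Act does not prescribe any implementation; all class and function names here are hypothetical). It shows how an AI-based device component might record inference events with timestamps for traceability (Article 12) and compute a declared accuracy metric of the kind that must be specified in the instructions for use (Article 15):

```python
import json
from datetime import datetime, timezone

# Hypothetical illustration of Article 12 "logging": each inference event
# is recorded with a UTC timestamp so the system's behaviour can be traced
# over its lifecycle.
class InferenceAuditLog:
    def __init__(self):
        self.records = []

    def log_event(self, model_version, input_id, output, confidence):
        self.records.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_id": input_id,
            "output": output,
            "confidence": confidence,
        })

    def export(self):
        # Logs would be retained for an appropriate period (cf. Article 20).
        return json.dumps(self.records, indent=2)

# Hypothetical illustration of Article 15: the accuracy metric declared in
# the instructions for use must be measurable and monitored consistently.
def classification_accuracy(predictions, ground_truth):
    correct = sum(p == t for p, t in zip(predictions, ground_truth))
    return correct / len(ground_truth)

audit = InferenceAuditLog()
audit.log_event("model-1.2.0", "scan-001", "benign", 0.97)
accuracy = classification_accuracy(["benign", "malignant", "benign"],
                                   ["benign", "malignant", "malignant"])
print(f"declared accuracy on test set: {accuracy:.2f}")  # 2 of 3 correct
```

A real implementation would of course depend on the system architecture and the chosen metrics; the point is that traceability and declared performance become auditable engineering requirements rather than mere documentation.
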
Obligations of providers, users and other parties

Providers of high-risk AI systems must

  1. ensure that their high-risk AI systems fulfil the requirements in Chapter 2;
  2. have a quality management system in accordance with Article 17, which largely corresponds to a QM system in accordance with ISO 13485 and the MDR/IVDR;
  3. prepare the technical documentation of the high-risk AI system (Articles 11 and 18 plus Annex IV);
  4. retain the logs automatically generated by their high-risk AI systems, where these logs are under their control (Article 20);
  5. ensure that the high-risk AI system undergoes the relevant conformity assessment procedure before it is placed on the market or put into service (Article 43);
  6. draw up an EU declaration of conformity (Article 48);
  7. affix the CE marking to the AI system (Article 49);
  8. comply with the registration obligations in the EU database referred to in Article 51;
  9. take the necessary corrective measures (including withdrawal or recall of the system where necessary) if the high-risk AI system does not fulfil the requirements in Chapter 2 (Article 21);
  10. inform the national competent authorities of the Member States in which they have made the system available or put it into service and, where applicable, the notified body, of the non-compliance and of any corrective measures already taken (Article 22);
  11. demonstrate, at the request of a national competent authority, that the high-risk AI system fulfils the requirements set out in Chapter 2 of this Title (Article 23).

Product manufacturers assume responsibility for the conformity of the AI-system as well as the obligations of the provider if the AI-system is placed on the market or put into service under the name of the product manufacturer (Article 24).

In principle, authorised representatives have the same obligations as those specified in the MDR/IVDR (Article 25).

According to Article 26, importers are obliged to:

  • ensure that the provider of the AI system has carried out the relevant conformity assessment procedure and drawn up the technical documentation in accordance with Annex IV, and that the AI system bears the required conformity marking and is accompanied by the required documentation and instructions for use;
  • indicate their name/trade name on the AI system and any accompanying documents;
  • ensure that storage or transport conditions, where applicable, do not jeopardise the system’s compliance;
  • demonstrate compliance to the national competent authorities upon reasoned request, including by granting access to the logs automatically generated by the AI system.

According to Article 27, distributors must:

  • verify that the required CE conformity marking is affixed, that the required documentation and instructions for use are included, and that the provider and importer have fulfilled their specified obligations;
  • ensure that storage or transport conditions, where applicable, do not jeopardise the system’s compliance;
  • take the necessary corrective measures if the AI system does not meet the requirements;
  • demonstrate compliance to the national competent authorities upon reasoned request.

Obligations of distributors, importers, users or other third parties under Article 28:

  • Each of the aforementioned actors is subject to the obligations of the provider if
    1. they place a high-risk AI system on the market or put it into service under their name or brand;
    2. they change the intended purpose of a high-risk AI system already on the market or in service;
    3. they make a significant change to the high-risk AI system.
  • The original provider must supply the new provider with all documentation required to fulfil the specified obligations.

Users must comply with Article 29 and:

  • use the AI system in accordance with the enclosed instructions for use;
  • assign human oversight to natural persons who have the necessary competence;
  • in the event of the occurrence of risks referred to in Article 65(1) or a serious incident or malfunction referred to in Article 62, inform the provider or distributor and suspend the use of the AI system;
  • retain the logs automatically generated by the AI system for an appropriate period of time;
  • use the information provided in accordance with Article 13 to fulfil their obligation to carry out a data protection impact assessment, where applicable.

What’s next?

The agreed legislative text must now be formally adopted by both the Parliament and the Council in order to become EU law. Parliament’s Internal Market and Civil Liberties Committees will vote on the agreement in one of their forthcoming meetings. Germany recently signalled its approval of the AI Act after brief signs of hesitation.

To avoid losing time, medical device manufacturers who already use AI technology in their medical devices, or plan to do so, should analyse the impact of the AI Act on their company, their processes and the medical devices themselves.

The experts at seleon will be happy to support you.

Please note that all details and lists are provided for information purposes only; they make no claim to completeness and are given without guarantee.