The EU AI Act: Implications for Research and Innovation

After several years of intense legislative efforts, the AI Act (Regulation 2024/1689) was finally published on 12 July 2024. As discussions about its implications across various sectors continue, it is important to evaluate how this regulation will impact both current and future research and innovation projects within the EU, particularly in the health domain, including initiatives like the TRUMPET project.

What is the AI Act?

The AI Act represents the world’s first comprehensive legal framework for artificial intelligence, setting a strong precedent for similar initiatives globally. Its primary goals are twofold: improving the functioning of the EU internal market and fostering investment in AI and innovation, while ensuring that AI technologies benefit people and enhance societal well-being.
As a horizontal piece of legislation, the AI Act applies across all sectors, governing the development, deployment, and use of AI systems within the EU. It establishes a framework for managing and mitigating risks related to health, safety, fundamental rights, and public interests (e.g., public health, protection of critical infrastructure, environmental impact) that AI systems may pose. The regulation aims to address these risks appropriately by promoting reliable and trustworthy AI without placing excessive burdens that could hinder innovation.
The AI Act entered into force on 1 August 2024. As a general rule, its provisions will become fully applicable from 2 August 2026, with some exceptions. For example, the prohibitions of certain AI practices take effect six months after the act's entry into force, while the rules for AI systems embedded into regulated products will apply after 36 months.

EU approach to regulating AI

Two key concepts are fundamental to understanding the AI Act: the definition of an “AI system” and the “risk-based approach.” In the context of the AI Act, an “AI system” refers to any machine-based system that meets all of the following criteria:

  • It is designed to operate with varying levels of autonomy;
  • It may exhibit adaptiveness after deployment;
  • For explicit or implicit objectives, it infers, from the input it receives, how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.

A defining characteristic of AI systems is their inference capability. Therefore, traditional software or rule-based systems, which perform automated tasks strictly according to predefined human instructions, do not fall under the scope of the AI Act.
AI systems may be used independently or as components of a product, whether physically integrated (embedded) or serving the product’s functionality without being integrated (non-embedded).
The risk-based approach means that although the AI Act applies across all domains, it does not regulate all AI systems equally. The rules are tailored to the level and scope of risks generated by the AI system. Based on this approach:

  • Certain AI practices are deemed unacceptable and are completely prohibited;
  • High-risk AI systems are subject to strict requirements, imposing obligations on both providers and deployers;
  • Certain AI systems must adhere to transparency obligations;
  • Minimal-risk AI systems remain largely unregulated.

The diagram below provides a visual overview of the risk-based approach.

The AI Act places significant emphasis on high-risk AI systems, which are subject to the most stringent regulations. For these systems, the requirements include: establishing comprehensive risk management, adhering to data governance practices to ensure the quality and relevance of data sets, maintaining detailed technical documentation and records, promoting transparency by providing information to deployers, and ensuring human oversight. Additionally, high-risk AI systems must meet standards for robustness, accuracy, and cybersecurity.
Moreover, high-risk systems that rely on AI model training must be developed using training, validation, and testing data sets that adhere to established quality criteria.

AI Act and medical devices

The AI Act is relevant to the use of AI in the health domain. In particular, AI-enabled tools used for medical purposes qualify as high-risk AI systems if they:

  • are safety components of products, or are themselves products, falling within the scope of the Medical Devices Regulation (MDR) or the In Vitro Diagnostic Medical Devices Regulation (IVDR), provided that
  • the product concerned undergoes a conformity assessment procedure involving a third-party conformity assessment body pursuant to the MDR or the IVDR.

In such a case, AI-enabled medical devices are regulated by both the AI Act and the MDR or the IVDR. This is significant for projects working on AI solutions for healthcare.

AI Act and research

However, this does not automatically mean that all obligations of the AI Act will apply to research and innovation projects. The EU lawmakers did not intend to stifle innovation, so the AI Act provides explicit exceptions. Notably, it does not apply to the research, testing, or development of AI systems or models prior to their being placed on the market or put into service. Such activities, however, must still comply with applicable EU laws, such as the GDPR.
Moreover, testing in real-world conditions, i.e. the temporary testing of an AI system for its intended purpose outside a laboratory with a view to gathering reliable data and assessing the system's conformity with the AI Act, is not covered by this exemption. Nevertheless, real-world testing does not constitute placing the AI system on the market or putting it into service, provided that it complies with specific conditions set out in the AI Act. These include drawing up a real-world testing plan and submitting it to the market surveillance authority in the Member State where the testing takes place, limits on the duration of testing, and appropriate safeguards for persons belonging to vulnerable groups due to their age or disability.

AI Act and the TRUMPET project

The TRUMPET project places significant emphasis on the security and privacy of AI systems. The goal of the project is to research and develop novel privacy-enhancement methods for federated learning, and to deliver a highly scalable federated AI service platform for researchers that will enable AI-powered studies of siloed, multi-site, cross-domain, cross-border European datasets in compliance with the GDPR and privacy requirements. The generic TRUMPET platform will be piloted, demonstrated and validated in specific use cases of cancer hospitals.
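To make the federated-learning setting more concrete, the sketch below shows one simplified training round in which each site computes a local model update, clips it and adds noise before anything leaves the site, and only the aggregated update reaches the coordinating server. This is a minimal illustration of a common privacy-enhancing pattern only; the function names, noise parameters and toy data are assumptions for the example and do not describe the actual TRUMPET methods or platform.

```python
# Illustrative sketch of one privacy-enhancing federated-averaging round.
# All names and parameters are hypothetical, not the TRUMPET design.
import numpy as np

def local_update(global_weights: np.ndarray, site_data, lr: float = 0.1) -> np.ndarray:
    """Stand-in for one site's local training step; returns a weight delta."""
    X, y = site_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # one least-squares gradient step
    return -lr * grad

def privatize(delta: np.ndarray, clip_norm: float = 1.0, noise_std: float = 0.05) -> np.ndarray:
    """Clip the update and add Gaussian noise so raw site data is never shared."""
    norm = np.linalg.norm(delta)
    delta = delta * min(1.0, clip_norm / (norm + 1e-12))
    return delta + np.random.normal(0.0, noise_std, size=delta.shape)

def federated_round(global_weights: np.ndarray, sites) -> np.ndarray:
    """Aggregate the privatized updates from all sites (FedAvg-style averaging)."""
    updates = [privatize(local_update(global_weights, s)) for s in sites]
    return global_weights + np.mean(updates, axis=0)

# Toy example with three simulated hospital sites.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(10):
    weights = federated_round(weights, sites)
print(weights)
```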
Although, as mentioned above, the AI Act does not apply to scientific research, all research and development (R&D) activities must still adhere to established ethical and professional standards for scientific research, as well as comply with all relevant Union laws. These requirements, in particular related to data protection compliance, are mapped and monitored by WP5 of the project. Additionally, the project’s exploitation strategy will examine the potential for the placement of the TRUMPET results on the market and their deployment in real-world conditions. Given the potential commercialization of the project’s results, the project is looking into specific provisions of the AI Act applicable to AI systems, in particular in the health domain. This will be explored further through project workshops and the ongoing work of WP5, led by Timelex.

Last but not least, the AI Act stipulates that high-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities. In the context of the TRUMPET project, researchers will implement various privacy-enhancing technologies, apply curious-attacker threat models and engage external penetration testing experts to measure the functionality, cyber resilience and scalability of the platform. Once these capabilities have been sufficiently developed, they can become key exploitation aspects of the TRUMPET platform, contributing to the development of cyber-resilient AI models.
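As a rough illustration of what a curious-attacker evaluation can look like, the toy sketch below runs a simple loss-threshold membership inference check: the attacker guesses that records with unusually low loss were part of the training data, and the gap between true and false positive rates indicates how much the model leaks. The model, thresholds and data here are assumptions for the example; the actual TRUMPET threat models and penetration tests are defined within the project.

```python
# Illustrative toy membership inference check (loss-threshold attack).
# Hypothetical names and data, not the TRUMPET evaluation methodology.
import numpy as np

def fit_least_squares(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Train a simple linear model as a stand-in for the model under test."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def per_sample_loss(w: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Squared error of each record under the model."""
    return (X @ w - y) ** 2

def membership_guess(w: np.ndarray, X: np.ndarray, y: np.ndarray, threshold: float) -> np.ndarray:
    """The attacker guesses 'training member' when a record's loss is below the threshold."""
    return per_sample_loss(w, X, y) < threshold

rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(100, 5)), rng.normal(size=100)
X_out, y_out = rng.normal(size=(100, 5)), rng.normal(size=100)

w = fit_least_squares(X_train, y_train)
threshold = np.median(per_sample_loss(w, np.vstack([X_train, X_out]),
                                      np.concatenate([y_train, y_out])))

# Attack advantage: how much better than chance the attacker separates
# training records from unseen ones. Values near zero indicate resilience.
tpr = membership_guess(w, X_train, y_train, threshold).mean()
fpr = membership_guess(w, X_out, y_out, threshold).mean()
print(f"attack advantage: {tpr - fpr:.2f}")
```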

Eleni Moraiti, Magdalena Kogut-Czarkowska (TIMELEX)