Regulatory

Fit4AI Act: a process model for insurance

by Stefan Nörtemann / 9 July 2025

The incremental entry into force of the AI Act

Following years of groundwork, the European Union regulation known as the Artificial Intelligence Act (AI Act) entered into force on 1 August 2024. As is standard with EU regulations, the provisions become law immediately in all member states without requiring any further national implementing acts. However, transition periods do apply until the provisions take effect. These are normally two years. This is also the case with the AI Act, with a few exceptions [1], which means that most articles will be mandatory from 1 August 2026 onwards.

A process model for insurance

I have presented details about the legal framework, the individual rules and the ramifications for the insurance industry in my blog posts dated 15/07/2024, 03/02/2022 and 22/02/2022. This time, I consider the following question: What is to be done with an AI project in the insurance sector in terms of the regulation?

 

Let us consider the following situation: an insurance company develops an AI application or purchases one from a software provider (on-premises or in the cloud) and intends to utilise it within the company. Which rules of the AI Act apply depends strongly on the role in question: user, product manufacturer, provider, deployer, importer and others. In this instance, we limit ourselves to the role of user, i.e. the insurance company utilising an AI in its business.

 

To make it easier to find a way through the labyrinth of requirements resulting from 180 recitals, 113 articles and 13 annexes, the following process model for insurance companies describes an efficient way to determine whether an AI application falls under the AI Act and, if so, to which category the application belongs and what this means for the scope of regulation.

 

Does the AI Act apply?

When considering the question of whether the provisions of the AI Act apply to an AI application, we must first check whether the application falls under the scope of Article 2 (territorial) and Article 3 (material).

 

Territorial applicability

Is the insurer headquartered in the EU, is the application being placed on the market in the EU, is it being used by users in the EU or are its outputs used in the EU? If the answer to any of these questions is yes, then the AI Act applies.
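
As a purely illustrative sketch, this check can be written as a simple disjunction. The Python names below are our own shorthand for the four questions, not terms from the Act:

```python
from dataclasses import dataclass

@dataclass
class TerritorialFacts:
    # Hypothetical field names; each answers one of the four questions above.
    insurer_headquartered_in_eu: bool
    placed_on_market_in_eu: bool
    used_by_users_in_eu: bool
    outputs_used_in_eu: bool

def ai_act_applies_territorially(facts: TerritorialFacts) -> bool:
    # Article 2: a single "yes" is sufficient for territorial applicability.
    return any([
        facts.insurer_headquartered_in_eu,
        facts.placed_on_market_in_eu,
        facts.used_by_users_in_eu,
        facts.outputs_used_in_eu,
    ])
```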

 

Material applicability

Next, we must consider whether the application is ‘artificial intelligence’ at all within the meaning of the AI Act. To quote Article 3: ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.

 

This definition is not universally valid; it differs from other definitions in the technical literature. Only if this definition applies to the AI application in question do the provisions of the AI Act apply.
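
Read as a checklist, the elements ‘machine-based’, ‘varying levels of autonomy’ and ‘inferring outputs from input’ are cumulative, while adaptiveness after deployment is optional (‘may exhibit’). A minimal sketch of this reading, with hypothetical parameter names:

```python
def is_ai_system(machine_based: bool,
                 operates_with_some_autonomy: bool,
                 infers_outputs_from_input: bool,
                 adaptive_after_deployment: bool = False) -> bool:
    # Article 3: the first three elements must all be present; adaptiveness
    # ("may exhibit") is not required, so it does not affect the result.
    return (machine_based
            and operates_with_some_autonomy
            and infers_outputs_from_input)
```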

 

Prohibited systems?

In the next step, we must clarify whether the AI application falls into the category of ‘prohibited systems’. Article 5 contains an extensive set of criteria that must be reviewed in detail. It is important to know that Article 5 already applies: the operation of prohibited systems has been banned since 2 February 2025.

 

The catalogue of prohibited systems contains a total of eight subject areas. It is advisable to check these carefully, as an infringement of Article 5 is punishable with exceptionally strict sanctions. [2]
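
To make the sanction ceiling concrete: using the figures from Article 99 (see note [2] below), the maximum fine is the higher of EUR 35 million and 7% of worldwide annual turnover. A small illustrative calculation:

```python
def max_fine_article_5(worldwide_annual_turnover_eur: float) -> float:
    # Article 99: up to EUR 35,000,000 or up to 7% of total worldwide
    # annual turnover for the preceding financial year, whichever is higher.
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: for an insurer with EUR 2 billion in turnover, the ceiling is
# EUR 140 million, since 7% of turnover exceeds the fixed amount.
print(max_fine_article_5(2_000_000_000.0))  # 140000000.0
```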

 

Prohibited practices generally include manipulative subliminal techniques and the exploitation of a person’s vulnerability due to their age or disability in order to influence that person’s behaviour in a way that causes or is likely to cause physical or mental harm to them or to another person.

 

High-risk systems

We must then check whether the AI application falls into the category of high-risk systems. These are defined in Article 6, and the fields of application concerned are listed in full, by sector, in Annex III.

 

Point 5(c) of Annex III sets out the criterion for AI applications in the insurance sector. To quote it word for word: ‘AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of health and life insurance’. The comprehensive requirements for high-risk systems are therefore limited to the health and life insurance segments, and within those segments to risk assessment and pricing.

 

There are also exceptions. For example, Article 6 (3) states: ‘An AI system shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.’

 

If the AI application meets the criteria in point 5(c) of Annex III and none of the exceptions set out in Article 6 (3) apply, we are dealing with a high-risk system. As users, we must then meet the extensive requirements of Articles 8 to 15. These include, but are not limited to, specific risk management, data governance, detailed technical documentation and record-keeping requirements, transparency requirements, human oversight, and specific requirements for accuracy, robustness and cybersecurity. We will comment further on the requirements for providers, deployers, importers and users of high-risk systems in a subsequent blog post.
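
As a sketch, the two-stage test (Annex III scope, then the Article 6 (3) exception) might look as follows; the parameter names are our own shorthand, not terms from the Act:

```python
def is_high_risk_insurance_system(risk_assessment_or_pricing: bool,
                                  relates_to_natural_persons: bool,
                                  line_of_business: str,
                                  significant_risk_of_harm: bool) -> bool:
    # Stage 1 - Annex III, point 5(c): risk assessment and pricing in
    # relation to natural persons in health and life insurance.
    in_annex_iii_scope = (risk_assessment_or_pricing
                          and relates_to_natural_persons
                          and line_of_business in {"health", "life"})
    # Stage 2 - Article 6(3): not high-risk if the system poses no
    # significant risk of harm to health, safety or fundamental rights.
    return in_annex_iii_scope and significant_risk_of_harm
```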

 

AI applications with special transparency requirements

If the AI application is not a high-risk system, we must check whether it meets one of the criteria for applications with special transparency requirements pursuant to Article 50. Specifically, this applies to AI applications that interact with natural persons, emotion recognition systems and AI systems designed to generate deep fakes. In these cases, the use of AI must be made transparent to users. In the insurance sector, this is only relevant when chatbots are used.

 

General-purpose AI models

Finally, we must determine whether the AI model is a large language model, as it might then fall into the category of general-purpose AI models. If the AI application is made available under a free and open-source licence, no further action needs to be taken. Otherwise, we must decide whether it is a ‘general-purpose AI model with or without systemic risk’.

 

Classification criteria are set out in Annex XIII. Key criteria include the complexity of the model measured in FLOPs (floating point operations used for training), the number of parameters of the model, the size of the training data set and the number of registered end users.
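
One threshold from the Act itself gives a feel for the scale: under Article 51 (2), a general-purpose AI model is presumed to have high-impact capabilities, and hence systemic risk, when the cumulative compute used for its training exceeds 10^25 FLOPs. A minimal sketch:

```python
SYSTEMIC_RISK_TRAINING_COMPUTE_FLOPS = 1e25  # Article 51(2) presumption threshold

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    # Above this cumulative training-compute threshold, a general-purpose
    # AI model is presumed to have high-impact capabilities (systemic risk).
    return cumulative_training_flops > SYSTEMIC_RISK_TRAINING_COMPUTE_FLOPS
```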

 

Depending on these criteria, the requirements set out in Articles 53 and 55 must be taken into account. Specifically, these relate to technical documentation, a policy to comply with Union copyright law, the assessment of systemic risks and much more.

 

Are there voluntary commitments?

If the AI application does not fall into any of the previous categories, there are no regulatory requirements under the AI Act. In this case, all that remains is to check whether the industry or the company itself has made a voluntary commitment relating to AI applications that must be taken into account.
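
Taken together, the process model reads as a simple decision cascade. The following sketch is purely illustrative; each boolean parameter stands in for the detailed legal test of the corresponding step above:

```python
def classify_under_ai_act(in_scope: bool,
                          prohibited: bool,
                          high_risk: bool,
                          transparency_duty: bool,
                          general_purpose_model: bool) -> str:
    # Each parameter represents the outcome of one step of the process model.
    if not in_scope:
        return "AI Act does not apply - check voluntary commitments only"
    if prohibited:
        return "prohibited practice (Article 5) - must not be operated"
    if high_risk:
        return "high-risk system (Article 6, Annex III) - Articles 8 to 15 apply"
    if transparency_duty:
        return "special transparency requirements (Article 50)"
    if general_purpose_model:
        return "general-purpose AI model (Articles 53 and 55)"
    return "no AI Act category - check voluntary commitments"
```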

 

What needs to be done?

On the basis of the process model described above, we now know how to categorise our AI application. In a future blog post, we will address the specific steps that must then be taken.

 

[1] Regulations on prohibited systems (Article 5) and AI literacy (Article 4) have been in effect since 2 February 2025. Special regulations concerning governance structures and general-purpose AI systems come into force on 2 August 2025.

 

[2] In accordance with Article 99, fines of up to EUR 35,000,000 or up to 7% of an undertaking’s total worldwide annual turnover for the preceding financial year – whichever is higher – can be imposed.
