Artificial Intelligence

Interpretable machine learning in the insurance environment – objectives of model interpretability

by Thomas Hofmann / 31 October 2024

The last blog post took a closer look at the concept of explainability in connection with AI model predictions. It explained that explainability does not refer exclusively to the representation of causal relationships, but depends strongly on the given context. This article presents the different objectives of model interpretability that arise from the areas in which AI models are applied, the audiences they address and the possible consequences of inaccurate predictions.

General conditions for algorithmic data processing [1]

The same technology-neutral general conditions apply to all methods in the insurance sector, regardless of the underlying type of model. The requirements imposed by financial supervisory authorities and legislators include strict rules on data quality and security, fairness and non-discrimination, and the stability and explainability of model results. These aspects have remained largely unchanged over time and must always be viewed in the overall context rather than in isolation. Consequently, all properties that were previously tested for traditional statistical models, such as generalised linear models (GLMs), must also be verified for complex ML methods.

 

Objectives of model interpretability [2]

Explainability is not only a regulatory requirement, but is also fundamental to strengthening trust in a model decision. The independent High-Level Expert Group on Artificial Intelligence set up by the European Commission stresses that the required degree of explainability depends on the audience, the context and the potential consequences of an erroneous decision. Depending on the user and the area of application of an AI model, different objectives for its comprehensibility arise. At least four reasons for the need to explain AI systems and their predictions can be identified, although they overlap and cannot be separated sharply from one another.

 

Explain to discover: The overarching goal of data science is to glean insights and correlations from data. Many problems can be solved more effectively by analysing large data sets with machine learning models. Beyond the data itself, the models become an independent source of knowledge: their interpretability makes it possible to extract the additional knowledge they have captured. This provides a deeper insight into the data, makes it easier to identify correlations and allows hypotheses about causal relationships to be formulated.
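
As a purely illustrative example (not part of the referenced sources), the following minimal sketch shows how knowledge can be extracted from a fitted model using permutation feature importance in scikit-learn; the file claims.csv, the target column claim_occurred and the choice of model are hypothetical placeholders.

```python
# Minimal sketch: extracting feature-level knowledge from a fitted model via
# permutation importance (scikit-learn). The file "claims.csv", the target
# column "claim_occurred" and the model choice are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

df = pd.read_csv("claims.csv")              # hypothetical portfolio data
X = df.drop(columns=["claim_occurred"])     # numeric risk features
y = df["claim_occurred"]                    # 1 = at least one claim reported
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# How much does the hold-out score drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, drop in ranking:
    print(f"{name:30s} {drop:+.4f}")
```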

 

Explain to improve: Another incentive to develop interpretable models is the need to improve them continuously. A model that generates understandable, comprehensible predictions is easier to develop, and potential errors are easier to identify. Interpretability is crucial for ML models, as it is the only way to test and verify their results. Understanding an erroneous model prediction provides clear starting points for correcting the system.
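
One possible way to put 'Explain to improve' into practice, sketched here under assumptions, is to fit a shallow surrogate tree to a model's errors on a hold-out set in order to locate the segments where it fails; the variables model, X_test and y_test are assumed to come from an already fitted regression model (for instance a claim-severity model) and are not defined in the article.

```python
# Minimal sketch of 'Explain to improve': find segments where a fitted model
# performs poorly by fitting a shallow decision tree to its absolute errors.
# `model`, `X_test` and `y_test` are assumed to exist (e.g. a claim-severity
# model and its hold-out set); they are placeholders, not part of the article.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

abs_error = np.abs(y_test - model.predict(X_test))

# The surrogate tree partitions the hold-out data into high- and low-error
# segments using the original input features.
error_tree = DecisionTreeRegressor(max_depth=3, random_state=0)
error_tree.fit(X_test, abs_error)

# Human-readable rules: branches with a high predicted error point to
# sub-populations on which the model should be improved.
print(export_text(error_tree, feature_names=list(X_test.columns)))
```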

 

Explain to justify: There is an increasing need for explainability to ensure that AI-supported decisions are not erroneous. In this context, the term explanation usually refers to reasons or justifications for a specific outcome, not to a detailed description of the internal process or logic behind the decision. Approaches to model explanation and interpretability provide the information needed to validate models, check the plausibility of their results and justify them, especially when unexpected decisions have been made.
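
A plausibility check of this kind can be as simple as a partial dependence curve for a single feature, as in the following hand-rolled sketch; model, X_test and the column policyholder_age are hypothetical placeholders, and a real validation would of course use established tooling.

```python
# Minimal sketch of a plausibility check via partial dependence, computed by
# hand for one feature. `model`, `X_test` and the column "policyholder_age"
# are assumed placeholders.
import numpy as np

def partial_dependence_curve(model, X, feature, grid):
    """Average model prediction when `feature` is fixed to each grid value."""
    averages = []
    for value in grid:
        X_mod = X.copy()
        X_mod[feature] = value
        averages.append(model.predict(X_mod).mean())
    return np.array(averages)

grid = np.linspace(X_test["policyholder_age"].min(),
                   X_test["policyholder_age"].max(), num=20)
curve = partial_dependence_curve(model, X_test, "policyholder_age", grid)

# A reviewer would now check whether the curve is plausible, e.g. roughly
# monotone where actuarial reasoning expects it to be.
for age, avg in zip(grid, curve):
    print(f"age ≈ {age:5.1f}   average prediction {avg:.3f}")
```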

 

Explain to control (individual predictions): Comprehensive explainability is not only intended to justify model behaviour on a global scale; it can also help to prevent problems with individual predictions. An in-depth understanding of system behaviour reveals what would have had to be different in an individual case to achieve an alternative result. This counterfactual perspective plays a key role in improving control of the system.
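
The following minimal sketch illustrates this counterfactual idea with a brute-force search for the smallest single-feature change that flips an individual prediction; model (a fitted binary classifier), the rejected case x (a pandas Series) and the candidate value grids are hypothetical placeholders rather than a recommended procedure.

```python
# Minimal sketch of 'Explain to control': search for single-feature changes
# that would flip an individual prediction. `model` (fitted binary classifier),
# `x` (one pandas Series, e.g. a rejected application) and the value grids are
# hypothetical placeholders.
def single_feature_counterfactuals(model, x, candidate_grids):
    """Return (feature, old value, new value) changes that flip the prediction."""
    original = model.predict(x.to_frame().T)[0]
    flips = []
    for feature, grid in candidate_grids.items():
        for value in grid:          # grids ordered from smallest change upwards
            x_cf = x.copy()
            x_cf[feature] = value
            if model.predict(x_cf.to_frame().T)[0] != original:
                flips.append((feature, x[feature], value))
                break               # keep only the first (smallest) flip found
    return flips

# Hypothetical usage:
# single_feature_counterfactuals(model, x,
#     {"sum_insured": [90_000, 80_000], "deductible": [500, 1_000]})
```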

 

Objectives of various model audiences

Depending on the area of application, different users inside and outside an insurance company come into contact with AI systems in their day-to-day work. They usually have different technical backgrounds and information requirements, so which of the aforementioned interpretability objectives takes priority depends on the target group in question. This is illustrated below using four different actors in the insurance context.

 

In underwriting, for example, the principle of ‘Explain to discover’ comes to the fore. Here, insurance companies want to use detailed data analysis to identify hidden relationships and patterns that allow conclusions to be drawn about the risks within a group of insured persons. Interpreting the model helps to identify the risk-related characteristics that influence the risk assessment. For example, if the model shows that certain pre-existing conditions lead to claims at an above-average rate, this may prompt the development of new pricing rules or the adjustment of existing ones in order to avoid loss-making policies in the future.
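
To make the underwriting example tangible, the sketch below compares the model's average predicted claim probability for risks with and without a given pre-existing condition; model, X_test and the indicator column has_diabetes are hypothetical placeholders.

```python
# Minimal sketch of the underwriting example: compare average predicted claim
# probabilities for risks with and without a given pre-existing condition.
# `model`, `X_test` and the indicator column "has_diabetes" are hypothetical.
import pandas as pd

scores = pd.Series(model.predict_proba(X_test)[:, 1], index=X_test.index)

by_condition = scores.groupby(X_test["has_diabetes"]).mean()
portfolio_avg = scores.mean()

print(f"portfolio average:     {portfolio_avg:.3f}")
print(f"without condition (0): {by_condition[0]:.3f}")
print(f"with condition (1):    {by_condition[1]:.3f}")
# A markedly higher average for the flagged group would support creating or
# adjusting a pricing rule for this segment.
```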

 

(Product) developers and modellers pursue the goal of ‘Explain to improve’ and focus on optimising the algorithm to produce more accurate predictions and increase the performance of the model. Analysing the model plays a key role in identifying both weaknesses and potential improvements. This makes it possible, for example, to develop more accurate pricing models that allow insurance premiums to be calculated more precisely, as well as more personalised insurance offers tailored to the individual needs and behavioural patterns of customers.

 

Model validators are responsible for checking the stability and suitability of the model, especially with regard to regulatory requirements, which corresponds to the objective of ‘Explain to justify’. Their main goal is to ensure that the model’s predictions are accurate and reliable. Sensitivity analyses are used to check how sensitively the model reacts to changes in the input data. Model validators also ensure, for example, that the model does not use any discriminatory characteristics, such as gender or religious affiliation, or traits that are strongly correlated with them.
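
Two such validation checks are sketched below: a simple sensitivity analysis that perturbs each numeric input, and a test for model inputs that are strongly correlated with a protected attribute; model, X_test and the protected attribute series gender (kept outside the model inputs) are assumed placeholders.

```python
# Minimal sketch of two validator checks. `model`, `X_test` and the protected
# attribute `gender` (a 0/1 series kept outside the model inputs) are
# hypothetical placeholders.
import numpy as np

baseline = model.predict(X_test)

# (1) Sensitivity analysis: how much do predictions move when each numeric
#     input is perturbed by 1 %?
for col in X_test.select_dtypes("number").columns:
    X_pert = X_test.copy()
    X_pert[col] = X_pert[col] * 1.01
    shift = np.abs(model.predict(X_pert) - baseline).mean()
    print(f"sensitivity of {col:25s} {shift:.4f}")

# (2) Indirect discrimination: are any model inputs strongly correlated with
#     the protected attribute, even though it is not used by the model itself?
correlations = X_test.select_dtypes("number").corrwith(gender.astype(float))
print(correlations.abs().sort_values(ascending=False).head())
```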

 

Policyholders also attach great importance to the transparency of model predictions; for them, the objective of ‘Explain to control’ is at the forefront, particularly with regard to individual predictions. These should be explained to a generally non-expert audience in a comprehensible way, ideally by means of simple causal relationships. If a customer wishes to find out why they have been denied occupational disability cover or why certain health insurance benefits are excluded as a result of the preceding risk assessment, they have a legitimate interest in an understandable justification. Ideally, specific measures can be identified that would still allow the customer to obtain the desired insurance cover, even if no human was involved in the decision-making process.

 

Outlook

The next blog post will take a more detailed look at the taxonomy of methods for model explainability.

 

 

[1] Deutsche Aktuarvereinigung e.V. (DAV) (2024). Explainable Artificial Intelligence: Ein aktueller Überblick für Aktuarinnen und Aktuare.

 

[2] Adadi, A. and Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160.

 

 
