The potential of large language models in the insurance sector
A concise exploration
An introduction to how these artificial intelligence models are trained to produce accurate results.
The recent development of large (advanced) language models has lowered the knowledge and technological barriers to employing NLP models in a wide variety of use cases. In this paper we explore how these models could be leveraged in the insurance sector.
Insurers are well accustomed to analysing structured data by statistical analysis or machine learning algorithms. However, a large part of the available data within an insurance company is unstructured, with textual information playing a key role in many of its core processes, like client interaction, claims handling or underwriting.
Large language models have given us the tools to analyse these data without building complex proprietary models, and to automate even more processes than before.
In this article we will discuss in short:
- What is natural language processing (NLP)?
- How has it evolved?
- What are large language models (LLMs)?
- How does this relate to generative artificial intelligence (genAI)?
- What are some of the most compelling use cases in the insurance industry?
- Challenges, risks and limitations
The idea behind this paper is to give some context to a discussion that has been arising in many insurers today.
Natural language processing
How computers can “understand” human language
Natural language processing, or in short NLP, is a field within artificial intelligence. It is a combination of artificial intelligence, computer science and linguistics. NLP aims to enable systems to understand, interpret, generate or respond to human language in a meaningful way.
It has a wide variety of uses, like machine translation, sentiment analysis, speech recognition, text classification, topic detection and more.
The techniques in NLP have seen a rapid progression over the last 10 years, moving from simple methods like “bag of words” to more advanced deep learning approaches. In this last category, state-of-the-art transformer models (used in large language models) are now able, more than ever before, to capture context when analysing textual data.
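To make this progression concrete, the minimal Python sketch below contrasts a bag-of-words representation with a contextual sentence embedding from a small transformer model. It assumes the scikit-learn and sentence-transformers packages; the model name and the example sentences are illustrative, not taken from this paper.

```python
# A minimal sketch contrasting a bag-of-words representation with a
# contextual transformer embedding. The model name and the example
# sentences are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sentence_transformers import SentenceTransformer

claims = [
    "The insured reports water damage to the kitchen floor.",
    "Kitchen flooring damaged by a burst water pipe, says the policyholder.",
]

# Bag of words: only word counts survive; order and context are lost.
bow = CountVectorizer().fit_transform(claims)
print(bow.toarray())

# Transformer embedding: each sentence becomes a dense vector that captures
# meaning, so paraphrases like the two claims above end up close together.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = encoder.encode(claims)
print(embeddings.shape)  # e.g. (2, 384) for this model
```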
The evolutionary leap
How LLMs are set to replace traditional NLP
The landscape of language understanding and generation is undergoing a transformative shift. The rise of large language models (LLMs) stands out as a defining moment. Where NLP has been a cornerstone for language-related tasks, the emergence of LLMs has revolutionised the way we approach linguistic challenges.
Traditional NLP approaches often demanded extensive feature engineering and more complex training setups, and they required substantial domain expertise to fine-tune models effectively. In contrast, LLMs offer a more streamlined approach due to their massive scale and pretraining capabilities. The massive amount of diverse textual data that goes into these LLMs allows them to automatically learn intricate language patterns without the need for extensive feature engineering. This shift reduces the burden on data scientists and widens the range of professionals who can harness the power of these advanced language models.
Large language models
Bigger models and better performance
Large language models are advanced transformer models, engineered to generate text that closely mimics human writing. The quality of these models has witnessed a remarkable surge in recent years due to the vast amount of training data available on the internet, along with the increase in available computing power. This has allowed the size of these LLMs to expand exponentially, escalating from hundreds of millions of parameters in 2018 to hundreds of billions of parameters in 2023.1,2
By training them with vast amounts of textual data, LLMs are now able to generate human-like output. We do have to keep in mind that these models are statistical algorithms that use probabilistic models to generate sequences of words that are likely to occur in human language. LLMs are not developed for fact checking and therefore caution should be given to the veracity of the output.
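To make this probabilistic nature concrete, the minimal sketch below asks a small, publicly available model for its probability distribution over the next token. GPT-2 is used purely as an illustration, and the prompt text is a made-up example.

```python
# A minimal sketch of what "probabilistic" means here: the model assigns a
# probability to every possible next token, and output is generated from that
# distribution. GPT-2 is used purely as a small public example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The insurance claim was", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]        # scores for the next token
probs = torch.softmax(logits, dim=-1)

top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")  # most likely next words
```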
Generative AI
The power of creation
Large language models are part of what is called “generative AI.” Generative AI is an umbrella term for AI models that can generate new data based on patterns learned from existing data. This can be more than just text: images, voices and music can also be created by generative AI.
Generative AI, and especially large language models, has already seen applications in several areas, like chatbots or virtual assistants, language translation, Q&A systems and even code generation and debugging.
What can the insurance sector gain from these powerful AI models? In the next section we will discuss a number of use cases on how large language models can be of use in the insurance sector.
Use cases in insurance
How LLMs can transform the insurance sector
Across industries, the most promising use cases of LLMs can be classified in the following areas:
- Semantic search (analysing textual data at scale)
- Content generation
- Customer interaction
In this next section we will shed some light on each of those categories of use cases and their potential to reshape insurance.
1. Semantic search
Claims handling
Claims handling is a process that is rich in unstructured textual data: think of claims forms with free-text fields filled in by the insured, correspondence with experts or client communication. NLP and LLMs can play an important role in analysing this textual information to determine whether a claim file is complete, what the potential severity of the claim is and to which department the claim should be distributed.
LLMs can, for instance, be fine-tuned to help with tasks such as data or text classification. Even though this is already being done in the insurance sector, using LLMs can improve these tasks significantly. By fine-tuning these models on examples of the task they need to complete, existing pretrained LLMs can be used on new domains the model hasn’t seen before.3 In this way the internal distribution of claims to different departments or specialists can be automated, as in the sketch below.
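The sketch uses a publicly available zero-shot classification pipeline from the Hugging Face transformers library. The claim text and department labels are illustrative assumptions; a production setup would fine-tune on the insurer’s own labelled examples.

```python
# Minimal sketch: routing a claim description to an internal department with
# a zero-shot classifier. The claim text and department labels are
# illustrative; a real setup would fine-tune on the insurer's own data.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

claim_text = ("My car was rear-ended at a traffic light and the rear bumper "
              "needs to be replaced.")
departments = ["motor claims", "property claims",
               "liability claims", "personal injury claims"]

result = classifier(claim_text, candidate_labels=departments)
print(result["labels"][0], round(result["scores"][0], 2))  # best match
```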
LLMs can also be used for information extraction. They can be trained to extract specific data from emails or documents. This data can then be used for further analysis, for instance to determine severity, or for further processing.
In addition, if the model is not able to extract the necessary data, a draft response could be generated stating that certain information is missing and requesting that information. This way, claims handlers can spend more time on claims that require more human interaction and expertise.
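As a minimal sketch of both steps, the example below uses the OpenAI Python client (version 1.x) to extract fields from a claims e-mail and to draft a follow-up request when something is missing. The model name, prompt wording and field names are illustrative assumptions, not a prescribed setup.

```python
# Minimal sketch: extracting structured fields from a claims e-mail and
# drafting a follow-up when data is missing, using the OpenAI Python client
# (>=1.0). Model name, prompt wording and field names are assumptions.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

email_body = "Dear insurer, my laptop was stolen from my car last Friday."
required_fields = ["policy_number", "date_of_loss", "damaged_item"]

extraction = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},  # ask for plain JSON output
    messages=[{
        "role": "user",
        "content": ("Extract these fields from the claims e-mail as JSON, "
                    f"using null when missing: {required_fields}\n\n{email_body}"),
    }],
)
extracted = json.loads(extraction.choices[0].message.content)

# If fields are missing, let the model draft a request for them.
missing = [f for f in required_fields if not extracted.get(f)]
if missing:
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Draft a short, polite e-mail asking the client for: {missing}",
        }],
    )
    print(draft.choices[0].message.content)
```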
Fraud detection
NLP has already shown its usefulness in fraud detection.4 With LLMs evolving quickly, they can add value by, for example, enhancing pattern recognition and enabling more advanced text analysis. They might help reduce false positives or even increase true positives. In addition, LLMs can help to generate human-like explanations of the outcome of a fraud detection model, fostering the adoption of the model outcome by fraud investigators.
2. Content generation
Summarisation
Summarising is a tricky task, as it requires extracting all the important information and writing it down in a short and understandable way. A good summary is not just a copy and paste of several important parts; it is a complete piece of text that reads easily.
LLMs are a powerful tool for the task of summarisation. They can generate a shorter version of input text without leaving out the “important” pieces.
Examples where this could be used include customer service, where call centre contact with clients can be summarised automatically in the customer relationship management (CRM) system, and underwriting, where large policy documents can be condensed to understand coverages or clauses and ultimately assess the risk related to the contract.
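A minimal sketch of the call centre example, using an off-the-shelf summarisation pipeline from Hugging Face transformers, is shown below. The model choice and the transcript text are illustrative assumptions.

```python
# Minimal sketch: summarising a call centre transcript for the CRM system
# with an off-the-shelf summarisation model. The model choice and the
# transcript text are illustrative assumptions.
from transformers import pipeline

summariser = pipeline("summarization", model="facebook/bart-large-cnn")

transcript = (
    "Customer called about a rejected claim for storm damage to the roof. "
    "The claims handler explained that the policy excludes damage caused by "
    "poor maintenance and offered to send an inspector next week. The "
    "customer agreed and asked for written confirmation by e-mail."
)

summary = summariser(transcript, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```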
Contract and report creation
LLMs can be used to support the process of document, contract or report creation. They could be fine-tuned to create new documents based on a predefined template. This could streamline the process of reporting where complete documents or reports could be generated based on templates or even prompts.
Coding support
Writing code can be done faster with the help of LLMs. If we use these models correctly, they can help us with writing or transforming code, for instance in the case of modernising legacy systems.
Although the generated code may contain bugs or even security weaknesses, it has the potential to speed up the task significantly.
3. Customer interaction
Customised messages
With the use of LLMs it is possible to automate customised messages to clients. By building a “personalised” prompt using client data, such as risk profiles, the LLM can be asked to write a personalised message in a style that fits both the message and the type of client.
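The sketch below illustrates this idea: a prompt is assembled from client data and sent to a chat model via the OpenAI Python client. The client fields, preferred tone and model name are illustrative assumptions.

```python
# Minimal sketch: building a "personalised" prompt from client data and asking
# an LLM for a tailored message. The client fields, tone and model name are
# illustrative assumptions.
from openai import OpenAI

client_profile = {
    "name": "Ms Jansen",
    "product": "home insurance",
    "risk_profile": "lives in a flood-prone area",
    "preferred_tone": "formal",
}

prompt = (
    f"Write a short {client_profile['preferred_tone']} message to "
    f"{client_profile['name']}, who holds {client_profile['product']} and "
    f"{client_profile['risk_profile']}, advising her to review her coverage "
    "before the storm season."
)

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
message = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(message.choices[0].message.content)
```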
Chatbots or smart agents
The LLMs behind ChatGPT, such as GPT-3.5 or GPT-4, are particularly strong in understanding and responding to a user’s question. When retrained with company information, these models have the potential to answer and handle customer requests. LLMs can also be of great support to human agents, making suggestions during a conversation based on its content or the sentiment of the customer.
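As a minimal sketch of the agent-support idea, the snippet below scores the sentiment of a customer turn with an off-the-shelf model and shows a suggestion when frustration is detected. The model, the threshold and the suggestion text are illustrative assumptions.

```python
# Minimal sketch: flagging customer sentiment during a conversation so that a
# suggestion can be shown to the human agent. The model, threshold and the
# suggestion text are illustrative assumptions.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="distilbert-base-uncased-finetuned-sst-2-english")

customer_turn = "This is the third time I have had to call about this claim!"
score = sentiment(customer_turn)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}

if score["label"] == "NEGATIVE" and score["score"] > 0.9:
    print("Suggestion to agent: acknowledge the frustration and offer to "
          "escalate the claim to a senior handler.")
```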
Challenges, risks and limitations
Data and privacy
Even though these models can nowadays do more and more, data will always be key. When retraining an LLM, make sure the training data used is correct and in line with current regulations.
In addition, it is advisable to use an enterprise version of the LLM instead of a publicly available version, to avoid sharing sensitive data. An enterprise version, such as Azure OpenAI, comes with several benefits in the areas of data security, private access, compliance and governance, and integration with other services. It also allows for “easy” fine-tuning and retraining of models with your own data within your own subscription.
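As a minimal sketch, the snippet below calls a model through an Azure OpenAI deployment using the openai Python SDK (version 1.x), so that requests stay within the company’s own subscription. The endpoint, deployment name and API version are placeholders, and the setup shown is an assumed typical configuration rather than a prescribed one.

```python
# Minimal sketch: calling a model through an Azure OpenAI deployment so that
# data stays within the company's own subscription. The endpoint, deployment
# name and API version are placeholders, not working values.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment name, not the raw model
    messages=[{"role": "user", "content": "Summarise this claim file: ..."}],
)
print(response.choices[0].message.content)
```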
Skills and competencies
Developing an AI use case for insurers requires a collaborative effort between data scientists, engineers and business experts. While data scientists and engineers bring technical expertise in crafting and fine-tuning models, business experts contribute essential domain knowledge about insurance operations, regulatory requirements and industry nuances. This synergy ensures that the developed language models not only meet technical standards but also align with the practical needs and objectives of the insurance company. With LLMs lowering the technical challenge in analysing textual data, the ability to understand the strategic use of this technology and apply it in the insurance business domain will become even more important.
Risk
The evolution of AI has brought many new opportunities. However, with its rise come new risks:
- Persuasive misinformation: Generative AI has the potential to generate highly persuasive but factually incorrect text. This can lead to misinformation or misinterpretation.
- Discrimination: AI applications can exhibit unforeseen behaviour that leads to discrimination, especially if models are trained on biased real-world data.5
- Cybercrime: Following on from the first point, persuasive misinformation, generative AI can facilitate the generation of phishing and fraudulent emails or deepfakes by those with malicious intentions.
In response to these risks, both companies and governments have begun to take measures to protect against the misuse of AI. For instance, more research has been done to improve the fairness and transparency of AI systems. Companies are also implementing policies and guidelines to ensure responsible use of AI within their organisations.
On the governmental side, the European Union has recently proposed the Artificial Intelligence Act. This comprehensive legal framework aims to ensure that AI is used in a way that is safe and respects fundamental rights.
These measures represent important steps towards mitigating the risks associated with AI. However, as AI continues to evolve, ongoing efforts will be needed to ensure its responsible use.
Conclusion
Language models have already shown their added value in a wide range of sectors, including the insurance sector.6,7 However, there is still much to gain by using LLMs, or generative AI. Automating more routine work will free up time to use specific skills and expertise more efficiently. And personalisation will enhance the ability to serve customers well.
Even though language models could potentially add a lot of value, we do have to stay focussed on the challenges and limitations. Data privacy and security must always be a high priority, especially with the European AI Act reaching a provisional agreement in December last year (2023).8
1 Radford, A. et al. (2018). Language Models Are Unsupervised Multitask Learners.
2 Brown, T.B. et al. (2020). Language Models Are Few-Shot Learners.
3 Vaswani, A. et al. (2017). Attention Is All You Need.
4 Boulieris, P. et al. (2023). Fraud Detection With Natural Language Processing.
5 Van Dam, D., van Es, R. & Postema, J.T. (July 2021). Developing Fair Models in an Imperfect World: How to Deal With Bias in AI. Milliman White Paper. Retrieved 4 February 2024 from https://www.milliman.com/en/insight/developing-fair-models-in-an-imperfect-world.
6 NN. How NN uses AI to check fraud risks and injury claims. Retrieved 4 February 2024 from https://www.nn-group.com/news/how-nn-uses-ai-to-check-fraud-risks-and-injury-claims/.
7 Willard, J. (20 July 2023). Cytora utilizes LLMs for next generation risk and underwriting intelligence. Reinsurance News. Retrieved 4 February 2024 from https://www.reinsurancene.ws/cytora-utilizes-llms-for-next-generation-risk-and-underwriting-intelligence/.