
Conversational AI Tools for Insurance

If you’ve ever used Alexa, Siri or Google Assistant, you’ve already engaged with a form of conversational AI. Conversational AI models combine machine learning with natural language processing (NLP) to mimic human conversation. Their outputs can be shaped, or even pre-determined, by the knowledge base they are trained to pull answers from, and companies can develop unique knowledge bases to fit their needs. With that flexibility, more and more companies across a wide range of industries are incorporating AI solutions into their online platforms.
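To make the knowledge-base idea concrete, here is a minimal sketch of the kind of lookup a conversational bot might perform to answer routine policy questions. The topics, replies and scoring rule are illustrative placeholders, not any vendor’s actual implementation.

```python
# Illustrative sketch: a tiny knowledge-base lookup such as a conversational
# bot might use to answer routine policy questions. All entries and the
# scoring rule are hypothetical placeholders.

KNOWLEDGE_BASE = {
    "file a claim": "You can file a claim online or through our mobile app 24/7.",
    "make a payment": "Payments can be made from your account dashboard.",
    "coverage limits": "Your coverage limits are listed on your policy declarations page.",
}

def answer(question: str) -> str:
    """Return the reply whose topic words best overlap the question."""
    words = set(question.lower().split())
    best_topic, best_score = None, 0
    for topic in KNOWLEDGE_BASE:
        score = len(words & set(topic.split()))
        if score > best_score:
            best_topic, best_score = topic, score
    return KNOWLEDGE_BASE[best_topic] if best_topic else "Let me connect you with an agent."

print(answer("How do I file a claim for my stolen laptop?"))
```

Real chatbots use far more sophisticated intent matching, but the pattern is the same: the company controls the knowledge base, so it controls the answers.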

Insurance companies are increasingly adding chatbots and messaging apps to their websites to engage with customers in real time, answering questions, taking payments, selling policies and even handling claims. These technologies create new efficiencies for insurance firms and can simplify the claims process for consumers. In 2017, for instance, the peer-to-peer insurance provider Lemonade used its claims bot to settle a simple stolen-property claim in under five seconds, with no paperwork.


Generative AI Tools for Insurance

Generative AI uses the prompts it receives and the data it was trained on to create new content in a huge variety of forms. ChatGPT, for example, blends conversational AI with generative AI: users enter instructions conversationally, and the bot generates new content based on patterns learned from its training data. Applying these generative technologies in the insurance industry opens the door to all kinds of interesting uses, from policy underwriting to new product development to creating marketing materials.
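As a simple sketch of what that looks like in practice, the snippet below asks a generative model to draft a plain-language summary of a coverage clause. It assumes the `openai` Python package is installed and an API key is configured; the model name, prompt and clause text are placeholders, not a recommendation.

```python
# Illustrative sketch: asking a generative model to explain policy language.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompts are placeholders chosen for demonstration.
from openai import OpenAI

client = OpenAI()

clause = "Coverage B excludes losses arising from unencrypted portable devices."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your account offers
    messages=[
        {"role": "system", "content": "You explain insurance policy language in plain English."},
        {"role": "user", "content": f"Summarize this clause for a policyholder: {clause}"},
    ],
)

print(response.choices[0].message.content)
```

The same pattern, with different prompts, underlies uses like drafting marketing copy or first-pass underwriting notes; the output still needs human review.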

Of course, these opportunities can expose a range of new challenges that both insurers using these technologies and their clients should be prepared for.


What's at risk?

AI-Enabled Cyber Security Threats

In 2017, the most destructive cyber attack in history hit systems and companies in 65 different countries and caused billions of dollars in damages. The NotPetya attack, which was attributed to elements of Russian military intelligence, led to a high-profile coverage dispute between the pharmaceutical company Merck and its insurers. At issue was whether the attack amounted to an act of war, which would trigger a specific exclusion clause in Merck’s existing coverage. Merck ultimately prevailed in court, but the case highlighted the need for a thorough examination of the scope of cyber insurance policies and their ability to adapt to new and evolving threats. Courts and regulators will also need to adapt as these cases are adjudicated.

The increase in AI use is a double-edged sword in that respect. Defenders can train AI systems to automate active monitoring of networks and information systems, but bad actors can train the same technologies to rapidly probe systems for weaknesses and entry points. AI has even been used to impersonate the voices and likenesses of C-suite executives in order to gain access to data that would otherwise be secure. Until there is a solid set of regulations, and likely some additional case law on the topic, cybersecurity insurance offerings will need to evolve continually to meet the moment.
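On the defensive side, here is a minimal sketch of automated anomaly monitoring using scikit-learn’s IsolationForest on made-up network-session features. The features, their distributions and the contamination rate are assumptions for demonstration only, not a production security control.

```python
# Illustrative sketch: flag unusual network sessions with an isolation forest.
# The synthetic features (bytes transferred, session length, failed logins)
# and the contamination rate are assumptions chosen for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sessions: [bytes_mb, duration_min, failed_logins]
normal = np.column_stack([
    rng.normal(50, 10, 500),   # typical transfer size in MB
    rng.normal(12, 3, 500),    # typical session length in minutes
    rng.poisson(0.2, 500),     # occasional failed login
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A suspicious session: huge transfer, short duration, many failed logins.
suspect = np.array([[900.0, 2.0, 7.0]])
print(model.predict(suspect))  # -1 means the session is flagged as anomalous
```

The point is the pattern, not the specific model: AI learns what normal traffic looks like and flags what doesn’t fit, and attackers can use comparable tools to learn what a target’s defenses miss.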

Pros & Cons

AI Benefits & Drawbacks in Cyber Liability Insurance

The speed at which AI allows insurance transactions to occur is a clear benefit to everyone involved. Consumers and insurance agencies may also see cost savings from more efficient and accurate data collection and analysis. Cyber liability policyholders using AI to monitor network and data security may earn lower premiums as a result of the added capability, and insurers can use AI to assess the risks in a client’s cyber environment, review documentation and reach conclusions far faster than they could otherwise.
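As a toy illustration of how a client’s security posture might feed into pricing, the sketch below turns a few posture signals into a premium adjustment. Every factor, weight and base premium is invented purely to show the shape of the calculation, not an actual rating model.

```python
# Toy illustration: translating security posture into a premium adjustment.
# All factors, weights and the base premium are invented for demonstration.

BASE_ANNUAL_PREMIUM = 5_000.00

def risk_multiplier(mfa_enabled: bool, ai_monitoring: bool, days_since_last_incident: int) -> float:
    """Lower is better. Hypothetical weights only."""
    multiplier = 1.0
    if mfa_enabled:
        multiplier -= 0.25
    if ai_monitoring:
        multiplier -= 0.20
    if days_since_last_incident > 365:
        multiplier -= 0.15
    return max(multiplier, 0.2)

def quoted_premium(**client) -> float:
    return round(BASE_ANNUAL_PREMIUM * risk_multiplier(**client), 2)

print(quoted_premium(mfa_enabled=True, ai_monitoring=True, days_since_last_incident=700))
# -> 2000.0 with these made-up weights
```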

However, there are many AI-related factors that cyber insurance companies consider when establishing a client's risk. Employees with access to generative AI systems can inadvertently share sensitive information while using them. Even with the best intentions, relying too heavily on AI can lead to copyright infringement, misinformation or plagiarized content, all of which can hurt a brand's reputation. Insurance companies using AI for claims handling or underwriting could also see errors stemming from algorithm flaws or the unethical use of data in premium calculations.

Contact the experts at Ansay