Can AI Innovation in Insurance Outweigh Regulatory Challenges?

October 7, 2024

Several types of artificial intelligence are already being adopted across the insurance industry, and they have the potential to deliver extraordinary efficiency savings, opening the door to greater profitability, innovation, and complex problem-solving. Use cases for AI-based large language models, such as those behind ChatGPT, are still evolving; current examples include summarizing and generating documents, performing data analytics, and gathering data for risk assessment and underwriting. As an insurtech company, we are also exploring how AI can help us write software in an automated way and exchange data between entities across the insurance ecosystem.

These gains come with real risks, however: AI can generate errors, fabricate facts, and amplify bias in its training data. The sections below examine those risks, the regulations emerging to address them, and practical steps insurers can take in response.

1. AI Risks

Although AI can bring significant advances in efficiency and profitability for the insurance industry, it is not without risks. One main source of risk is the potential for AI to generate errors. For instance, AI may ingest legal statute information from one state and incorrectly apply it to others, leading to flawed conclusions and actions. AI can also hallucinate: it can take a piece of factual information and extrapolate it incorrectly, producing plausible-sounding but false output.

Moreover, AI bias remains a significant concern. When AI systems use prejudiced data, their outputs become skewed, which can lead to discriminatory practices. For example, if AI detects higher mortality rates among certain racial or ethnic groups, it may erroneously conclude that higher insurance premiums should be charged to these groups. These biases are particularly dangerous in recruitment, potentially leading to discrimination based on region or socioeconomic background. These concerns highlight the need for vigilant human oversight to ensure that AI-driven decisions are equitable and free from undue prejudice.
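The kind of oversight described above can start with a simple statistical check before a model's output is used. The sketch below is a hypothetical illustration (the records, group labels, and tolerance are invented for the example): it measures, per group, the rate of an adverse outcome such as a high-premium assignment, and flags any group whose rate deviates from the overall rate by more than a tolerance.

```python
# Hypothetical sketch: flag groups whose rate of an adverse outcome
# (e.g. a "high premium" assignment) deviates from the overall rate.
from collections import defaultdict

def disparity_report(records, tolerance=0.10):
    """records: iterable of (group, adverse_outcome) pairs."""
    totals = defaultdict(int)
    adverse = defaultdict(int)
    for group, is_adverse in records:
        totals[group] += 1
        adverse[group] += int(is_adverse)
    overall = sum(adverse.values()) / sum(totals.values())
    flagged = {}
    for group in totals:
        rate = adverse[group] / totals[group]
        if abs(rate - overall) > tolerance:
            flagged[group] = round(rate, 3)  # groups needing human review
    return overall, flagged

# Invented sample data: group A sees 1/4 adverse outcomes, group B 3/4.
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", True), ("B", True), ("B", False)]
overall, flagged = disparity_report(records)
```

A check like this does not prove or disprove discrimination on its own; it is a tripwire that routes suspicious outputs to the human reviewers the paragraph above calls for.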

2. New AI Regulations

AI technology has evolved rapidly over the past few years, and regulation has struggled to keep pace. Legislators are now working to close that gap and mitigate the associated risks, which creates new challenges for insurers, who must prepare for an influx of rules governing AI implementations. Earlier this year, Colorado became the first state to enact comprehensive legislation regulating developers and deployers of high-risk AI systems to protect consumers.

Under Colorado’s new AI legislation, high-risk AI systems include those that significantly influence decisions related to education, employment, financial services, government services, healthcare, housing, insurance, and legal services. The law mandates that developers and deployers of such AI systems take proactive steps to mitigate risks of algorithmic discrimination. Developers are required to disclose harmful potential uses of their AI systems, the data used to train the systems, and risk mitigation measures. This enhanced transparency is designed to ensure that AI systems are safe and equitable.

3. Avoiding Algorithmic Bias

The Colorado AI Act, set to take effect on February 1, 2026, requires AI developers and users to address and mitigate risks associated with algorithmic discrimination. Developers must provide detailed information to deployers about the AI system’s data sources, potential harmful uses, and measures taken to minimize bias. Additionally, developers must publicly disclose information on the types of AI systems they’ve released and how they manage discrimination risks.

In response, AI users must implement comprehensive risk management policies to oversee the use of high-risk AI systems. This includes conducting detailed impact assessments of AI systems and any modifications made to them. Such measures are designed to make AI usage transparent and accountable, thereby reducing the likelihood of biased or harmful outcomes. These stringent requirements aim to ensure that AI systems are deployed ethically and responsibly, thereby gaining consumer trust and regulatory approval.

4. Transparency Required

The Colorado legislation emphasizes transparency, requiring that consumers be informed when they interact with AI systems, such as chatbots, unless the interaction is clearly evident. This directive is in line with similar efforts under the recent EU AI Act and other state regulations, such as those in Utah, California, and New Jersey. Companies deploying AI must disclose on their websites that AI systems are used for consequential decisions affecting customers.

Looking ahead, it’s expected that other states will follow Colorado’s lead in adopting comprehensive AI regulations. Many elements of AI governance, such as risk assessment, control testing, and data monitoring, are already covered under existing laws and regulatory frameworks both in the U.S. and globally. This multitude of regulations adds complexity to the AI landscape, making future compliance an ongoing challenge for insurers. By staying proactive and vigilant, companies can navigate the evolving regulatory environment successfully.

5. Five Practical Steps for Insurers

While AI presents challenges, there are practical steps insurers can take to stay compliant and derive maximum benefit from the technology. One key step is Clarity: insurers should make straightforward disclosures to inform customers when chatbots are in use or when AI contributes to decision-making, particularly in sensitive areas like hiring. This upfront transparency builds trust and meets regulatory requirements.

Protecting Intellectual Assets is another critical area: insurers must safeguard customers' data ownership and sensitive information. At Zywave, for instance, some AI providers' contracts have sought ownership of the data or models provided. Scrutinizing contracts for confidentiality and intellectual property clauses therefore becomes increasingly important as more AI technologies are integrated into operations, and ensuring the company retains control and protection of its proprietary data is crucial.

Reliable Data is essential to ensure AI systems make accurate, unbiased decisions. Companies are responsible for verifying that AI systems access trustworthy data. For instance, Zywave uses a well-vetted data repository, comprising proprietary data, data from reliable third parties, and vetted public sources. This practice not only aligns with new regulations but also fosters transparency and accountability in decision-making processes. Knowing the source and reliability of the data helps ensure that AI-driven decisions are just and defensible.
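One way to enforce a vetted repository in practice is a simple allow-list gate at the point where data enters the model. The sketch below is hypothetical (the source names and the `check_sources` helper are invented for illustration, not a Zywave API): inputs drawn from sources outside the approved set are surfaced before the model runs.

```python
# Hypothetical sketch: reject model inputs drawn from unvetted sources.
# Source names are invented; a real deployment would load this set
# from a governed catalog rather than hard-code it.
APPROVED_SOURCES = {"internal-loss-db", "licensed-bureau-feed", "census-public"}

def check_sources(used_sources):
    """Return the subset of sources that have not been vetted."""
    return set(used_sources) - APPROVED_SOURCES

# A scraped forum is not in the approved set, so it is flagged.
unvetted = check_sources(["internal-loss-db", "scraped-forum-posts"])
```

The value of the gate is less the code than the governed list behind it: adding a source becomes a deliberate vetting decision rather than an ad-hoc one.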

Record Keeping is particularly significant as AI products become ubiquitous in the insurance industry. It’s vital to meticulously document the data sources, ownership, and usage to fend off allegations of intellectual property theft or errors due to incorrect data. This form of due diligence is crucial in protecting companies from legal and ethical pitfalls, ensuring the responsible and effective implementation of AI technologies.
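The documentation described above can be captured as a structured audit entry per AI decision. The following is a minimal sketch, assuming an invented record shape (the field names and sample values are hypothetical): each entry ties a decision to its data sources and to a hash of the exact input the model saw, which is the kind of evidence useful against later claims of IP theft or incorrect data.

```python
# Hypothetical sketch: minimal provenance record for one AI-assisted decision.
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(decision_id, sources, payload):
    """Build an audit entry linking a decision to its data sources.

    sources: list of dicts describing each data source (name, owner, license).
    payload: the input data actually supplied to the model.
    """
    # Canonical JSON + SHA-256 fixes exactly what data the model saw.
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sources": sources,
        "input_sha256": digest,
    }

entry = provenance_record(
    "quote-1042",  # invented identifier
    [{"name": "internal-loss-db", "owner": "Zywave", "license": "proprietary"}],
    {"zip": "80202", "vehicle_year": 2019},
)
```

Appending entries like this to durable storage gives reviewers and regulators a replayable trail of what data fed which decision, and when.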

Finally, Acquiring New Skills is necessary for staying ahead of regulatory changes and ensuring compliant AI usage. Insurance firms must deepen their understanding of AI to navigate forthcoming laws effectively. While new roles like prompt engineers are emerging to optimize AI outputs, human oversight is indispensable. It ensures inputs provided to AI systems are free from bias and security risks, reinforcing the importance of combining technical know-how with ethical considerations.

Closing Thoughts

The rapid adoption and advancement of AI technology ensure its lasting importance. Although integrating AI safely and ethically requires additional administrative and oversight effort, the resulting efficiency and profitability gains are immense. The insurance industry, in particular, stands to benefit greatly from AI, provided strict protocols are in place to maintain compliance in a continually changing regulatory landscape. Crafting a strong AI governance framework takes extra work, but the payoff in innovation, efficiency, and profitability makes that work worthwhile. Over time, the advantages will greatly surpass the challenges, paving the way for a more dynamic and customer-centric insurance sector.

In addition, as AI continues to evolve, its potential applications in the insurance industry will likely expand. From speeding up claims processing to enhancing risk assessment accuracy, AI can revolutionize various aspects of the business. However, this necessitates continuous monitoring and adaptation to balance technological advancements with ethical considerations. Investing in AI oversight and governance today will ensure sustainable growth and maintain public trust in the long run. Consequently, as the insurance landscape transforms, companies leveraging AI with robust governance will not only achieve better operational efficiency but also set new standards for customer service excellence.
