13 September 2024 | Categories: feature articles


Addressing the concern of bias in insurance data

Josefin Rosén, Trustworthy AI specialist at SAS’ Data Ethics Practice

Addressing bias in insurance is a critical industry-wide priority, particularly given its profound impact on the underwriting process. With industry experts predicting that discrimination will become a significant regulatory focus for AI, it is imperative to adopt an ethical and responsible approach to AI sooner rather than later.

Data is the cornerstone of all AI systems, which learn from the data they are trained on. Therefore, it is crucial that the data used is relevant, fair, accurate, representative and of high quality. For instance, when developing AI for automated decision-making, the system typically learns from historical data, often originating from a previously manual process where human decision-makers made similar decisions. While the AI may learn to replicate these decisions, it also inherits the biases embedded in the data from those human decision-makers. The risk is that while an individual's biased decision may affect a single case, an AI system can scale this bias across thousands of decisions per minute, rapidly amplifying the impact if not properly managed.

Moreover, seemingly innocent variables can inadvertently act as proxies for sensitive information, introducing bias into AI systems. For example, a postal code may serve as an unintended proxy for ethnicity, particularly in regions with a history of segregation, where specific ethnic groups are concentrated in certain areas. As a result, even if ethnicity is not explicitly included as a variable, the use of postal codes can introduce unintended bias, leading to unfair outcomes in AI-driven models.
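
To make the proxy problem concrete, the Python sketch below checks how well a single, seemingly neutral field predicts a protected attribute that was deliberately excluded from the model. The dataset, column names (postal_code, ethnicity) and model choice are hypothetical placeholders; this is one possible proxy check among several, not a complete fairness audit.

```python
# Minimal proxy-variable check: how well does an "innocent" feature predict
# a protected attribute that was deliberately left out of the model?
# Column names (postal_code, ethnicity) are hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("policyholders.csv")   # hypothetical dataset

X = pd.get_dummies(df[["postal_code"]], columns=["postal_code"])
y = df["ethnicity"]

# Baseline: accuracy of always guessing the most common group.
baseline = y.value_counts(normalize=True).max()

# How well the candidate proxy alone predicts the protected attribute.
proxy_score = cross_val_score(
    DecisionTreeClassifier(max_depth=5), X, y, cv=5, scoring="accuracy"
).mean()

print(f"baseline accuracy:    {baseline:.2f}")
print(f"postal-code accuracy: {proxy_score:.2f}")
# A large gap suggests the postal code is acting as a proxy and needs mitigation.
```

If the proxy predicts the protected attribute far better than the baseline, the variable warrants mitigation, for example removal, coarsening, or a fairness-aware modelling approach.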

Different approaches

Trustworthy AI begins before the first line of code. It requires an intentional, holistic approach in which supporting technology is crucial, but people and processes are just as important to consider.

At SAS, we use an AI governance model within our own organisation that we also apply in an advisory capacity. The model has four pillars:

  1. Oversight, which essentially brings humans into the loop. We have established an interdisciplinary executive committee that guides the organisation through AI ethical dilemmas, from sales opportunities to procurement decisions.
  2. Controls, where the focus is on regulatory activities around the globe and the establishment of AI-specific risk management methods within the organisation.
  3. Culture, where we focus on coaching and training our employees to cultivate a global culture of well-intentioned individuals committed to upholding the principles of trustworthy AI. This is an important aspect, since trustworthy AI progresses in tandem with the pace of cultural change.
  4. Platform, which overlays and supports the technology and capabilities necessary to innovate responsibly at every stage of the AI lifecycle, from data to decision, and to develop and deploy AI that is truly trustworthy. This includes solid data management and data governance, bias detection and mitigation, explainability, decision auditability, and model monitoring, given that AI models can degrade over time. A minimal bias-check sketch follows this list.
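
As a simple illustration of the bias detection mentioned under the Platform pillar, the sketch below compares approval rates across groups and computes a disparate-impact ratio. The decision table and group labels are hypothetical, and the 0.8 threshold is only a widely cited rule of thumb, not a regulatory standard.

```python
# Minimal fairness check on model decisions: compare approval rates across
# groups (demographic parity) and compute the disparate-impact ratio.
# The data and column names (group, approved) are hypothetical placeholders.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest approval rate divided by highest.
# Values well below 0.8 are a common rule-of-thumb warning sign.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
```

In practice, checks like this would run continuously as part of model monitoring, so that drift towards unfair outcomes is caught early rather than discovered after harm has occurred.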

Enhancing the analytical lifecycle

As emphasised earlier, there is no good AI without good data: the quality of an AI system is directly tied to the quality of the data it processes. One innovative approach to overcoming data limitations is the use of synthetic data. This allows organisations to bridge gaps where real data might be scarce or unusable due to privacy concerns. By generating synthetic data, insurers can ensure that their models are more inclusive and representative of diverse populations. Moreover, synthetic data is often more cost-effective than gathering actual demographic or behaviour-based information, making it an attractive option for insurance leaders who need to make predictive decisions efficiently and affordably.
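
As one illustration of how synthetic data can fill a representation gap, the sketch below fits a simple distribution to an under-represented group's numeric features and samples additional records. The dataset, column names and group label are hypothetical, and real deployments would typically use a purpose-built synthetic-data generator with privacy safeguards rather than this naive approach.

```python
# Minimal synthetic-data sketch: oversample an under-represented group by
# fitting a multivariate normal to its numeric features and sampling new rows.
# Column names (group, age, premium, claims) are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("policyholders.csv")   # hypothetical dataset
features = ["age", "premium", "claims"]

minority = df[df["group"] == "under_represented"][features]

mean = minority.mean().to_numpy()
cov = minority.cov().to_numpy()

rng = np.random.default_rng(seed=42)
synthetic = pd.DataFrame(
    rng.multivariate_normal(mean, cov, size=500), columns=features
)
synthetic["group"] = "under_represented"

# Combine real and synthetic rows so downstream models train on a more
# representative sample. (Columns not generated here would need handling too.)
augmented = pd.concat([df, synthetic], ignore_index=True)
```

The key point is that the synthetic rows mimic the statistical shape of the real group without copying any individual's record, which is what makes the technique attractive where privacy or scarcity is the constraint.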

In addition, AI offers significant potential to enhance insurance operations. When deployed responsibly, AI can improve fraud detection and provide customers with targeted risk prevention strategies, reducing the likelihood of false claims. By training these models on large datasets that reflect the characteristics of real fraud cases, insurers can better identify anomalies and suspicious patterns, ultimately protecting both the company and its clients.  
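
A minimal sketch of the anomaly-detection side of fraud screening is shown below, using an isolation forest to flag unusual claims for human review. The dataset and feature names are hypothetical, and the contamination value is purely illustrative.

```python
# Minimal anomaly-detection sketch for claims screening with an isolation
# forest. Feature names are hypothetical placeholders; a production fraud
# model would be trained and validated on labelled historical fraud cases.
import pandas as pd
from sklearn.ensemble import IsolationForest

claims = pd.read_csv("claims.csv")   # hypothetical dataset
features = ["claim_amount", "days_since_policy_start", "prior_claims"]

model = IsolationForest(contamination=0.01, random_state=0)
claims["anomaly_flag"] = model.fit_predict(claims[features])   # -1 = anomaly

# Route flagged claims to human investigators rather than auto-rejecting them,
# keeping a person in the loop for high-impact decisions.
suspicious = claims[claims["anomaly_flag"] == -1]
print(f"{len(suspicious)} claims flagged for review")
```

Crucially, flagged claims should trigger human investigation rather than automatic rejection, which keeps oversight, the first governance pillar, in the decision path.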

The path forward

The integration of trustworthy AI in the insurance industry is not just a technological imperative but a moral and ethical one. As AI continues to evolve, insurers must remain vigilant, ensuring that their systems address bias and operate transparently and fairly. The four pillars referenced above provide a robust governance framework for achieving this goal. By fostering a culture of responsibility, implementing strong regulatory controls, ensuring human oversight, and leveraging advanced platforms, we can build AI systems that not only drive innovation but also uphold high standards of ethics and fairness.

As the industry moves forward, it is essential that all stakeholders (regulators, companies, and consumers) collaborate to ensure that AI is used as a force for good. Through careful planning, a solid AI governance model, and a commitment to trustworthy AI principles, we can create a future where AI enhances the insurance industry, fostering trust and delivering fair outcomes for all.
