

INNOVATION | 12.20.2023

Insurance perspective on responsible AI. Challenges and opportunities


César Ortega Quintero

Expert Data Scientist
MAPFRE

In an era when AI has become an omnipresent force, questions about the ethical implications of AI, its repercussions on the economy, and its potential to increase inequalities have all contributed to a sense of unease. However, as we navigate these unfamiliar waters, we’re also discovering the remarkable ways in which AI can improve our lives.

“Insurance is one of the key enablers of our new AI society”, is the phrase I keep remembering from my first meeting with the Global Head of Disruptive Innovation when I joined MAPFRE back in the summer of 2022. At that time, I was assigned to a recently created workstream on Responsible Artificial Intelligence (RAI).

One year on, I’m now coming to the end of a scouting process in which we’ve conducted several Proofs of Concept (PoCs) to compare Tier 1 RAI providers from Europe and the US, using some of our core models as guinea pigs in areas such as Underwriting, Customer Lifetime Value (CLTV), and Fraud Detection, among others.

In this context, I’ve taken my time to try to get to know more about Artificial Intelligence and answer some of the most common questions around this topic: How is the legislation going? What are the main benefits of AI? How can we ensure the ethics of the models? What can the insurance industry do in this area?

But first things first. What’s Artificial Intelligence? It’s an ongoing debate, but here’s my take on the concept:

Artificial Intelligence (AI) is a multidisciplinary area of knowledge that combines theoretical principles from disciplines such as mathematics, statistics, physics, computer science, and graph theory. Its purpose is to develop physical and virtual machines capable of mimicking and improving human cognitive processes to perform all manner of tasks “intelligently”, from prediction and optimizing decisions to content generation.

With that much at least clear, I’ve analyzed and learned about its benefits and risks. On the latter, depending on the legislation, the industry, and the theoretical approach, risks can be named and grouped differently, but the most common framing is the seven requirements suggested by the EU Guidelines for Trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

I won’t go into them all individually here – if I did, this article would be too long, believe me. I’ll cut to the chase: it has become imperative to find actionable solutions to control and mitigate AI risks and potential harm to individuals, organizations, and society as a whole.

But how can we mitigate those risks?

The first thing we can do, which is vital, is to promote a culture that values ethical considerations in AI development and usage. This implies not only internal communication campaigns and RAI training for all employees, but also truly integrating these principles into a company’s core values and culture.

Also, a good amount of time and resources is needed to find that sweet spot where all risk dimensions meet desirable thresholds. We need more investment and research so we can continue to evolve and grow side by side with AI.

Here are some tips I can share on how to tackle the risks:

  • Always evaluate your data and model performance metrics in context; an aggregate figure can hide poor behavior on specific segments.
  • Improve the robustness of your system by incorporating techniques like data augmentation, adversarial training, or capsule networks.
  • Proactively prevent breaches of your data and AI models; this is a vital task, closely related to model robustness.
  • If a simpler model does the trick, stick to it, and don’t change tack.
  • Conduct modeling bias analysis to identify and compensate for any divergence in performance with respect to relevant bias-error metrics between protected and non-protected groups.
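
To make the last tip concrete, here is a minimal sketch of a per-group bias check: it disaggregates a binary classifier’s error rate by group so you can spot divergence between protected and non-protected groups. The labels, predictions, and group names are illustrative toy data, not taken from any real model:

```python
# Minimal per-group bias check: disaggregate the error rate by group.
# All data below is illustrative, not from a real model.
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return the misclassification rate of a binary classifier per group."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float(np.mean(y_true[mask] != y_pred[mask]))
    return rates

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

rates = group_error_rates(y_true, y_pred, groups)
print(rates)                                       # error rate per group
print(max(rates.values()) - min(rates.values()))   # divergence to monitor
```

The same pattern extends to any metric (false positive rate, recall, calibration): compute it per group, then monitor the gap and compensate when it exceeds a threshold you have agreed on.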

This is just the tip of the iceberg, to be honest. There are many more things to discuss and dig deeper into, which you can read in our latest paper: FROM Insurance Perspective IMPORT Responsible AI Challenges and Opportunities.

You can download and read the full article here. I really hope you take the time to learn something new from my journey!

 

RELATED ARTICLES: