Insurance companies, a key player in mitigating the risks of Artificial Intelligence
Against a backdrop in which Artificial Intelligence is booming and is affecting increasingly larger parts of society and business models, MAPFRE has undertaken research with a view to fully understanding the technology risks posed and to find formulas that help to assess, monitor and mitigate these risks in order to deploy Responsible AI (RAI).
Most companies worldwide use Artificial Intelligence (AI) in different areas of their daily activities, from business processes to the creation of solutions or products for customers, employing a wide range of options. This technology offers major advantages in terms of innovation, productivity and even profitability. However, we must also be aware of the risks it poses (bias, ethics, performance, reliability of information, intellectual property, etc.), and these risks must be properly managed if AI is to be applied safely, reliably and, ultimately, sustainably.
Most companies are in the process of learning how to use this technology, applying it to use cases in test environments or with very controlled impacts. For this reason, there is a certain perception of control over the risks associated with AI in the business environment, although the general consensus is that frameworks, tools, guidelines and regulations are needed to help with its deployment.
The launch of generative Artificial Intelligence tools, accessible to all citizens, and with unprecedented adoption in record time, has contributed to a drastic increase in the use of this technology by all types of profiles, adding complexity to the debate on the risks posed by AI.
Regulatory bodies are focusing their efforts on enacting regulations and legislation that protect individuals and society from the possible misuse of this technology, although uncertainty remains high, in particular in relation to the allocation of responsibilities.
As use cases escalate at companies and the regulations become clearer, awareness of proper AI risk management will increase, and with it, demand for services associated with properly managing these risks. Insurers can act as a catalyst of this process, helping their customers with the responsible and sustainable deployment of Artificial Intelligence.
MAPFRE analyses the risks of AI
Against this backdrop, MAPFRE has carried out research with a view to fully understanding these risks and to assessing, monitoring and mitigating them in order to deploy AI responsibly, publishing a comprehensive report with its conclusions.
AI generates three types of risk, with impacts at a personal, social and corporate level:
- Operational risk, connected to the robustness, security and performance of the technology.
- Ethical risk, relating to the fairness, transparency and explainability of solutions and models.
- Regulatory risk, which encompasses regulatory compliance and legal liability.
Controlling and mitigating these risks appears essential, especially at a time when generative AI has achieved unprecedented adoption, intensifying concerns about its ethical and legal impact: the generative AI market is expected to be worth an estimated $121 billion by 2027, with a CAGR of 68% (2022-2027).
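As a sanity check on figures like these, the compound annual growth rate (CAGR) links a start value, an end value and a number of years. A minimal sketch in Python, where the 2022 base value of roughly $9 billion is an assumption back-solved from the article's quoted 2027 size and growth rate, not a figure from the report:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two values `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value: float, rate: float, years: int) -> float:
    """Project a value forward at a fixed annual growth rate."""
    return start_value * (1 + rate) ** years

# Assumed illustrative base: ~$9bn in 2022 compounding at 68% per year
# over 5 years (2022-2027) lands near the quoted 2027 market size.
base_2022 = 9.0  # USD billions (assumption, not from the report)
print(round(project(base_2022, 0.68, 5), 1))  # ≈ 120.4
```

The same helper run in reverse (`cagr`) recovers a rate of about 68% from those two endpoints, which is how such headline growth rates are typically derived.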
What is responsible AI and why is it so important
Specifically, the concept of Responsible Artificial Intelligence (RAI) is about guaranteeing the proper use of AI and minimizing these risks.
Responsible Artificial Intelligence is about managing the life cycle of AI models following the necessary principles, processes and policies to ensure that the technology is developed and operates in such a way that it always seeks a positive impact and protects individuals and society. This entails understanding and managing the risks associated with the models and generating the mechanisms that allow them to be controlled and mitigated from the triple perspective mentioned above.
Although this is still a fledgling concept, some sectors are worth particular mention when it comes to the maturity of their AI adoption. The Technology, Media and Telecoms sector leads the way (with 92% of companies having deployed AI at scale with RAI systems at an advanced stage or under development), followed by the Pharmaceutical and Healthcare sectors, in that order.
The coming years will be key to the growth of this area. According to a recent study by Gartner, RAI technologies and systems could reach maturity in terms of their adoption and scale in approximately 6 years.
“Never have we seen a technology be adopted so quickly as we are seeing with generative AI. Given the increasing concerns shared by policymakers and businesses and the fact that the risks are still poorly understood, the concept of responsible AI must be brought to the fore to ensure the technology is taken seriously and to ensure collaboration to establish common ways of working with it, which help protect people and communities, while encouraging positive innovation,” comments Bárbara Fernández, deputy director of MAPFRE Open Innovation and head of insur_space.
The role of insurance companies in mitigating AI risks
At present, insurance products and services that cover the risks associated with the use of Artificial Intelligence are practically non-existent.
Autonomous vehicles or systems used in industrial processes are the first examples in which specific insurance for AI is starting to appear. It is precisely in these areas where demand is expected to increase in the medium to long term, extending to the insurance of any system that is managed in its entirety by AI algorithms. In other words, systems that are automated end to end, where decisions are made applying Artificial Intelligence with no human involvement.
For this to happen, it is important that regulations clearly assign responsibilities. What's more, existing insurance products need to be assessed to determine how customers' use of AI affects their coverage. In some cases, it will not be necessary to create specific products, but rather to adapt existing ones.
Insurance companies play a dual role in the current context: first, they must guarantee that their own internal deployment of the technology is completely safe, serving as an example for other industries; second, they must accompany and help customers in their own deployment of AI and RAI to protect themselves, individuals and society in general, preventing unwanted behavior and guaranteeing a solution and coverage in the event of damage.
Download the report prepared by MAPFRE here to take a deeper look into the analysis of AI risks and the conclusions drawn from the research and field work performed.