INNOVATION | 18.09.2023

Responsible Artificial Intelligence, an economic, technological and social necessity

Artificial intelligence is set to be the next frontier of productivity, with immense economic potential across all industries. Analysts are calling it the technological development of the 21st century, and its social impact is already palpable. But for that potential to materialize, AI must be responsible; otherwise, it will not materialize at all.

It is no secret that artificial intelligence (AI) has permeated our lives, from the technology at our fingertips (our smartphones, computers and increasingly autonomous vehicles) to the business and economic world, where adoption by companies keeps rising. Its development and progress have been analyzed and promoted for more than a decade.

Heavy investments by technology giants in research and development are fueling advances in the field of AI. In turn, organizations’ confidence in AI, and their perception of the technology as a key driver of their growth, is pushing their investments higher. According to recent estimates from the consulting firm Precedence Research, the AI market will grow at a compound annual growth rate (CAGR) of 38.1% from 2022, reaching roughly $1.6 trillion ($1,591 billion) by 2030.
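As a rough sanity check, that compound growth rate can be replayed in a few lines. The 2022 base figure used below (about $120 billion) is an assumption back-solved from the cited 2030 projection, purely for illustration:

```python
def project_market(base_size: float, cagr: float, years: int) -> float:
    """Compound a base market size at a constant annual growth rate (CAGR)."""
    return base_size * (1 + cagr) ** years

# Illustrative 2022 base of ~$119.8B (an assumption, not a cited figure),
# compounding at 38.1% per year from 2022 to 2030.
size_2030 = project_market(119.8, 0.381, 2030 - 2022)
print(f"Projected 2030 market: ${size_2030:,.0f}B")  # on the order of $1.6 trillion
```

Compounding, rather than multiplying the annual rate by the number of years, is what turns a ~$120 billion market into a trillion-dollar one in eight years.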

With the generative AI boom driven by applications such as ChatGPT, Stable Diffusion and GitHub Copilot, among others, the impact of the technology has grown exponentially. Boston Consulting Group’s latest study on the matter, “The CEO’s Roadmap on Generative AI,” estimates that by 2027 the generative AI market will reach $121 billion, growing at a 68% CAGR (2022-2027).

All this context leads to a point on which analysts, companies, technology developers and implementers, regulators and even society in general seem to agree: the economic potential is enormous, but it also entails uncertainty and complex latent risks.

A potential for the global economy worth trillions of dollars

AI can perform a wide variety of routine tasks, from sorting data and text to generating that text, automating inventory in a warehouse or handling claims. As a result, it is increasingly permeating business tasks and operations, as well as society’s day-to-day life.

It also delivers clear economic benefits and can enable interrelated technological developments that raise the rate of productivity growth. In this sense, it is a general-purpose technology, one that can drive development and help solve many of today’s problems.

Focusing on generative AI, several studies quantify its economic potential. The most recent, published by McKinsey in June, puts generative AI’s productivity impact at trillions of dollars for the global economy. Specifically, it states that generative AI “could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases analyzed,” which “would increase the impact of all artificial intelligence by 15 to 40 percent.” Furthermore, the study adds that “this estimate would roughly double if we include the impact of embedding generative AI into software that is currently used for other tasks beyond those use cases.”

AI—and generative AI in particular—is having a significant impact on all industry sectors, and all indications are that it will continue to do so. In the insurance sector, McKinsey puts that impact at between $50 billion and $70 billion per year; in other sectors, such as banking, the figure rises to $200-$340 billion.

“In this context, where we are only at the tip of the iceberg of the AI era, it is essential to control exposure to the risks associated with technology. There is no point in generating savings or increasing productivity if the operational, ethical and regulatory risks involved are not controlled, or if companies do not prioritize their responsible use. The short- and long-term consequences of not managing and mitigating the challenges are too serious. These risks can and should be controlled,” says Bárbara Fernández, deputy director of MAPFRE Open Innovation and head of Insur_space.

Mitigating risks for sustainable economic, technological and social development

According to the report “Responsible Artificial Intelligence: reliable, safe and sustainable technology to generate the economy of the future,” prepared by MAPFRE, most organizations already believe that the responsible use of AI should be a priority for the top management of their companies.

Today, companies have a certain perception that AI risks are under control, although the absence of regulation and of practical guidelines for the proper use of artificial intelligence worries most of the parties involved.

In this regard, the survey “The State of AI in 2023: Generative AI’s breakout year,” published in August by McKinsey, points in the same direction. Respondents indicate that the risks concern them, but very few companies are yet prepared for widespread use of generative AI, or for the risks it brings.

Mitigating these risks before they lead to economic, operational or reputational losses, physical harm to individuals, marginalization or discrimination of groups, economic or political instability, or digital security problems, among others, is essential for sustainable economic, technological and social development. For AI to truly be a lever for growth, responsible AI must be a fundamental part of the business agenda, regardless of a company’s size or line of business.

To underpin this technological paradigm, the first step is to define best practices, standards and services for risk assessment, monitoring and mitigation. As AI adoption continues to grow and regulation comes into effect, the need to complete that first phase will increase, and so will the need to ensure regulatory compliance. This, in turn, will create an opportunity for third-party services capable of meeting those demands and certifying that organizations work with responsible AI.

“Insurers must act as a safety net in this environment, as well as enablers of any major project or innovation driven by artificial intelligence. That is why it’s our responsibility to anticipate what’s to come and to lead the way through our own use of ethical AI and responsible governance,” says Fernández. “Only by investing in research and the right use of technology will we be able to accompany our customers on their journey, as well as evaluate and support the use of AI in every initiative they develop,” she adds.