FICO and Corinium survey looks at responsible AI in business


FICO, known for the FICO Score, a measure widely used to assess consumer credit risk, has released a report titled “The State of Responsible AI.” The document presents the results of a survey on responsible AI conducted with the business intelligence firm Corinium. The two organizations sought to understand what enables a company to adopt AI that is more responsible, ethical, transparent and secure.

A survey on responsible artificial intelligence

As part of an initiative led by Corinium and FICO, a survey was conducted among companies that use artificial intelligence on a daily basis. The objective was to better understand how companies are using AI and whether questions of ethics, responsibility, and respect for customers’ interests have been taken on board by these groups. The two organizations interviewed Chief Analytics Officers, Chief AI Officers and Chief Data Officers to better understand the AI strategies implemented in their companies.

According to the survey, most of the companies surveyed are deploying AI in ways that carry significant risk. Several figures from the report support this assessment:

  • 73% of the organizations surveyed had difficulty getting support from their management to prioritize ethical and responsible AI
  • 65% cannot explain how certain decisions made by their company’s AI are reached.

In addition, only one in five organizations actively monitors all of the models it has developed to ensure they comply with ethical standards. This figure should be read alongside two further data points:

  • 39% of company board members and 33% of executive teams have an incomplete understanding of the ethical issues raised by AI.

The importance of the notion of “accountability”

The report’s findings call for a greater emphasis on governance and training on responsible AI among decision makers, board members and executive team members. Another issue highlighted by the study is the lack of accountability for responsible AI.

One piece of data points in this direction:

  • 43% of respondents say they have no responsibilities beyond regulatory compliance when it comes to managing AI ethically.

For Scott Zoldi, Chief Analytics Officer at FICO, this figure is a red flag:

“In my opinion, it speaks to the need for more ethical regulation. If AI developers generally don’t consider their responsibility to extend beyond what existing regulations impose, then there’s a problem, not to mention the cases where those regulations aren’t enforced.”

The solution advocated in the report is to apply immutable AI model governance frameworks. These frameworks should allow for increased oversight of AI models to ensure that AI decisions are accountable, transparent and fair. FICO and Corinium also state that decision makers should use their authority to set company-wide standards for AI and to promote active oversight of AI systems.

The future of responsible AI may lie in the hands of business leaders

The study reveals that 80% of the AI frameworks put in place by decision makers are not ones that would enable responsible use of AI. Yet nine out of ten companies recognize that ineffective processes for monitoring AI models are a barrier to AI adoption. Two out of three companies believe that responsible AI and AI ethics will be central elements of their AI strategy within two years.

These figures may seem contradictory: companies acknowledge that their current practices fall short, while also recognizing that these shortcomings are an obstacle to implementing responsible AI. Once again, according to the survey, the answer lies in decision makers and business leaders developing a better understanding of the ethical issues surrounding AI.

Translated from Une enquête menée par Fico et Corinium s’intéresse à l’IA responsable dans les entreprises