The banking industry faces complex financial risks, including credit, market, and operational risk, and requires a clear view of the aggregate cost of risk. Advanced AI models make risk estimates harder to inspect, increasing the need for explainable AI (XAI). A firm grasp of the underlying risk mathematics improves predictability, financial management, and regulatory compliance in an evolving landscape.
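As a concrete anchor for "risk mathematics", the sketch below computes the standard expected credit loss per exposure, EL = PD × LGD × EAD, and aggregates it across a portfolio. This is a minimal illustration of one component of the aggregate cost of risk; the exposures and parameter values are hypothetical, not drawn from any cited source.

```python
# Minimal sketch: per-exposure expected credit loss, EL = PD * LGD * EAD,
# summed into a portfolio-level figure. All numbers below are hypothetical.
from dataclasses import dataclass

@dataclass
class Exposure:
    pd: float   # probability of default over the horizon
    lgd: float  # loss given default, as a fraction of exposure
    ead: float  # exposure at default, in currency units

def expected_loss(e: Exposure) -> float:
    return e.pd * e.lgd * e.ead

portfolio = [
    Exposure(pd=0.02, lgd=0.45, ead=1_000_000),
    Exposure(pd=0.05, lgd=0.60, ead=250_000),
]

total_el = sum(expected_loss(e) for e in portfolio)
print(f"Portfolio expected loss: {total_el:,.0f}")  # 9,000 + 7,500 = 16,500
```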
This paper examines the interplay between the AI Act and the GDPR with respect to explainable AI, focusing on safeguards for individuals. It outlines the applicable rules, compares the explanation requirements under each regime, and reviews related EU frameworks. The paper argues that current law is insufficient and that broader, sector-specific regulation of explainable AI is needed.
The essential role of AI in banking holds promise for efficiency, but it faces challenges such as the opaque "black box" problem, which hinders fairness and transparency in decision-making algorithms. Replacing opaque models with explainable AI (XAI) can mitigate this problem and support accountability and ethical standards. Research on XAI in finance is extensive but often limited to specific use cases such as fraud detection and credit risk assessment.
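To make the credit risk assessment use case concrete, the sketch below applies SHAP, one widely used XAI technique, to a gradient-boosted default classifier. The data and feature names are synthetic stand-ins introduced here for illustration and do not come from any study summarized above.

```python
# Hedged sketch: explaining a credit-default classifier with SHAP attributions.
# Data and feature names are synthetic stand-ins, not from any cited paper.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history", "loan_amount"]  # hypothetical

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes per-feature attributions for each prediction,
# turning the "black box" score into an additive explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute attribution across the sample.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```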
"We here propose a novel XAI [eXplainable AI] technique for deep learning methods (DL) which preserves and exploits the natural time ordering of the data. Simple applications to financial data illustrate the potential of the new approach in the context of risk-management and fraud-detection."
"Explainable Artificial Intelligence (XAI) models allow for a more transparent and understandable relationship between humans and machines. The insurance industry represents a fundamental opportunity to demonstrate the potential of XAI, with the industry’s vast stores of sensitive data on policyholders and centrality in societal progress and innovation."