The Black Box Dilemma: Challenges in AI Adoption for Banking Systems

iASPEC team
October 4, 2025
5 min read
Introduction

The integration of artificial intelligence (AI) into financial institutions has accelerated, promising enhanced operational efficiency, risk management, and customer engagement. However, the opacity of complex AI models—often referred to as "black boxes"—poses significant challenges to transparency, accountability, and regulatory oversight. The Bank for International Settlements (BIS) Financial Stability Institute (FSI) Occasional Paper No. 24, Managing Explanations: How Regulators Can Address AI Explainability (Pérez-Cruz et al., 2025), provides a comprehensive analysis of these issues, emphasizing the need for robust model risk management (MRM) frameworks to ensure explainability. This paper, published in September 2025, builds on global discussions about AI's role in finance, highlighting how limited explainability can exacerbate systemic risks and hinder compliance with existing regulations.

This article synthesizes the key findings from Pérez-Cruz et al. (2025), summarizes critical information for readers, and integrates insights from complementary sources, including BIS reports on AI governance in central banks (BIS Consultative Group on Risk Management, 2025; Irving Fisher Committee on Central Bank Statistics, 2025), regulatory developments (Crisanto et al., 2024), AI supply chain dynamics (Gambacorta & Shreeti, 2025), and enterprise transformation strategies (McKinsey & Company, 2024). The thesis asserts that while explainability remains a core regulatory hurdle, strategic adaptations in MRM—coupled with technological innovations like hybrid AI-blockchain architectures—can foster resilient AI integration in banking and payment systems, aligning with our company's expertise in enabling secure, interoperable instant payment solutions.

Understanding AI Explainability: Key Concepts and Challenges

Pérez-Cruz et al. (2025) define explainability as "the extent to which a model’s output can be explained to a human," distinguishing it from interpretability, which refers to inherently transparent models like decision trees. The paper categorizes explainability techniques into intrinsic (built-in transparency) and post-hoc (after-the-fact analysis), further subdivided into global (overall model behavior) and local (specific predictions). For instance, post-hoc methods such as SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations) help dissect black-box models like neural networks, but they have limitations, including instability, high computational costs, and potential inaccuracies in high-dimensional data (Pérez-Cruz et al., 2025, pp. 8-10).
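To make the post-hoc category concrete, the sketch below computes local SHAP attributions for a small black-box classifier. It is a minimal illustration assuming the open-source shap and scikit-learn packages; the synthetic data and credit-style feature names are hypothetical and not drawn from Pérez-Cruz et al. (2025).

```python
# Minimal sketch: local post-hoc explanation of a black-box classifier with
# KernelSHAP. Assumes scikit-learn and the shap package; the credit-scoring
# framing and feature names are illustrative only.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))  # e.g. income, utilisation, tenure, delinquencies (hypothetical)
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, _ = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # opaque "black-box" scorer

# Local explanation: additive feature attributions for one applicant.
background = shap.sample(X_train, 100)  # small background set keeps KernelSHAP tractable
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
phi = explainer.shap_values(X_test[:1])  # per-feature contributions to this prediction
print("P(default) =", model.predict_proba(X_test[:1])[0, 1], "attributions:", phi)
```

The instability the paper flags shows up in practice as attributions that shift when the background sample or perturbation budget changes, which is one reason post-hoc methods alone may not satisfy supervisors.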

The authors underscore challenges in enforcing explainability within financial contexts. Existing MRM guidelines from bodies like the Basel Committee on Banking Supervision (BCBS), International Association of Insurance Supervisors (IAIS), and national authorities (e.g., US Federal Reserve's SR 11-7) implicitly require explainability through demands for model validation, governance, and documentation. However, AI's complexity often conflicts with these, particularly for third-party models where access to underlying algorithms is restricted. The paper notes that non-explainable AI in critical areas—like credit scoring or fraud detection—could amplify systemic risks, as highlighted in the Financial Stability Board (FSB, 2024) report cited therein (Pérez-Cruz et al., 2025, p. 6).

Complementing this, Crisanto et al. (2024) in FSI Insights No. 63: Regulating AI in the Financial Sector detail recent developments, such as the European Union's AI Act and Singapore's Model AI Governance Framework, which mandate tiered explainability based on risk levels. Similarly, the Irving Fisher Committee (2025) survey reveals that 70% of central banks are implementing AI for supervisory purposes but struggle with explainability, with only 40% having dedicated governance structures. These findings provide a fuller picture: explainability is not merely a technical issue but intersects with ethical considerations, such as bias mitigation and consumer protection under ICP 19 (Conduct of Business) from the IAIS.

Regulatory Strategies and Potential Adjustments

Pérez-Cruz et al. (2025) advocate for adjustments to MRM guidelines to better accommodate AI. Key recommendations include:

  • Explicit Recognition of Trade-offs: Acknowledge the balance between model performance and explainability, potentially restricting black-box models to low-risk applications or imposing output floors, akin to Basel III's tempering of internal ratings-based models (Pérez-Cruz et al., 2025, pp. 14-18).
  • Suite of Explainability Methods: Require multiple techniques for complex models to provide comprehensive insights, tailored to audiences (e.g., simplified explanations for consumers vs. detailed analyses for regulators).
  • Enhanced Oversight for Third-Party Models: Mandate transparency in vendor contracts and invest in supervisory upskilling to evaluate AI effectively.
  • Risk-Based Approach: Apply higher explainability standards to high-impact use cases, such as those affecting customers or financial stability (a minimal illustrative encoding of such tiering follows this list).
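
As a rough illustration of how such tiering might be operationalised inside a model inventory, the sketch below encodes a hypothetical risk-based explainability policy. The tiers, method counts, and field names are our own construction, not requirements from Pérez-Cruz et al. (2025) or any regulation.

```python
# Illustrative sketch only: one way a risk-based explainability policy could be
# encoded for model inventory checks. Tiers, thresholds and examples are hypothetical.
from dataclasses import dataclass

POLICY = {
    "high":   {"intrinsic_or_surrogate_required": True,  "min_posthoc_methods": 2},  # e.g. credit scoring
    "medium": {"intrinsic_or_surrogate_required": False, "min_posthoc_methods": 2},  # e.g. fraud triage
    "low":    {"intrinsic_or_surrogate_required": False, "min_posthoc_methods": 1},  # e.g. internal routing
}

@dataclass
class ModelRecord:
    name: str
    risk_tier: str                 # "high" | "medium" | "low"
    interpretable_by_design: bool  # e.g. scorecard, decision tree
    posthoc_methods: tuple         # e.g. ("SHAP", "LIME", "counterfactuals")

def compliant(m: ModelRecord) -> bool:
    rule = POLICY[m.risk_tier]
    if rule["intrinsic_or_surrogate_required"] and not m.interpretable_by_design:
        return False
    return len(m.posthoc_methods) >= rule["min_posthoc_methods"]

# A high-risk neural scorer with only post-hoc explanations fails this hypothetical policy.
print(compliant(ModelRecord("retail-credit-nn", "high", False, ("SHAP", "LIME"))))
```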

These recommendations align with Hernández de Cos (2024), whose speech Managing AI in Banking: Are We Ready to Cooperate? calls for international collaboration to address AI's macroprudential implications, including hallucinations and data privacy risks. The BIS Consultative Group on Risk Management (2025) further emphasizes board-level accountability in central banks, where AI adoption is rising for monetary policy analysis but lacks standardized explainability protocols.

In-Depth Insights: Bridging Explainability with Payment System Innovations

Integrating Pérez-Cruz et al. (2025) with Desai et al. (2024) in BIS Working Papers No. 1188: Finding a Needle in a Haystack, an in-depth insight emerges on AI explainability in high-value payment systems (HVPS). Desai et al. (2024) propose an ensemble ML framework using isolation forests and autoencoders for anomaly detection, achieving high F1-scores in imbalanced datasets. However, the black-box nature of autoencoders raises explainability concerns: regulators may struggle to verify why a transaction was flagged, potentially leading to compliance issues under anti-money laundering (AML) frameworks like the FATF recommendations.
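
To illustrate the anomaly-detection half of that pipeline (not the authors' implementation), the sketch below scores synthetic payment records with scikit-learn's IsolationForest; the features, volumes, and flagging rule are hypothetical.

```python
# Rough sketch of unsupervised anomaly scoring on synthetic payment records with an
# isolation forest. NOT the Desai et al. (2024) implementation; features, contamination
# rate and the flagging rule are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: log(amount), hour of day, counterparty-novelty score (all synthetic)
normal = np.column_stack([rng.normal(10, 1, 5000), rng.normal(13, 3, 5000), rng.beta(2, 8, 5000)])
odd    = np.column_stack([rng.normal(15, 1, 10),   rng.normal(3, 1, 10),    rng.beta(8, 2, 10)])
X = np.vstack([normal, odd])

iso = IsolationForest(n_estimators=200, contamination=0.005, random_state=0).fit(X)
scores = -iso.score_samples(X)      # higher = more anomalous
flagged = np.argsort(scores)[-10:]  # the ten most anomalous transactions
print("flagged row indices:", sorted(flagged))
# The explainability gap: the forest returns a score, not a reason. Post-hoc
# attribution (e.g. SHAP on score_samples) would still be needed to tell an
# AML analyst *why* a payment was flagged.
```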

A more specific insight: in cross-border HVPS, where interoperability challenges abound (as discussed in BIS Papers No. 152, 2024), explainability can be enhanced through federated learning integrated with blockchain. This allows decentralized model training across borders while maintaining audit trails for decisions, reducing opacity by 20-30% in simulations (drawing on blockchain-AI hybrids in the literature, e.g., Retzlaff et al., 2024, cited in Pérez-Cruz et al., 2025). Yet this introduces supply chain vulnerabilities: Gambacorta and Shreeti (2025) warn of concentration risks in AI infrastructure, where reliance on big tech for GPUs could propagate cyber risks if explainability tools fail during peak loads. For central banks, this necessitates "explainability stress tests" that simulate adversarial inputs to validate model robustness, extending BCBS principles to AI-driven payments.
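
One way such a stress test could be operationalised, purely as our own illustrative construction rather than any BCBS or BIS procedure, is to perturb a flagged record slightly and measure how much the anomaly score (and hence any explanation built on it) moves:

```python
# Hypothetical "explainability stress test": perturb an input slightly and check
# whether the anomaly score stays stable. Our illustrative construction, not a
# procedure from the cited BIS papers; the 10% tolerance is arbitrary.
import numpy as np

def score_stability(score_fn, x, n_trials=100, eps=0.01, seed=0):
    """Relative spread of scores under small random perturbations of one record x."""
    rng = np.random.default_rng(seed)
    base = score_fn(x.reshape(1, -1))[0]
    perturbed = x + rng.normal(scale=eps * np.abs(x), size=(n_trials, x.size))
    return np.std(score_fn(perturbed)) / (abs(base) + 1e-12)

# Usage with the isolation forest from the previous sketch (threshold hypothetical):
# instability = score_stability(lambda d: -iso.score_samples(d), X[flagged[0]])
# assert instability < 0.1, "score (and its explanation) unstable under small perturbations"
```

The same check could be applied to the attributions themselves (for example, SHAP values) rather than the raw score, which is closer to testing the explanation than the model.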

From an enterprise perspective, McKinsey & Company (2024) advocates rewiring workflows with multiagent AI systems to extract value. Linking this to explainability, banks could deploy agentic frameworks in which one agent handles prediction and another generates post-hoc explanations (a rough sketch of this division of labour follows below), boosting developer productivity by 40% while ensuring regulatory compliance. A further insight: in emerging markets with informal economies (e.g., via systems like India's UPI), non-explainable AI might exacerbate inclusion gaps if decisions lack transparency; regulators could mandate "explainability thresholds" calibrated to socioeconomic data, preventing biases that amplify financial instability, as per FSB (2024).
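
Returning to the two-role pattern above, the division of labour could look roughly like the sketch below. This is a hypothetical structure of our own, not McKinsey's framework or any vendor's API: one component returns the decision score, another turns per-feature attributions into a plain-language rationale.

```python
# Illustrative two-role "agentic" pattern: one component produces the decision,
# a second produces a human-readable rationale from the attributions.
# Hypothetical structure only; names and values are made up for illustration.
from typing import Callable, Sequence

def prediction_agent(score_fn: Callable, record) -> float:
    """Returns the model's raw decision score for one record."""
    return float(score_fn([record])[0])

def explanation_agent(attributions: Sequence[float], feature_names: Sequence[str], top_k: int = 2) -> str:
    """Turns per-feature attributions (e.g. SHAP values) into a plain-language summary."""
    ranked = sorted(zip(feature_names, attributions), key=lambda t: abs(t[1]), reverse=True)
    parts = [f"{name} ({'raised' if v > 0 else 'lowered'} the score)" for name, v in ranked[:top_k]]
    return "Main drivers: " + "; ".join(parts)

# Example wiring (scores and attributions hypothetical):
score = prediction_agent(lambda rows: [0.82 for _ in rows], {"amount": 5_000})
print(f"score={score:.2f};",
      explanation_agent([0.31, -0.12, 0.02], ["credit utilisation", "account tenure", "txn hour"]))
```

Keeping the explanation step separate also lets the same audience-tailoring recommended by Pérez-Cruz et al. (2025) be applied: a consumer-facing summary and a regulator-facing attribution report can be generated from one prediction.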

Conclusion

Pérez-Cruz et al. (2025) illuminate a path forward for regulators to tackle AI explainability, urging adaptations in MRM to safeguard financial stability amid rapid innovation. By synthesizing this with governance surveys, regulatory insights, and practical frameworks, we envision a future where AI enhances payment ecosystems without compromising trust. Central banks and institutions must prioritize upskilling, international cooperation, and ethical deployments to harness AI's potential fully.

As an international provider of instant payment solutions and cross-border interbank transfer systems, our company empowers banks and central banks with AI-integrated platforms that prioritize explainability, ensuring precision in fraud detection and regulatory adherence. Contact us to explore how our solutions can transform your financial infrastructure.
