AI/ML and Regulation: A Risk Management Perspective in a Changing Landscape

The growing use of artificial intelligence (AI) and machine learning (ML) is reshaping business models, operational processes, and decision-making structures across industries. While it remains uncertain how enduring this technological wave will be, its current impact is substantial enough that institutions cannot afford to ignore it. The core question is not whether AI will dominate the future, but how organisations position themselves amid a development pace that continues to exceed the rhythm of regulatory adaptation.

From an ethical standpoint, AI systems are often framed around the principles of non-maleficence (do no harm) and beneficence (do good). These technologies offer meaningful benefits in terms of automation, efficiency, and analytical power, yet they also introduce new layers of risk and accountability, particularly in heavily regulated sectors such as banking, insurance, and healthcare. Less regulated industries may adopt AI more quickly, but faster adoption does not necessarily mean lower exposure to poorly controlled or misunderstood AI-driven processes.

A Fragmented Global Regulatory Landscape

There is no unified global approach to AI regulation. The European Union has opted for a prescriptive framework through the AI Act, the United States continues to rely on sector-specific guidance, and China is developing its own model focusing on algorithmic oversight and content governance. This fragmentation complicates cross-border consistency, ethical alignment, and collaborative development of AI systems.

Regulatory Context in Bosnia and Herzegovina

Bosnia and Herzegovina currently lacks a dedicated AI/ML regulatory framework. The banking sector largely relies on traditional processes but is gradually introducing digital tools — including generative models in limited use (e.g., Microsoft Copilot) and conversational interfaces.

Still, four regulatory documents provide indirect but relevant guidance for the application of AI/ML technologies:

  1. Law on Personal Data Protection

  2. Decision on the Internal Governance System in a Bank

  3. Decision on ICT Systems and ICT Risk Management

  4. Decision on Outsourcing in a Bank

Together with the Supervisory Expectations for 2025, these regulations create an initial foundation for examining AI/ML systems through established risk management and governance structures. While supervisory attention remains focused on models used in expected credit loss (ECL) calculations, regulators are increasingly emphasising broader technological and emerging risks.

Data Protection: Key Considerations for AI/ML

Three areas are particularly relevant:

  1. Lawfulness of processing

  2. Transparency and data control

  3. Automated decision-making and risk mitigation

AI/ML models are not granted a special regulatory status — they must comply with the same legal grounds applicable to any personal data processing.

Internal Governance: Implications for Model and Data Management

The Decision on the Internal Governance System outlines expectations for organisational structure, control functions, and risk management processes. For AI/ML systems, the most relevant elements include:

  • model risk management requirements,

  • roles and responsibilities of control functions,

  • the internal control system,

  • data infrastructure and data quality standards.

Given their complexity, AI/ML systems typically demand even higher-quality datasets than traditional econometric models.
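In practice, data-quality standards of this kind are often enforced with automated checks before a dataset ever reaches a model. A minimal sketch in Python illustrates the idea; the field names, rules, and thresholds here are purely hypothetical examples, not requirements taken from the Decision:

```python
# Illustrative data-quality gate for a model training dataset.
# Field names and rules are hypothetical, not regulatory requirements.

def check_dataset(rows):
    """Run basic completeness, validity, and uniqueness checks.

    `rows` is a list of dicts; returns a list of human-readable
    findings (an empty list means all checks passed).
    """
    findings = []
    required = ("client_id", "income", "exposure")

    # Completeness: every required field present and non-null.
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) is None:
                findings.append(f"row {i}: missing '{field}'")

    # Validity: monetary amounts must be non-negative.
    for i, row in enumerate(rows):
        for field in ("income", "exposure"):
            value = row.get(field)
            if value is not None and value < 0:
                findings.append(f"row {i}: negative '{field}' ({value})")

    # Uniqueness: one record per client.
    seen = set()
    for i, row in enumerate(rows):
        key = row.get("client_id")
        if key in seen:
            findings.append(f"row {i}: duplicate client_id {key}")
        seen.add(key)

    return findings

sample = [
    {"client_id": 1, "income": 1200, "exposure": 5000},
    {"client_id": 1, "income": -50, "exposure": None},
]
findings = check_dataset(sample)
```

A gate like this would reject the second record on three grounds (missing exposure, negative income, duplicate client identifier) before the data could distort model training.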

ICT Risk: Technology as the Foundation

The Decision on ICT Risk covers the full technological environment necessary for AI/ML systems — infrastructure, development, software integration, monitoring, and ongoing oversight. Recent regulatory changes, particularly regarding the strengthening or repositioning of the ICT risk function, indicate clear expectations for increased scrutiny of technological and emerging risks.

Outsourcing: The Most Sensitive Area for AI/ML Implementation

AI/ML development often requires capabilities that many banks do not possess internally, making outsourcing an attractive option. However, the Decision on Outsourcing sets strict requirements for materially important activities — a category into which many AI systems could reasonably fall.

Key obligations include:

  • comprehensive risk management,

  • unrestricted audit and access rights,

  • business continuity and exit strategies,

  • oversight of subcontractors,

  • due diligence on service providers.

Explainability remains one of the central challenges. Limited model transparency can hinder validation, supervisory review, and ultimately, regulatory approval. Although explainable AI (XAI) techniques exist, they introduce their own complexities, requiring expertise on both the bank’s and the supervisor’s side.
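One widely used model-agnostic XAI technique is permutation feature importance: perturb one input column and measure how much the model's accuracy drops. A minimal sketch follows; the toy "credit" scoring rule and the data are hypothetical, and for determinism the sketch rotates the column by one place, whereas real implementations shuffle it randomly and average over repetitions:

```python
# Sketch of permutation feature importance, a common model-agnostic
# explainability (XAI) technique. The toy model and data below are
# hypothetical, purely for illustration.

def model_predict(row):
    # Toy scoring rule: income and debt matter, shoe size does not.
    income, debt, shoe_size = row
    return 1 if income - 0.5 * debt > 50 else 0

def accuracy(rows, labels):
    hits = sum(model_predict(r) == y for r, y in zip(rows, labels))
    return hits / len(rows)

def permutation_importance(rows, labels, feature_idx):
    # Real implementations shuffle the column randomly and average over
    # repetitions; here we rotate it by one place to stay deterministic.
    col = [r[feature_idx] for r in rows]
    rotated = col[-1:] + col[:-1]
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, rotated):
        r[feature_idx] = v
    # The larger the accuracy drop, the more the model relies on the feature.
    return accuracy(rows, labels) - accuracy(permuted, labels)

rows = [(80, 20, 40), (30, 10, 42), (60, 40, 39), (90, 10, 41)]
labels = [model_predict(r) for r in rows]  # baseline accuracy is 1.0

for name, idx in [("income", 0), ("debt", 1), ("shoe_size", 2)]:
    print(name, permutation_importance(rows, labels, idx))
```

Here the irrelevant feature (shoe size) scores exactly zero while the features the model actually uses show a measurable accuracy drop — the kind of evidence a validator or supervisor could inspect without opening the model itself.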

Relevant International Guidance for the Banking Sector

Several European bodies already provide direction on AI/ML use in financial services:

  1. EU AI Act – establishes requirements for safety, transparency, and accountability of high-risk AI systems. Creditworthiness assessment and risk modelling fall under high-risk categories, with implementation deadlines stretching from 2024 to 2027.

  2. EBA – Machine Learning for IRB Models (2023) – highlights challenges such as bias, stability, explainability, and links with GDPR and the AI Act.

  3. ECB Guide to Internal Models (2025) – outlines supervisory expectations for internal models, including components that incorporate ML techniques, with detailed requirements for validation, independent review, and lifecycle management.

Implications for Banks in BiH

If European trends continue, AI/ML systems in the domestic banking sector may ultimately be viewed as high-risk, while outsourcing arrangements involving AI could be treated as materially important. This would require banks to implement a robust model governance framework ensuring:

  1. transparency and explainability,

  2. independent validation,

  3. comprehensive auditability and lifecycle oversight.


AI/ML technologies offer competitive advantages but demand disciplined integration into governance and risk management structures. In a sector characterised by prudence and regulatory rigour, the introduction of AI/ML will likely evolve gradually, with emphasis on control, accountability, and clear justification for each application.
