As banks and financial institutions (BFIs) increasingly turn to artificial intelligence (AI) to streamline operations and improve customer service, Nepal Rastra Bank (NRB) has drafted a comprehensive set of AI Guidelines and circulated them among stakeholders for feedback.
The guidelines are intended to ensure the safe, transparent and responsible use of AI across the financial system at a time when digital technologies are rapidly reshaping service delivery.
The proposed guidelines are designed for all institutions regulated by NRB, including commercial banks, development banks, finance companies, microfinance institutions, Nepal Infrastructure Bank, and payment system operators and service providers.
According to NRB, the primary objectives of the guidelines are to promote the adoption of AI tools in a way that enhances operational efficiency, innovation and customer experience, while safeguarding financial stability and institutional resilience.
The guidelines state that AI-driven decisions of licensed institutions must remain transparent, explainable, fair and accountable to ensure that customer rights and data privacy are protected and that no discriminatory outcomes arise from automated systems.
The central bank has also directed licensed institutions to mitigate a wide range of risks associated with AI, including operational, ethical, systemic, model-related and cyber risks. To address these issues, all licensed institutions are required to define their AI-related risk tolerance as part of their broader risk management framework and establish strong governance mechanisms with clearly assigned roles and responsibilities.
As per the guidelines, licensed institutions must develop a comprehensive AI strategy accompanied by an integrated governance framework. This should include detailed policies, procedures and internal controls to guide the secure development, deployment and monitoring of AI models. They must also ensure that AI systems can maintain critical services during disruptions and have mechanisms in place to detect faults, restore operations and minimize service impacts.
To strengthen oversight, the guidelines require licensed institutions to set up a cross-disciplinary AI steering committee—or designate an existing committee—to guide strategy, risk oversight and compliance. The committee should include senior management members with adequate expertise in technology and AI-related risks, and the overall AI framework must be approved by the Board of Directors.
The guidelines also distinguish between internal use of third-party AI tools and formal outsourcing. While internal use of ready-made AI tools for tasks such as drafting, summarizing or analysis will not be treated as outsourcing, BFIs must still follow their own risk and compliance policies. However, any AI-enabled service provided by an external vendor will require full outsourcing procedures, including due diligence, board approval and formal notification to NRB.
Licensed institutions are required to submit annual reports to the central bank, detailing AI use, risk controls and customer impacts, and maintain documentation—including data sources, algorithms and decision-making processes—for all AI systems.