The Central Bank of Kenya (CBK) has cautioned financial institutions against rushing to adopt artificial intelligence (AI) without addressing ethical, legal, and operational risks that could undermine consumer trust and the stability of the banking sector.
In its Bank Annual Report 2024, CBK noted that the rapid growth of AI tools in recent years has brought significant opportunities for the banking industry, including faster decision-making, fraud detection, and personalised services. However, it warned that these technologies also carry inherent risks if they are not deployed fairly, transparently, and responsibly.
“In recent years, the field of AI has witnessed rapid advancements, and many applications have emerged in the banking sector. As a result, it has become increasingly important to consider the consequences that AI systems have and how they can be used fairly, transparently, and appropriately,” CBK stated.
The regulator cited bias and discrimination as a major concern, explaining that AI models are trained on large datasets that may contain historical or systemic biases. The apex bank said that, if left unchecked, these biases could be amplified in AI-driven decision-making, such as screening applicants for loans or financial products, potentially locking out certain customer groups.
“Another concern is algorithmic bias. This type of bias occurs when there are systematic errors in the design and implementation of models,” the report noted, adding that human biases can also influence AI outcomes at every stage—from design to operation. CBK advised banks to ensure training data is representative and to put in place strong governance frameworks.
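The kind of pre-deployment check the regulator describes could be sketched as a simple demographic-parity test on historical lending data: compare approval rates across customer groups and flag large gaps before a model trained on that data goes live. This is a minimal illustration only; the group labels, sample data, and the 0.8 "four-fifths" threshold are assumptions, not CBK requirements.

```python
def approval_rates(records):
    """records: list of (group, approved) pairs -> {group: approval rate}."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold` times the
    best-off group's rate (the common four-fifths rule of thumb)."""
    rates = approval_rates(records)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Toy historical loan decisions: group B is approved half as often as A.
history = [("A", True), ("A", True), ("A", False),
           ("B", False), ("B", False), ("B", True)]
flagged = disparate_impact(history)
# group B (1/3 approved vs A's 2/3) falls below the threshold and is flagged
```

A check like this would sit inside the governance framework CBK calls for, running whenever training data is refreshed.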
The report also stressed the need for transparency and explainability in AI. Many AI systems operate as “black boxes”, where it is difficult to trace how inputs lead to outputs.
CBK warned that this opacity could create legal or regulatory challenges if banks cannot justify automated decisions to customers or regulators. It recommended the use of explainable AI tools that clarify the reasoning behind model predictions.
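For the simplest class of scoring models, the explainability the regulator recommends can be as direct as decomposing a decision into per-feature contributions, so a bank can tell a customer which factor drove the outcome. The feature names and weights below are invented purely for illustration; real models would need heavier techniques such as SHAP or LIME.

```python
def explain_score(weights, features):
    """For a linear scoring model, each feature's contribution is simply
    weight * value; return the total score and the breakdown."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

# Hypothetical credit-scoring weights and one applicant's inputs.
weights = {"income": 0.5, "arrears": -2.0, "tenure_years": 0.3}
applicant = {"income": 4.0, "arrears": 1.0, "tenure_years": 5.0}

score, why = explain_score(weights, applicant)
# score = 2.0 - 2.0 + 1.5 = 1.5; the arrears term is what pulled it down,
# which is exactly the justification an automated decision would need.
```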
Data protection and privacy emerged as another area of concern, especially with the surge in mobile and internet banking. CBK warned that the vast amounts of personal data collected to train AI systems must be processed, stored, and used in strict compliance with data protection laws.
“Banks should employ effective data governance frameworks to protect and prevent the unauthorised access of sensitive information,” the report said, listing data encryption, anonymisation, and access controls as critical safeguards. It stressed that adherence to Kenya’s Data Protection Act and related global standards was non-negotiable.
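One safeguard in that list, anonymisation, is often implemented as pseudonymisation: replacing direct identifiers with a keyed hash before data reaches a training pipeline, so records can still be linked while the customer's identity stays protected. A minimal sketch using Python's standard library follows; the field names and the placeholder key are assumptions for illustration.

```python
import hashlib
import hmac

# Placeholder only: a real key lives in a secrets vault and is rotated.
SECRET_KEY = b"rotate-me-and-store-in-a-vault"

def pseudonymise(record, id_fields=("national_id", "phone")):
    """Return a copy of `record` with identifying fields replaced by keyed
    HMAC-SHA256 tokens: stable for joins, irreversible without the key."""
    out = dict(record)
    for field in id_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, str(out[field]).encode(),
                              hashlib.sha256).hexdigest()
            out[field] = digest[:16]  # truncated token for readability
    return out

row = {"national_id": "12345678", "phone": "+254700000000", "balance": 1500}
safe = pseudonymise(row)
# safe["balance"] is untouched; the identifiers are now opaque tokens
```

Because the same input always maps to the same token, pseudonymised datasets remain joinable across systems without ever exposing the raw identifiers.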
CBK further warned that AI can be weaponised by cybercriminals to enhance the scale and sophistication of cyberattacks. To counter this, it advised banks to integrate AI-driven threat detection and real-time response capabilities into their cybersecurity strategies.
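The real-time detection the regulator recommends can be illustrated with a toy anomaly monitor: score each burst of login attempts against a rolling baseline and flag statistical outliers for automated response. The window size and the 3-sigma threshold are illustrative assumptions, not a production detector.

```python
from collections import deque

class LoginRateMonitor:
    """Flag per-minute login counts that deviate sharply from recent history,
    e.g. a sudden credential-stuffing burst."""

    def __init__(self, window=10, sigma=3.0):
        self.history = deque(maxlen=window)  # recent per-minute counts
        self.sigma = sigma

    def observe(self, count):
        """Record `count`; return True if it is anomalous vs the baseline."""
        anomalous = False
        if len(self.history) >= 3:
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = var ** 0.5
            anomalous = std > 0 and (count - mean) / std > self.sigma
        self.history.append(count)
        return anomalous

monitor = LoginRateMonitor()
normal = [monitor.observe(c) for c in [10, 12, 11, 9, 10]]  # baseline traffic
attack = monitor.observe(200)  # sudden spike stands far outside the baseline
```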
The regulator also addressed the question of accountability, stating that as AI becomes more involved in decision-making, financial institutions must clearly define who is responsible for the outputs of these models. This includes setting up mechanisms to detect and address unintended consequences swiftly.
“Financial institutions need to be accountable for the AI systems they create. Banks must also have clear mechanisms to detect and address any unintended consequences swiftly. This will ensure that AI systems are developed and deployed responsibly,” CBK said.
The report concluded by urging banks to balance innovation with responsibility, noting that the long-term success of AI in the sector will depend on whether customers trust that it is being used to serve their interests without bias, discrimination, or abuse of personal data.