“AI will transform financial services”
Yet to date, most initiatives fail to deliver. The reason is rarely the technology itself: organizations struggle to integrate AI deeply into their fabric. Success requires working on four strongly interdependent dimensions: governance, operating model, technology architecture and change management.
This article sets out a practical framework for each dimension.
- Governance: Who decides? Who ensures compliance?
- Operating Model: What to deliver? How to scale?
- Technology: Safe, economic, model-agnostic
- Change: Adoption, skills, culture
1. Governance:
Who decides? Who ensures compliance?
A sound starting point is to establish an AI Authority: a cross-functional body that sets both the strategic ambitions and the boundaries for AI within the organization. To be effective, it must assemble senior leaders representing business, operations and control functions – not technologists alone.
The AI Authority ensures that the organization’s AI vision and strategic objectives are clearly articulated and that all activities align to them.
From there, process-level guardrails should be enacted. Among the most important:
- Human-in-the-loop accountability
Define who owns each AI-supported decision. Automated does not mean unaccountable: a named owner should be assigned to every process where AI plays a role.
- Risk-tiered use cases
Not all AI is equal. A clear framework should state the appropriate policy and controls for low, medium, high and critical risk levels. An internal chatbot is not a credit-scoring engine, and attention should be distributed accordingly.
- Data privacy by design
A policy should require mapping every data source feeding AI models, and controls must be defined to enforce consent, anonymization and audit trails.
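The risk-tiered guardrail above can be sketched as a simple policy lookup. This is an illustrative sketch only: the four tier names come from the framework above, but the specific controls assigned to each tier are assumptions an institution would replace with its own.

```python
# Illustrative mapping from risk tier to required controls.
# Tier names follow the framework above; the controls listed
# per tier are assumptions for illustration.
RISK_TIER_CONTROLS = {
    "low": ["named_owner", "usage_logging"],
    "medium": ["named_owner", "usage_logging",
               "periodic_output_review"],
    "high": ["named_owner", "usage_logging",
             "periodic_output_review",
             "human_in_the_loop_signoff", "data_source_mapping"],
    "critical": ["named_owner", "usage_logging",
                 "real_time_monitoring",
                 "human_in_the_loop_signoff", "data_source_mapping",
                 "model_validation", "audit_trail"],
}

def required_controls(risk_tier: str) -> list:
    """Return the controls a use case must implement for its tier."""
    if risk_tier not in RISK_TIER_CONTROLS:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")
    return RISK_TIER_CONTROLS[risk_tier]
```

In this sketch, an internal chatbot classified as "low" needs only an owner and logging, while a credit-scoring engine classified as "critical" triggers the full set of controls, including real-time monitoring.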
AI risk management should then be woven into existing risk management structures, with particular attention to agentic AI processes. Here periodic controls are insufficient: real-time monitoring and human validation checkpoints will be essential.
Finally, AI governance must ensure performance management is implemented, setting targets and measuring outcomes, just like for any other transformation program.
2. Operating Model:
What to deliver? How to scale?
Theoretical operating model design has its uses, but the first order of business should be to deliver value early. The best practice is clear: start by delivering proven use cases and build upon them. Ideal use cases are proven, narrow in scope but deep in impact; this way set-up cost is contained and results are quickly visible.
Where to look in Financial Services for such use cases?
The precise angle will depend on each institution’s processes, systems, data and pain points. But chances are that meaningful value resides in the following areas:
| Domain | Use Case & Potential Benefits |
|---|---|
| Retail Banking, Insurance | Customer Relation Center |
| Private Banking | Augmented Relationship Manager |
| Asset Management | RFP preparation |
| Insurance | Claims Management |
| Technology – All sectors | AI-Assisted Coding |
Then, leverage the momentum to structure the operating model (the “delivery” engine).
Key topics and best practices to consider:
- Centralized vs. decentralized: decide whether the delivery team is a centralized AI Centre of Excellence or a distributed workforce
- Reusable platform, not bespoke projects: Invest in shared infrastructure and resources so the 10th use case costs a fraction of the first
- Measure ROI: Each initiative should have predefined success criteria and be measured against them
- Source use cases close to the users: break silos early and go to the business lines and operators to understand their workflows and co-design solutions with them. Proximity to the end users is the best predictor of adoption at scale
- Leverage the governance to accelerate decisions, be firm on guardrails, and keep the momentum going
3. Technology Architecture:
Enterprise AI means safe, economic and model-agnostic AI
Architecture choices must support compliance and security, make economic sense, and maintain strategic independence from any single model provider.
The central scenario for Enterprise AI architecture should therefore combine three elements:
- Access to frontier models through Virtual Private Cloud deployments (e.g. Azure AI Foundry, AWS Bedrock, Google Vertex AI) to benefit from the latest advancements of leading models for carefully selected use cases
- Deployment of open-source models on private or on-premises infrastructure to contain cost for high-volume, lower-complexity tasks and to support proprietary use cases where data must not leave the organization's boundary
- An AI Gateway to route traffic to the most appropriate and economic model, monitor cost, enforce policies and provide a single observability layer across all model providers
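The routing logic of such a gateway can be sketched in a few lines. This is a minimal illustration, not a definitive implementation: the model names, task categories and relative costs below are all assumptions chosen to show the routing principle.

```python
# Minimal sketch of an AI Gateway routing rule. Two deployment
# targets are assumed for illustration: a frontier model behind a
# VPC, and an open-source model served on-premises.
from dataclasses import dataclass

@dataclass
class Route:
    model: str        # target model (illustrative name)
    deployment: str   # where the call is served
    rel_cost: float   # assumed relative cost per request

ROUTING_TABLE = {
    # High-volume, routine operations go to the cost-efficient model
    "summarization":     Route("open-source-8b", "on_premises", 0.01),
    "classification":    Route("open-source-8b", "on_premises", 0.01),
    "extraction":        Route("open-source-8b", "on_premises", 0.01),
    # Complex reasoning goes to a frontier model via the VPC deployment
    "complex_reasoning": Route("frontier-large", "vpc", 1.00),
}

def route(task_type: str) -> Route:
    """Pick the most appropriate and economic target for a task."""
    if task_type not in ROUTING_TABLE:
        raise ValueError(f"No routing policy for task: {task_type!r}")
    # A real gateway would also log cost, enforce policy and emit
    # observability data at this point
    return ROUTING_TABLE[task_type]
```

Because callers depend only on `route()` and the task category, swapping a model provider means editing the routing table, not re-wiring every application.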
This architecture delivers three critical properties:
- Compliance and security: virtual private cloud, private cloud, and/or on-premises deployment, matched to the sensitivity of the data, to keep sensitive data and proprietary use cases under control
- Economic control: Route complex reasoning tasks to the most sophisticated models available while directing high-volume, routine operations (document summarization, classification, extraction) to cost-efficient ones
- Provider independence: The AI Gateway makes the architecture agnostic to the model provider. No need to re-wire everything to adopt new models or switch providers
From there, many nuances are possible based on an institution's existing infrastructure and its specific security and regulatory demands.
4. Change Management
Technology only delivers value when people use it.
Adoption requires training, communication and incentive alignment. Without deliberate change management, even well-governed, well-architected AI programmes will underdeliver. Key actions to boost adoption include:
- Tiered upskilling: AI-literate, AI-enabled, AI-native
Not everyone needs to write prompts or fine-tune models; define skill tiers and tailor training accordingly.
For illustration, tiers implemented in an organization to segment training:
| Tier | Profile |
|---|---|
| AI-literate | Understands what AI can and cannot do; can evaluate AI outputs critically |
| AI-enabled | Uses AI tools daily within existing workflows; can configure standard solutions; contributes to use-case identification |
| AI-native | Builds, adapts and optimizes AI solutions; contributes to the platform and coaches others |
- Champions network embedded in business lines
Identify champions in each unit (~5 to 10% of the team) and give them early access, dedicated support, and a mandate to coach their peers. In change management, peer influence outperforms top-down mandates.
- Friction removal
Friction kills adoption. If AI saves hours per week on a time-consuming task but the process still ends with a manual sign-off on a legacy form, adoption will stall. Work to remove residual friction systematically.
- Incentive alignment
Link AI adoption to performance objectives and make it visible in appraisals whenever it makes sense.
- Executive sponsorship
Finally, as always, change must also be visible at the top.
AI is not a product you install, it’s a transformation you lead