Unveiling the vision and value of GenAI
What does it take to build an intelligent assistant capable of delivering meaningful value to customers in a dynamic and highly regulated domain such as financial services?
Large language models (LLMs), such as Google Cloud’s generative AI models PaLM 2 and Gemini, have emerged as powerful tools for intelligent application development. LLMs can decode natural language into machine instructions, encode data into natural language, and serve as a general knowledge base. Yet, whilst these capabilities hold immense promise, integrating LLMs into real-world applications within a consumer-facing business presents complex challenges.
The first challenge lies in bridging the gap between LLM capabilities and user needs. Whilst LLMs can decode and encode information, we want to use them to provide users with actionable insights and personalised experiences. For instance, users are not merely interested in generic information about a bank’s products and services; they require specific, personalised and often sensitive information to help them make informed decisions.
Addressing this challenge requires a meticulous approach to integration, one that prioritises security, performance, reliability, cost-effectiveness and trust. At GFT, we have combined Google Cloud LLM technology with live data from a modern core banking system to create a GenAI intelligent assistant. Integrating LLMs with a core banking system in this way unlocks new possibilities for delivering tailored and valuable experiences to retail banking customers.
Architecting intelligent interaction
This technical integration work is often underemphasised in research and innovation projects, so GFT implemented the agent end to end, using Google Cloud Platform (GCP) and a cloud-based core banking system.
Our specialists have created a robust infrastructure capable of handling the complex demands and scaling challenges of customer-facing retail banking.
The architecture of the intelligent assistant comprises several key components:
- Front-end: Implemented in React, the front-end serves as the user interface, providing customers with a flexible conversational interface that also offers proactive suggestions.
- Gateway and authentication system: Utilising JSON Web Tokens (JWT), the gateway ensures secure authentication and authorisation, granting access to sensitive banking data only to fully authenticated users (a token-validation sketch follows this list).
- Orchestration engine: Running on Google Kubernetes Engine (GKE), the orchestration engine manages the interaction between the core banking system and the LLMs, ensuring precise control and security (an orchestration sketch also follows this list).
- Integration layer: This layer manages streamed updates from the core banking system, providing API access to a single source of truth for data such as balances, credit limits and user information.
- Core banking system: Hosted on Google Cloud, the existing core banking engine enables secure and reliable access to customer banking data.
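To make the gateway’s role concrete, the sketch below shows how an incoming request’s JWT might be validated before any banking data is exposed. This is a minimal illustration only: the issuer, audience, key source and claim names are assumptions made for the example, not the configuration of the deployed gateway.

```python
# Minimal sketch of JWT validation at the gateway (assumed configuration).
# Requires PyJWT: pip install "pyjwt[crypto]"
import jwt

# Hypothetical values; a real gateway would load these from configuration and
# fetch signing keys from the identity provider's JWKS endpoint.
ISSUER = "https://auth.example-bank.com"
AUDIENCE = "intelligent-assistant-api"
PUBLIC_KEY = open("idp_public_key.pem").read()


def authenticate(bearer_token: str) -> dict:
    """Verify the token's signature, expiry, issuer and audience.

    Returns the token claims if valid, otherwise raises jwt.InvalidTokenError.
    """
    claims = jwt.decode(
        bearer_token,
        PUBLIC_KEY,
        algorithms=["RS256"],                 # reject unsigned or downgraded tokens
        audience=AUDIENCE,
        issuer=ISSUER,
        options={"require": ["exp", "sub"]},  # expiry and subject must be present
    )
    return claims  # claims["sub"] identifies the authenticated customer
```

Only once a request has passed this check does the gateway forward it to the orchestration engine, attaching the customer identity taken from the verified claims rather than anything supplied by the client.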
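The orchestration engine is where the LLM meets live banking data. One common pattern on Vertex AI is function calling: the model is given declarations of banking ‘tools’, and when it decides one is needed, the orchestrator executes the corresponding call against the integration layer and returns the result so the model can answer with grounded, customer-specific data. The sketch below follows that pattern; the tool name, parameter schema, model name and integration-layer endpoint are illustrative assumptions rather than the production design.

```python
# Sketch of an orchestration turn using Vertex AI function calling.
# Assumes: pip install google-cloud-aiplatform requests
# The caller has already been authenticated at the gateway (see the JWT sketch above).
import requests
import vertexai
from vertexai.generative_models import (
    FunctionDeclaration,
    GenerativeModel,
    Part,
    Tool,
)

vertexai.init(project="my-gcp-project", location="europe-west2")  # placeholder values

# Declare one banking capability the model may request (hypothetical schema).
get_balance = FunctionDeclaration(
    name="get_account_balance",
    description="Return the current balance for one of the customer's accounts.",
    parameters={
        "type": "object",
        "properties": {"account_id": {"type": "string"}},
        "required": ["account_id"],
    },
)
banking_tools = Tool(function_declarations=[get_balance])
model = GenerativeModel("gemini-1.5-pro", tools=[banking_tools])


def call_integration_layer(name: str, args: dict, customer_id: str) -> dict:
    """Execute the requested tool against the integration layer's API
    (the endpoint and response shape are assumptions for illustration)."""
    if name == "get_account_balance":
        resp = requests.get(
            "https://integration.internal/customers/"
            f"{customer_id}/accounts/{args['account_id']}/balance",
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()
    raise ValueError(f"Unknown tool requested: {name}")


def answer(question: str, customer_id: str) -> str:
    """Run one turn: let the model answer directly or via a single tool call."""
    chat = model.start_chat()
    response = chat.send_message(question)
    candidate = response.candidates[0]
    if candidate.function_calls:  # the model asked for a tool
        call = candidate.function_calls[0]
        result = call_integration_layer(call.name, dict(call.args), customer_id)
        # Feed the tool result back so the model can phrase the final answer.
        response = chat.send_message(
            Part.from_function_response(name=call.name, response=result)
        )
    return response.text
```

Because the orchestrator, not the model, performs the call, the customer identity comes from the verified token and the model never handles credentials or raw connection details.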
However, realising the vision of an intelligent banking assistant goes far beyond the technical integration: it necessitates a deep understanding of user needs and expectations, and the ability to apply modern AI to meet them.