
keepITtech: Enhancing Open Banking Solutions with Generative AI Pilot

Updated: Aug 1

Contributors: Radu Bobe & Ioana Dumitru


At IT Smart Systems, we were well-equipped to take on the challenge of starting our generative AI journey. With years of experience using various pre-AI tools in our products, we had sharpened the skills and knowledge needed to tackle this technology. Our team was excited to explore the possibilities that generative AI offered and was prepared to push the boundaries of innovation.

By leveraging our expertise and staying abreast of the latest developments in the field, we are confident in our ability to harness the power of generative AI to drive meaningful advancements across industries, improving the productivity of our workforce and our customers' experience. The main challenge remains doing things responsibly, and we place particular focus on that.


Boost Onboarding and Support Team Efficiency 


In this article, we're diving into how artificial intelligence can revolutionize our internal processes, particularly for our open banking solutions development team. Our primary focus areas are streamlining the onboarding process for our Fintech product development team and enhancing ticket resolution for our support team. While our support team still values the personal touch, our genAI model boosts efficiency by quickly organizing tasks and inquiries.  


Let's explore how AI is significantly impacting our operations and helping us work smarter, not harder. 


A Deep Dive into Building a Bedrock Knowledge Base 


Before diving into the specifics of implementing a generative AI knowledge base using Bedrock, it was a priority for us to establish an understanding of the components we wanted to use. This initial exploration covered concepts like Large Language Models (LLMs), model training, generative AI, Retrieval-Augmented Generation (RAG), and tokenization. Establishing this foundational knowledge, as well as analyzing the differences between prompt engineering, RAG, and model training, was crucial before moving on to the implementation itself.


We use the Amazon Bedrock Knowledge Bases managed service as the core component of our solution. Although a knowledge base can also be implemented via Bedrock API calls, we found the managed service more suitable for our approach, as the solution's main properties (e.g., model selection, chunk size, temperature) can be easily changed from the AWS console.


We then used Anthropic Claude 2 as the LLM and Cohere Embed Multilingual v3 as the embedding model. Using a multilingual embedding model enhances the solution's scalability, as the model supports texts in many languages (over 100 for Cohere Embed Multilingual v3). We store the embeddings in an Aurora vector database, maintaining a unique table entry for each vector embedding: an assigned identifier, the corresponding chunk text, and the metadata.
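
As an illustration of what such a table might look like, here is a minimal sketch using psycopg2 and the pgvector extension. The connection details, table name, and column names are placeholders rather than our actual setup; the 1024-dimension vector matches the output size of Cohere Embed Multilingual v3.

```python
import psycopg2

# Illustrative connection only -- endpoint, database, and credentials are placeholders.
conn = psycopg2.connect(
    host="aurora-cluster.cluster-xxxx.eu-central-1.rds.amazonaws.com",
    dbname="knowledgebase",
    user="bedrock_user",
    password="...",
)

with conn, conn.cursor() as cur:
    # The pgvector extension provides the vector column type.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    # One row per chunk: identifier, embedding, chunk text, and metadata.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS bedrock_kb (
            id        uuid PRIMARY KEY,       -- unique identifier per chunk
            embedding vector(1024),           -- Cohere Embed Multilingual v3 output size
            chunks    text,                   -- the chunk text itself
            metadata  json                    -- source document metadata
        );
    """)
```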


For data sources, we used an S3 bucket to upload all the relevant documents for the knowledge base. To keep everything up to date, we set up a Lambda job that automatically syncs every change in S3 (file upload, modification, or deletion) to the Bedrock knowledge base, as sketched below. This setup ensured that any new or updated documents in the S3 bucket were promptly reflected in the knowledge base, maintaining accuracy and consistency in our data.
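
A minimal sketch of that sync job, assuming the Lambda function is wired to S3 event notifications and that the knowledge base and data source identifiers are supplied as environment variables (both identifiers here are placeholders, not our production values):

```python
import os
import boto3

# Hypothetical identifiers -- in practice these come from the Bedrock console
# or from infrastructure-as-code outputs.
KNOWLEDGE_BASE_ID = os.environ["KNOWLEDGE_BASE_ID"]
DATA_SOURCE_ID = os.environ["DATA_SOURCE_ID"]

bedrock_agent = boto3.client("bedrock-agent")

def lambda_handler(event, context):
    """Triggered by S3 object events (upload/modify/delete).

    Starts a Bedrock ingestion job so the knowledge base re-syncs
    the documents stored in the S3 data source.
    """
    response = bedrock_agent.start_ingestion_job(
        knowledgeBaseId=KNOWLEDGE_BASE_ID,
        dataSourceId=DATA_SOURCE_ID,
    )
    job = response["ingestionJob"]
    print(f"Started ingestion job {job['ingestionJobId']} (status: {job['status']})")
    return {"ingestionJobId": job["ingestionJobId"]}
```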

As is common practice when building AI applications, we used Python and initialized Boto3, the official AWS SDK for Python, to create, configure, and manage AWS services. We then referenced the knowledge base in the RetrieveAndGenerate call. We used the Streamlit library to build the web application displaying the Q&A window; we chose Streamlit for its efficiency in creating web applications, and its accessible syntax allowed us to focus on showcasing the functionality of our generative AI product rather than investing in front-end development. The solution is hosted on Amazon EC2 and is available to our internal employees.
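
A minimal sketch of that flow is shown below. The knowledge base ID and model ARN are placeholders, and the real application includes more UI and error handling; the sketch only shows how a question is passed to the RetrieveAndGenerate API and the answer rendered in a Streamlit window.

```python
import boto3
import streamlit as st

# Placeholder values -- replace with your own knowledge base ID and model ARN.
KNOWLEDGE_BASE_ID = "XXXXXXXXXX"
MODEL_ARN = "arn:aws:bedrock:eu-central-1::foundation-model/anthropic.claude-v2"

bedrock_runtime = boto3.client("bedrock-agent-runtime")

def ask_knowledge_base(question: str) -> str:
    """Send the question to the knowledge base via RetrieveAndGenerate."""
    response = bedrock_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]

# Minimal Streamlit Q&A window.
st.title("Open Banking Knowledge Base")
question = st.text_input("Ask a question about our products:")
if question:
    st.write(ask_knowledge_base(question))
```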

 

A comprehensive diagram presenting the project architecture is illustrated next:  

Diagram presenting the GenerativeAI project architecture

Exploring the genAI Pilot: Gathering Feedback for Future Improvements 


Our fintech product development team eagerly tested the pilot, and their initial reactions were quite promising. They could foresee a positive impact on customer-care response times from implementing template-response e-mails, a feature that had long been in the backlog. Additionally, they anticipated a boost in onboarding efficiency for new team members, reducing the time and effort onboarding demands from the team.

‘As AI is becoming more present in our daily lives, we were happy to hear the suggestion from our R&D AI team to prospect genAI to streamline our internal processes. Here at the Innovation Hub, we specialize in creating financial products that are highly sensitive for our customers, and we maintain close communication with them through our dedicated support team. We take great pride in our support team, so making the onboarding and ticketing processes easier is very important to us’, says Alex Lefter, the Manager of our Innovation Hub, the division responsible for our fintech product development and support.

Furthermore, we expect promising results in improving cross-department communication between our sales, communications, and development teams. We see true value in the genAI pilot for this direction, so we've already started testing it there as well.


‘As the product owner for one of our financial solutions, ensuring a seamless experience for our customers is my top priority, both within the product itself and in their interactions with our support team. That's why I was eager to test the genAI solution developed by our talented colleagues, which successfully enhances the efficiency of our support operations. I am grateful for their dedication to continuously improving our products and am excited to see the positive impact genAI will have on our customer care team’, said Daniela Popa, Product Owner of SmartPay.

Key Takeaways


Our Generative AI Pilot for open banking products has shown promising results in optimizing internal processes for the Fintech product team. Early testing indicates improvements in onboarding and e-mail communication for the support team, including quicker responses from level 1 support and better ticket management. Additionally, the fintech sales team has benefited from faster documentation of solution features. The genAI pilot now supports responses in both Romanian and English, further enhancing its versatility. Moving forward, our team is focusing on enhancing the public chatbot and developing an AI-enhanced FAQ page to continue improving user experience and efficiency.

 

If you have any further inquiries regarding this solution or new genAI development capabilities for your own business or use cases, please contact our team for the ITSS Consultancy Package, and let's start the conversation. 




