Generative AI technologies like ChatGPT or DALL-E are changing the way we live and work. This change has touched almost every industry out there, including the financial sector.
Imagine having a personal assistant who can help you with your daily tasks, document your conversations, and even spot new business opportunities. This is exactly what generative AI technologies like ChatGPT can do for relationship managers and client advisors.
RMs and client advisors spend a significant amount of time on repetitive and non-revenue-generating tasks. With the help of generative AI technologies, they can focus on more important tasks, like building relationships with clients or seeking new business opportunities.
But with every new technology come new challenges.
I had a conversation with Dr. Sina Wulfmeyer about Generative AI, ChatGPT, data protection and her experience working with these technologies at Unique. Here’s the digest of the most exciting topics from our conversation👇
What are the GPT use cases for banking and insurance?
Relationship managers or client advisors spend about 70% of their time on non-revenue-generating tasks. So I think there's a lot of room for improvement with CRM automation and call summaries. With the help of Unique, its recording feature, automatic summary, and GPT-3 generated reports, relationship managers get more time back. They can spend this time with their current clients or building relationships with new clients - this means more focus on revenue-generating activities.
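The summarization workflow described above can be sketched as a simple prompt-assembly step: the recorded conversation is turned into a single instruction for a GPT-style model. This is a hypothetical illustration, not Unique's actual pipeline; the function name and prompt wording are assumptions.

```python
# Hypothetical sketch of preparing a call transcript for GPT-based
# summarization. The prompt wording and function name are illustrative,
# not Unique's actual implementation.

def build_summary_prompt(transcript: list[str], client_name: str) -> str:
    """Combine transcript lines into one summarization prompt."""
    header = (
        f"Summarize the following call with {client_name}. "
        "List key decisions, action items, and any follow-up "
        "opportunities for the relationship manager.\n\n"
    )
    body = "\n".join(f"- {line}" for line in transcript)
    return header + body

prompt = build_summary_prompt(
    [
        "Client asked about mortgage refinancing rates",
        "Agreed to send updated portfolio report by Friday",
    ],
    client_name="J. Smith",
)
print(prompt)
```

The resulting prompt would then be sent to a language model, whose response becomes the draft CRM entry the relationship manager reviews instead of writing from scratch.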
Let’s have a look at how some banks operate nowadays: there's a relationship manager and an assistant relationship manager, and, basically, the assistant does a lot of the documenting work. So I see Unique becoming an assistant – helping, documenting, and automating tasks for the relationship manager. Not only will this help to spot new upsell and cross-sell opportunities with better precision, but it will also be tremendously useful from a compliance perspective.
We do see a lot of business cases for Unique. Especially when it comes to documenting and generating reports. With our technology, you're able to determine what you want to get out of the conversation and I think that's a huge business case that we can work with.
What are the challenges and opportunities of implementing generative AI technologies in the banking and insurance industry?
The key question here is how we can establish business cases. And I think data protection is a huge, huge topic. I don't have a perfect answer to that. It's currently very difficult to find out what OpenAI, the provider of ChatGPT, is doing with the data. If you read through their data statements, they say they keep some data (prompts, but also output and usage data) to improve their models, while cutting out personally identifying information. They don't use names, for example, or telephone numbers, but they might be using everything else to improve the model. For me personally, it's not fully transparent at the moment, and we need to spend more time understanding all the details. We give something in, we get something out, and we don't know exactly what they intend to do with the data. I'm a big fan of the technology, but the details of the algorithm behind it and the data protection around it are still a big question mark.
Is it possible to achieve 100% GDPR compliance with this technology in use?
I think that's on the horizon. And that's our ambition to implement GDPR-compliant solutions because it's important for our clients. For financial services, that's one of the most important criteria. And as you might know, we've been accepted by Microsoft Azure to be one of their first startups using their European APIs or the European interface to OpenAI via Microsoft services. And I think that's a big opportunity for us to be able to implement GDPR-compliant solutions.
Do you see generative AI technologies being heavily regulated in the future?
I think we have to remember that we're still in a research phase or experimental phase with ChatGPT. The more we use it, the more we learn and I think some of the shortcomings are coming out slowly; people are testing the interface and are skeptical about the output. So I think we have to keep in mind that we're still learning.
And then from a regulatory standpoint, we also have to remember that the regulators need to catch up. Usually, regulators are a little bit slower than technology and the latter has been evolving at a very high speed lately. So I'm expecting to see regulation coming out soon. Maybe in two or three years, there will be specific regulations for generative AI. What we can already see is the work on the European AI Act, which is most likely coming this year, and which regulates artificial intelligence like ChatGPT, for example. And I expect it to become a big hurdle for companies, which means they will have to have certain compliance measures and security measures in place.
What kind of feedback do you hear from people in the banking and insurance industry regarding GPT?
I think we get mostly positive feedback. However, we have to keep in mind that not everyone has been introduced to the technology yet. Some people have never heard of or seen ChatGPT. So for them, it's really mind-blowing what you can do with data. Others might have played around with it in their private time and for fun, and they are amazed that it can be used in a business context and actually create business value. I think that's our proposition at Unique: we generate value for clients with ChatGPT (GPT-3 models, to be precise). So it's not only for playing around and having fun; we really want to create tangible output that's measurable. From that perspective, the feedback is very positive. People are very excited to use it.
What are the downsides of this technology?
Generally speaking, GPT is a language model. It uses statistical probability to predict the next word. It doesn't have a brain of its own to really bring meaningful things together. And I think the biggest problem is that it can potentially create fake news and content that is more creative than factual.
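The "statistical probability to predict the next word" idea can be shown with a toy bigram model: count which word follows which in a training text, then predict the most frequent successor. Real GPT models use neural networks over far larger contexts, but this minimal sketch captures the statistical principle.

```python
# Toy bigram model: predict the next word by counting which word most
# often follows the current one in the training text. This illustrates
# the statistical idea behind next-word prediction, vastly simplified
# compared to a real GPT model.
from collections import Counter, defaultdict


def train_bigrams(text: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1
    return follows


def predict_next(follows: dict, word: str) -> str:
    """Return the statistically most likely successor seen in training."""
    return follows[word.lower()].most_common(1)[0][0]


corpus = "the bank approved the loan and the bank closed the account"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "bank" follows "the" most often here
```

The same limitation the answer describes falls out of this picture: the model only knows which continuations are likely, not which are true, which is why fluent but non-factual output is possible.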
What are the main cultural implications of generative AI in the workplace? Do you think it will be able to replace the workforce?
I don't think that AI will replace people or jobs. I mean, people who use AI or know how to work with AI might replace people who are unable to do so. I think it's here to stay. But we need to make sure that people are trained to work with it and know how to use it. I would put this issue under the umbrella of data literacy, or AI literacy. People must learn how to work with it. We all need to learn how to prompt in order to get the right information.
And what do you think about call centers and advisory jobs that mostly require answering questions and giving information?
I think the change has been happening for a while now. Banks and insurance companies introduced chatbots to deal with customer queries. For example, when we have a problem with our banking cards, it's usually very hard to reach real people to get an answer. And I have mixed feelings about this because sometimes when you have a problem, you feel really relieved when you have a human helping you, guiding you through the problem. On the other hand, from a cost perspective, it would be easier to have a chatbot like ChatGPT answer your questions. So I think we'll see a mix: for the easy problems or easy questions in the call center, ChatGPT or any other language model will answer the question, but in the more complex cases, I think we still need humans.
What is the future of generative AI technologies in your opinion?
The AI revolution has only just started. So there's more to come. We at Unique are prepared, and we're at the forefront of this revolution. We started implementing this technology quite early on. But what differentiates us from our competitors is that we are able to customize our technology to the needs of the financial services industry and bring actual, proven business value.