
Content provided by Reed Smith. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Reed Smith or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.

AI explained: AI and financial services


Manage episode 444608664 series 3402558

Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role regulators play in supervising and embracing AI.

----more----

Transcript:

Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner in Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.

Romin: Thank you, Claude. Good to be with everyone.

Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is that it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but in many aspects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector.

Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by the search for efficiency and cost savings, as I'm sure the audience would appreciate. There have been pressures on margins in financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And the parts of their business which AI has already impacted include things like KYC, AML checks and back office operations. All of those things are already having AI applied to them.

Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer and anti-money laundering, KYC, AML, areas. I suppose robo-advice, as it's sometimes called, or sort of asset management, portfolio management advice, might be an area where one might worry. But the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see more of the issues relating to AI than the benefits. I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there are areas of concern, given the good that could come out of AI?

Romin: No, that's a good question. I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to operate effectively with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less and it is less resource intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility there? Is it the software provider, who makes the algorithm, programs the software, etc., after which the software goes off and makes decisions or provides the advice? Or is it the firm that's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling through, working out what they think the right answers should be.

Claude: Yeah, I can see that, because I suppose historically the classic model, certainly in the UK, has been for the regulator to say, if you want to outsource something: you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity, and you're responsible for your outsource provider. But I can see that with AI, that must become a harder question to determine, because, to take your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact on the software provider? That's one point. And then, how do you allocate that responsibility? There's that strict bright line: if you give it to a third-party provider, it's your responsibility. But how do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.

Romin: Absolutely. And as you say, with traditional outsourced services, it's relatively easy for the firm to oversee the activities of the outsourced services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, a trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. And I suppose the other thing that makes AI more difficult than the traditional outsourcing model, even the black-box algorithms, is that by and large those are static. Whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does. So it doesn't really matter whether it's outsourced or in-house to the regulated entity. That thing is changing all the time, so supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed at which it evolves.

Romin: Absolutely, and you're right to highlight all of the sort of liability issues that arise, not simply vis-a-vis liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money, or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is that with the firm? Is that with the person who provided the software? It's all, you know, a little difficult.

Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. One might reasonably expect, competition being what it is, for that to proliferate over time, but until it does, I would imagine there's a sort of competition issue, not only a competition issue in one system gaining a monopoly, but that a particular form of large-model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk through the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated.

Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash, where all these algorithmic trading firms all of a sudden reacted to the same event and all placed sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects, as AI becomes more prevalent, potentially being even more severe in the future.

Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.

Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But in a world where we have a number of software providers, maybe one or two of which become really dominant, and lots of firms are employing technology provided by those providers, differentiating becomes more difficult.

Claude: Yeah, and I guess to unpack that a little bit, as you say, portfolio managers have distinguished themselves by better returns than the competition, and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes, to what extent is AI being used to produce that differentiator, and how do you charge your fees based on that? Is it that you've got better technology than anyone else, or that you've got a better way to deploy the technology, or that you've just paid more for your technology? Because transforming the input of AI into the analytics and the portfolio management is quite a difficult thing to do at the best of times. If it's internal, it's clearly easier, because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example, where you've got a limited number of technology providers, that split, I can see, could become quite contentious.

Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to decide what approach they are going to take to the application of AI, because if they go down the outsourced route, that raises the issues that we've discussed so far. Conversely, if they adopt a sort of in-house model, they have more control, the technology is proprietary, and potentially they can distinguish and differentiate themselves better than by relying on an outsourced solution. But then the cost is far greater. Will they have the resources and expertise really to compete with these large specialist providers that serve many different firms? So there are lots of strategic decisions that firms need to make as well.

Claude: Yeah, but going back to the regulators for a moment, Romin, it does seem to me that there are some benefits to regulators in embracing AI within their own world, because we already see evidence that they're very comfortable manipulating large databases, for example, trade repositories or trade reporting. We can see enforcement actions being brought using databases that have produced the information, the anomalies. And as I see it, AI can only improve that form of surveillance and enforcement, whether that is market manipulation or insider dealing, or looking across markets to see whether concurrent or collaborative activity is occurring. It may not get to the point where the AI is going to bring the whole enforcement action to trial, but it certainly makes that demanding surveillance and oversight role a lot easier for a regulator.

Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these ridiculous, detailed reports and send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Historically, that was perhaps true, but the new technology that is coming on stream gives regulators much more opportunity to meaningfully interrogate that data and use it either to bring enforcement action against firms or just to supervise trends, risks and currents in markets which might otherwise not have been apparent to them.

Claude: Yeah, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark, the Spielberg film, where they take the Ark of the Covenant and push it into that huge warehouse, and the camera pans back, and you just see massive, massive amounts of data. But I suppose you're right that with AI you can go and find the crate with the thing in it. Other Spielberg films are available. It seems to me almost inexorable that the use of AI in financial services will increase, given the potential and the efficiencies, particularly with large-scale and repetitive tasks and more inquiry. It's not just a case of automation; it's a case of sort of overseeing it. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market. Is it going to be the financial services firms, or the tech firms that can produce more sophisticated AI models?

Romin: Absolutely. I mean, I think we've seen amongst the AI companies themselves, the key players like Google, OpenAI and Microsoft, that there's a bit of an arms race between them as to the best LLM: who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see.

Claude: Well, I suppose the other point with the technology providers, and you're right, you can already see this when you get into cloud-based services and software as a service and the others, is that technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large part of it. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You can now see these services, and I can see this with AI as well, entering into a number of financial sectors which historically have been diffuse: the use of AI in insurance, for example, the use in banking, the use in asset management, the use in broking, the use in advisory services. There's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question, is there an operational resilience issue? Does AI ever become so pervasive that it is a bit like electricity, like power? You can see this with CrowdStrike. Is the technology so all-pervasive that it actually produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment?

Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks might not be apparent. Like you mentioned with CrowdStrike, nobody really knew that this was an issue until it happened. So regulators, I think, are very nervous of the unknown unknowns.

Claude: Yeah. I mean, it seems to me that AI has huge potential in the financial services sector, in, A, facilitating the mundane, but also in being proactive in identifying anomalies, potential for errors, potential for fraud. There's a huge amount that it can contribute. But as always, that brings structural challenges.

Romin: Absolutely. And just on the point that we were discussing earlier about the increased efficiencies that it can bring to markets, there's been a recognized problem with the so-called advice gap in the UK, where the mass affluent, less high-net-worth investors aren't really willing to pay for financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, leading to the ability for people to make more sensible financial decisions.

Claude: Which, I'm sure, as part of the responsibility of financial institutions to improve financial and fiscal education, is going to be music to a regulator's ears. Well, Romin, interesting subject, interesting area. We live, as the Chinese say, in interesting times. But I hope to those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague, Romin Dabir, or myself, Claude Brown. You can find our contact details accompanying this podcast and also on our website. Thank you for listening.

Romin: Thank you.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.

  continue reading

84 פרקים

Artwork
iconשתפו
 
Manage episode 444608664 series 3402558
תוכן מסופק על ידי Reed Smith. כל תוכן הפודקאסטים כולל פרקים, גרפיקה ותיאורי פודקאסטים מועלים ומסופקים ישירות על ידי Reed Smith או שותף פלטפורמת הפודקאסט שלהם. אם אתה מאמין שמישהו משתמש ביצירה שלך המוגנת בזכויות יוצרים ללא רשותך, אתה יכול לעקוב אחר התהליך המתואר כאן https://he.player.fm/legal.

Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role of regulators play in supervising and embracing AI.

----more----

Transcript:

Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner in Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London.

Romin: Thank you, Claude. Good to be with everyone.

Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but many aspects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector.

Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by searches for efficiency, cost savings, as I'm sure the audience would appreciate. There have been pressures on margins and financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business, which AI has already impacted, include things like KYC, AML checks, back office operations. All of those things are already having AI applied to them.

Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there's areas of concern, given the good that could come out of AI?

Romin: No, that's a good question. I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind it, behind the adoption of AI, because obviously it costs less, it is less resource intensive in terms of skilled people to operate it. It but under those circumstances who has the regulatory responsibility there is it the software provider who makes the algorithm programs the software etc etc and then the software goes off and makes decisions or provides the advice or is it the firm who's actually running the software on its systems when it hasn't actually developed that software? So there are some naughty problems i think that regulators are are still mulling through and working out what they think the right answers should be.

Claude: Yeah I can see that because I suppose historically the the classic model certainly in the UK has been the regulator say if you want to outsource something thing. You, the regulated entity, be you a broker or asset manager or a bank, you are, or an investment firm, you are the authorized entity, you're responsible for your outsourcer or your outsource provider. But I can see with AI, that must get a harder question to determine, you know, because say your example, if the AI is performing some sort of advisory service, you know, has the perimeter gone beyond the historically regulated entity and does it then start to impact on the software provider. That's sort of one point and you know how do you allocate that responsibility you know that strict bright line you want to give it to a third party provider it's your responsibility. How do you allocate that responsibility between the two entities even outside the regulator's oversight, there's got to be an allocation of liability and responsibility.

Romin: Absolutely. And as you say, with traditional outsource services, it's relatively easy for the firm to oversee the activities of the outsource services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. And I suppose the other thing that makes it more difficult with AI to the traditional outsourcing model, even the black box algorithms, is by and large they're static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does. So it doesn't matter really whether it's outsourced or it's in-house to the regulated entity. That thing's sort of changing all the time and supervising it is a dynamic process and the speed at which it learns which is in part driven by its usage means that the dynamics of its oversight must be able to respond to the speed of it evolving.

Romin: Absolutely and and you're right to highlight all of the sort of liability issues that arise, not just simply vis-a-vis sort of liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money or starts making trades that were not within the investment mandate provided by the client where does the buck stop is that with the firm is that with the person who who provided the software it's it's all you know a little difficult.

Claude: I suppose the other issue is at the moment there's a limited number of outsourced providers and. One might reasonably expect competition being what it is for that to proliferate over time but until it does I would imagine there's a sort of competition issue a not only a competition issue in one system gaining a monopoly but that particular form of large model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk by the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that was anticipated.

Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects as AI becomes more prevalent, potentially being even more severe in the future.

Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time.

Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But here in a world where we have a number of software providers, maybe one or two of which become really dominant, lots of firms are employing technology provided by these firms, differentiating becomes more difficult in those circumstances.

Claude: Yeah and I guess to unpack that a little bit you know as you say portfolio managers have distinguished themselves by better returns than the competition and certainly better returns than the market average and that then points to the quality of their research and their analytics. So then i suppose the question becomes to what extent is AI being used to produce that differentiator and how do you charge your you know your your fees based on that is this you've got better technology than anyone else or is it you've got a better way to deploy the technology or is it that you've just paid more for your technology you know how did how because transforming the input of AI into the analytics and the portfolio management. Is quite a difficult thing to do at the best of times. If it's internal, it's clearly easier because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example, where you've got a limited number of technology providers, that split I can see become quite contentious.

Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to sort of decide what approach they are going to take to the application of AI, because if they go down the outsourced approach, that raises issues that we've discussed so far. Conversely if they adopt a sort of in-house model they have more control the technology's proprietary potentially they can distinguish themselves and differentiate themselves better than relying on an outsource solution but then you know cost is far greater will they have the resources resources expertise and really to compete with these large specialist providers to many different firms. There's lots of strategic decisions that firms need to make as well.

Claude: Yeah but I mean going back to the regulators for a moment Romin, it does seem to me that you know there are some benefits to regulators in embracing AI within their own world because certainly we already see the evidence that they're very comfortable using manipulation of large databases. For example, it's trade repositories or it's trade reporting. We can see sort of enforcement actions being brought using databases that have produced the information the anomalies and as I see it AI can only improve that form of surveillance enforcement whether that is market manipulation or insider dealing or looking across markets to see whether sort of concurrent or collaborative activity is engaged and it may not get to the point where the AI is going to to bring the whole enforcement action to trial. But it certainly makes that demanding surveillance and oversight role for a regulator a lot easier.

Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these detailed reports and send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Historically, that was perhaps true, but the new technology that is coming on stream gives regulators much more opportunity to meaningfully interrogate that data and use it either to bring enforcement action against firms or to supervise trends, risks and currents in markets which might otherwise not have been apparent to them.

Claude: Yeah, I mean, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark, the Spielberg film, where they take the Ark of the Covenant and push it into that huge warehouse, and the camera pans back and you just see massive, massive amounts of data. But I suppose you're right that with AI you can go and find the crate with the thing in it. Other Spielberg films are available. It seems to me almost inexorable that the use of AI in financial services will increase, given the potential and the efficiencies, particularly with large-scale and repetitive tasks and more complex inquiry. It's not just a case of automation; it's a case of overseeing it as well. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market. Is it going to be the financial services firms, or the tech firms that can produce more sophisticated AI models?

Romin: Absolutely. I mean, I think we've seen amongst the AI companies themselves, the key players like Google, OpenAI and Microsoft, a bit of an arms race as to the best LLM, who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see.

Claude: Well, I suppose the other point with the technology providers, and you're right, you can already see when you get into cloud-based services, software as a service and the others that technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large one. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You can now see these services, and AI especially, entering a number of financial sectors which historically have been diffuse: the use of AI in insurance, in banking, in asset management, in broking, in advisory services. There's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question: is there an operational resilience issue? Does AI ever become so pervasive that it's a bit like electricity, like power? You saw it with CrowdStrike. Is the technology so all-pervasive that it actually produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment?

Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks might not be apparent. So, as you mentioned with CrowdStrike, nobody really knew it was an issue until it happened. Regulators, I think, are very nervous of the unknown unknowns.

Claude: Yeah. I mean, it seems to me that AI has a huge potential in the financial services sector, both in facilitating the mundane and in being proactive in identifying anomalies, potential errors and potential fraud. There's a huge amount that it can contribute. But as always, that brings structural challenges.

Romin: Absolutely. And just on the point we were discussing earlier about the increased efficiencies that it can bring to markets, there's been a recognized problem with the so-called advice gap in the UK, where the mass affluent, less high-net-worth investors aren't really willing to pay to receive financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, enabling people to make more sensible financial decisions.

Claude: Which, given financial institutions' responsibility to improve financial and fiscal education, is going to be music to a regulator's ears. Well, Romin, an interesting subject and an interesting area. We live, as the Chinese say, in interesting times. But I hope that for those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague Romin Dabir or myself, Claude Brown. You can find our contact details accompanying this episode and also on our website. Thank you for listening.

Romin: Thank you.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.
