AI for legal departments: Introduction to M365 Copilot

Anthony Diana and Karim Alhassan are joined by Lighthouse’s John Collins to discuss Microsoft's AI-driven productivity tool, Copilot.

This episode presents an overview of Copilot and its various use cases, the technical nuances that differentiate it from other generative AI tools, the compliance gaps and challenges currently seen in production, and the risks legal departments should be aware of and account for.

This episode also provides a high-level overview of best practices that legal and business teams should consider as they continue to explore, pilot and roll out Copilot, including enhanced access controls, testing, and user training, which our speakers will expand upon in future episodes.

Transcript:

Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.

Anthony: Hello, this is Anthony Diana, a partner here in the Emerging Technologies Group at Reed Smith. Welcome to Tech Law Talks. Today will be the first part of a series with Lighthouse focusing on what legal departments need to know about Copilot, Microsoft's new artificial intelligence tool. With me today are John Collins from Lighthouse and Karim Alhassan from Reed Smith. Thanks, guys, for joining. So today, we really just want to give legal departments, at a very high level, what they need to know about Copilot. As we know, Copilot was just introduced, I guess last fall, maybe November of last year, by Microsoft. It has been in preview and the like, and a lot of organizations are at least contemplating the use of Copilot, and some of them, I've heard, have already launched Copilot without the legal department knowing, which is an issue in and of itself. So today, we just want to give a high-level view of what Copilot is and what it does, and then what legal departments should be thinking about in terms of risks they have to manage when launching Copilot. This episode will stay at a high level; the additional episodes in the series will be a little more practical in terms of what legal departments should actually be doing. So today is really about highlighting what the risks are and what you should be thinking about. So with that, John, I'll start with you. Can you just give a very high-level preview of what Copilot is and what's being launched?

John: Sure, thanks, Anthony, for having me. So Copilot for M365, which is what we're talking about, is Microsoft's flagship generative AI product. The best way to think about it is that Microsoft, which has a partnership with OpenAI, has taken the ubiquitous ChatGPT and brought it into the Microsoft ecosystem. They've integrated it with all the different Microsoft applications that business people use, like Word, Excel, PowerPoint and Teams, and you can ask Copilot to draft a document for you or to summarize a document for you. So again, the best way to think about it is that it takes the generative AI technology everyone is familiar with from ChatGPT, brings it into the Microsoft ecosystem, and leverages a number of other Microsoft technologies within the Microsoft environment to make this kind of platform available to business people.

Anthony: Yeah. And I think, at least from what Microsoft is saying, and what a lot of our clients are saying, this is groundbreaking. Frankly, it's probably going to be the largest and most influential AI tool the enterprise has, because Microsoft is ubiquitous, right? All your data is flowing through there. So using AI in this way should provide tons of productivity; obviously, that's the big sell. But if organizations get licenses for everybody, this is something that's going to impact most organizations pretty heavily, just because if you're using Microsoft M365, you're going to be dealing with AI, on a personal level and at large scale, and I think that's one of the challenges we'll see. So Karim, could you add a little bit? John gave a very nice overview, but picking up on a few things he said, we've got this ChatGPT; what is it that's unique about Microsoft's version in terms of how it works from a technology perspective? Because I know a lot of people are saying, I don't want people using ChatGPT for work.

Karim: Sure, thanks, Anthony. As opposed to these public, web-based ChatGPT tools, I think the big sell and what makes Copilot unique is that it's grounded in your enterprise data, right? Essentially, it integrates with the Microsoft Graph, which allows users within the enterprise to leverage their M365 data, which adds context. So rather than just going to GPT-4, which, as everyone knows, is trained on publicly available data and has its own issues, hallucinations and whatnot, having this unique, enterprise-specific data adding context to inputs and outputs leads to more efficiency. Another big thing, and a big issue that legal departments are obviously thinking about, is that when you input data into a tool, one of the worries is that it can train the underlying models. With Copilot, that's not happening: the instance of the LLM the tool relies on stays within your service boundary. That gives you protections you wouldn't necessarily have when people are just using publicly available tools. So that's, I think, the big differentiating factor with Copilot as opposed to GPT-4, ChatGPT and these other public tools.

Anthony: And I think that's critical, and John, I'll let you expand on that too, but I do think that's a critical piece, because I know a lot of people are uncomfortable using these large language models when, like you said, they're public. The way Microsoft built this product, you get, in essence, your own version. So if you get a license, you're getting your own version of it, and it's inside your tenant. It doesn't go outside your firewalls, so to speak; it's not technically a firewall, but it stays in your tenant. I think that's critical, and it gives a lot of people comfort. At least, that's what Microsoft is saying.

John: Yeah, just a couple of things to point out. Some folks might be familiar with, or have heard, that Microsoft has this responsible AI framework, where if you are using Azure OpenAI tools and building your own custom ChatGPT on the Azure OpenAI version, Microsoft is actually retaining prompts and responses under that framework, and a human being is monitoring those prompts and responses. But that's in the context of a custom development that an organization might do. Copilot for M365 actually opted out of that. So, to Karim's and Anthony's point, Microsoft is not retaining the prompts and responses for Copilot, and they're not using the data to retrain the model. The second thing I want to point out is that you do have the ability with Copilot for M365 to have plugins, and one of them is that when you're chatting in Microsoft Teams using the chat application, you have the option to actually send prompts out to the internet to further ground them. Karim talked about grounding information in your organization's data. So there are some considerations around configuration: do you want to allow that to happen? There's still data protection there, but those are a couple of things that come to mind on this topic.

Anthony: Yeah, and look, I think this is critical, and I agree with you. I won't say dangers, but there are a lot of risks associated with Copilot, and as you said, you really have to go through the settings. That web-grounding option is one we've been advising clients to turn off for now. And just to give some clarity here on prompts and responses: a prompt is, in essence, a question. It could be "summarize the document," or you could type in "give me a recipe for blueberry muffins." You can type anything; it's a question you ask through Copilot. When we talk about grounding, I think this is an important concept for people to understand. For example, if you want to summarize a document, or a transcript of a meeting, my understanding of the way it works is that when you press "summarize," what you're really doing is telling the tool to look at the document and use the language in that document to create what I'll call a better prompt, a question with more context. That then goes to the large language model, which again is inside the tenant, and that gives you an answer. But it's really just predicting the likely next word; it's about probability and the like, so it doesn't "know" anything. When you ground it in a document or your own enterprise data, all you're doing is asking better questions of this large language model. That's my understanding of it; I know people have different descriptions, but I think that's the best way to think about it. And this is where we start talking about confidentiality, and why people are a little concerned about prompts going out publicly: those questions are going to take potentially confidential information and, if it weren't Copilot, send it to an outside model. That's true with a lot of AI tools, and you may not have control, as John said, over who's looking at it, whether and for how long they're storing it, and whether it's secure. That's the type of thing we normally worry about. Here, because of the way Copilot is built, some of those concerns are lessened, although, as you pointed out, John, there are features that can go to the internet, which could raise those same concerns. Any other flavor, Karim or John, that you want to add? Again, that's my description of how it works.
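To make the grounding flow Anthony describes a bit more concrete, here is a minimal, purely illustrative sketch of the pattern: take the user's request, fold in text from the document being summarized, and send that enriched prompt to a language model. The function names and the generic `llm_complete` callable are hypothetical; this shows the general retrieval-augmented pattern, not Microsoft's actual implementation.

```python
# Illustrative only: a simplified "grounded" prompt flow of the kind described above.
# `llm_complete` is any callable that takes a prompt string and returns model text.

def build_grounded_prompt(user_request: str, document_text: str) -> str:
    """Combine the user's request with excerpted document text to add context."""
    excerpt = document_text[:4000]  # crude truncation stands in for real relevance ranking
    return (
        "You are assisting with an enterprise document.\n"
        f"Document excerpt:\n{excerpt}\n\n"
        f"User request: {user_request}\n"
        "Answer using only the document excerpt above."
    )

def summarize_document(document_text: str, llm_complete) -> str:
    """Ground the 'summarize' request in the document, then ask the model."""
    prompt = build_grounded_prompt("Summarize this document.", document_text)
    # The model predicts the most probable next tokens given the grounded prompt;
    # it has no knowledge of the document beyond what the prompt contains.
    return llm_complete(prompt)
```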

John: The thing I was going to say is, I think you gave a great description of the grounding. Karim had brought up the Graph. The Graph is something that underlies your Microsoft 365 tenant, and Karim alluded to this earlier: essentially, when the business people in your organization are communicating back and forth, sharing documents as links, chatting in Microsoft Teams, sending emails, the Graph is a database that collects all of that activity. It knows you're sharing documents with so-and-so, and that becomes one of the ways Microsoft surfaces information for the grounding that Anthony alluded to. So how does Copilot know to look at a particular document, or know that a document exists on a particular topic, blueberry muffins or whatever? In part, that's based on the Graph, and that's something that can scare a lot of people, because it tends to show that documents are being overshared or that people are sharing information in a way they shouldn't be. So that's another issue, Anthony and Karim, that we're seeing people talk about.
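For readers who want to see what "the Graph" looks like in practice, below is a minimal sketch that calls the Microsoft Graph insights endpoint for documents recently shared with the signed-in user, one of the activity signals John describes. It assumes you already hold a valid delegated access token with the appropriate insights permissions; how Copilot itself consumes these signals is not something this sketch claims to reproduce.

```python
# A minimal sketch: query Microsoft Graph "insights" for items shared with the user.
# Assumes a valid delegated access token is supplied by the caller.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def list_shared_items(access_token: str) -> list[dict]:
    """Return documents recently shared with the signed-in user via the Graph insights API."""
    resp = requests.get(
        f"{GRAPH}/me/insights/shared",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("value", [])
```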

Anthony: Yeah, and that is a key point. The way Microsoft explains it, Copilot is very individualistic, I'll say: the information it can ground on is based on the information that a particular person has access to. And John, this is the point you were making: as we start going through these risks, one of the big challenges a lot of organizations are now seeing is that Copilot is exposing, as you noted, bad access controls, and that's the best way to describe it. In a lot of M365 environments there hasn't been a focus on this, so people may have more access than they need, and when you're using Copilot, it really does expose that. I think that's probably one of the biggest risks we're seeing, and the biggest challenge we're talking to our clients about, because Copilot is limited to the access that the person has, and there's a presumption that the person only has access to what they should have. I think we all know that's often not the case. That's one of the big dangers, and we'll talk in future episodes about how you deal with that specific risk. But again, that is one of the big risks that legal departments have to think about: you should be talking to your IT folks and asking what access controls you have in place. To highlight it, you have highly confidential information within your organization; if people are using Copilot, that could be a bad thing, because suddenly they can ask questions and get answers based on highly confidential information that they shouldn't be getting. So it is one of the big challenges that we have. One thing I want to talk about, Karim, before we get a little more into the risks, is the product-level differences. We talk about Copilot as if it's one product, but as we've heard and seen, there are different flavors, I guess, of Copilot.
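As one hedged illustration of the access-control review Anthony recommends, the sketch below walks the items in a single user's OneDrive through standard Microsoft Graph endpoints and flags items exposed through organization-wide or anonymous sharing links. The token acquisition, the choice of drive to audit, and the "too broad" policy are assumptions to adapt; a real oversharing review would typically be run tenant-wide with admin tooling rather than per user.

```python
# A hedged sketch of spotting oversharing: list a user's OneDrive items and flag any
# exposed via organization-scope or anonymous sharing links. Adapt the policy as needed.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def flag_broadly_shared(access_token: str) -> None:
    headers = {"Authorization": f"Bearer {access_token}"}
    items = requests.get(f"{GRAPH}/me/drive/root/children", headers=headers, timeout=30)
    items.raise_for_status()
    for item in items.json().get("value", []):
        perms = requests.get(
            f"{GRAPH}/me/drive/items/{item['id']}/permissions", headers=headers, timeout=30
        )
        perms.raise_for_status()
        for perm in perms.json().get("value", []):
            # Sharing links with organization or anonymous scope are a common source of
            # the "more access than needed" problem discussed above.
            link = perm.get("link")
            if link and link.get("scope") in ("organization", "anonymous"):
                print(f"Review: '{item.get('name')}' shared via {link.get('scope')} link")
```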

Karim: Sure. Yeah, so as Anthony noted, Copilot is an embedded tool within the 365 suite. You're going to have Copilot for Word, Copilot for PowerPoint, and so on, and there is different functionality within whatever product you're working in. That, of course, is going to affect the artifacts that are created and some of the prompts that you're able to use and leverage. So it's not as simple as thinking of Copilot as one unified product, because there are going to be different configurations. And I'm sure Anthony will speak to this: we've noted that even some of the configurations around where these things are stored have certain gaps, depending on whether you're using Outlook, for example. So you really have to dig into the product-specific configurations, because the risks do vary. And just to add, and John pointed to this, there is one version of Copilot, which I believe is called Microsoft 365 Chat, and that is probably the most efficient from a product perspective, because it can leverage data across the user's personal graph. So again, bigger risks may be there than if you were looking at just Excel. Definitely, product-specific functionality varies, so food for thought on that point.

Anthony: And John, I don't know if you've done, if Lighthouse has done, testing on each of these different applications, but what we've seen our clients do is actually test each application, so Copilot for Word, Copilot for Excel, Copilot for Teams and so on, because as Karim said, we have seen issues with each. I don't know if you've seen anything specific that you want to raise.

John: Yeah, well, I think you guys bring up some really good points. For example, in Outlook the prompts actually don't get captured as an artifact, versus in Word and Excel and PowerPoint. But we're also seeing some interesting things in our testing, which is ongoing, when it comes to meeting summaries: the difference between recapping a meeting that only has a transcript versus a meeting that has a full recording, and the artifacts that get generated for the different types of meeting summaries, whether it's a recap or the full, what they call, AI notes. We're seeing that some of the meeting recaps aren't being captured 100% verbatim. So there's a lot of variability there. I think the key thing, as you pointed out, Anthony, is you've got to do testing. You've got to test, you've got to get comfortable that you know what artifacts are being generated, where they're stored, and whether you can preserve and collect all of those things.

Anthony: Yeah. I'm going to talk a little bit about this generally, and John, I'm sure you're seeing the same thing: prompts and responses, or at least Microsoft is saying prompts and responses, are generally stored across all these applications in a hidden folder in a person's Outlook, so in their mailbox. As we noted, and Karim noted, for whatever reason Outlook's own prompts and responses aren't being saved there, although I'm told that fix is coming, maybe even as soon as this week. So there was a gap; obviously, the product came out in November, and there are still gaps. And as is true for everyone doing testing, Lighthouse is doing testing, and I'm sure you're talking to Microsoft, noting these gaps, and they're filling them in. It is a new product, so that's one of the risks everyone has to deal with: anybody telling you "this is the way it works" may not be absolutely correct, because you have to test, and you certainly can't just rely on Microsoft's documentation, because they're improving the product probably faster than they're updating the documentation. That's another challenge we've seen, so test, test, test is really important. Some things to think about. Okay, we're almost out of time, so we can't get too deep into the risks, but we've already talked about some of them at a high level. There are the confidentiality and access issues that you should be thinking about. Retention and over-retention we'll get into later, but there is obviously a risk: a lot of litigation teams are saying, I don't want to keep this around. People are asking these questions and getting responses that may not be accurate; from a litigation perspective, they don't want it around, and there's no business case for it, no business use for it, because usually when you're doing this, asking Copilot to help draft an email or whatever, you're probably doing something with that data anyway. Even with meeting transcripts, if you get a recap, some people are copying and pasting it somewhere just to keep it, but generally the output of Copilot is short-term. So that's generally the approach I think most people are taking, to get rid of it, but it's not easy to do because it's Microsoft. That's one of the risks. And as we talked about, there are going to be discovery risks, John, right? Can we preserve it? There may be instances where you can or can't, and that's where the Outlook issue becomes a problem. As Karim noted, hallucinations, or accuracy, is a huge risk. It should be better, as Karim said, because it's grounded in your data; it should be better than a general-purpose tool, but there are still risks, and there's a reputational risk associated with it: if someone relies on a summary of a meeting and doesn't look at it carefully, that's obviously going to be a risk, particularly if they circulate it as a record of what was said. So, a lot of things to think about at a high level. Karim and John, let's conclude: what is one risk that you're seeing that you think legal departments should be thinking about, other than what I just said?

John: Yeah. Well, I think one of the things that legal departments seem to be struggling with, and I'm sure you guys are hearing this, is: what about the content itself being tagged or labeled as created by generative AI? There are actually some areas in M365, like meeting summaries, that will say "generated by AI." But then there are other areas, like a Word document, where the person creates a document, does a very light-touch review of it, and the document is essentially 99% generative AI. A lot of companies are asking, well, can we get metadata or some kind of way to mark something as having been created by AI? So that's one of the things, in addition to all the issues you highlighted, Anthony, that we're hearing companies bring up.

Anthony: Yeah. Karim, anything else from your perspective?

Karim: Sure. Yeah, I think one big issue, and I'm sure we'll talk about this in future episodes, is that with this emergence of AI functionality there's a tendency on the part of users to rely heavily on the outputs. A lot of people talk about keeping a human in the loop, and I heard somebody say these are called Copilots for a reason, not autopilot. So the training aspect really needs to be ingrained, because when people just rely on these outputs as if they're foolproof, you can run into operational and reputational risks. So that's one thing we're seeing: the training is going to be integral.

Anthony: Yeah, and the other thing I'll finish with, just to think about, and this is really important for legal departments: think about privilege, right? If you're using Copilot and it's drawing on privileged information, I've heard a number of GCs say their biggest concern is waiver of privilege, because you may not know that Copilot is working against privileged information. It may summarize an answer that is basically privileged, but you don't know it, and then it gets circulated broadly. So again, there's a lot to consider. As we've talked about, it's really about training, access controls, and really understanding the issues. And like I said, in future episodes with Lighthouse we'll be talking about some of the risks more specifically and what you can do to mitigate them, because I think that's really what this is going to be about: mitigating risks and understanding the issues. So, thanks to John and Karim. I hope this was helpful. Like I said, we'll have future podcasts, but thanks, everyone, for joining, and hopefully you'll listen again soon.

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies practice, please email techlawtalks@reedsmith.com. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.

All rights reserved.

Transcript is auto-generated.
