
FIR 140: Hey AI, Where Are MY RESULTS??


Hey, AI, where are my results? In this episode, we take a look at some of the fundamental principles of getting results for your business using AI.

Hey, this is Grant. Welcome to another episode of ClickAI Radio. I've been thinking about this issue around AI and the ability, or inability, of businesses to get results from their AI efforts. I'm going to quote again from Bernard Marr's book, Artificial Intelligence in Practice. He walks through a series of case studies, with various use cases where AI has been applied and what the outcomes were, and I'm going to pull from one of them: a Kimberly-Clark experience, where they were looking to produce AI insights based on their customer data. It was a customer segmentation problem: how do we go after the market, improve the targeting of our marketing efforts, and use AI to understand where to focus? What they found was that as they applied the AI, they increased their signup rates by 17%. They then ran other campaigns to optimize customer targeting for their Depend brand and saw a 24% increase in conversions. How did they do that? Mostly by producing content that more closely aligned with the customer profiles the AI predicted would be most responsive. So there's a great use case in terms of applying AI to this sort of marketing and customer segmentation problem. They also found that these customers ended up more likely to become long-term repeat buyers. It's one thing to increase conversions and sales, but turning those buyers into long-term customers is a real bonus. It also had the downstream effect of making them more likely to give positive recommendations to friends and family. So this is a great example of success with AI: a great use case for where we would apply it, and the kinds of long-term benefits we want from AI.
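
The book doesn't spell out Kimberly-Clark's actual pipeline, but to make "AI-driven customer segmentation" concrete, here is a minimal, hypothetical sketch using k-means clustering on made-up customer features. The feature names, data, and choice of four segments are assumptions for illustration, not anything from the case study.

```python
# Minimal sketch of AI-driven customer segmentation (illustrative only).
# The feature names and synthetic data are hypothetical, not Kimberly-Clark's.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
customers = pd.DataFrame({
    "monthly_spend":   rng.gamma(shape=2.0, scale=40.0, size=500),
    "orders_per_year": rng.poisson(lam=6, size=500),
    "days_since_last": rng.integers(1, 365, size=500),
    "email_open_rate": rng.uniform(0.0, 1.0, size=500),
})

# Standardize features so no single scale dominates the clustering.
X = StandardScaler().fit_transform(customers)

# Group customers into a handful of segments; k=4 is an arbitrary choice
# you would normally tune (e.g. with silhouette scores).
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
customers["segment"] = kmeans.fit_predict(X)

# Profile each segment -- these averages are what tailored marketing
# content would be aimed at, instead of one-size-fits-all campaigns.
print(customers.groupby("segment").mean().round(2))
```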

Of course, what we're finding is that more and more market-leading companies are transitioning into tech companies, and if we're not thinking about becoming a tech company, that continues to move to our disadvantage. It's essential that we do, because we want to be competitive and even stay ahead of the pack, if you will. It turns out that AI-driven analytics is a lot more powerful than the traditional business intelligence solutions that came before, which are certainly still in use today by a lot of organizations focusing on customer segmentation. The real point is that by applying AI to customer segmentation, the AI is able to see things that are difficult for our brains to get wrapped around. But it has another impact that we're seeing. Our businesses each compete in the talent pool, and I'm producing this at the beginning of 2022, when we've got not only supply chain challenges but also resourcing problems and a lot of competition for talent. Building our businesses in tech-savvy ways and leveraging technologies such as AI not only helps us be more competitive; it also tends to attract a certain type of talent to our companies. So it's an important part of building our reputation and our branding.

But I want to talk about the other side of it, because that all sounds like roses and apple pie and motherhood, right? Everything's great. It turns out, of course, as we all know, that AI has had its failures. I found a particularly interesting view of this on the IEEE.org site, where they describe some AI failures. As you look at the different failures, a lot of them seem to be in areas where AI is being used in use cases that push the envelope, which, of course, is what we should be doing in R&D. But when I think about AI, my focus is more on how I can apply it to benefit my business: to improve customer service and to increase revenue, profitability, and efficiency. It turns out that a lot of the use cases IEEE points out are fairly R&D-centric, which is not surprising given that it's IEEE, but I'm going to walk through some of them and then, hopefully, point out some takeaways. Number one was brittleness. They felt AI is quite brittle, especially in a computer vision use case, where as the imagery kept changing they had to do a lot of AI rework. So brittleness of AI models in some use cases: that's a true statement. Is it a failure? It might be, depending on your use case. Number two, the embedded bias problem. I touched on that earlier, especially in recent conversations with a couple of the organizations I've interviewed, so I'm not going to drill into it more here; it's something we have to be mindful of as we build AI models to help our businesses. Number three, catastrophic forgetting; that's an interesting phrase right there. Here they point to the deepfake problem: bad actors in the market keep producing deepfakes, and for organizations trying to counter that, the constant retraining of AI models has to cover not only new deepfake techniques but also previous, older styles of deepfakes.

This certainly is critical. Is it a failure? I don't know; I look at it as a cost of doing business. If we're in a world where we have to keep figuring out how to address the deepfake problem, which we do, then the point stands: if you don't include the older data in your retrained model, then yes, it can, quote unquote, forget old stuff. It's not like our cognitive brains; well, I guess that depends on how old you are, but as humans we don't forget things the same way. Okay, number four, explainability. This is a challenge with AI, and in particular it's a challenge as it relates to giving explanations for results: the predictions that come out can be hard to explain in terms of how they were arrived at. I'm going to give a suggestion in a moment to help address some of this. Explainability is a real challenge; so is it a failure? It's a challenge for sure. Number five, quantifying uncertainty. Here we go once again: having sufficient data sets that cover fringe or edge cases is critical. This is maybe a failure in the sense that sometimes, as humans, we don't do that; we focus on the happy-day scenarios. But our brains, and real life, constantly have to deal with all the other fringe cases as well. So the onus is on us as business owners to make sure we're collecting as wide a set of data as possible. That doesn't necessarily mean more and more data; it means more coverage of the different use cases in the data we have. Number six, common sense. This was a failure IEEE pointed out: it turns out AI lacks common sense (sometimes we humans lack it too), meaning the ability to reach logical, acceptable conclusions based on a vast understanding and context of everyday knowledge. A lot of times we take that for granted, and AI doesn't really have it, at least at its current maturity level. As such, one of the things I've seen, and how I tend to view it as I work with organizations, is that this AI stuff really should be looked at as augmented intelligence.
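
As a concrete illustration of the explainability point, here is a minimal, hypothetical sketch (not from the episode or the IEEE article) of one common, model-agnostic technique: permutation importance, which estimates how much each input actually drives a model's predictions. The data set is synthetic and the feature names are placeholders.

```python
# Hypothetical sketch: estimating which inputs drive a model's predictions.
# Permutation importance is one common, model-agnostic explainability tool.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for business data (e.g. a churn-prediction table).
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# large drops indicate features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```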

So the bottom line is: do not throw your brain away. Yes, we should look at AI as something that brings us insights, but we should challenge them; when it comes back to us with insights, we should weigh them against the realm of possibility. It turns out there's one more, number seven, in IEEE's list of failures: math. Simple number crunching tends not to be handled well by AI. So in addition to not throwing your brain away, don't throw away your old calculator, or, of course, all of your old sequential, linear software that's doing real work today running your business and the economy. AI has its place, and for these seven, quote unquote, failure areas, I think there are techniques we can use to get around some of it. Not all of it, but some of it, and what that does is push us toward a set of use cases where we can still get AI value. To get the best insights from your AI, I think the net-net is in this area. Letter A (I decided not to use numbers here): define your crisp questions, which drives focused AI model preparation. It means we have to think about the business and not about the technology as we organize what we're going to go do with AI; that has to happen. Letter B: when it comes to data, be prepared to continually add to your data set and rebuild your models. The notion that you build it once and you're done needs to go away. And letter C: use AI as an augmented intelligence tool, as I pointed out earlier. Use an AI platform as part of this that can take your feedback to influence the AI model. You are the brains.
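
To make letter B a little more concrete, here is a minimal, hypothetical sketch of the "keep adding data and rebuild" habit: each retraining cycle appends newly collected records to the historical set and refits the model from scratch, rather than treating the first model as finished. The file names, columns, and target label are assumptions for illustration, not part of any specific platform.

```python
# Hypothetical sketch of periodic retraining on an ever-growing data set.
# File names, columns, and the "converted" label are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def retrain(history_path="customer_history.csv",
            new_batch_path="new_customers.csv"):
    history = pd.read_csv(history_path)
    new_batch = pd.read_csv(new_batch_path)

    # Append newly collected records so older behaviour is not "forgotten".
    combined = pd.concat([history, new_batch], ignore_index=True)
    combined.to_csv(history_path, index=False)

    X = combined.drop(columns=["converted"])   # input features
    y = combined["converted"]                  # outcome we want to predict

    # Rebuild the model on the full, updated data set each cycle.
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return model

# Run this on whatever cadence fits the business (weekly, monthly, ...).
```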

All right, so as you're getting insights or predictions about what to do, be sure to come back and report what the impact was. Was it positive? Was it negative? Did it do what we expected? We need the AI models to continue to be informed and to learn from this. AI does a great job identifying insights and correlations that are difficult for our brains to see, but as business owners we should review them within the context of our business. We have to keep that in mind and evaluate the veracity of applying an insight to our business; I had to get the word veracity in there. Can we actually take this insight and apply it to our business? That's really what we're after, not just the aha moment of "oh, that's a great insight," but the ability to make concrete adjustments that change the business and actually bring some benefits about. So when AI is approached right, using the techniques I mentioned above, my experience is that it really can help our business. The industry is also gathering context on the failure scenarios, and of course we're best served if we take that into consideration; no doubt about that. I was also looking at something else, on medium.com, where you can find some interesting blog articles. One piece asked: is there a single point of failure for AI? I found the point it made interesting. The piece suggests that to get started with AI, you should not choose a company-wide AI implementation, but rather a proof of concept that gets the company accustomed to the new normal. In my experience, the new normal includes the following activities.
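
A minimal, hypothetical sketch of "closing the loop" described above: log each prediction alongside the observed outcome, so the next retraining run can learn from what actually happened. The log format and field names are assumptions, not a specific AI platform's API.

```python
# Hypothetical sketch of a prediction/outcome feedback log.
# Field names are placeholders; a real platform would define its own schema.
import csv
from datetime import datetime, timezone

FEEDBACK_LOG = "prediction_feedback.csv"

def record_feedback(customer_id, predicted_action, observed_outcome):
    """Append one row linking what the model suggested to what happened."""
    with open(FEEDBACK_LOG, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            customer_id,
            predicted_action,      # e.g. "send_discount_email"
            observed_outcome,      # e.g. "converted" / "no_response"
        ])

# Example: the model suggested a discount email; the customer converted.
record_feedback("cust_0042", "send_discount_email", "converted")
# These rows become training signal the next time the model is rebuilt.
```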

These are going to sound like the A-B-C items I just mentioned; I'm going to use 1-2-3 now just to switch it up. The new normal, if you're going to go apply AI, means doing these things. Number one: clarity in the questions, the business questions you're seeking answers to from the AI. Sound familiar? Number two: data curation. It doesn't have to be perfect, but you've got to have forethought and organization around your data and, of course, its potential relevance to the questions you're asking. And number three: a mindset of iterative AI model refinement over time. I think I've said those three things several ways in this episode. All right, so let's get to the point of this particular piece. Is there a single reason for AI project failure? You hear all sorts of stats out there: maybe half of projects fail, or some say 85% of projects fail. Is there a golden thread? According to this particular blog article, yes, and the answer is expectation management. Organizations have, of course, said amazing things to me about what they expect from their AI, and that has to be managed early in the process. I agree with the sentiment. It's certainly not as quantifiable; it's not saying the common point of failure is that you didn't have clean data, or enough data, or whatever. Certainly the best practice is that we've got to have that data, et cetera. But expectation management is a really interesting point, and I agree with it. As I've reviewed and thought about this, I've developed a guide on smart steps to business outcomes with AI. If you're interested, reach out to me at ClickAIRadio.com and join my email list; I'll make sure you get it. Hey everyone, thanks for joining, and until next time: manage your AI expectations to achieve the results that are there to benefit your business using AI.

Thank you for joining Grant on ClickAI Radio. Don't forget to subscribe and leave feedback. And remember to download your free ebook: visit ClickAIRadio.com now.
