Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management)
155 - Understanding Human Engagement Risk When Designing AI and GenAI User Experiences
The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today’s episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences.
While professionals and the general public are eager to embrace new products and recent breakthroughs, we still need some guardrails in place. Without them, data can easily be mishandled and people can get hurt. Ovetta has firsthand experience working on these issues as they emerge. We look at who should be on a team designing an AI UX, the risks associated with GenAI, ethics, and what we need to be thinking about going forward.
Highlights/ Skip to:
- (1:48) Ovetta's background and what she brings to Google’s Core ML group
- (6:03) How Ovetta and her team work with data scientists and engineers deep in the stack
- (9:09) How AI is changing the front-end of applications
- (12:46) The type of people you should seek out to design your AI and LLM UXs
- (16:15) Explaining why we’re only at the very start of major GenAI breakthroughs
- (22:34) How GenAI tools will alter the roles and responsibilities of designers, developers, and product teams
- (31:11) The potential harms of carelessly deploying GenAI technology
- (42:09) Defining acceptable levels of risk when using GenAI in real-world applications
- (53:16) Closing thoughts from Ovetta and where you can find her
Quotes from Today’s Episode
- “If artificial intelligence is just another technology, why would we build entire policies and frameworks around it? The reason why we do that is because we realize there are some real thorny ethical issues [surrounding AI]. Who owns that data? Where does it come from? Data is created by people, and all people create data. That’s why companies have strong legal, compliance, and regulatory policies around [AI], how it’s built, and how it engages with people. Think about having a toddler and then training the toddler on everything in the Library of Congress and on the internet. Do you release that toddler into the world without guardrails? Probably not.” - Ovetta Sampson (10:03)
- “[When building a team] you should look for a diverse thinker who focuses on the limitations of this technology, not its capability. You need someone who understands that the end destination of that technology is an engagement with a human being. You need somebody who understands how [people] engage with machines and digital products. You need that person to be passionate about testing various ways that relationships can evolve. When we go from execution on code to machine learning, we make a shift from [human] agency to a shared-agency relationship. The user and machine both have decision-making power. That’s the paradigm shift that [designers] need to understand. You want somebody who can keep that duality in their head as they’re testing product design.” - Ovetta Sampson (13:45)
- “We’re in for a huge taxonomy change. There are words that have very specific definitions today. Software engineer. Designer. Technically skilled. Digital. Art. Craft. AI is changing all that. It’s changing what it means to be a software engineer. Machine learning used to be the purview of data scientists only, but with GenAI, all of that is baked into Gemini. So, now you start at a checkpoint, and you’re like, all right, let’s go make an API, right? So, the skills, the understanding, the knowledge, the taxonomy even, how we talk about these things, how do we talk about the machine that speaks to us, talks to us, that could create a podcast out of just voice memos?” - Ovetta Sampson (24:16)
- “We have to be very intentional [when building AI tools], and that’s the kind of folks you want on teams. [Designers] have to go and play scary scenarios. We have to do that. No designer wants to be “Negative Nancy,” but this technology has huge potential to harm. It has harmed. If we don’t have the skill sets to recognize, document, and minimize harm, that needs to be part of our skill set. If we’re not looking out for the humans, then who actually is?” - Ovetta Sampson (32:10)
- “[Research shows] things happen to our brain when we’re exposed to artificial intelligence… there are real human engagement risks that are an opportunity for design. When you’re designing a self-driving car, you can’t just let the person go to sleep unless the car is fully [automated] and every other car on the road is self-driving. If there are humans behind the wheel, you need to have a feedback loop system—something that’s going to happen [in case] the algorithm is wrong. If you don’t have that designed, there’s going to be a large human engagement risk that a car is going to run over somebody who’s [for example] pushing a bike up a hill[...] Why? The car could not calculate the right speed and pace of a person pushing their bike. It had the speed and pace of a person walking, the speed and pace of a person on a bike, but not the two together. Algorithms will be wrong, right?” - Ovetta Sampson (39:42)
- “Model goodness used to be the purview of companies and the data scientists. Think about the first search engines. Their model goodness was [about] 77%. That’s good, right? And then people started seeing photos of apes when [they] typed in ‘black people.’ Companies have to get used to going to a wide spectrum of their customers and asking them when their [models or apps] are right and wrong. They can’t take on that burden themselves anymore. Having ethically sourced data input and variables is hard work. If you’re going to use this technology, you need to put into place the governance that needs to be there.” - Ovetta Sampson (44:08)
All episodes
1 170 - Turning Data into Impactful AI Products at Experian: Lessons from North American Chief AI Officer Shri Santhnam (Promoted Episode) 42:33

1 169 - AI Product Management and UX: What’s New (If Anything) About Making Valuable LLM-Powered Products with Stuart Winter-Tear 1:01:05

1 168 - 10 Challenges Internal Data Teams May Face Building Their First Revenue-Generating Data Product 38:24

1 167 - AI Product Management and Design: How Natalia Andreyeva and Team at Infor Nexus Create B2B Data Products that Customers Value 37:34

1 166 - Can UX Quality Metrics Increase Your Data Product's Business Value and Adoption? 26:12

1 165 - How to Accommodate Multiple User Types and Needs in B2B Analytics and AI Products When You Lack UX Resources 49:04

1 164 - The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge 45:25

1 163 - It’s Not a Math Problem: How to Quantify the Value of Your Enterprise Data Products or Your Data Product Management Function 41:41

1 162 - Beyond UI: Designing User Experiences for LLM and GenAI-Based Products 42:07

1 161 - Designing and Selling Enterprise AI Products [Worth Paying For] 34:00

1 160 - Leading Product Through a Merger/Acquisition: Lessons from The Predictive Index’s CPO Adam Berke 42:10

1 159 - Uncorking Customer Insights: How Data Products Revealed Hidden Gems in Liquor & Hospitality Retail 40:47

1 158 - From Resistance to Reliance: Designing Data Products for Non-Believers with Anna Jacobson of Operator Collective 43:41

1 157 - How this materials science SAAS company brings PM+UX+data science together to help materials scientists accelerate R&D 34:58

1 156-The Challenges of Bringing UX Design and Data Science Together to Make Successful Pharma Data Products with Jeremy Forman 41:37

1 155 - Understanding Human Engagement Risk When Designing AI and GenAI User Experiences 55:33

1 154 - 10 Things Founders of B2B SAAS Analytics and AI Startups Get Wrong About DIY Product and UI/UX Design 44:47

1 153 - What Impressed Me About How John Felushko Does Product and UX at the Analytics SAAS Company, LabStats 57:31

1 152 - 10 Reasons Not to Get Professional UX Design Help for Your Enterprise AI or SAAS Analytics Product 53:00

1 151 - Monetizing SAAS Analytics and The Challenges of Designing a Successful Embedded BI Product (Promoted Episode) 49:57

1 150 - How Specialized LLMs Can Help Enterprises Deliver Better GenAI User Experiences with Mark Ramsey 52:22

1 149 - What the Data Says About Why So Many Data Science and AI Initiatives Are Still Failing to Produce Value with Evan Shellshear 50:18

1 148 - UI/UX Design Considerations for LLMs in Enterprise Applications (Part 2) 26:36

1 147 - UI/UX Design Considerations for LLMs in Enterprise Applications (Part 1) 25:46

1 146 - (Rebroadcast) Beyond Data Science - Why Human-Centered AI Needs Design with Ben Shneiderman 42:07

1 145 - Data Product Success: Adopting a Customer-Centric Approach With Malcolm Hawker, Head of Data Management at Profisee 53:09

1 144 - The Data Product Debate: Essential Tech or Excessive Effort? with Shashank Garg, CEO of Infocepts (Promoted Episode) 52:38

1 143 - The (5) Top Reasons AI/ML and Analytics SAAS Product Leaders Come to Me For UI/UX Design Help 50:01

1 142 - Live Webinar Recording: My UI/UX Design Audit of a New Podcast Analytics Service w/ Chris Hill (CEO, Humblepod) 50:56

1 141 - How They’re Adopting a Producty Approach to Data Products at RBC with Duncan Milne 43:49

1 140 - Why Data Visualization Alone Doesn’t Fix UI/UX Design Problems in Analytical Data Products with T from Data Rocks NZ 42:44

1 139 - Monetizing SAAS Analytics and The Challenges of Designing a Successful Embedded BI Product (Promoted Episode) 51:02

1 138 - VC Spotlight: The Impact of AI on SAAS and Data/Developer Products in 2024 w/ Ellen Chisa of BoldStart Ventures 33:05

1 137 - Immature Data, Immature Clients: When Are Data Products the Right Approach? feat. Data Product Architect, Karen Meppen 44:50

1 136 - Navigating the Politics of UX Research and Data Product Design with Caroline Zimmerman 44:16

1 135 - “No Time for That:” Enabling Effective Data Product UX Research in Product-Immature Organizations 52:47

1 134 - What Sanjeev Mohan Learned Co-Authoring “Data Products for Dummies” 46:52


1 132 - Leveraging Behavioral Science to Increase Data Product Adoption with Klara Lindner 42:56

1 131 - 15 Ways to Increase User Adoption of Data Products (Without Handcuffs, Threats and Mandates) with Brian T. O’Neill 36:57

1 130 - Nick Zervoudis on Data Product Management, UX Design Training and Overcoming Imposter Syndrome 48:56

1 129 - Why We Stopped, Deleted 18 Months of ML Work, and Shifted to a Data Product Mindset at Coolblue 35:21

1 128 - Data Products for Dummies and The Importance of Data Product Management with Vishal Singh of Starburst 53:01

1 127 - On the Road to Adopting a “Producty” Approach to Data Products at the UK’s Care Quality Commission with Jonathan Cairns-Terry 36:55

1 126 - Designing a Product for Making Better Data Products with Anthony Deighton 47:38