
Content provided by Sentience Institute. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Sentience Institute or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

Kurt Gray on human-robot interaction and mind perception

59:10

And then you're like, actually, I can't know what it's like to be a bat—again, the problem of other minds, right? There's this fundamental divide between a human mind and a bat, but at least a bat's a mammal. What is it like to be an AI? I have no idea. So I think [mind perception] could make us less sympathetic to them in some sense because it's—I don't know, they're a circuit board, there are these algorithms, and so who knows? I can subjugate them now under the heel of human desire because they're not like me.

  • Kurt Gray

What is mind perception? What do we know about mind perception of AI/robots? Why do people like to use AI for some decisions but not moral decisions? Why would people rather give up hundreds of hospital beds than let AI make moral decisions?

Kurt Gray is a Professor at the University of North Carolina at Chapel Hill, where he directs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He studies morality, politics, religion, perceptions of AI, and how best to bridge divides.

Topics discussed in the episode:

  • Introduction (0:00)
  • How did a geophysicist come to be doing social psychology? (0:51)
  • What do the Deepest Beliefs Lab and the Center for the Science of Moral Understanding do? (3:11)
  • What is mind perception? (4:45)
  • What is a mind? (7:45)
  • Agency vs experience, or thinking vs feeling (9:40)
  • Why do people see moral exemplars as being insensitive to pain? (10:45)
  • How will people perceive minds in robots/AI? (18:50)
  • Perspective taking as a tool to reduce substratism towards AI (29:30)
  • Why don’t people like using AI to make moral decisions? (32:25)
  • What would be the moral status of AI if they are not sentient? (38:00)
  • The presence of robots can make people seem more similar (44:10)
  • What can we expect about discrimination towards digital minds in the future? (48:30)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show


23 episodes
