
E97: re:joinder - OI: Oprahficial Intelligence

1:27:57
Content provided by re:verb, Calvin Pollak, and Alex Helberg. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by re:verb, Calvin Pollak, and Alex Helberg or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

On today’s show, we once again fire up our rhetorical stovetop to roast some dubious public argumentation: Oprah Winfrey’s recent ABC special, “AI and the Future of Us.” In this re:joinder episode, Alex and Calvin listen through and discuss audio clips from the show featuring a wide array of guests - from corporate leaders like Sam Altman and Bill Gates to technologists like Aza Raskin and Tristan Harris, and even FBI Director Christopher Wray - and dismantle some of the mystifying rhetorical hype tropes that they (and Oprah) circulate about the proliferation of large language models (LLMs) and other “AI” technologies into our lives. Along the way, we use rhetorical tools from previous episodes, such as the stasis framework, to show which components of the debate around AI are glossed over, and which are given center stage. We also bring our own sociopolitical and media analysis to the table to help contextualize (and correct) the presenters’ claims about the speed of large language model development, the nature of these models’ operation, and the threats - both real and imagined - that this new technological apparatus might present to the world.

We conclude with a reflection on the words of novelist Marilynne Robinson, the show’s final guest, who prompts us to think about the many ways in which “difficulty is the point” when it comes to human work and developing autonomy. Meanwhile, the slick and tempting narratives promoting “ease” and “efficiency” with AI technology might actually belie a much darker vision of “the future of us.”

Join us as we critique and rejoin some of the most common tropes of AI hype, all compacted into one primetime special. In the spirit of automating consumptive labor, we watched it so you don’t have to!

Works & Concepts cited in this episode:

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).

Rather, S. (2024, April 25). How is one of America's biggest spy agencies using AI? We're suing to find out. ACLU. [On the potential harms of the US NSA implementing AI into existing dragnet surveillance programs domestically & internationally]

Robins-Early, N. (2024, April 3). George Carlin’s estate settles lawsuit over comedian’s AI doppelganger. The Guardian.

re:verb Episode 12: re:blurb - Stasis Theory

re:verb Episode 17: re:blurb - Dialogicality

re:verb Episode 75: A.I. Writing and Academic Integrity (w/ Dr. S. Scott Graham)

re:verb Episode 91: Thinking Rhetorically (w/ Robin Reames) [Episode referencing the importance of following the stasis categories in public debates]


97 episodes
