Content provided by LessWrong. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by LessWrong or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.
“Safety researchers should take a public stance” by Ishual, Mateusz Bagiński
[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)]
TL;DR
Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the current path. This post makes the case that such people should speak out publicly[2] against the current AI R&D regime and in favor of an AGI ban[3]. They should explicitly communicate that a saner world would coordinate not to build existentially dangerous intelligences, at least until we know how to do it in a principled, safe way. They could choose to maintain their political capital by not calling the current AI R&D regime insane, or find a way to lean into this valid persona of “we will either cooperate (if enough others cooperate) or win [...]
---
Outline:
(00:16) TL;DR
(02:02) Quotes
(03:22) The default strategy of marginal improvement from within the belly of a beast
(06:59) Noble intention murphyjitsu
(09:35) The need for a better strategy
The original text contained 8 footnotes which were omitted from this narration.
---
First published: September 19th, 2025
Source: https://www.lesswrong.com/posts/fF8pvsn3AGQhYsbjp/safety-researchers-should-take-a-public-stance
---
Narrated by TYPE III AUDIO.