
Jan Leike | Superintelligent Alignment


Jan Leike is a leading voice in AI alignment who previously worked as a Research Scientist at Google DeepMind and has held affiliations with the Future of Humanity Institute and the Machine Intelligence Research Institute. At OpenAI, he co-leads the Superalignment team and has contributed to advances such as InstructGPT and ChatGPT. He holds a PhD from the Australian National University.


Key Highlights

  • The launch of OpenAI's Superalignment team, which aims to solve the alignment of superintelligence within four years.
  • The plan to automate alignment research, currently backed by 20% of OpenAI's compute.
  • How traditional reinforcement learning from human feedback may fall short in scaling language model alignment.
  • Why the team focuses on scalable oversight, generalization, automated interpretability, and adversarial testing to ensure alignment reliability.
  • Experiments with intentionally misaligned models to evaluate alignment strategies.

Dive deeper into the session: Full Summary


About Foresight Institute

Foresight Institute is a non-profit research organization that supports the beneficial development of high-impact technologies. Founded in 1987 on a vision of guiding powerful technologies, we have since evolved into a many-armed organization focused on several fields of science and technology that are too ambitious for legacy institutions to support.


Allison Duettmann

As President and CEO of Foresight Institute, Allison Duettmann directs the Intelligent Cooperation, Molecular Machines, Biotech & Health Extension, Neurotech, and Space Programs, alongside Fellowships, Prizes, and Tech Trees. She co-initiated the Longevity Prize, pioneered initiatives such as Existentialhope.com, and contributed to notable works including "Superintelligence: Coordination & Strategy" and "Gaming the Future".


Get Involved with Foresight:

Follow Us: Twitter | Facebook | LinkedIn


Note: Explore every word spoken on this podcast through Fathom.fm, an innovative podcast search engine.



Hosted on Acast. See acast.com/privacy for more information.
