The Alignment Problem From a Deep Learning Perspective
Within the coming decades, artificial general intelligence (AGI) may surpass human capabilities at a wide range of important tasks. We outline a case for expecting that, without substantial effort to prevent it, AGIs could learn to pursue goals which are undesirable (i.e. misaligned) from a human perspective. We argue that if AGIs are trained in ways similar to today's most capable models, they could learn to act deceptively to receive higher reward, learn internally-represented goals which generalize beyond their training distributions, and pursue those goals using power-seeking strategies. We outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world, and briefly review research directions aimed at preventing this outcome.
Original article:
https://arxiv.org/abs/2209.00626
Authors:
Richard Ngo, Lawrence Chan, Sören Mindermann
A podcast by BlueDot Impact.
Learn more on the AI Safety Fundamentals website.