
Fake Brains & Killer Robots

1:32:33

Content provided by European Leadership Network. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by European Leadership Network or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

Welcome to “Fake Brains & Killer Robots”, the fifth episode of “Ok Doomer!”, the podcast series from the European Leadership Network’s (ELN) New European Voices on Existential Risk (NEVER) network. Hosted by the ELN’s Policy and Impact Director, Jane Kinninmont, and the ELN’s Project and Communications Coordinator, Edan Simpson, this episode focuses on the potential existential risks associated with artificial intelligence.

Jane kicks off the episode with “What’s the Problem?”, where we hear from Alice Saltini, a Policy Fellow at the European Leadership Network whose work focuses on the interactions between AI and nuclear command and control systems.

Alice discusses the immediate threats posed by AI, such as hallucinations and cyber vulnerabilities in nuclear command and control systems, emphasising the need for caution, regulation, and international cooperation to mitigate the risks associated with AI and nuclear weapons.

Edan’s “How To Fix It” panel features Dr Ganna Pogrebna, Executive Director of the Artificial Intelligence and Cyber Futures Institute at Charles Sturt University in Australia. Ganna is also the organiser of the Behavioural Data Science strand at the Alan Turing Institute in London, the United Kingdom’s national centre of excellence for AI and data science, where she serves as a fellow.

She’s joined by NEVER member Konrad Seifert. Konrad is co-CEO of the Simon Institute for Longterm Governance, which works to improve the international regime complex for governing rapid technological change and to represent future generations in institutional design and policy processes. Previously, he co-founded Effective Altruism Switzerland.

Our third and final guest is NEVER member Nicolò Miotto, who currently works at the Organisation for Security and Co-operation in Europe (OSCE) Conflict Prevention Centre. Nicolò’s research foci include arms control, disarmament and non-proliferation, emerging disruptive technologies, and terrorism and violent extremism.

The panel discusses how best to govern, regulate, and limit the risks of AI, and what that actually means in practice; the role of multilateral institutions such as the UN in implementing these efforts; the opportunities and setbacks new forms of AI could present for arms control, especially regarding WMD proliferation; and the extent to which AI developers are aware of the possible misuses of new technologies and how best to safeguard against them.

Moving on to “Turn Back the Clock,” we look back to a time in history when humanity faced a potential existential threat but pulled back from the brink of destruction. On today’s episode, Jane is joined by Dr Jochen Hung, Associate Professor of Cultural History at Utrecht University in the Netherlands. They discuss historical perspectives on technological change and its impact on society, drawing parallels between the anxieties and hopes of people in the 1920s concerning modern technologies and those of the present day.

Finally, as always, the episode is wrapped up in “The Debrief,” where Jane and Edan review the episode to make sense of everything they've covered.

Catch up on previous episodes, and make sure to subscribe to future episodes of “Ok Doomer!”

------------------

Follow the ELN on:

X (formerly Twitter)

LinkedIn

Facebook

The ELN's website

The NEVER webpage
