Artwork

Content provided by information labs and Information labs. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by information labs and Information labs or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

AI lab TL;DR | Aline Larroyed - The Fallacy Of The File

7:45
 
 

Manage episode 521469029 series 3480798

🔍 In this episode, Caroline and Aline unravel why the popular idea of “AI memorisation” leads policymakers down the wrong path—and how this metaphor obscures what actually happens inside large language models. Moving from the technical realities of parameter optimisation to the policy dangers of doctrinal drift, they explore how misleading language can distort copyright debates, inflate compliance burdens, and threaten Europe’s research and innovation ecosystem.

📌 TL;DR Highlights

⏲️[00:00] Intro

⏲️[00:33] Q1-In your view, what is the biggest misunderstanding behind the ‘memorisation’ metaphor in AI, and why is this framing so problematic when applied to copyright law?

⏲️[02:00] Q2-What actually happens inside a large language model during training, and why should this process not be treated as copyright ‘reproduction’?

⏲️[03:32] Q3-What do you see as the main legal, economic, and innovation risks for Europe if policymakers continue relying on the memorisation metaphor when designing AI regulation?

⏲️[04:39] Q4-If ‘memorisation’ is the wrong frame, what alternative concepts or policy focus areas should policymakers adopt to regulate AI more accurately and effectively?

⏲️[06:28] Q5-What is the one core idea you want policymakers to take away from your research?

⏲️[07:32] Wrap-up & Outro

💭 Q1 - In your view, what is the biggest misunderstanding behind the ‘memorisation’ metaphor in AI, and why is this framing so problematic when applied to copyright law?

🗣️ “A large language model is not a filing cabinet full of copyrighted material.”

💭 Q2 - What actually happens inside a large language model during training, and why should this process not be treated as copyright ‘reproduction’?

🗣️ “Training is parameter optimisation, not the storage of protected expression.”

💭 Q3 - What do you see as the main legal, economic, and innovation risks for Europe if policymakers continue relying on the memorisation metaphor when designing AI regulation?

🗣️ “Stretching the reproduction right to cover statistical learning would be disastrous for research and innovation in Europe.”

💭 Q4 - If ‘memorisation’ is the wrong frame, what alternative concepts or policy focus areas should policymakers adopt to regulate AI more accurately and effectively?

🗣️ “We need mechanism-aware regulation, not metaphor-driven lawmaking.”

💭 Q5 - What is the one core idea you want policymakers to take away from your research?

🗣️ “Don’t write rules for filing cabinets when we are dealing with statistical models.”

📌 About Our Guest

🎙️ Aline Larroyed | Dublin City University

🌐 linkedin.com/in/aline-l-624a3655

🌐 Article | The Fallacy Of The File: How The Memorisation Metaphor Misguides Copyright Law And Stifles AI Innovation

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5782882

Aline Larroyed is a postdoctoral researcher at Dublin City University and holds a PhD in International Law with a background in linguistics. She brings 20 years of experience in human rights, intellectual property, and international regulation, and is a member of the Institute for Globalization and International Regulation at Maastricht University and the COST LITHME network.

#AI #ArtificialIntelligence #GenerativeAI


37 episodes

