AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits
Full transcript with links to resources available at https://mlsecops.com/podcast/ai-vulnerabilities-ml-supply-chains-to-llm-and-agent-exploits
Join host Dan McInerney and AI security expert Sierra Haex as they explore the evolving challenges of AI security. They discuss vulnerabilities in ML supply chains, the risks in tools like Ray and untested AI model files, and how traditional security measures intersect with emerging AI threats. The conversation also covers the rise of open-source models like DeepSeek and the security implications of deploying autonomous AI agents, offering critical insights for anyone looking to secure distributed AI systems.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
58 Episodes
All Episodes
Season 3 Finale: Top Insights, Hacks, and Lessons from the Frontlines of AI Security (24:15)
Breaking and Securing Real-World LLM Apps (53:31)
How Red Teamers Are Exposing Flaws in AI Pipelines (41:46)
Securing AI for Government: Inside the Leidos + Protect AI Partnership (34:04)
AI Agent Security: Threats & Defenses for Modern Deployments (31:39)
Autonomous Agents Beyond the Hype (24:02)
Beyond Prompt Injection: AI’s Real Security Gaps (26:02)
What’s Hot in AI Security at RSA Conference 2025? (24:14)
Unpacking the Cloud Security Alliance AI Controls Matrix (35:53)
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains (41:21)
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection (36:52)
AI Security: Map It, Manage It, Master It (41:18)
Agentic AI: Tackling Data, Security, and Compliance Risks (23:22)
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits (24:08)
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Unpacking Generative AI Red Teaming and Practical Security Solutions (51:53)
AI Security: Vulnerability Detection and Hidden Model File Risks (38:19)
AI Governance Essentials: Empowering Procurement Teams to Navigate AI Risk (37:41)
Crossroads: AI, Cybersecurity, and How to Prepare for What's Next (33:15)
AI Beyond the Hype: Lessons from Cloud on Risk and Security (41:06)
Generative AI Prompt Hacking and Its Impact on AI Security & Safety (31:59)
The MLSecOps Podcast Season 2 Finale (40:54)
Exploring Generative AI Risk Assessment and Regulatory Compliance (37:37)
MLSecOps Culture: Considerations for AI Development and Security Teams (38:44)
Practical Offensive and Adversarial ML for Red Teams (35:24)
Expert Talk from RSA Conference: Securing Generative AI (25:42)
Practical Foundations for Securing AI (38:10)
Evaluating RAG and the Future of LLM Security: Insights with LlamaIndex (31:04)
AI Threat Research: Spotlight on the Huntr Community (31:48)