AI Security: Vulnerability Detection and Hidden Model File Risks
In this episode of the MLSecOps Podcast, the team dives into the transformative potential of Vulnhuntr, a tool for zero-shot vulnerability discovery using LLMs. Madison Vorbrich hosts Dan McInerney and Marcello Salvati to discuss Vulnhuntr’s ability to autonomously identify vulnerabilities, including zero-days, using large language models (LLMs) like Claude. They explore the evolution of AI tools for security, the gap between traditional and AI-based static code analysis, and how Vulnhuntr enables both developers and security teams to proactively safeguard their projects. The conversation also highlights Protect AI’s bug bounty platform, huntr.com, and its expansion into model file vulnerabilities (MFVs), emphasizing the critical need to secure AI supply chains and systems.
Thanks for checking out the MLSecOps Podcast! Get involved with the MLSecOps Community and find more resources at https://community.mlsecops.com.
Additional tools and resources to check out:
Protect AI Guardian: Zero Trust for ML Models
Recon: Automated Red Teaming for GenAI
Protect AI’s ML Security-Focused Open Source Tools
LLM Guard Open Source Security Toolkit for LLM Interactions
Huntr - The World's First AI/Machine Learning Bug Bounty Platform
56 episodes
All episodes
How Red Teamers Are Exposing Flaws in AI Pipelines (41:46)
Securing AI for Government: Inside the Leidos + Protect AI Partnership (34:04)
AI Agent Security: Threats & Defenses for Modern Deployments (31:39)
Autonomous Agents Beyond the Hype (24:02)
Beyond Prompt Injection: AI’s Real Security Gaps (26:02)
What’s Hot in AI Security at RSA Conference 2025? (24:14)
Unpacking the Cloud Security Alliance AI Controls Matrix (35:53)
From Pickle Files to Polyglots: Hidden Risks in AI Supply Chains (41:21)
Rethinking AI Red Teaming: Lessons in Zero Trust and Model Protection (36:52)
AI Security: Map It, Manage It, Master It (41:18)
Agentic AI: Tackling Data, Security, and Compliance Risks (23:22)
AI Vulnerabilities: ML Supply Chains to LLM and Agent Exploits (24:08)
Implementing Enterprise AI Governance: Balancing Ethics, Innovation & Risk for Business Success (38:39)
Unpacking Generative AI Red Teaming and Practical Security Solutions (51:53)