Content provided by Red Hat. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Red Hat or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.
Squid Game is back—and this time, the knives are out. In the thrilling Season 3 premiere, Player 456 is spiraling and a brutal round of hide-and-seek forces players to kill or be killed. Hosts Phil Yu and Kiera Please break down Gi-hun’s descent into vengeance, Guard 011’s daring betrayal of the Game, and the shocking moment players are forced to choose between murdering their friends… or dying. Then, Carlos Juico and Gavin Ruta from the Jumpers Jump podcast join us to unpack their wild theories for the season. Plus, Phil and Kiera face off in a high-stakes round of “Hot Sweet Potato.” SPOILER ALERT! Make sure you watch Squid Game Season 3 Episode 1 before listening on. Play one last time. IG - @SquidGameNetflix X (f.k.a. Twitter) - @SquidGame Check out more from Phil Yu @angryasianman , Kiera Please @kieraplease and the Jumpers Jump podcast Listen to more from Netflix Podcasts . Squid Game: The Official Podcast is produced by Netflix and The Mash-Up Americans.…
Explore the future of enterprise AI with Red Hat's SVP and AI CTO, Brian Stevens. In this episode, we delve into how AI is being practically reimagined for real-world business environments, focusing on the pivotal shift to production-quality inference at scale and the transformative power of open source. Brian Stevens shares his expertise and unique perspective on:
• The evolution of AI from experimental stages to essential, production-ready enterprise solutions.
• Key lessons from the early days of enterprise Linux and their application to today’s AI inference challenges.
• The critical role of projects like vLLM in optimizing AI models and creating a common, efficient inference stack for diverse hardware.
• Innovations in GPU-based inference and distributed systems (like the KV cache) that enable AI scalability.
Tune in for a deep dive into the infrastructure and strategies making enterprise AI a reality. Whether you're a seasoned technologist, an AI practitioner, or a leader charting your company's AI journey, this discussion will provide valuable insights into building an accessible, efficient, and powerful AI future with open source.
Explore what it takes to run massive language models efficiently with Red Hat's Senior Principal Software Engineer in AI Engineering, Nick Hill. In this episode, we go behind the headlines to uncover the systems-level engineering making AI practical, focusing on the pivotal challenge of inference optimization and the transformative power of the vLLM open-source project. Nick Hill shares his experiences working in AI, including:
• The evolution of AI optimization, from early handcrafted systems like IBM Watson to the complex demands of today's generative AI.
• The critical role of open-source projects like vLLM in creating a common, efficient inference stack for diverse hardware platforms.
• Key innovations like PagedAttention that solve GPU memory fragmentation and manage the KV cache for scalable, high-throughput performance.
• How the open-source community is rapidly translating academic research into real-world, production-ready solutions for AI.
Join us to explore the infrastructure and optimization strategies making large-scale AI a reality. This conversation is essential for any technologist, engineer, or leader who wants to understand the how and why of AI performance. You’ll come away with a new appreciation for the clever, systems-level work required to build a truly scalable and open AI future.…
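The PagedAttention idea mentioned above can be illustrated with a minimal sketch: instead of reserving one contiguous region per sequence for its maximum possible length, the KV cache is carved into fixed-size blocks, and each sequence keeps a table mapping logical token positions to physical blocks allocated on demand. This is a simplified illustration of the concept only, not vLLM's actual implementation; the class name, `BLOCK_SIZE` value, and method names are all hypothetical.

```python
# Hypothetical sketch of PagedAttention-style KV-cache block allocation.
# Memory is handed out in fixed-size blocks as tokens arrive, so short
# sequences never waste space reserved for a hypothetical maximum length,
# and freed blocks are immediately reusable by other sequences.

BLOCK_SIZE = 16  # tokens per block (illustrative value, not vLLM's)

class PagedKVCache:
    def __init__(self, num_blocks: int):
        self.num_blocks = num_blocks
        self.free_blocks = list(range(num_blocks))       # physical block pool
        self.block_tables: dict[int, list[int]] = {}     # seq_id -> physical blocks

    def append_token(self, seq_id: int, position: int) -> int:
        """Return the physical block holding `position`, allocating on demand."""
        table = self.block_tables.setdefault(seq_id, [])
        logical_block = position // BLOCK_SIZE
        if logical_block == len(table):   # crossed into a new logical block
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        return table[logical_block]

    def free_sequence(self, seq_id: int) -> None:
        """Release a finished sequence's blocks back to the shared pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
```

For example, a sequence of 20 tokens occupies only two 16-token blocks, and freeing the sequence returns both blocks to the pool for reuse, which is the fragmentation-avoidance property the episode describes.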
Ready to go deeper into the ever-evolving landscape of technology? Technically Speaking, hosted by Red Hat CTO and SVP of Global Engineering Chris Wright, is back and reimagined to guide you with more depth and candor. This series cuts through the noise, offering insightful, casual, and now even more in-depth conversations with leading experts from across the globe. Each discussion delves further into new and emerging technologies, helping you understand not just the 'what,' but the 'why' and 'how' these advancements will impact long-term strategic developments for your company and your career. From AI and open source innovation to cloud computing and beyond, Chris Wright and his guests humanize technology, providing an unparalleled insider’s look at what’s next with enhanced detail and open discussion. The revamped "Technically Speaking with Chris Wright" champions innovation and thought leadership, blending even deeper-dive discussions with updates on the latest tech news. Tune in for richer insights on complex topics, explore varied perspectives with greater nuance, and equip yourself to shape the future of technology. Discover how to turn today's emerging tech into tomorrow's strategic advantage.…