
Content provided by The New Stack Podcast and The New Stack. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The New Stack Podcast and The New Stack or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.

VMware's Kubernetes Evolution: Quashing Complexity

The New Stack Podcast

30:40
 
 


Without an integrated platform, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF (VMware Cloud Foundation), a pre-integrated Kubernetes solution that includes components such as Harbor, Velero, and Istio, all managed by VMware. While some worry that abstraction adds complexity, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare-metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization.

Turner also highlighted VMware's open source leadership: the company contributes to major projects and works to keep Kubernetes cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem.

Learn more from The New Stack about the latest VMware insights:

Has VMware Finally Caught Up With Kubernetes?

VMware’s Golden Path

Join our community of newsletter subscribers to stay on top of the news and at the top of your game.




302 episodes

All episodes

 
AI agents hold the promise of continuously testing, scanning, and fixing code for security vulnerabilities, but we're still progressing toward that vision. Startups like Aptori are helping bridge the gap by building AI-powered security engineers for enterprises. Aptori maps an organization’s codebase, APIs, and cloud infrastructure in real time to understand data flows and authorization logic, allowing it to detect and eventually remediate security issues. At Google Cloud Next, Aptori CEO Sumeet Singh discussed how earlier tools merely alerted developers to issues—often overwhelming them—but newer models like Gemini 2.5 Flash and Claude Sonnet 4 are improving automated code fixes, making them more practical. Singh and co-founder Travis Newhouse previously built AppFormix, which automated OpenStack cloud operations before being acquired by Juniper Networks. Their experiences with slow release cycles due to security bottlenecks inspired Aptori’s focus. While the goal is autonomous agents, Singh emphasizes the need for transparency and deterministic elements in AI tools to ensure trust and reliability in enterprise security workflows. Learn more from The New Stack about the latest insights in AI application security: AI Is Changing Cybersecurity Fast and Most Analysts Aren’t Ready AI Security Agents Combat AI-Generated Code Risks Developers Are Embracing AI To Streamline Threat Detection and Stay Ahead Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
In this episode of The New Stack Makers, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers. Demchuk emphasizes that Nitric doesn't remove platform team control but enforces it consistently. Guardrails defined by platform teams guide infrastructure provisioning, ensuring security and compliance — even as developers use AI tools to rapidly generate code. The result is a streamlined workflow where developers move faster, AI enhances productivity, and platform teams retain oversight. This episode offers engineering leaders insight into a paradigm shift in how cloud infrastructure is managed in the AI era. Learn more from The New Stack about the latest insights about Nitric: Building a Serverless Meme Generator With Nitric and OpenAI Why Most Companies Are Struggling With Infrastructure as Code Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
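To make the “Infrastructure from Code” idea concrete, here is a minimal sketch in the shape of Nitric's TypeScript SDK (treat the exact import and handler signatures as assumptions, not a definitive reference). The point is what is absent: there is no separate Terraform or CloudFormation definition, because the framework infers the API gateway and its routes from the application code itself.

```typescript
// Minimal sketch, assuming the general shape of Nitric's TypeScript SDK.
import { api } from "@nitric/sdk";

// Declaring the API in code *is* the infrastructure definition: the
// framework scans this file and infers that a gateway with one route
// must be provisioned on whichever cloud the platform team targets.
const hello = api("hello");

hello.get("/greeting/:name", async (ctx) => {
  const { name } = ctx.req.params; // path parameter (assumed accessor)
  ctx.res.body = `Hello, ${name}`;
  return ctx;
});
```

Guardrails still apply: in Nitric's model, the platform team's policies decide how that inferred gateway is actually provisioned, which is the control Demchuk describes.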
 
CodeRabbit, led by founder Harjot Gill, is tackling one of software development's biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of The New Stack Makers, Gill discusses how CodeRabbit leverages advanced reasoning models—OpenAI’s o1, o3-mini, and Anthropic’s Claude series—to automate and enhance code reviews. Unlike rigid, rule-based static analysis tools, CodeRabbit builds rich context at scale by spinning up sandbox environments for pull requests and allowing AI agents to navigate codebases like human reviewers. These agents can run CLI commands, analyze syntax trees, and pull in external context from Jira or vulnerability databases. Gill envisions a hybrid future where AI handles the grunt work of code review, empowering humans to focus on architecture and intent—ultimately reducing bugs, delays, and development costs. Learn more from The New Stack about the latest insights about AI code reviews: CodeRabbit's AI Code Reviews Now Live Free in VS Code, Cursor AI Coding Agents Level Up from Helpers to Team Players Augment Code: An AI Coding Tool for 'Real' Development Work Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year's third-party tools like LangChain, CrewAI, and Microsoft’s AutoGen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. Despite the buzz, Lardinois pointed out that many companies are still exploring how AI agents can fit into real-world workflows. The panel also highlighted how Google now delivers a full-stack AI development experience — from models to deployment platforms like Vertex AI. New enterprise tools like Agentspace and Agent Garden further signal Google’s commitment to making agents a core part of modern software development. Learn more from The New Stack about the latest in AI agents: How AI Agents Will Change the Web for Users and Developers AI Agents: A Comprehensive Introduction for Developers AI Agents Are Coming for Your SaaS Stack Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. A key enabler is Google’s newly announced open Agent2Agent (A2A) protocol, which allows AI agents from different vendors to communicate and collaborate securely across platforms. Over 50 companies, including PayPal, Salesforce, and Atlassian, are already adopting it. However, deploying agentic AI at scale requires more than individual tools—it demands an AI platform with runtime frameworks, UIs, and connectors. These platforms allow enterprises to integrate agents across clouds and systems, paving the way for AI that is collaborative, adaptive, and embedded in core operations. As AI becomes foundational, developers are transitioning from coding to architecting dynamic, learning systems. Learn more from The New Stack about the latest insights about Agent2Agent Protocol: Google’s Agent2Agent Protocol Helps AI Agents Talk to Each Other A2A, MCP, Kafka and Flink: The New Stack for AI Agents Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
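For a sense of what that cross-vendor collaboration means on the wire, the sketch below shows a hypothetical A2A exchange in TypeScript: discover the remote agent's card, then send it a task as a JSON-RPC request. The well-known path, method name, and field layout here are assumptions based on the early public draft of the protocol, not a definitive reference.

```typescript
// Hypothetical A2A client sketch; names and fields are assumptions
// drawn from the early public draft of the protocol.
async function askRemoteAgent(baseUrl: string, question: string) {
  // 1. Discovery: fetch the agent card, a static JSON document
  //    advertising the agent's skills and endpoint (assumed path).
  const card = await fetch(`${baseUrl}/.well-known/agent.json`)
    .then((res) => res.json());

  // 2. Send a task as a JSON-RPC 2.0 request to the advertised endpoint.
  const response = await fetch(card.url ?? baseUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: crypto.randomUUID(),
      method: "tasks/send", // assumed method name from the draft spec
      params: {
        id: crypto.randomUUID(), // task id
        message: { role: "user", parts: [{ type: "text", text: question }] },
      },
    }),
  });
  return response.json();
}
```

Because both sides speak the same envelope, a PayPal agent and a Salesforce agent could collaborate without either vendor shipping a custom adapter for the other.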
 
Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for—especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time support. Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow—because development, at its core, is deeply personal and individual. Learn more from The New Stack about the latest AI insights with Google Cloud: Google AI Coding Tool Now Free, With 90x Copilot’s Output Gemini 2.5 Pro: Google’s Coding Genius Gets an Upgrade Q&A: How Google Itself Uses Its Gemini Large Language Model Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs. Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud: Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure A2A, MCP, Kafka and Flink: The New Stack for AI Agents Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
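A quick sanity check on those figures: 42.5 exaflops spread across 9,216 chips works out to roughly 4.6 petaflops per chip.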
 
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities—like high availability and secure orchestration—that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google’s continued investment in open source and scalable architecture. Learn more from The New Stack about the latest insights with Google Cloud: Google Kubernetes Engine Customized for Faster AI Work KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era Apache Ray Finds a Home on the Google Kubernetes Engine Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
Without an integrated platform, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF (VMware Cloud Foundation), a pre-integrated Kubernetes solution that includes components such as Harbor, Velero, and Istio, all managed by VMware. While some worry that abstraction adds complexity, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare-metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership: the company contributes to major projects and works to keep Kubernetes cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest VMware insights: Has VMware Finally Caught Up With Kubernetes? VMware’s Golden Path Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
Prequel is launching a new developer-focused service aimed at democratizing software error detection—an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time—without exposing sensitive data, thanks to edge processing using WebAssembly. The urgency behind Prequel’s mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics—something they believe CREs can finally change. Learn more from The New Stack about the latest Observability insights: Why Consolidating Observability Tools Is a Smart Move Building an Observability Culture: Getting Everyone Onboard Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software. Wafaa also challenged the hype around GPUs in AI, asserting that CPUs—especially those enhanced with Arm’s Scalable Matrix Extension 2 (SME2) and Scalable Vector Extension 2 (SVE2)—are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm’s innovations aim to reduce dependency on expensive GPU fleets. On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing. Learn more from The New Stack about the latest insights about Arm: Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm Arm: See a Demo About Migrating a x86-Based App to ARM64 Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime. Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap. Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs: ScaleOps Adds Predictive Horizontal Scaling, Smart Placement ScaleOps Dynamically Right-Sizes Containers at Runtime Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
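To see why static recommendations go stale, consider this toy right-sizing sketch in TypeScript (an illustration of the general idea only, not ScaleOps' actual algorithm). A static approach would run the calculation once and pin the result; a dynamic approach keeps recomputing it as fresh usage samples arrive, so the request tracks the workload's shifting load.

```typescript
// Toy container right-sizing; illustrative only, not ScaleOps' algorithm.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  return sorted[Math.min(sorted.length - 1, Math.floor(p * sorted.length))];
}

const usage: number[] = []; // CPU millicore samples for one container

// Called on every new sample: recompute the request from a recent
// window plus headroom, so the value follows shifting load instead of
// freezing a one-off recommendation that soon goes stale.
function rightSizedRequest(latestMillicores: number): number {
  usage.push(latestMillicores);
  const recent = usage.slice(-1000); // sliding window of recent samples
  const headroom = 1.15;             // 15% spike buffer (arbitrary choice)
  return Math.ceil(percentile(recent, 0.95) * headroom);
}
```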
 
Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem. Learn more from The New Stack about Heroku's approach to Platform-as-a-Service: Return to PaaS: Building the Platform of Our Dreams Heroku Moved Twelve-Factor Apps to Open Source. What’s Next? How Heroku Is Positioned To Help Ops Engineers in the GenAI Era Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions—like trusting that users would validate container images—proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices. The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images—offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries. The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors—prompting Chainguard’s move toward locked-down AI images. Learn more from The New Stack about Container Security and AI: Chainguard Takes Aim At Vulnerable Java Libraries Clean Container Images: A Supply Chain Security Revolution Revolutionizing Offensive Security: A New Era With Agentic AI Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 
In a candid episode of The New Stack Makers, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software at KubeCon + CloudNativeCon London. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people—especially those without traditional tech credentials—a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials. Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro. Both speakers agreed that open source's collaborative model—where companies build in public and customers drive innovation—has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress. Learn more from The New Stack about the relationship between enterprise cloud providers and open source software: The Metamorphosis of Open Source: An Industry in Transition The Complex Relationship Between Cloud Providers and Open Source How Open Source Has Turned the Tables on Enterprise Software Join our community of newsletter subscribers to stay on top of the news and at the top of your game.…
 