Exploring NATS: A Multi-Paradigm Connectivity Layer for Distributed Applications
Content provided by Tobias Macey. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Tobias Macey or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.
Summary
In this episode of the Data Engineering Podcast, Derek Collison, creator of NATS and CEO of Synadia, talks about the evolution and capabilities of NATS as a multi-paradigm connectivity layer for distributed applications. Derek discusses the challenges and solutions in building distributed systems and highlights the unique features of NATS that differentiate it from other messaging systems. He delves into the architectural decisions behind NATS, including its ability to handle high-speed global microservices, support for edge computing, and integration with JetStream for data persistence, and explores the role of NATS in modern data management and its use cases in industries like manufacturing and connected vehicles.
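NATS's core value proposition is that a single connection serves several messaging paradigms at once. Below is a minimal sketch using the nats-py client; the server URL and subject names are illustrative, and a local NATS server is assumed to be running.

```python
# A minimal sketch of NATS's core pub/sub and request/reply patterns using
# the nats-py client. Server URL and subject names are illustrative.
import asyncio
import nats

async def main():
    # Assumes a NATS server is reachable at the default port.
    nc = await nats.connect("nats://localhost:4222")

    # Pub/sub: fire-and-forget, at-most-once delivery over core NATS.
    async def on_reading(msg):
        print(f"{msg.subject}: {msg.data.decode()}")
    await nc.subscribe("sensors.temperature", cb=on_reading)
    await nc.publish("sensors.temperature", b"21.5")

    # Request/reply: the same connection also serves RPC-style traffic.
    async def echo(msg):
        await msg.respond(b"pong")
    await nc.subscribe("service.ping", cb=echo)
    reply = await nc.request("service.ping", b"ping", timeout=1)
    print(reply.data)  # b'pong'

    await nc.drain()

asyncio.run(main())
```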
Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Derek Collison about NATS, a multi-paradigm connectivity layer for distributed applications.
Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what NATS is and the story behind it?
- How have your experiences in past roles (Cloud Foundry, TIBCO messaging systems) informed the core principles of NATS?
- What other sources of inspiration have you drawn on in the design and evolution of NATS? (e.g. Kafka, RabbitMQ, etc.)
- There are several patterns and abstractions that NATS can support, many of which overlap with other well-regarded technologies. When designing a system or service, what are the heuristics that should be used to determine whether NATS should act as a replacement or addition to those capabilities? (e.g. considerations of scale, speed, ecosystem compatibility, etc.)
- There is often a divide in the technologies and architecture used between operational/user-facing applications and data systems. How does the unification of multiple messaging patterns in NATS shift the ways that teams think about the relationship between these use cases?
- How does the shared communication layer of NATS, with multiple protocol and pattern adapters, reduce the need to replicate data and logic across application and data layers? (A JetStream persistence sketch follows this question list.)
- Can you describe how the core NATS system is architected?
- How have the design and goals of NATS evolved since you first started working on it?
- In the time since you first began writing NATS (~2012) there have been several evolutionary stages in both application and data implementation patterns. How have those shifts influenced the direction of the NATS project and its ecosystem?
- For teams who have an existing architecture, what are some of the patterns for adoption of NATS that allow them to augment or migrate their capabilities?
- What are some of the ecosystem investments that you and your team have made to ease the adoption and integration of NATS?
- What are the most interesting, innovative, or unexpected ways that you have seen NATS used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on NATS?
- When is NATS the wrong choice?
- What do you have planned for the future of NATS?
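To make the persistence discussion above concrete, here is a hedged sketch of layering JetStream onto the same subjects with nats-py. It assumes a server started with JetStream enabled (e.g. `nats-server -js`); the stream, subject, and consumer names are illustrative.

```python
# A hedged sketch of adding persistence with JetStream via nats-py,
# upgrading core NATS's at-most-once delivery to at-least-once.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")
    js = nc.jetstream()

    # Persist every message published to orders.* in a stream.
    await js.add_stream(name="ORDERS", subjects=["orders.*"])
    await js.publish("orders.created", b'{"id": 1}')

    # A durable consumer can resume where it left off after a restart.
    sub = await js.subscribe("orders.created", durable="billing")
    msg = await sub.next_msg(timeout=2)
    print(msg.data)
    await msg.ack()  # explicit ack gives at-least-once semantics

    await nc.close()

asyncio.run(main())
```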
Contact Info
- GitHub
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.
Links
- NATS
- NATS JetStream
- Synadia
- Cloud Foundry
- TIBCO
- Applied Physics Lab - Johns Hopkins University
- Cray Supercomputer
- RVCM Certified Messaging
- TIBCO ZMS
- IBM MQ
- JMS == Java Message Service
- RabbitMQ
- MongoDB
- NodeJS
- Redis
- AMQP == Advanced Message Queueing Protocol
- Pub/Sub Pattern
- Circuit Breaker Pattern
- ZeroMQ
- Akamai
- Fastly
- CDN == Content Delivery Network
- At Most Once
- At Least Once
- Exactly Once
- AWS Kinesis
- Memcached
- SQS
- Segment
- Rudderstack
- DLQ == Dead Letter Queue
- MQTT == Message Queueing Telemetry Transport
- NATS Kafka Bridge
- 10BaseT Network
- WebAssembly
- RedPanda
- Pulsar Functions
- mTLS
- AuthZ (Authorization)
- AuthN (Authentication)
- NATS Auth Callouts
- OPA == Open Policy Agent
- RAG == Retrieval Augmented Generation
- Home Assistant
- Tailscale
- Ollama
- CDC == Change Data Capture
- gRPC

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
All episodes
Amazon S3: The Backbone of Modern Data Systems (1:01:01)
Summary
In this episode of the Data Engineering Podcast Mai-Lan Tomsen Bukovec, Vice President of Technology at AWS, talks about the evolution of Amazon S3 and its profound impact on data architecture. From her work on compute systems to leading the development and operations of S3, Mai-Lan shares insights on how S3 has become a foundational element in modern data systems, enabling scalable and cost-effective data lakes since its launch alongside Hadoop in 2006. She discusses the architectural patterns enabled by S3, the importance of metadata in data management, and how S3's evolution has been driven by customer needs, leading to innovations like strong consistency and S3 Tables.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- This is a pharmaceutical ad for Soda Data Quality. Do you suffer from chronic dashboard distrust? Are broken pipelines and silent schema changes wreaking havoc on your analytics? You may be experiencing symptoms of Undiagnosed Data Quality Syndrome — also known as UDQS. Ask your data team about Soda. With Soda Metrics Observability, you can track the health of your KPIs and metrics across the business — automatically detecting anomalies before your CEO does. It’s 70% more accurate than industry benchmarks, and the fastest in the category, analyzing 1.1 billion rows in just 64 seconds. And with Collaborative Data Contracts, engineers and business can finally agree on what “done” looks like — so you can stop fighting over column names, and start trusting your data again. Whether you’re a data engineer, analytics lead, or just someone who cries when a dashboard flatlines, Soda may be right for you. Side effects of implementing Soda may include: increased trust in your metrics, reduced late-night Slack emergencies, spontaneous high-fives across departments, fewer meetings and less back-and-forth with business stakeholders, and in rare cases, a newfound love of data. Sign up today to get a chance to win a $1000+ custom mechanical keyboard. Visit dataengineeringpodcast.com/soda to sign up and follow Soda’s launch week. It starts June 9th.
- Your host is Tobias Macey and today I'm interviewing Mai-Lan Tomsen Bukovec about the evolution of S3 and how it has transformed data architecture

Interview
- Introduction
- How did you get involved in the area of data management?
- Most everyone listening knows what S3 is, but can you start by giving a quick summary of what roles it plays in the data ecosystem?
- What are the major generational epochs in S3, with a particular focus on analytical/ML data systems?
- The first major driver of analytical usage for S3 was the Hadoop ecosystem. What are the other elements of the data ecosystem that helped shape the product direction of S3?
- Data storage and retrieval have been core primitives in computing since its inception. What are the characteristics of S3 and all of its copycats that led to such a difference in architectural patterns vs. other shared data technologies? (e.g. NFS, Gluster, Ceph, Samba, etc.)
- How does the unified pool of storage that is exemplified by S3 help to blur the boundaries between application data, analytical data, and ML/AI data?
- What are some of the default patterns for storage and retrieval across those three buckets that can lead to anti-patterns which add friction when trying to unify those use cases?
- The age of AI is leading to a massive potential for unlocking unstructured data, for which S3 has been a massive dumping ground over the years. How is that changing the ways that your customers think about the value of the assets that they have been hoarding for so long? What new architectural patterns is that generating?
- What are the most interesting, innovative, or unexpected ways that you have seen S3 used for analytical/ML/AI applications?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on S3?
- When is S3 the wrong choice?
- What do you have planned for the future of S3?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- AWS S3
- Kinesis
- Kafka
- SQS
- EMR
- Drupal
- WordPress
- Netflix Blog on S3 as a Source of Truth
- Hadoop
- MapReduce
- NASA JPL
- FINRA == Financial Industry Regulatory Authority
- S3 Object Versioning
- S3 Cross Region
- S3 Tables
- Iceberg
- Parquet
- AWS KMS
- Iceberg REST
- DuckDB
- NFS == Network File System
- Samba
- GlusterFS
- Ceph
- MinIO
- S3 Metadata
- Photoshop Generative Fill
- Adobe Firefly
- TurboTax AI Assistant
- AWS Access Analyzer
- Data Products
- S3 Access Point
- AWS Nova Models
- LexisNexis Protege
- S3 Intelligent Tiering
- S3 Principal Engineering Tenets

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
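For readers unfamiliar with the object API this episode builds on, here is a minimal sketch using boto3. The bucket and key names are placeholders; the put/get/list primitives shown are the ones that data lake table formats such as Iceberg layer their metadata on top of.

```python
# A minimal sketch of S3's object API with boto3; bucket and key names
# are placeholders.
import boto3

s3 = boto3.client("s3")

# Write an object: S3 is now strongly consistent, so the object is
# immediately visible to subsequent reads and listings.
s3.put_object(
    Bucket="example-data-lake",
    Key="events/2025/06/01/batch-001.json",
    Body=b'{"event": "page_view", "user_id": 42}',
)

# Read it back.
obj = s3.get_object(Bucket="example-data-lake", Key="events/2025/06/01/batch-001.json")
print(obj["Body"].read())

# List a partition-style prefix, the convention Hadoop-era engines built on.
resp = s3.list_objects_v2(Bucket="example-data-lake", Prefix="events/2025/06/")
for item in resp.get("Contents", []):
    print(item["Key"], item["Size"])
```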

Scaling Data Operations With Platform Engineering (42:20)
Summary
In this episode of the Data Engineering Podcast Chakravarthy Kotaru talks about scaling data operations through standardized platform offerings. From his roots as an Oracle developer to leading the data platform at a major online travel company, Chakravarthy shares insights on managing diverse database technologies and providing databases as a service to streamline operations. He explains how his team has transitioned from DevOps to a platform engineering approach, centralizing expertise and automating repetitive tasks with AWS Service Catalog. Join them as they discuss the challenges of migrating legacy systems, integrating AI and ML for automation, and the importance of organizational buy-in in driving data platform success.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- This is a pharmaceutical ad for Soda Data Quality. Do you suffer from chronic dashboard distrust? Are broken pipelines and silent schema changes wreaking havoc on your analytics? You may be experiencing symptoms of Undiagnosed Data Quality Syndrome — also known as UDQS. Ask your data team about Soda. With Soda Metrics Observability, you can track the health of your KPIs and metrics across the business — automatically detecting anomalies before your CEO does. It’s 70% more accurate than industry benchmarks, and the fastest in the category, analyzing 1.1 billion rows in just 64 seconds. And with Collaborative Data Contracts, engineers and business can finally agree on what “done” looks like — so you can stop fighting over column names, and start trusting your data again. Whether you’re a data engineer, analytics lead, or just someone who cries when a dashboard flatlines, Soda may be right for you. Side effects of implementing Soda may include: increased trust in your metrics, reduced late-night Slack emergencies, spontaneous high-fives across departments, fewer meetings and less back-and-forth with business stakeholders, and in rare cases, a newfound love of data. Sign up today to get a chance to win a $1000+ custom mechanical keyboard. Visit dataengineeringpodcast.com/soda to sign up and follow Soda’s launch week. It starts June 9th.
- Your host is Tobias Macey and today I'm interviewing Chakri Kotaru about scaling successful data operations through standardized platform offerings

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining the different ways that you have seen teams you work with fail due to lack of structure and opinionated design?
- Why NoSQL?
- Pairing different styles of NoSQL for different problems
- Useful patterns for each NoSQL style (document, column family, graph, etc.)
- Challenges in platform automation and scaling edge cases
- What challenges do you anticipate from the new pressures introduced by AI applications?
- What are the most interesting, innovative, or unexpected ways that you have seen platform engineering practices applied to data systems?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data platform engineering?
- When is NoSQL the wrong choice?
- What do you have planned for the future of platform principles for enabling data teams/data applications?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Riak
- DynamoDB
- SQL Server
- Cassandra
- ScyllaDB
- CAP Theorem
- Terraform
- AWS Service Catalog
- Blog Post

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
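As a small illustration of the document/key-value access style discussed above, here is a hedged DynamoDB sketch using boto3. The table and attribute names are invented; in the platform model described in the episode, the table itself would be provisioned through a standardized offering such as AWS Service Catalog.

```python
# An illustrative sketch of document-style NoSQL access via DynamoDB.
# Table and attribute names are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user-profiles")

# Document-style write: schemaless attributes hang off the key.
table.put_item(Item={"user_id": "u-123", "plan": "pro", "regions": ["us-east-1"]})

# Point read by primary key, the access pattern NoSQL stores optimize for.
resp = table.get_item(Key={"user_id": "u-123"})
print(resp.get("Item"))
```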

From Data Discovery to AI: The Evolution of Semantic Layers (49:30)
Summary
In this episode of the Data Engineering Podcast, host Tobias Macey welcomes back Shinji Kim to discuss the evolving role of semantic layers in the era of AI. As they explore the challenges of managing vast data ecosystems and providing context to data users, they delve into the significance of semantic layers for AI applications. They dive into the nuances of semantic modeling, the impact of AI on data accessibility, and the importance of business logic in semantic models. Shinji shares her insights on how SelectStar is helping teams navigate these complexities, and together they cover the future of semantic modeling as a native construct in data systems. Join them for an in-depth conversation on the evolving landscape of data engineering and its intersection with AI.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Shinji Kim about the role of semantic layers in the era of AI

Interview
- Introduction
- How did you get involved in the area of data management?
- Semantic modeling gained a lot of attention ~4-5 years ago in the context of the "modern data stack". What is your motivation for revisiting that topic today?
- There are several overlapping concepts – "semantic layer," "metrics layer," "headless BI." How do you define these terms, and what are the key distinctions and overlaps? Do you see these concepts converging, or do they serve distinct long-term purposes?
- Data warehousing and business intelligence have been around for decades now. What new value does semantic modeling offer beyond practices like star schemas, OLAP cubes, etc.?
- What benefits does a semantic model provide when integrating your data platform into AI use cases?
- How is it different between using AI as an interface to your analytical use cases vs. powering customer-facing AI applications with your data?
- The effort required to create and maintain a set of semantic models is non-zero. What role can LLMs play in helping to propose and construct those models?
- For teams who have already invested in building this capability, what additional context and metadata is necessary to provide guidance to LLMs when working with their models?
- What's the most effective way to create a semantic layer without turning it into a massive project?
- There are several technologies available for building and serving these models. What are the selection criteria that you recommend for teams who are starting down this path?
- What are the most interesting, innovative, or unexpected ways that you have seen semantic models used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working with semantic modeling?
- When is semantic modeling the wrong choice?
- What do you predict for the future of semantic modeling?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- SelectStar
- Sun Microsystems
- Markov Chain Monte Carlo
- Semantic Modeling
- Semantic Layer
- Metrics Layer
- Headless BI
- Cube (Podcast Episode)
- AtScale
- Star Schema
- Data Vault
- OLAP Cube
- RAG == Retrieval Augmented Generation (AI Engineering Podcast Episode)
- KNN == K-Nearest Neighbors
- HNSW == Hierarchical Navigable Small World
- dbt Metrics Layer
- Soda Data
- LookML
- Hex
- PowerBI
- Tableau
- Semantic View (Snowflake)
- Databricks Genie
- Snowflake Cortex Analyst
- Malloy

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
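To ground the idea of a semantic layer, here is a toy sketch of what such a layer encodes: a metric declared once as data, compiled to SQL on demand. The definition schema and all names are invented for illustration and do not correspond to any particular tool's modeling syntax.

```python
# A toy illustration of a semantic/metrics layer: business logic declared
# once, compiled to SQL for whatever grouping a consumer requests.
revenue = {
    "name": "net_revenue",
    "table": "fct_orders",
    "expression": "SUM(amount - discount)",
    "dimensions": ["order_date", "region"],
    "filters": ["status = 'complete'"],
}

def compile_metric(metric: dict, group_by: list[str]) -> str:
    """Render a metric definition into a SQL query for one grouping."""
    unknown = set(group_by) - set(metric["dimensions"])
    if unknown:
        raise ValueError(f"dimensions not in model: {unknown}")
    where = " AND ".join(metric["filters"]) or "TRUE"
    dims = ", ".join(group_by)
    return (
        f"SELECT {dims}, {metric['expression']} AS {metric['name']}\n"
        f"FROM {metric['table']}\nWHERE {where}\nGROUP BY {dims}"
    )

print(compile_metric(revenue, ["region"]))
```

Every consumer, whether a BI dashboard or an LLM agent, gets the same definition of "net revenue," which is the consistency argument made in the episode.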

Balancing Off-the-Shelf and Custom Solutions in Data Engineering (46:05)
Summary
In this episode of the Data Engineering Podcast Tulika Bhatt, a senior software engineer at Netflix, talks about her experiences with large-scale data processing and the future of data engineering technologies. Tulika shares her journey into the data engineering field, discussing her work at BlackRock and Verizon before joining Netflix, and explains the challenges and innovations involved in managing Netflix's impression data for personalization and user experience. She highlights the importance of balancing off-the-shelf solutions with custom-built systems using technologies like Spark, Flink, and Iceberg, and delves into the complexities of ensuring data quality and observability in high-speed environments, including robust alerting strategies and semantic data auditing.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Tulika Bhatt about her experiences working on large scale data processing and her insights on the future trajectory of the supporting technologies

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining the ways that operating at large scale changes the ways that you need to think about the design of data systems?
- When dealing with small-scale data systems it can be feasible to have manual processes. What are the elements of large-scale data systems that demand automation?
- How can those large-scale automation principles be down-scaled to the systems that the rest of the world is operating?
- A perennial problem in data engineering is that of data quality. The past 4 years have seen significant growth in the number of tools and practices available for automating the validation and verification of data. In your experience working with high-volume data flows, what are the elements of data validation that are still unsolved?
- Generative AI has taken the world by storm over the past couple of years. How has that changed the ways that you approach your daily work?
- What do you see as the future realities of working with data across various axes of large scale, real-time, etc.?
- What are the most interesting, innovative, or unexpected ways that you have seen solutions to large-scale data management designed?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data management across axes of scale?
- What are the ways that you are thinking about the future trajectory of your work?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- BlackRock
- Spark
- Flink
- Kafka
- Cassandra
- RocksDB
- Netflix Maestro workflow orchestrator
- PagerDuty
- Iceberg

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
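As a sketch of the semantic data auditing mentioned above, here is a hedged PySpark example that checks an impressions table for null keys and duplicate ids before downstream use. The table name, column names, and thresholds are illustrative.

```python
# A hedged sketch of a semantic audit over a high-volume table: alert on
# quality regressions rather than silently propagating bad data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("impressions-audit").getOrCreate()

df = spark.read.table("impressions")  # e.g. an Iceberg table

# Semantic checks: nulls in required keys and duplicate impression ids.
stats = df.agg(
    F.count("*").alias("rows"),
    F.sum(F.col("user_id").isNull().cast("long")).alias("null_user_ids"),
    (F.count("*") - F.countDistinct("impression_id")).alias("duplicate_ids"),
).first()

# Alert instead of failing the pipeline outright; thresholds are made up.
if (stats.null_user_ids or 0) > 0.01 * stats.rows or stats.duplicate_ids > 0:
    print(f"ALERT: quality regression detected: {stats.asDict()}")
```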

StarRocks: Bridging Lakehouse and OLAP for High-Performance Analytics (59:41)
Summary
In this episode of the Data Engineering Podcast Sida Shen, product manager at CelerData, talks about StarRocks, a high-performance analytical database. Sida discusses the inception of StarRocks, which was forked from Apache Doris in 2020 and evolved into a high-performance lakehouse query engine. He explains the architectural design of StarRocks, highlighting its capabilities in handling high concurrency and low latency queries, and its integration with open table formats like Apache Iceberg, Delta Lake, and Apache Hudi. Sida also discusses how StarRocks differentiates itself from other query engines by supporting on-the-fly joins and eliminating the need for denormalization pipelines, and shares insights into its use cases, such as customer-facing analytics and real-time data processing, as well as future directions for the platform.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Sida Shen about StarRocks, a high performance analytical database supporting shared-nothing and shared-data patterns

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what StarRocks is and the story behind it?
- There are numerous analytical databases on the market. What are the attributes of StarRocks that differentiate it from other options?
- Can you describe the architecture of StarRocks?
- What are the "-ilities" that are foundational to the design of the system?
- How have the design and focus of the project evolved since it was first created?
- What are the tradeoffs involved in separating the communication layer from the data layers?
- The tiered architecture enables the shared-nothing and shared-data behaviors, which allows for the implementation of lakehouse patterns. What are some of the patterns that are possible due to the single-interface/dual-pattern nature of StarRocks?
- The shared-data implementation has caching built in to accelerate interaction with datasets. What are some of the limitations/edge cases that operators and consumers should be aware of?
- StarRocks supports management of lakehouse tables (Iceberg, Delta, Hudi, etc.), which overlaps with use cases for Trino/Presto/Dremio/etc. What are the cases where StarRocks acts as a replacement for those systems vs. a supplement to them?
- The other major category of engines that StarRocks overlaps with is OLAP databases (e.g. ClickHouse, Firebolt, etc.). Why might someone use StarRocks in addition to or in place of those technologies?
- We would be remiss if we ignored the dominating trend of AI and the systems that support it. What is the role of StarRocks in the context of an AI application?
- What are the most interesting, innovative, or unexpected ways that you have seen StarRocks used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on StarRocks?
- When is StarRocks the wrong choice?
- What do you have planned for the future of StarRocks?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- StarRocks
- CelerData
- Apache Doris
- SIMD == Single Instruction Multiple Data
- Apache Iceberg
- ClickHouse (Podcast Episode)
- Druid
- Firebolt (Podcast Episode)
- Snowflake
- BigQuery
- Trino
- Databricks
- Dremio
- Data Lakehouse
- Delta Lake
- Apache Hive
- C++
- Cost-Based Optimizer
- Iceberg Summit Tencent Games Presentation
- Apache Paimon
- Lance (Podcast Episode)
- Delta Uniform
- Apache Arrow
- StarRocks Python UDF
- Debezium (Podcast Episode)

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
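Because StarRocks speaks the MySQL wire protocol, a stock MySQL client can query it. The sketch below uses pymysql and assumes the commonly documented frontend query port (9030); the host, credentials, and table names are placeholders. The join-at-query-time shape is the denormalization-free pattern discussed in the episode.

```python
# A hedged sketch of querying StarRocks over its MySQL-compatible
# protocol. Connection details and schema are placeholders.
import pymysql

conn = pymysql.connect(host="starrocks-fe.example.com", port=9030,
                       user="root", password="", database="analytics")
try:
    with conn.cursor() as cur:
        # Join fact and dimension tables at query time instead of
        # maintaining a pre-joined, denormalized copy.
        cur.execute("""
            SELECT d.region, SUM(f.amount) AS revenue
            FROM fct_orders AS f
            JOIN dim_customers AS d ON f.customer_id = d.customer_id
            GROUP BY d.region
            ORDER BY revenue DESC
        """)
        for region, revenue in cur.fetchall():
            print(region, revenue)
finally:
    conn.close()
```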

Advanced Lakehouse Management With The LakeKeeper Iceberg REST Catalog (57:13)
Summary
In this episode of the Data Engineering Podcast Viktor Kessler, co-founder of Vakmo, talks about the architectural patterns in the lakehouse enabled by a fast and feature-rich Iceberg catalog. Viktor shares his journey from data warehouses to developing the open-source project, Lakekeeper, an Apache Iceberg REST catalog written in Rust that facilitates building lakehouses with essential components like storage, compute, and catalog management. He discusses the importance of metadata in making data actionable, the evolution of data catalogs, and the challenges and innovations in the space, including integration with OpenFGA for fine-grained access control and managing data across formats and compute engines.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Viktor Kessler about architectural patterns in the lakehouse that are unlocked by a fast and feature-rich Iceberg catalog

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what LakeKeeper is and the story behind it?
- What is the core of the problem that you are addressing?
- There has been a lot of activity in the catalog space recently. What are the driving forces that have highlighted the need for a better metadata catalog in the data lake/distributed data ecosystem?
- How would you characterize the feature sets/problem spaces that different entrants are focused on addressing?
- Iceberg as a table format has gained a lot of attention and adoption across the data ecosystem. The REST catalog format has opened the door for numerous implementations. What are the opportunities for innovation and improving user experience in that space?
- What is the role of the catalog in managing security and governance? (AuthZ, auditing, etc.)
- What are the channels for propagating identity and permissions to compute engines? (how do you avoid head-scratching about permission-denied situations?)
- Can you describe how LakeKeeper is implemented?
- How have the design and goals of the project changed since you first started working on it?
- For someone who has an existing set of Iceberg tables and catalog, what does the migration process look like?
- What new workflows or capabilities does LakeKeeper enable for data teams using Iceberg tables across one or more compute frameworks?
- What are the most interesting, innovative, or unexpected ways that you have seen LakeKeeper used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on LakeKeeper?
- When is LakeKeeper the wrong choice?
- What do you have planned for the future of LakeKeeper?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- LakeKeeper
- SAP
- Microsoft Access
- Microsoft Excel
- Apache Iceberg (Podcast Episode)
- Iceberg REST Catalog
- PyIceberg
- Spark
- Trino
- Dremio
- Hive Metastore
- Hadoop
- NATS
- Polars
- DuckDB (Podcast Episode)
- DataFusion
- Atlan (Podcast Episode)
- Open Metadata (Podcast Episode)
- Apache Atlas
- OpenFGA
- Hudi (Podcast Episode)
- Delta Lake (Podcast Episode)
- Lance Table Format (Podcast Episode)
- Unity Catalog
- Polaris Catalog
- Apache Gravitino (Podcast Episode)
- Keycloak
- Open Policy Agent (OPA)
- Apache Ranger
- Apache NiFi

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
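As an illustration of what an Iceberg REST catalog enables, here is a hedged PyIceberg sketch pointing at a REST endpoint such as Lakekeeper's. The URI, warehouse name, and table identifier are placeholders, and authentication options will vary by deployment.

```python
# A hedged sketch of connecting PyIceberg to an Iceberg REST catalog.
# The endpoint, warehouse, and table names are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "lakekeeper",
    **{
        "type": "rest",
        "uri": "http://localhost:8181/catalog",
        "warehouse": "demo",
    },
)

# The REST spec makes the catalog engine-agnostic: the same endpoints
# serve Spark, Trino, DuckDB, or this Python client.
print(catalog.list_namespaces())
table = catalog.load_table("analytics.events")
print(table.scan(limit=10).to_arrow())
```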

Simplifying Data Pipelines with Durable Execution (39:49)
Summary
In this episode of the Data Engineering Podcast Jeremy Edberg, CEO of DBOS, talks about durable execution and its impact on designing and implementing business logic for data systems. Jeremy explains how DBOS's serverless platform and orchestrator provide local resilience and reduce operational overhead, ensuring exactly-once execution in distributed systems through the use of the Transact library. He discusses the importance of version management in long-running workflows and how DBOS simplifies system design by reducing infrastructure needs like queues and CI pipelines, making it beneficial for data pipelines, AI workloads, and agentic AI.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Jeremy Edberg about durable execution and how it influences the design and implementation of business logic

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what DBOS is and the story behind it?
- What is durable execution?
- What are some of the notable ways that inclusion of durable execution in an application architecture changes the ways that the rest of the application is implemented? (e.g. error handling, logic flow, etc.)
- Many data pipelines involve complex, multi-step workflows. How does DBOS simplify the creation and management of resilient data pipelines?
- How does durable execution impact the operational complexity of data management systems?
- One of the complexities in durable execution is managing code/data changes to workflows while existing executions are still processing. What are some of the useful patterns for addressing that challenge and how does DBOS help?
- Can you describe how DBOS is architected?
- How have the design and goals of the system changed since you first started working on it?
- What are the characteristics of Postgres that make it suitable for the persistence mechanism of DBOS?
- What are the guiding principles that you rely on to determine the boundaries between the open source and commercial elements of DBOS?
- What are the most interesting, innovative, or unexpected ways that you have seen DBOS used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on DBOS?
- When is DBOS the wrong choice?
- What do you have planned for the future of DBOS?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- DBOS
- Exactly Once Semantics
- Temporal
- Semaphore
- Postgres
- DBOS Transact
- Python
- TypeScript
- Idempotency Keys
- Agentic AI
- State Machine
- YugabyteDB (Podcast Episode)
- CockroachDB
- Supabase
- Neon (Podcast Episode)
- Airflow

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
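To clarify the concept of durable execution itself, the sketch below checkpoints each step's result so that re-running a workflow after a crash skips completed steps. This is a conceptual illustration, not the DBOS API: all names are invented, and SQLite stands in for the Postgres-backed state store described in the episode.

```python
# A conceptual sketch of durable execution: every step's result is
# checkpointed, so replaying a workflow resumes where it left off.
import json
import sqlite3

db = sqlite3.connect("checkpoints.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS steps "
    "(wf TEXT, step TEXT, result TEXT, PRIMARY KEY (wf, step))"
)

def durable_step(wf_id: str, name: str, fn, *args):
    """Run fn once per (workflow, step); a replay returns the saved result."""
    row = db.execute(
        "SELECT result FROM steps WHERE wf=? AND step=?", (wf_id, name)
    ).fetchone()
    if row:
        return json.loads(row[0])  # already done: replay from checkpoint
    result = fn(*args)
    db.execute("INSERT INTO steps VALUES (?, ?, ?)", (wf_id, name, json.dumps(result)))
    db.commit()
    return result

def pipeline(wf_id: str):
    raw = durable_step(wf_id, "extract", lambda: [1, 2, 3])
    clean = durable_step(wf_id, "transform", lambda xs: [x * 10 for x in xs], raw)
    return durable_step(wf_id, "load", lambda xs: sum(xs), clean)

print(pipeline("run-2025-06-01"))  # safe to re-invoke after a crash
```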

Overcoming Redis Limitations: The Dragonfly DB Approach (43:58)
Summary In this episode of the Data Engineering Podcast Roman Gershman, CTO and founder of Dragonfly DB, explores the development and impact of high-speed in-memory databases. Roman shares his experience creating a more efficient alternative to Redis, focusing on performance gains, scalability, and cost efficiency, while addressing limitations such as high throughput and low latency scenarios. He explains how Dragonfly DB solves operational complexities for users and delves into its technical aspects, including maintaining compatibility with Redis while innovating on memory efficiency. Roman discusses the importance of cost efficiency and operational simplicity in driving adoption and shares insights on the broader ecosystem of in-memory data stores, future directions like SSD tiering and vector search capabilities, and the lessons learned from building a new database engine. Announcements Hello and welcome to the Data Engineering Podcast, the show about modern data management Data migrations are brutal. They drag on for months—sometimes years—burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details. Your host is Tobias Macey and today I'm interviewing Roman Gershman about building a high-speed in-memory database and the impact of the performance gains on data applications Interview Introduction How did you get involved in the area of data management? Can you describe what DragonflyDB is and the story behind it? What is the core problem/use case that is solved by making a "faster Redis"? The other major player in the high performance key/value database space is Aerospike. What are the heuristics that an engineer should use to determine whether to use that vs. Dragonfly/Redis? Common use cases for Redis involve application caches and queueing (e.g. Celery/RQ). What are some of the other applications that you have seen Redis/Dragonfly used for, particularly in data engineering use cases? There is a piece of tribal wisdom that it takes 10 years for a database to iron out all of the kinks. At the same time, there have been substantial investments in commoditizing the underlying components of database engines. Can you describe how you approached the implementation of DragonflyDB to arive at a functional and reliable implementation? What are the architectural elements that contribute to the performance and scalability benefits of Dragonfly? How have the design and goals of the system changed since you first started working on it? For teams who migrate from Redis to Dragonfly, beyond the cost savings what are some of the ways that it changes the ways that they think about their overall system design? What are the most interesting, innovative, or unexpected ways that you have seen Dragonfly used? What are the most interesting, unexpected, or challenging lessons that you have learned while working on DragonflyDB? When is DragonflyDB the wrong choice? What do you have planned for the future of DragonflyDB? Contact Info GitHub LinkedIn Parting Question From your perspective, what is the biggest gap in the tooling or technology for data management today? 
Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- DragonflyDB
- Redis
- Elasticache
- ValKey
- Aerospike
- Laravel
- Sidekiq
- Celery
- Seastar Framework
- Shared-Nothing Architecture
- io_uring
- midi-redis
- Dunning-Kruger Effect
- Rust

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
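Because Dragonfly keeps wire-protocol compatibility with Redis, existing client libraries can usually be pointed at a Dragonfly endpoint without code changes. Below is a minimal sketch using the redis-py client; the host, port, and key names are illustrative assumptions rather than details from the episode.

    import redis

    # Connect exactly as you would to Redis; only the endpoint differs.
    # "localhost" and 6379 are assumptions for a locally running instance.
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Cache-style usage, one of the common patterns mentioned above.
    r.set("session:42", "alice", ex=3600)  # expires after one hour
    print(r.get("session:42"))

    # Queue-style usage, the pattern underlying brokers like Celery/RQ.
    r.lpush("jobs", "task-1")
    print(r.brpop("jobs", timeout=1))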

Bringing AI Into The Inner Loop of Data Engineering With Ascend (52:47)
Summary

In this episode of the Data Engineering Podcast Sean Knapp, CEO of Ascend.io, explores the intersection of AI and data engineering. He discusses the evolution of data engineering and the role of AI in automating processes, alleviating burdens on data engineers, and enabling them to focus on complex tasks and innovation. The conversation covers the challenges and opportunities presented by AI, including the need for intelligent tooling and its potential to streamline data engineering processes. Sean and Tobias also delve into the impact of generative AI on data engineering, highlighting its ability to accelerate development, improve governance, and enhance productivity, while also noting the current limitations and future potential of AI in the field.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Sean Knapp about how Ascend is incorporating AI into their platform to help you keep up with the rapid rate of change

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you describe what Ascend is and the story behind it?
- The last time we spoke was August of 2022. What are the most notable or interesting evolutions in your platform since then?
- In that same time "AI" has taken up all of the oxygen in the data ecosystem. How has that impacted the ways that you and your customers think about their priorities?
- The introduction of AI as an API has caused many organizations to try and leap-frog their data maturity journey and jump straight to building with advanced capabilities. How is that impacting the pressures and priorities felt by data teams?
- At the same time that AI-focused product goals are straining data teams' capacities, AI also has the potential to act as an accelerator to their work. What are the roadblocks/speedbumps that are in the way of that capability?
- Many data teams are incorporating AI tools into parts of their workflow, but it can be clunky and cumbersome. How are you thinking about the fundamental changes in how your platform works with AI at its center?
- Can you describe the technical architecture that you have evolved toward that allows for AI to drive the experience rather than being a bolt-on?
- What are the concrete impacts that these new capabilities have on teams who are using Ascend?
- What are the most interesting, innovative, or unexpected ways that you have seen Ascend + AI used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on incorporating AI into the core of Ascend?
- When is Ascend the wrong choice?
- What do you have planned for the future of AI in Ascend?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Ascend
- Cursor AI Code Editor
- Devin
- GitHub Copilot
- OpenAI DeepResearch
- S3 Tables
- AWS Glue
- AWS Bedrock
- Snowpark
- Co-Intelligence: Living and Working with AI by Ethan Mollick (affiliate link)
- OpenAI o3

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…

Astronomer's Role in the Airflow Ecosystem: A Deep Dive with Pete DeJoy (51:41)
Summary

In this episode of the Data Engineering Podcast Pete DeJoy, co-founder and product lead at Astronomer, talks about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3. Pete shares his journey into data engineering, discusses Astronomer's contributions to the Airflow project, and highlights the critical role of Airflow in powering operational data products. He covers the evolution of Airflow, its position in the data ecosystem, and the challenges faced by data engineers, including infrastructure management and observability. The conversation also touches on the upcoming Airflow 3 release, which introduces data awareness, architectural improvements, and multi-language support, and Astronomer's observability suite, Astro Observe, which provides insights and proactive recommendations for Airflow users.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Pete DeJoy about building and managing Airflow pipelines on Astronomer and the upcoming improvements in Airflow 3

Interview
- Introduction
- Can you describe what Astronomer is and the story behind it?
- How would you characterize the relationship between Airflow and Astronomer?
- Astronomer just released your State of Airflow 2025 Report yesterday, and it is the largest data engineering survey ever, with over 5,000 respondents. Can you talk a bit about the top-level findings in the report?
- What about the overall growth of the Airflow project over time?
- How have the focus and features of Astronomer changed since it was last featured on the show in 2017?
- Astro Observe went GA in early February; what does the addition of pipeline observability mean for your customers?
- What are other capabilities similar in scope to observability that Astronomer is looking at adding to the platform?
- Why is Airflow so critical in providing an elevated observability (or cataloging, or something similar) experience in a DataOps platform?
- What are the notable evolutions in the Airflow project and ecosystem in that time?
- What are the core improvements that are planned for Airflow 3.0?
- What are the most interesting, innovative, or unexpected ways that you have seen Astro used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Airflow and Astro?
- What do you have planned for the future of Astro/Astronomer/Airflow?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Astronomer
- Airflow
- Maxime Beauchemin
- MongoDB
- Databricks
- Confluent
- Spark
- Kafka
- Dagster
  - Podcast Episode
- Prefect
- Airflow 3
- The Rise of the Data Engineer blog post
- dbt
- Jupyter Notebook
- Zapier
- cosmos library for dbt in Airflow
- Ruff
- Airflow Custom Operator
- Snowflake

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
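To ground the discussion of pipeline authoring above, here is a minimal sketch of a TaskFlow-style Airflow DAG of the kind that runs on Astronomer; the schedule, names, and task bodies are illustrative assumptions rather than anything from the episode.

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def example_pipeline():
        @task
        def extract() -> list[int]:
            # Stand-in for pulling records from a source system.
            return [1, 2, 3]

        @task
        def load(records: list[int]) -> None:
            # Stand-in for writing to a warehouse table.
            print(f"loaded {len(records)} records")

        # TaskFlow infers the dependency graph from the function calls.
        load(extract())

    example_pipeline()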

Accelerated Computing in Modern Data Centers With Datapelago (55:36)
Summary

In this episode of the Data Engineering Podcast Rajan Goyal, CEO and co-founder of Datapelago, talks about improving efficiencies in data processing by reimagining system architecture. Rajan explains the shift from hyperconverged to disaggregated and composable infrastructure, highlighting the importance of accelerated computing in modern data centers. He discusses the evolution from proprietary to open, composable stacks, emphasizing the role of open table formats and the need for a universal data processing engine, and outlines Datapelago's strategy to leverage existing frameworks like Spark and Trino while providing accelerated computing benefits.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Rajan Goyal about how to drastically improve efficiencies in data processing by re-imagining the system architecture

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining the main factors that contribute to performance challenges in data lake environments?
- The different components of open data processing systems have evolved from different starting points with different objectives. In your experience, how has that un-planned and un-synchronized evolution of the ecosystem hindered the capabilities and adoption of open technologies?
- The introduction of a new cross-cutting capability (e.g. Iceberg) has typically taken a substantial amount of time to gain support across different engines and ecosystems. What do you see as the point of highest leverage to improve the capabilities of the entire stack with the least amount of co-ordination?
- What was the motivating insight that led you to invest in the technology that powers Datapelago?
- Can you describe the system design of Datapelago and how it integrates with existing data engines?
- The growth in the generation and application of unstructured data is a notable shift in the work being done by data teams. What are the areas of overlap in the fundamental nature of data (whether structured, semi-structured, or unstructured) that you are able to exploit to bridge the processing gap?
- What are the most interesting, innovative, or unexpected ways that you have seen Datapelago used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on Datapelago?
- When is Datapelago the wrong choice?
- What do you have planned for the future of Datapelago?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?
Links
- Datapelago
- MIPS Architecture
- ARM Architecture
- AWS Nitro
- Mellanox
- Nvidia
- Von Neumann Architecture
- TPU == Tensor Processing Unit
- FPGA == Field-Programmable Gate Array
- Spark
- Trino
- Iceberg
  - Podcast Episode
- Delta Lake
  - Podcast Episode
- Hudi
  - Podcast Episode
- Apache Gluten
- Intermediate Representation
- Turing Completeness
- LLVM
- Amdahl's Law
- LSTM == Long Short-Term Memory

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
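A recurring theme above is that an accelerated engine is most useful when user-facing code does not change. As a hedged illustration, the PySpark job below uses only the standard DataFrame API, the level beneath which an execution layer such as the one described here would plug in; the path and column names are invented for the example.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("unchanged-job").getOrCreate()

    # The same DataFrame API regardless of what executes underneath;
    # acceleration layers slot in below this level of abstraction.
    df = spark.read.parquet("s3://example-bucket/events/")  # illustrative path
    (df.groupBy("event_type")
       .count()
       .orderBy("count", ascending=False)
       .show())

    spark.stop()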

The Future of Data Engineering: AI, LLMs, and Automation (59:39)
Summary

In this episode of the Data Engineering Podcast Gleb Mezhanskiy, CEO and co-founder of DataFold, talks about the intersection of AI and data engineering. He discusses the challenges and opportunities of integrating AI into data engineering, particularly using large language models (LLMs) to enhance productivity and reduce manual toil. The conversation covers the potential of AI to transform data engineering tasks, such as text-to-SQL interfaces and creating semantic graphs to improve data accessibility, and explores practical applications of LLMs in automating code reviews, testing, and understanding data lineage.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Gleb Mezhanskiy about the intersection of AI and data engineering

Interview
- Introduction
- How did you get involved in the area of data management?
- The modern data stack is dead
- Where is AI in the data stack?
- "Buy our tool to ship AI"
- Opportunities for LLMs in the data engineering workflow

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Datafold
- Copilot
- Cursor IDE
- AI Agents
- DataChat
  - AI Engineering Podcast Episode
- Metrics Layer
- Emacs
- LangChain
- LangGraph
- CrewAI

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
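To make the text-to-SQL idea concrete, here is a minimal sketch using the OpenAI Python client; the model name, schema, and prompts are assumptions for illustration, and a production system would add the validation and lineage checks discussed in the episode.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A hypothetical schema passed as context; real systems would pull
    # this from a catalog or semantic layer instead of hard-coding it.
    schema = "orders(order_id INT, customer_id INT, total NUMERIC, created_at DATE)"
    question = "What was total revenue per customer in 2024?"

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": f"Write a single SQL query for this schema: {schema}"},
            {"role": "user", "content": question},
        ],
    )
    print(response.choices[0].message.content)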

Evolving Responsibilities in AI Data Management (38:57)
Summary

In this episode of the Data Engineering Podcast Bartosz Mikulski talks about preparing data for AI applications. Bartosz shares his journey from data engineering to MLOps and emphasizes the importance of data testing over software development in AI contexts. He discusses the types of data assets required for AI applications, including extensive test datasets, especially in generative AI, and explains the differences in data requirements for various AI application styles. The conversation also explores the skills data engineers need to transition into AI, such as familiarity with vector databases and new data modeling strategies, and highlights the challenges of evolving AI applications, including frequent reprocessing of data when changing chunking strategies or embedding models.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Bartosz Mikulski about how to prepare data for use in AI applications

Interview
- Introduction
- How did you get involved in the area of data management?
- Can you start by outlining some of the main categories of data assets that are needed for AI applications?
- How does the nature of the application change those requirements? (e.g. RAG app vs. agent, etc.)
- How do the different assets map to the stages of the application lifecycle?
- What are some of the common roles and divisions of responsibility that you see in the construction and operation of a "typical" AI application?
- For data engineers who are used to data warehousing/BI, what are the skills that map to AI apps?
- What are some of the data modeling patterns that are needed to support AI apps?
  - chunking strategies
  - metadata management
- What are the new categories of data that data engineers need to manage in the context of AI applications?
  - agent memory generation/evolution
  - conversation history management
  - data collection for fine tuning
- What are some of the notable evolutions in the space of AI applications and their patterns that have happened in the past ~1-2 years that relate to the responsibilities of data engineers?
- What are some of the skills gaps that teams should be aware of and identify training opportunities for?
- What are the most interesting, innovative, or unexpected ways that you have seen data teams address the needs of AI applications?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI applications and their reliance on data?
- What are some of the emerging trends that you are paying particular attention to?

Contact Info
- Website
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- Spark
- Ray
- Chunking Strategies
- Hypothetical document embeddings
- Model Fine Tuning
- Prompt Compression

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
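To illustrate the chunking strategies mentioned above, the sketch below implements a naive fixed-size chunker with overlap in plain Python; the sizes are arbitrary defaults, and changing them (or the embedding model) is exactly the kind of decision that forces the reprocessing described in the episode.

    def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
        """Split text into fixed-size character windows with overlap.

        Overlap keeps context that straddles a boundary retrievable
        from both neighboring chunks.
        """
        if overlap >= size:
            raise ValueError("overlap must be smaller than chunk size")
        step = size - overlap
        return [text[start:start + size] for start in range(0, len(text), step)]

    # Changing size/overlap (or the embedding model) invalidates stored
    # vectors, so every chunk must be re-embedded and re-indexed.
    print(len(chunk_text("some long document " * 200)))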

CSVs Will Never Die And OneSchema Is Counting On It (54:40)
Summary

In this episode of the Data Engineering Podcast Andrew Luo, CEO of OneSchema, talks about handling CSV data in business operations. Andrew shares his background in data engineering and CRM migration, which led to the creation of OneSchema, a platform designed to automate CSV imports and improve data validation processes. He discusses the challenges of working with CSVs, including inconsistent type representation, lack of schema information, and technical complexities, and explains how OneSchema addresses these issues using multiple CSV parsers and AI for data type inference and validation. Andrew highlights the business case for OneSchema, emphasizing efficiency gains for companies dealing with large volumes of CSV data, and shares plans to expand support for other data formats and integrate AI-driven transformation packs for specific industries.

Announcements
- Hello and welcome to the Data Engineering Podcast, the show about modern data management
- Data migrations are brutal. They drag on for months, sometimes years, burning through resources and crushing team morale. Datafold's AI-powered Migration Agent changes all that. Their unique combination of AI code translation and automated data validation has helped companies complete migrations up to 10 times faster than manual approaches. And they're so confident in their solution, they'll actually guarantee your timeline in writing. Ready to turn your year-long migration into weeks? Visit dataengineeringpodcast.com/datafold today for the details.
- Your host is Tobias Macey and today I'm interviewing Andrew Luo about how OneSchema addresses the headaches of dealing with CSV data for your business

Interview
- Introduction
- How did you get involved in the area of data management?
- Despite the years of evolution and improvement in data storage and interchange formats, CSVs are just as prevalent as ever. What are your opinions/theories on why they are so ubiquitous?
- What are some of the major sources of CSV data for teams that rely on them for business and analytical processes?
- The most obvious challenge with CSVs is their lack of type information, but they are notorious for having numerous other problems. What are some of the other major challenges involved with using CSVs for data interchange/ingestion?
- Can you describe what you are building at OneSchema and the story behind it?
- What are the core problems that you are solving, and for whom?
- Can you describe how you have architected your platform to be able to manage the variety, volume, and multi-tenancy of data that you process?
- How have the design and goals of the product changed since you first started working on it?
- What are some of the major performance issues that you have encountered while dealing with CSV data at scale?
- What are some of the most surprising things that you have learned about CSVs in the process of building OneSchema?
- What are the most interesting, innovative, or unexpected ways that you have seen OneSchema used?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on OneSchema?
- When is OneSchema the wrong choice?
- What do you have planned for the future of OneSchema?

Contact Info
- LinkedIn

Parting Question
- From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows.
- Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used.
- The AI Engineering Podcast is your guide to the fast-moving world of building AI systems.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email hosts@dataengineeringpodcast.com with your story.

Links
- OneSchema
- EDI == Electronic Data Interchange
- UTF-8 BOM (Byte Order Mark) Characters
- SOAP
- CSV RFC
- Iceberg
- SSIS == SQL Server Integration Services
- MS Access
- Datafusion
- JSON Schema
- SFTP == Secure File Transfer Protocol

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA…
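To make the missing-schema problem concrete, here is a toy sketch of heuristic type inference over CSV columns in plain Python; a production importer like the one described here must also handle encodings, BOMs, locale-specific numbers, and mixed-type columns.

    import csv
    import io

    def infer_type(values: list[str]) -> str:
        """Guess a column type from string samples; deliberately naive."""
        def all_parse(cast) -> bool:
            try:
                for v in values:
                    cast(v)
                return True
            except ValueError:
                return False
        if all_parse(int):
            return "integer"
        if all_parse(float):
            return "float"
        return "string"

    # CSVs carry no schema: every field arrives as a string.
    raw = "id,amount,name\n1,9.99,alice\n2,12.50,bob\n"
    rows = list(csv.DictReader(io.StringIO(raw)))
    for col in rows[0]:
        print(col, infer_type([r[col] for r in rows]))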