#012 Serverless Data Orchestration, AI in the Data Stack, AI Pipelines
In this episode, Nicolay sits down with Hugo Lu, founder and CEO of Orchestra, a modern data orchestration platform. As data pipelines and analytics workflows become increasingly complex, spanning multiple teams, tools and cloud services, the need for unified orchestration and visibility has never been greater.
Orchestra is a serverless data orchestration tool that aims to provide a unified control plane for managing data pipelines, infrastructure, and analytics across an organization's modern data stack.
The core architecture has users build pipelines as code, which then run on Orchestra's serverless infrastructure. It can orchestrate tasks such as data ingestion, transformation, and AI calls, as well as monitoring and analytics on data products, all with end-to-end visibility, data lineage, and governance, even when an organization's data architecture is scattered and modular across teams and tools.
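As a rough illustration of the pipeline-as-code idea described above, a minimal sketch in plain Python might look like the following. The `Pipeline` and `Task` names are assumptions for illustration only, not Orchestra's actual API.

```python
# Minimal sketch of "pipelines as code"; names are hypothetical, not Orchestra's API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    name: str
    run: Callable[[], None]
    depends_on: list[str] = field(default_factory=list)

@dataclass
class Pipeline:
    name: str
    tasks: list[Task]

    def execute(self) -> None:
        # A real orchestrator would resolve the dependency graph and run
        # each task on serverless workers; here we simply run in order.
        for task in self.tasks:
            print(f"running {task.name}")
            task.run()

pipeline = Pipeline(
    name="daily_analytics",
    tasks=[
        Task("ingest_orders", run=lambda: print("load from source")),
        Task("transform_orders", run=lambda: print("run transformation model"),
             depends_on=["ingest_orders"]),
        Task("enrich_with_llm", run=lambda: print("call an LLM for enrichment"),
             depends_on=["transform_orders"]),
    ],
)

if __name__ == "__main__":
    pipeline.execute()
```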
Key Quotes:
- Find the right level of abstraction when building data orchestration tasks/workflows (see the sketch after these quotes). "I think the right level of abstraction is always good. I think like Prefect do this really well, right? Their big sell was, just put a decorator on a function and it becomes a task. That is a great idea. You know, just make tasks modular and have them do all the boilerplate stuff like error logging, monitoring of data, all of that stuff."
- Modularize data pipeline components: "It's just around understanding what that dev workflow should look like. I think it should be a bit more modular." A modular architecture in which components such as data ingestion, transformation, and model training are decoupled allows for better flexibility and scalability.
- Adopt a streaming/event-driven architecture for low-latency AI use cases: "If you've got an event-driven architecture, then, you know, that's not what you use an orchestration tool for...if you're having a conversation with a chatbot, like, you know, you're sending messages, you're sending events, you're getting a response back. That I would argue should be dealt with by microservices."
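The Prefect decorator pattern Hugo references looks roughly like this. This is a minimal sketch; the task names, retry count, and data are illustrative rather than taken from the episode.

```python
# Minimal sketch of the Prefect decorator pattern referenced above;
# task names and settings are illustrative.
from prefect import flow, task

@task(retries=2)
def ingest() -> list[dict]:
    print("pulling rows from a source system")
    return [{"id": 1}, {"id": 2}]

@task
def transform(rows: list[dict]) -> list[dict]:
    return [{**row, "processed": True} for row in rows]

@flow
def daily_pipeline():
    rows = ingest()
    transform(rows)

if __name__ == "__main__":
    daily_pipeline()
```

Behind the decorator, Prefect handles retries, logging, and state tracking, which is the kind of boilerplate the quote refers to; the same modular tasks can then be composed into the decoupled ingestion and transformation steps mentioned in the second quote.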
Hugo Lu:
Nicolay Gerold:
00:00 Introduction to Orchestra and its Focus on Data Products
08:03 Unified Control Plane for Data Stack and End-to-End Control
14:42 Use Cases and Unique Applications of Orchestra
19:31 Retaining Existing Dev Workflows and Best Practices in Orchestra
22:23 Event-Driven Architectures and Monitoring in Orchestra
23:49 Putting Data Products First and Monitoring Health and Usage
25:40 The Future of Data Orchestration: Stream-Based and Cost-Effective
data orchestration, Orchestra, serverless architecture, versatility, use cases, maturity levels, challenges, AI workloads
All 63 episodes
#056 Building Solo: How One Engineer Uses AI Agents to Ship Production Code 1:12:24
#055 Embedding Intelligence: AI's Move to the Edge 1:05:35
#054 Building Frankenstein Models with Model Merging and the Future of AI 1:06:55
#053 AI in the Terminal: Enhancing Coding with Warp 1:04:30
#052 Don't Build Models, Build Systems That Build Models 59:22
#051 Build systems that can be debugged at 4am by tired humans with no context 1:05:51
#050 Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster 1:06:57
#050 TAKEAWAYS Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster 11:00
#049 BAML: The Programming Language That Turns LLMs into Predictable Functions 1:02:38
#049 TAKEAWAYS BAML: The Programming Language That Turns LLMs into Predictable Functions 1:12:34
#048 Why Your AI Agents Need Permission to Act, Not Just Read 57:02
#047 Architecting Information for Search, Humans, and Artificial Intelligence 57:21
#046 Building a Search Database From First Principles 53:28
#045 RAG As Two Things - Prompt Engineering and Search 1:02:43
#044 Graphs Aren't Just For Specialists Anymore 1:03:34
#043 Knowledge Graphs Won't Fix Bad Data 1:10:58
#042 Temporal RAG, Embracing Time for Smarter, Reliable Knowledge Graphs 1:33:43
#041 Context Engineering, How Knowledge Graphs Help LLMs Reason 1:33:34
#040 Vector Database Quantization, Product, Binary, and Scalar 52:11
#039 Local-First Search, How to Push Search To End-Devices 53:08
#038 AI-Powered Search, Context Is King, But Your RAG System Ignores Two-Thirds of It 1:14:23
#037 Chunking for RAG: Stop Breaking Your Documents Into Meaningless Pieces 49:12
#036 How AI Can Start Teaching Itself - Synthetic Data Deep Dive 48:10
#035 A Search System That Learns As You Use It (Agentic RAG) 45:29
#034 Rethinking Search Inside Postgres, From Lexemes to BM25 47:15
#033 RAG's Biggest Problems & How to Fix It (ft. Synthetic Data) 51:25
#032 Improving Documentation Quality for RAG Systems 46:36
#031 BM25 As The Workhorse Of Search; Vectors Are Its Visionary Cousin 54:04
#030 Vector Search at Scale, Why One Size Doesn't Fit All 36:25
#029 Search Systems at Scale, Avoiding Local Maxima and Other Engineering Lessons 54:46
#028 Training Multi-Modal AI, Inside the Jina CLIP Embedding Model 49:21
#027 Building the database for AI, Multi-modal AI, Multi-modal Storage 44:53
#026 Embedding Numbers, Categories, Locations, Images, Text, and The World 46:43
#025 Data Models to Remove Ambiguity from AI and Search 58:39
#024 How ColPali is Changing Information Retrieval 54:56
#023 The Power of Rerankers in Modern Search 42:28
#022 The Limits of Embeddings, Out-of-Domain Data, Long Context, Finetuning (and How We're Fixing It) 46:05
#021 The Problems You Will Encounter With RAG At Scale And How To Prevent (or fix) Them 50:08
#020 The Evolution of Search, Finding Search Signals, GenAI Augmented Retrieval 52:15
#019 Data-driven Search Optimization, Analysing Relevance 51:13
#018 Query Understanding: Doing The Work Before The Query Hits The Database 53:01
#017 Unlocking Value from Unstructured Data, Real-World Applications of Generative AI 36:27
#016 Data Processing for AI, Integrating AI into Data Pipelines, Spark 46:25