#030 Vector Search at Scale, Why One Size Doesn't Fit All
Ever wondered why your vector search becomes painfully slow after scaling past a million vectors? You're not alone - even tech giants struggle with this.
Charles Xie, founder of Zilliz (the company behind Milvus), shares how they solved vector database scaling challenges at 100B+ vector scale:
Key Insights:
- Multi-tier storage strategy (see the placement sketch after this list):
  - GPU memory (1% of data, fastest)
  - RAM (10% of data)
  - Local SSD
  - Object storage (slowest but cheapest)
- Real-time search solution (see the buffer sketch below):
  - New data goes to a buffer (searchable immediately)
  - Index builds in the background when the buffer fills
  - Results from the buffer and the main index are combined
- Performance optimization (see the nprobe sketch below):
  - GPU acceleration for 10k-50k queries/second
  - Customizable trade-offs between:
    - Cost
    - Latency
    - Search relevance
- Future developments:
  - Self-learning indices
  - Hybrid search methods (dense + sparse; see the fusion sketch further below)
  - Graph embedding support
  - ColBERT integration
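To make the tiering idea concrete, here is a minimal Python sketch of how a store might pin its hottest vectors to the fastest tiers and spill the rest downward. The tier fractions echo the numbers mentioned in the episode, but the class, the hotness ranking, and the placement policy are illustrative assumptions, not Milvus internals.

```python
from dataclasses import dataclass, field

# Hypothetical capacity fractions echoing the episode's numbers:
# ~1% of vectors in GPU memory, ~10% in RAM, the rest on local SSD / object storage.
TIERS = [("gpu", 0.01), ("ram", 0.10), ("ssd", 0.39), ("object_storage", 0.50)]

@dataclass
class TieredPlacement:
    total_vectors: int
    placement: dict = field(default_factory=dict)  # vector id -> tier name

    def assign(self, ids_by_hotness: list) -> None:
        """Pin the hottest vectors to the fastest tier, spill the rest downward."""
        start = 0
        for tier, fraction in TIERS:
            count = int(self.total_vectors * fraction)
            for vid in ids_by_hotness[start:start + count]:
                self.placement[vid] = tier
            start += count
        # Anything left over from rounding lands in the cheapest tier.
        for vid in ids_by_hotness[start:]:
            self.placement[vid] = "object_storage"

# Usage: one million vector ids, ranked hottest-first by recent query traffic.
ids = list(range(1_000_000))
plan = TieredPlacement(total_vectors=len(ids))
plan.assign(ids)
print(plan.placement[0], plan.placement[50_000], plan.placement[900_000])
# -> gpu ram object_storage
```

In practice the hotness ranking would be driven by query statistics, and vectors would migrate between tiers as access patterns change.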
Perfect for teams hitting scaling walls with their current vector search implementation or planning for future growth.
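The "searchable immediately" behaviour can be pictured as a small write buffer that is scanned exactly, sitting next to a sealed main index. The sketch below models that flow in plain Python; the brute-force arrays stand in for a real ANN index, and in a real system the buffer seal would run in a background worker. It is a hypothetical simplification of the approach described, not Milvus code.

```python
import numpy as np

class BufferedVectorIndex:
    """Toy model of buffer-then-index ingestion: fresh vectors are searchable
    immediately via a brute-force scan of the buffer; when the buffer fills it is
    'sealed' into the main index (a real system builds that index in the background)."""

    def __init__(self, dim: int, buffer_limit: int = 1_000):
        self.dim, self.buffer_limit = dim, buffer_limit
        self.buffer: list[tuple[int, np.ndarray]] = []
        self.main_vecs = np.empty((0, dim), dtype=np.float32)  # stands in for a built ANN index
        self.main_ids: list[int] = []

    def insert(self, vec_id: int, vec) -> None:
        self.buffer.append((vec_id, np.asarray(vec, dtype=np.float32)))
        if len(self.buffer) >= self.buffer_limit:
            self._seal_buffer()

    def _seal_buffer(self) -> None:
        ids, vecs = zip(*self.buffer)
        self.main_ids.extend(ids)
        self.main_vecs = np.vstack([self.main_vecs, np.stack(vecs)])
        self.buffer.clear()

    def search(self, query, k: int = 5) -> list:
        """Search both the buffer and the main index, then merge by distance."""
        q = np.asarray(query, dtype=np.float32)
        hits = [(float(((v - q) ** 2).sum()), i) for i, v in self.buffer]
        if self.main_ids:
            dist = ((self.main_vecs - q) ** 2).sum(axis=1)
            hits += list(zip(dist.tolist(), self.main_ids))
        return [i for _, i in sorted(hits)[:k]]

# Usage: a vector is searchable the moment it is inserted, before any index build.
idx = BufferedVectorIndex(dim=4, buffer_limit=3)
idx.insert(1, [0.1, 0.2, 0.3, 0.4])
print(idx.search([0.1, 0.2, 0.3, 0.4], k=1))  # -> [1]
```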
Worth watching if you're building production search systems or need to optimize costs vs performance.
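One common knob behind the latency-versus-relevance trade-off is how much of a clustered (IVF-style) index a query probes: probing more lists raises recall but scans more vectors. The NumPy sketch below uses a deliberately crude coarse quantizer (randomly chosen centroids, one assignment pass) just to show the effect of an `nprobe`-style parameter; it illustrates the general idea, not how Milvus implements its indexes.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_vectors, n_lists = 64, 20_000, 128
data = rng.standard_normal((n_vectors, dim)).astype(np.float32)

# Crude coarse quantizer: random centroids, one assignment pass (no real k-means).
centroids = data[rng.choice(n_vectors, n_lists, replace=False)]
d2 = (data ** 2).sum(1)[:, None] + (centroids ** 2).sum(1)[None, :] - 2 * data @ centroids.T
assignments = d2.argmin(axis=1)
inverted_lists = {c: np.where(assignments == c)[0] for c in range(n_lists)}

def ivf_search(query, nprobe, k=10):
    """Probe only the nprobe closest lists; larger nprobe = better recall, more work."""
    centroid_dist = ((centroids - query) ** 2).sum(1)
    probed = np.argsort(centroid_dist)[:nprobe]
    candidates = np.concatenate([inverted_lists[c] for c in probed])
    dist = ((data[candidates] - query) ** 2).sum(1)
    return candidates[np.argsort(dist)[:k]]

query = rng.standard_normal(dim).astype(np.float32)
exact_top10 = np.argsort(((data - query) ** 2).sum(1))[:10]
for nprobe in (1, 4, 16, 64):
    approx = ivf_search(query, nprobe)
    recall = len(set(exact_top10.tolist()) & set(approx.tolist())) / 10
    print(f"nprobe={nprobe:3d}  recall@10={recall:.2f}")
```

Cost enters the same equation: lower recall targets let you probe less, quantize more aggressively, or keep more of the index on cheaper storage.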
Charles Xie:
Nicolay Gerold:
00:00 Introduction to Search System Challenges
00:26 Introducing Milvus: The Open Source Vector Database
00:58 Interview with Charles: Founder of Zilliz
02:20 Scalability and Performance in Vector Databases
03:35 Challenges in Distributed Systems
05:46 Data Consistency and Real-Time Search
12:12 Hierarchical Storage and GPU Acceleration
18:34 Emerging Technologies in Vector Search
23:21 Self-Learning Indexes and Future Innovations
28:44 Key Takeaways and Conclusion
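For the hybrid (dense + sparse) direction listed under future developments, a simple way to combine a vector-search ranking with a BM25-style ranking is reciprocal rank fusion. The snippet below is a generic, self-contained example of that fusion step; the document ids and the choice of RRF are illustrative, not a description of Milvus's hybrid search.

```python
def reciprocal_rank_fusion(dense_ranked, sparse_ranked, k=60):
    """Merge two ranked lists of document ids; k=60 is the usual RRF damping constant."""
    scores = {}
    for ranking in (dense_ranked, sparse_ranked):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["d7", "d2", "d9", "d4"]   # e.g. ids from embedding (dense) search
sparse = ["d2", "d5", "d7", "d1"]  # e.g. ids from BM25 (sparse) search
print(reciprocal_rank_fusion(dense, sparse))
# -> ['d2', 'd7', 'd5', 'd9', 'd4', 'd1']  (docs found by both retrievers rise to the top)
```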
61 episodes
All episodes
Maxime Labonne on Model Merging, AI Trends, and Beyond 1:06:55
#053 AI in the Terminal: Enhancing Coding with Warp 1:04:30
#052 Don't Build Models, Build Systems That Build Models 59:22
#051 Build systems that can be debugged at 4am by tired humans with no context 1:05:51
#050 Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster 1:06:57
#050 TAKEAWAYS Bringing LLMs to Production: Delete Frameworks, Avoid Finetuning, Ship Faster 11:00
#049 BAML: The Programming Language That Turns LLMs into Predictable Functions 1:02:38
#049 TAKEAWAYS BAML: The Programming Language That Turns LLMs into Predictable Functions 1:12:34
#048 Why Your AI Agents Need Permission to Act, Not Just Read 57:02
#047 Architecting Information for Search, Humans, and Artificial Intelligence 57:21
#046 Building a Search Database From First Principles 53:28
#045 RAG As Two Things - Prompt Engineering and Search 1:02:43
#044 Graphs Aren't Just For Specialists Anymore 1:03:34
#043 Knowledge Graphs Won't Fix Bad Data 1:10:58
#042 Temporal RAG, Embracing Time for Smarter, Reliable Knowledge Graphs 1:33:43
#041 Context Engineering, How Knowledge Graphs Help LLMs Reason 1:33:34
#040 Vector Database Quantization, Product, Binary, and Scalar 52:11
#039 Local-First Search, How to Push Search To End-Devices 53:08
#038 AI-Powered Search, Context Is King, But Your RAG System Ignores Two-Thirds of It 1:14:23
#037 Chunking for RAG: Stop Breaking Your Documents Into Meaningless Pieces 49:12
#036 How AI Can Start Teaching Itself - Synthetic Data Deep Dive 48:10
#035 A Search System That Learns As You Use It (Agentic RAG) 45:29
#034 Rethinking Search Inside Postgres, From Lexemes to BM25 47:15
#033 RAG's Biggest Problems & How to Fix It (ft. Synthetic Data) 51:25
#032 Improving Documentation Quality for RAG Systems 46:36
#031 BM25 As The Workhorse Of Search; Vectors Are Its Visionary Cousin 54:04
#030 Vector Search at Scale, Why One Size Doesn't Fit All 36:25
#029 Search Systems at Scale, Avoiding Local Maxima and Other Engineering Lessons 54:46
#028 Training Multi-Modal AI, Inside the Jina CLIP Embedding Model 49:21
#027 Building the database for AI, Multi-modal AI, Multi-modal Storage 44:53
#026 Embedding Numbers, Categories, Locations, Images, Text, and The World 46:43
#025 Data Models to Remove Ambiguity from AI and Search 58:39
#024 How ColPali is Changing Information Retrieval 54:56
#023 The Power of Rerankers in Modern Search 42:28
#022 The Limits of Embeddings, Out-of-Domain Data, Long Context, Finetuning (and How We're Fixing It) 46:05
#021 The Problems You Will Encounter With RAG At Scale And How To Prevent (or fix) Them 50:08
#020 The Evolution of Search, Finding Search Signals, GenAI Augmented Retrieval 52:15
#019 Data-driven Search Optimization, Analysing Relevance 51:13
#018 Query Understanding: Doing The Work Before The Query Hits The Database 53:01
#017 Unlocking Value from Unstructured Data, Real-World Applications of Generative AI 36:27
#016 Data Processing for AI, Integrating AI into Data Pipelines, Spark 46:25
#015 Building AI Agents for the Enterprise, Agent Cost Controls, Seamless UX 35:11
#014 Building Predictable Agents through Prompting, Compression, and Memory Strategies 32:13