#046 Building a Search Database From First Principles
Manage episode 471129384 series 3585930
Modern search is broken: too many separate pieces glued together.
- Vector databases for semantic search
- Text engines for keywords
- Rerankers to fix the results
- LLMs to understand queries
- Metadata filters for precision
Each piece works well alone.
Together, they often become a mess.
When you glue these systems together, you create:
- Data Consistency Gaps: Your vector store knows about documents your text engine doesn't. Which is right?
- Timing Mismatches: New content appears in one system before another. Users see different results depending on which path their query takes.
- Complexity Explosion: Every new component multiplies your integration points. Three components means three connections. Five means ten.
- Performance Bottlenecks: Each hop between systems adds latency. A 200ms search becomes 800ms after passing through four components.
- Brittle Chains: When one system fails, your entire search breaks. More pieces mean more breaking points.
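The "complexity explosion" point is just pairwise connections between components; a minimal sketch of the arithmetic:

```python
def integration_points(n: int) -> int:
    """Number of pairwise connections between n components: n choose 2."""
    return n * (n - 1) // 2
```

Three components yield three connections, five yield ten, and each connection is a data path that can drift out of sync.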
I recently built a system with query-specific post-filters, but a requirement to deliver a fixed number of results to the user.
Often the query had to be run multiple times to reach that number.
The result: unpredictable latency; high load on the backend, where some queries hammered the database 10+ times; and a relevance cliff, where results 1-6 looked great but the later ones were poor matches.
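The pattern described above, as a minimal sketch (the `backend` callable and `max_rounds` cap are illustrative assumptions, not the actual system): over-fetch, apply the post-filter, and re-query until enough results survive.

```python
def search_with_post_filter(query, post_filter, k, backend, max_rounds=10):
    """Naive pattern: re-query the backend until k results pass the post-filter.

    backend(query, offset, limit) returns the next page of raw results.
    Latency is unpredictable because the number of rounds depends on
    how many results the post-filter rejects.
    """
    results, offset, rounds = [], 0, 0
    while len(results) < k and rounds < max_rounds:
        batch = backend(query, offset=offset, limit=k)
        if not batch:
            break  # backend exhausted before we reached k results
        results.extend(r for r in batch if post_filter(r))
        offset += k
        rounds += 1
    return results[:k], rounds
```

With a selective filter, `rounds` grows and so does tail latency, which is exactly the 10+ round-trips problem described above.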
Today on How AI Is Built, we are talking to Marek Galovic from TopK.
We talk about how they built a new search database with modern components, asking: "How would search work if we built it today?"
Cloud storage is cheap. Compute is fast. Memory is plentiful.
One system that handles vectors, text, and filters together - not three systems duct-taped into one.
One pass handles everything:
Vector search + Text search + Filters → Single sorted result
Built with hand-optimized Rust kernels for both x86 and ARM, the system scales to 100M documents with 200ms P99 latency.
The goal is to do search in 5 lines of code.
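A minimal single-pass sketch of that idea in Python (the actual system is Rust; the document collection, the `alpha` blend weight, and the scoring functions here are illustrative assumptions, not TopK's implementation): filter, text score, and vector score are evaluated together in one loop, producing one sorted result.

```python
import math

def single_pass_search(docs, query_terms, query_vec, pred, k=10, alpha=0.5):
    """One pass over documents: inline metadata filter, blended
    vector + text score, single sorted top-k result."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = []
    for doc in docs:
        if not pred(doc):  # metadata filter, applied in the same pass
            continue
        text = doc["text"].lower()
        text_score = sum(t in text for t in query_terms) / len(query_terms)
        vec_score = cos(doc["vec"], query_vec)
        scored.append((alpha * vec_score + (1 - alpha) * text_score, doc["id"]))
    scored.sort(reverse=True)  # one sorted result, no merge step
    return scored[:k]
```

No cross-system merge, no reconciliation of two indexes: every document is scored once, which is the property the single-pass design is after.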
Marek Galovic:
Nicolay Gerold:
00:00 Introduction to TopK and Snowflake Comparison
00:35 Architectural Patterns and Custom Formats
01:30 Query Execution Engine Explained
02:56 Distributed Systems and Rust
04:12 Query Execution Process
06:56 Custom File Formats for Search
11:45 Handling Distributed Queries
16:28 Consistency Models and Use Cases
26:47 Exploring Database Versioning and Snapshots
27:27 Performance Benchmarks: Rust vs. C/C++
29:02 Scaling and Latency in Large Datasets
29:39 GPU Acceleration and Use Cases
31:04 Optimizing Search Relevance and Hybrid Search
34:39 Advanced Search Features and Custom Scoring
38:43 Future Directions and Research in AI
47:11 Takeaways for Building AI Applications