
Content provided by Nicolay Gerold. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Nicolay Gerold or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.

#040 Vector Database Quantization, Product, Binary, and Scalar

52:11
Manage episode 464211047 series 3585930

When you store vectors, each number takes up 32 bits.

With 1000 numbers per vector and millions of vectors, costs explode.

A simple chatbot can cost thousands per month just to store and search through vectors.
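The back-of-envelope arithmetic behind that claim can be sketched as below (the vector count is an assumption; the text only says "millions"):

```python
# Rough memory cost of storing raw float32 vectors.
dims = 1000              # numbers per vector (from the text)
bytes_per_float = 4      # float32 = 32 bits
n_vectors = 1_000_000    # "millions of vectors": assume one million here

raw_gb = dims * bytes_per_float * n_vectors / 1e9
print(raw_gb)            # 4.0 GB of raw floats, before any index overhead
```

That is RAM-resident data for most vector indexes, which is where the hosting bill comes from.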

The Fix: Quantization

Think of it like image compression. JPEGs look almost as good as raw photos but take up far less space. Quantization does the same for vectors.

Today we are back, continuing our series on search with Zain Hasan, a former ML engineer at Weaviate and now a Senior AI/ML Engineer at Together. We talk about the different types of quantization, when to use them, how to use them, and their tradeoffs.

Three Ways to Quantize:

  1. Binary Quantization
    • Turn each number into just 0 or 1
    • Ask: "Is this dimension positive or negative?"
    • Works great for 1000+ dimensions
    • Cuts memory by 97%
    • Best for normally distributed data
  2. Product Quantization
    • Split vector into chunks
    • Group similar chunks
    • Store cluster IDs instead of full numbers
    • Good when binary quantization fails
    • More complex but flexible
  3. Scalar Quantization
    • Use 8 bits instead of 32
    • Simple middle ground
    • Keeps more precision than binary
    • Less savings than binary
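
Binary quantization (item 1) can be sketched in a few lines with NumPy; the vectors here are random stand-ins for real embeddings:

```python
import numpy as np

# Toy float32 "embeddings" (in practice these come from an embedding model).
rng = np.random.default_rng(0)
vecs = rng.standard_normal((4, 1024)).astype(np.float32)

# Binary quantization: keep only the sign of each dimension.
bits = vecs > 0                       # conceptually 1 bit per dimension
packed = np.packbits(bits, axis=1)    # 8 dims per byte: 1024 dims -> 128 bytes

# Search then uses Hamming distance instead of cosine/L2.
def hamming(a, b):
    return int(np.unpackbits(a ^ b).sum())

full = vecs[0].nbytes        # 1024 * 4 = 4096 bytes
compressed = packed[0].nbytes  # 128 bytes, a 32x reduction (~97% saved)
print(full, compressed)
```

The 32x reduction is where the "cuts memory by 97%" figure comes from: 1 bit kept out of every 32.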
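
Product quantization (item 2) can be sketched as below. This is a minimal self-contained version with a naive k-means; production libraries (e.g. faiss) train the codebooks far more carefully, and the sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
vecs = rng.standard_normal((1000, 64)).astype(np.float32)

M, K = 8, 16                  # 8 chunks of 8 dims each, 16 centroids per chunk
sub = vecs.shape[1] // M

# Train one tiny codebook per chunk position (a few naive k-means steps).
codebooks = []
for m in range(M):
    chunk = vecs[:, m * sub:(m + 1) * sub]
    cents = chunk[rng.choice(len(chunk), K, replace=False)].copy()
    for _ in range(10):
        ids = np.argmin(((chunk[:, None] - cents[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if (ids == k).any():
                cents[k] = chunk[ids == k].mean(0)
    codebooks.append(cents)

# Encode: each vector becomes M one-byte cluster IDs (64 floats -> 8 bytes).
codes = np.stack([
    np.argmin(((vecs[:, m * sub:(m + 1) * sub][:, None]
                - codebooks[m][None]) ** 2).sum(-1), axis=1)
    for m in range(M)
], axis=1).astype(np.uint8)

print(vecs[0].nbytes, codes[0].nbytes)  # 256 bytes -> 8 bytes per vector
```

Distances are then computed against the centroids rather than the original floats, which is what makes the scheme more complex but also tunable (chunk count and codebook size trade accuracy for memory).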
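
Scalar quantization (item 3) is the simplest of the three; a minimal sketch, assuming one global min/max range for the whole dataset (real systems often keep a range per dimension):

```python
import numpy as np

rng = np.random.default_rng(2)
vecs = rng.standard_normal((4, 1024)).astype(np.float32)

# Scalar quantization: map each float32 onto one of 256 levels (uint8).
lo, hi = float(vecs.min()), float(vecs.max())
scale = (hi - lo) / 255.0
q = np.round((vecs - lo) / scale).astype(np.uint8)   # 4x smaller

# Approximate reconstruction; error per value is bounded by scale / 2.
recon = q.astype(np.float32) * scale + lo
max_err = float(np.abs(recon - vecs).max())
print(vecs.nbytes, q.nbytes)   # 16384 -> 4096 bytes
```

8 bits instead of 32 gives the "middle ground" numbers in the list: 4x savings versus 32x for binary, but with much smaller per-value error.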

Key Quotes:

  • "Vector databases are pretty much the commercialization and the productization of representation learning."
  • "I think quantization, it builds on the assumption that there is still noise in the embeddings. And if I'm looking, it's pretty similar as well to the thought of Matryoshka embeddings that I can reduce the dimensionality."
  • "Going from text to multimedia in vector databases is really simple."
  • "Vector databases allow you to take all the advances that are happening in machine learning and now just simply turn a switch and use them for your application."

Zain Hasan:

Nicolay Gerold:

vector databases, quantization, hybrid search, multi-vector support, representation learning, cost reduction, memory optimization, multimodal recommender systems, brain-computer interfaces, weather prediction models, AI applications
