© 2026 Divyanshu Soni

The views expressed here are my own and do not reflect those of my employer.

February 16, 2026

Vector Databases: Beyond Keyword Matching

Traditional databases rely on exact keyword matches. Vector Databases understand context and intent. This blog explains the architecture that makes modern AI retrieval possible.

On this page

  • The Brains Behind the Bots
  • The Core Magic - Vector Embeddings
  • The "Grocery Store" Analogy
  • The Math of Meaning
  • Vector Database vs. Vector Library
  • The Search Architectures - How It Actually Works
  • 1. HNSW - The Multi-Layer Graph Approach
  • 2. IVF - The Cluster Partitioning Approach
  • 3. Quantization - The Compression Approach
  • Measuring "Closeness" - Distance Metrics
  • 1. Cosine Similarity
  • 2. Euclidean Distance (L2)
  • 3. Dot Product (Inner Product)
  • Hybrid Search - The Best of Both Worlds
  • Pre-Filtering vs. Post-Filtering
  • Why We Need This - Real-World Use Cases
  • 1. RAG - Retrieval Augmented Generation
  • 2. Recommendation Engines
  • 3. Anomaly Detection
  • 4. Multi-Modal Search
  • The Toolbox
  • Getting Started: A Practical Example
  • Key Takeaways

The Brains Behind the Bots#

If you've been hanging around AI apps lately, you've probably heard the term Vector Database thrown around with the same reverence as "Generative AI" or "LLM." But what exactly are they? Are they just another place to store Excel sheets? (Spoiler: absolutely not.)

 

Traditional databases are great at being rigid accountants. You ask for WHERE name = 'John Smith', and they find "John Smith." But if you ask for "that tall guy who likes sci-fi and works in marketing," a traditional database throws up its hands. There's no column for "vibes."

 

This is where Vector Databases come in. They are the infrastructure that allows machines to understand context, meaning, and relationships - essentially giving AI its long-term, semantic memory. And they're the backbone of nearly every modern AI application you use, from ChatGPT's retrieval pipeline to Spotify's recommendation engine.

 

Let’s break down how we turn vague human thoughts into clear machine math.


The Core Magic - Vector Embeddings#

Before we talk about the database, we need to talk about the data.

 

Computers don’t inherently understand English, interpret a JPEG, or make sense of an MP3 - they operate on numerical data. To bridge this gap, we use specialized AI systems called encoder models that convert unstructured inputs like text, images, and audio into dense, fixed-length arrays of floating-point numbers. These numerical representations are known as vector embeddings, and they capture the semantic or perceptual meaning of the original data in a form machines can process and compare efficiently.

 

📌 Important

A vector embedding is a numerical representation of data in a high-dimensional space, where the geometric position encodes semantic meaning. Think of it as translating "meaning" into coordinates on a map that has hundreds or thousands of axes.

 

The process works like this: we feed raw data (say, a sentence) into a pre-trained encoder model like OpenAI's text-embedding-3-large, Google's Gecko, or the open-source all-MiniLM-L6-v2 from Sentence Transformers. The model's neural network processes the input through its layers and outputs a fixed-size vector - typically 384, 768, or 1,536 dimensions - that captures the semantic fingerprint of that input.

blog image

The "Grocery Store" Analogy#

Imagine a massive grocery store where items are shelved by meaning, not by brand.

  • Apples and Bananas are physically close (Fruit aisle).
  • Milk is in a different zone (Dairy aisle).
  • Shampoo is even further away (Personal Care).

In a vector database, we map data points in the same way:

  • The vector for "Puppy" will be mathematically very close to the vector for "Dog."
  • It will be somewhat close to "Cat" (both are pets).
  • It will be astronomically far from "Toaster."
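To make the analogy concrete, here's a toy sketch with made-up 3-dimensional vectors. Real embeddings have hundreds or thousands of dimensions, and these particular values are invented purely to illustrate relative closeness:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" - the values are made up for illustration;
# a real encoder model would produce them from the words themselves
embeddings = {
    "puppy":   [0.90, 0.80, 0.10],
    "dog":     [0.85, 0.75, 0.15],
    "cat":     [0.70, 0.40, 0.20],
    "toaster": [0.05, 0.10, 0.95],
}

query = embeddings["puppy"]
for word, vec in embeddings.items():
    print(f"{word:8s} similarity to 'puppy': {cosine_similarity(query, vec):.3f}")
```

Running this shows "dog" scoring near 1.0, "cat" slightly lower, and "toaster" far behind, exactly the grocery-store layout in numbers.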

The Math of Meaning#

These vectors aren't just 2D points on a graph; they live in high-dimensional space. A single word might be represented by 1,536 different numbers (dimensions). Each dimension captures a tiny, learned slice of meaning — gender, plurality, emotional tone, contextual usage, temporal association, and hundreds of other latent features that the model discovered during training.

 

The Classic Example: If you take the numbers representing King, subtract the numbers for Man, and add the numbers for Woman, the resulting vector lands almost exactly on Queen:

Vector(King) - Vector(Man) + Vector(Woman) ≈ Vector(Queen)

This is called a linear analogy in embedding space, and it's one of the earliest and most famous demonstrations that these models genuinely capture relational meaning, not just pattern-matching strings.


Vector Database vs. Vector Library#

"Wait," I hear you ask, "Can't I just store these numbers in a NumPy array or a standard database?"

You could, but it won't scale — and in production, it'll become a nightmare.

 

Let's draw a clear line between the two:

| Feature | Vector Library (FAISS, ScaNN, Annoy) | Vector Database (Milvus, Pinecone, Weaviate, Qdrant) |
| --- | --- | --- |
| Storage | In-memory only | Persistent (disk + memory) |
| CRUD Operations | Limited - often requires full re-index on insert | Full CRUD with real-time upserts |
| Scaling | Single machine | Distributed sharding across clusters |
| Fault Tolerance | None - crashes lose everything | Replication with automatic failover |
| Metadata Filtering | Basic or none | Rich, pre-filtered queries |
| Access Control | None | RBAC, API keys, tenant isolation |
ℹ️ Note

If you're prototyping in a Jupyter notebook, a library like FAISS is perfect. But the moment you're building a production app that needs to handle real-time updates, concurrent users, and fault tolerance, you need a proper Vector Database.

blog image

 

A vector library usually runs inside a single machine - your app loads embeddings into memory and queries them locally. But a vector database operates as a distributed production system. The diagram above shows how embeddings are split across multiple database nodes (shards), traffic is routed through a load balancer, and replicas ensure the system stays online even if a node fails.

This architecture is what enables vector databases to handle massive datasets, real-time updates, and concurrent users - capabilities that simple in-memory libraries can’t support at production scale.


The Search Architectures - How It Actually Works#

In a standard database, we search for exact matches (WHERE id = 123). In a vector database, we perform Similarity Search — also known as Semantic Search. We're looking for the Nearest Neighbors to our query vector in high-dimensional space.

 

If you have 10 data points, you can compare your query against all 10. This is called kNN (k-Nearest Neighbors) - an exhaustive, brute-force comparison. But what if you have 100 million vectors? Comparing your query to every single one would take seconds or even minutes. That's unacceptable for real-time AI.
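A minimal brute-force kNN in plain Python makes the cost obvious: every query compares against every stored vector, so latency grows linearly with the dataset. (The tiny document vectors here are invented for illustration.)

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_brute_force(query, vectors, k=2):
    """Exact k-nearest neighbors: compare the query against EVERY stored
    vector. O(N) per query - fine for 10 points, hopeless at 100 million."""
    scored = sorted(vectors.items(), key=lambda kv: euclidean(query, kv[1]))
    return [name for name, _ in scored[:k]]

vectors = {
    "doc_a": [0.10, 0.20], "doc_b": [0.15, 0.22],
    "doc_c": [0.90, 0.80], "doc_d": [0.50, 0.50],
}
print(knn_brute_force([0.12, 0.21], vectors, k=2))  # → ['doc_a', 'doc_b']
```

ANN indexes exist precisely to avoid that full scan while returning (almost) the same answer.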

 

To fix this, we use ANN (Approximate Nearest Neighbors). The key word is approximate: we trade a tiny, often imperceptible amount of accuracy (called recall) for massive speed improvements - turning multi-second queries into sub-millisecond ones.

 

ℹ️ Note

Recall in this context is the percentage of true nearest neighbors that the algorithm actually returns. An ANN algorithm with 98% recall finds 98 of the actual 100 closest vectors - and does it 1,000x faster than brute force.

 

Let's look at how the three major indexing strategies work:

 

1. HNSW - The Multi-Layer Graph Approach#

Hierarchical Navigable Small World (HNSW) is widely considered the gold standard for balancing speed, accuracy, and memory usage. It's the default index in Qdrant, Weaviate, and pgvector.

 

How it works under the hood:

HNSW builds a multi-layered proximity graph inspired by skip lists. During index construction:

  1. Each new vector is assigned a random maximum layer (higher layers have exponentially fewer nodes).
  2. Starting from the top layer, the algorithm greedily navigates to find the closest node, then descends to the layer below.
  3. At each layer, it connects the new node to its nearest neighbors using a configurable parameter M (the max number of connections per node).
  4. At query time, the same top-down, greedy traversal finds the approximate nearest neighbors in O(log N) time.
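The traversal is easier to see in code. Below is a deliberately simplified, single-layer sketch of the greedy walk at the heart of HNSW - the real algorithm adds the layer hierarchy, an efSearch candidate list, and M-capped edges, and the tiny graph here is hand-built for illustration:

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Tiny hand-built proximity graph: node -> (vector, neighbor list).
# In real HNSW these edges are created during insertion (capped at M per
# node) and the graph has multiple layers; this toy shows one layer only.
graph = {
    "a": ([0.0, 0.0], ["b", "d"]),
    "b": ([0.3, 0.1], ["a", "c"]),
    "c": ([0.6, 0.2], ["b", "e"]),
    "d": ([0.1, 0.6], ["a", "e"]),
    "e": ([0.7, 0.7], ["c", "d"]),
}

def greedy_search(query, entry="a"):
    """Hop to whichever neighbor is closest to the query; stop at a local
    minimum. This is the move HNSW repeats at every layer, top to bottom."""
    current = entry
    while True:
        _, neighbors = graph[current]
        best = min([current] + neighbors, key=lambda n: dist(query, graph[n][0]))
        if best == current:
            return current
        current = best

print(greedy_search([0.65, 0.25]))  # → c
```

Each hop discards most of the graph, which is why the full hierarchical version reaches O(log N) behavior.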

 

 

 

blog image

Think of it like navigating a city: Layer 2 is the highway (big jumps to the right region), Layer 1 is the local streets (navigating the neighborhood), and Layer 0 is walking door-to-door (finding the exact house). The key tuning parameters are:

 

  • M - max connections per node (higher = more accurate but more memory)
  • efConstruction - how many candidates to evaluate during build time
  • efSearch - how many candidates to evaluate during query time

 

🔴 Caution

HNSW is fast and accurate, but it requires the entire graph to fit in memory, making it memory-intensive for very large datasets (billions of vectors).

2. IVF - The Cluster Partitioning Approach#

Inverted File Index (IVF) takes a different approach: it pre-partitions the vector space into clusters using k-means clustering, then searches only the relevant clusters at query time.

 

How it works under the hood:

  1. Training phase: The algorithm runs k-means on a sample of your data to find nlist centroids (cluster centers). These centroids divide the entire vector space into Voronoi cells - regions where every point is closer to that centroid than to any other.
  2. Indexing phase: Each vector is assigned to its nearest centroid's partition.
  3. Query phase: The query vector is compared against all centroids. Only the top nprobe closest partitions are searched exhaustively.

 

If you search for "Strawberry," the query vector lands near the "Fruit" centroid. The database searches that cluster (and maybe one or two neighboring clusters) and completely ignores the "Cars," "Office Supplies," and "Planets" partitions - drastically reducing computation.
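Here's a toy sketch of IVF's three phases, with hand-placed centroids standing in for the k-means training step (the vectors, cluster names, and item IDs are all invented for illustration):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hand-placed centroids stand in for the k-means training phase
centroids = {"fruit": [0.1, 0.1], "cars": [0.9, 0.1], "planets": [0.5, 0.9]}

vectors = {
    "apple":  [0.12, 0.08], "banana": [0.15, 0.12],
    "sedan":  [0.88, 0.12], "mars":   [0.52, 0.85],
}

# Indexing phase: assign each vector to its nearest centroid's partition
partitions = {name: [] for name in centroids}
for vid, vec in vectors.items():
    nearest = min(centroids, key=lambda c: dist(vec, centroids[c]))
    partitions[nearest].append(vid)

def ivf_search(query, nprobe=1):
    """Query phase: rank centroids, exhaustively search only the top
    nprobe partitions, and ignore every other cluster entirely."""
    probe = sorted(centroids, key=lambda c: dist(query, centroids[c]))[:nprobe]
    candidates = [vid for c in probe for vid in partitions[c]]
    return min(candidates, key=lambda vid: dist(query, vectors[vid]))

print(ivf_search([0.12, 0.09]))  # a "strawberry-like" query → apple
```

Raising nprobe trades speed back for recall, which is the standard knob for mitigating the boundary effect described below.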

 

🔴 Caution

IVF is more memory-efficient than HNSW and works well with quantization (see below), but can suffer from boundary effects where the true nearest neighbor sits in an adjacent cluster that wasn't probed.

 

3. Quantization - The Compression Approach#

Vectors are heavy. A single 1,536-dimensional float32 vector is 6 KB (1,536 dimensions × 4 bytes). At a billion vectors, that's 6 TB of raw data just for the embeddings. We need compression.

 

  • Product Quantization (PQ): This is the workhorse of vector compression. It breaks each high-dimensional vector into m smaller sub-vectors, then uses a learned codebook (via k-means on each sub-space) to approximate each sub-vector with a compact code - typically a single byte. This is conceptually similar to how lossy compression works in JPEG images.

A 1,536-dim float32 vector (6,144 bytes) can be compressed to just 96-192 bytes - a 32x–64x reduction with minimal recall loss.

 

  • Scalar Quantization (SQ): A simpler approach that reduces the precision of each dimension from float32 to int8 or float16, cutting memory by 2x–4x with negligible quality loss.

  • Binary Quantization: The most aggressive compression, reducing each dimension to a single bit. This enables blazing-fast Hamming distance comparisons but works best as a first-pass re-ranking step.
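Scalar quantization is the easiest of the three to sketch. The toy below maps each dimension onto the int8 range, cutting storage per dimension from 4 bytes to 1 (real implementations manage scales more carefully, e.g. per segment; this is a simplification):

```python
def quantize_int8(vector):
    """Scalar quantization sketch: map each float32 dimension onto the
    int8 range [-127, 127]. The scale must be stored alongside the codes
    so the vector can be approximately reconstructed."""
    max_abs = max(abs(x) for x in vector) or 1.0
    scale = max_abs / 127.0
    codes = [round(x / scale) for x in vector]
    return codes, scale

def dequantize(codes, scale):
    return [c * scale for c in codes]

original = [0.12, -0.87, 0.45, 0.99, -0.33]
codes, scale = quantize_int8(original)
restored = dequantize(codes, scale)

print("codes:    ", codes)
print("restored: ", [round(x, 3) for x in restored])
print("max error:", max(abs(a - b) for a, b in zip(original, restored)))
```

The reconstruction error stays below half the scale step, which is why distance rankings survive the 4x compression almost intact.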

 

ℹ️ Note

A codebook in quantization is like a dictionary of pre-computed "representative" vectors. Instead of storing the actual vector, you store which "word" in the codebook best approximates each sub-vector.

 

 

blog image


Measuring "Closeness" - Distance Metrics#

How does the database calculate "similarity"? It uses specific mathematical formulas called distance metrics. The choice of metric depends on how the embedding model was trained - using the wrong one can significantly degrade your search quality.

 

1. Cosine Similarity#

Measures the angle between two vectors, ignoring their magnitude (length). Two vectors pointing in the same direction have a cosine similarity of 1.0, regardless of how "long" they are.

  • Great for: Text search, document similarity - because document length shouldn't bias results. A 500-word abstract and a 50,000-word paper about the same topic should match equally well.
  • Under the hood: cos(θ) = (A · B) / (‖A‖ × ‖B‖)

 

2. Euclidean Distance (L2)#

Measures the straight-line distance between two points in the vector space.

  • Great for: Image similarity, spatial data, or any case where the magnitude of the vector carries meaning (e.g., signal strength, confidence scores).
  • Under the hood: d = √(Σ(aᵢ - bᵢ)²)

 

3. Dot Product (Inner Product)#

Measures both the alignment (direction) and the magnitude of two vectors. Unlike cosine similarity, the dot product rewards larger vectors.

  • Great for: Recommendation systems - a user with strong preferences (large vector magnitude) matched with a highly relevant item (large vector magnitude) produces a higher score.
  • Under the hood: A · B = Σ(aᵢ × bᵢ)
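A quick sketch makes the difference between the three metrics concrete. The key thing to watch is how each one treats magnitude:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# b points in exactly the same direction as a, but is twice as long
a, b = [1.0, 2.0], [2.0, 4.0]

print(f"cosine:    {cosine_similarity(a, b):.3f}")  # 1.0 - magnitude ignored
print(f"euclidean: {euclidean(a, b):.3f}")          # nonzero - magnitude matters
print(f"dot:       {dot(a, b):.3f}")                # grows with vector length
```

Same pair of vectors, three different verdicts - which is why the metric must match the embedding model's training objective.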

💡 Tip

Most modern embedding models (from OpenAI, Cohere, and others) are trained with cosine similarity as the objective. If you're unsure, cosine similarity is the safest default - but check your model's documentation to be certain.


Hybrid Search - The Best of Both Worlds#

Sometimes, pure vector search isn't enough.

 

  • Vector (dense) search is excellent at understanding intent: it finds "shoes" when you search for "boots."
  • Keyword (sparse) search is excellent at finding exact matches: it finds shoes that are precisely "Size 10" or brand "Nike."

If you rely only on vectors, you might get a perfect style match in the wrong size. If you rely only on keywords, you'll miss semantically relevant results.

 

Hybrid Search combines dense vectors (semantic meaning) with sparse vectors (keyword signals, typically using the BM25 algorithm) and merges the results using a fusion strategy.

ℹ️ Note

BM25 (Best Matching 25) is the algorithm that powers traditional search engines like Elasticsearch. It scores documents based on term frequency, document length, and corpus statistics - think of it as "smart keyword matching."

Reciprocal Rank Fusion (RRF) is a popular strategy for combining ranked lists from different search methods. It merges the dense and sparse result lists by scoring each result based on its rank position, then re-sorting. It's simple, effective, and doesn't require tuning.
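RRF itself is only a few lines. Here's a minimal sketch - the document IDs are made up, and k=60 is the constant suggested in the original RRF paper:

```python
def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked result lists: each document earns
    1 / (k + rank) for every list it appears in, and the scores are
    summed. Documents ranked highly in BOTH lists rise to the top."""
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

dense_results = ["doc_3", "doc_1", "doc_7"]   # from vector search
sparse_results = ["doc_1", "doc_5", "doc_3"]  # from BM25

print(reciprocal_rank_fusion([dense_results, sparse_results]))
```

Notice that doc_1 wins: it is not first in either list, but it appears near the top of both, which is exactly the behavior hybrid search wants.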

 

blog image

Pre-Filtering vs. Post-Filtering#

When combining vector search with metadata constraints (price, size, category), the order of operations matters enormously:

 

  • Pre-filtering: "Only look at Size 10 shoes, THEN find the cutest ones." This narrows the search space first, making vector search faster. But if the filter is too aggressive, you might have too few candidates for a good semantic match.
  • Post-filtering: "Find the cutest shoes, THEN throw away the ones that aren't Size 10." This gives vector search the full dataset but can be wasteful - you might retrieve 100 results only to discard 90 of them.

In practice, most modern vector databases (Qdrant, Weaviate, Milvus) implement filtered HNSW or pre-filtered IVF so the filtering and vector search happen simultaneously during graph traversal, avoiding both extremes. This is a significant engineering advantage over naive two-stage approaches.
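A toy sketch of the two naive strategies makes the trade-off tangible (the shoe catalog, vectors, and metadata are invented for illustration):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Each item: (style vector, metadata)
catalog = {
    "boot_9":   ([0.90, 0.80], {"size": 9}),
    "boot_10":  ([0.88, 0.82], {"size": 10}),
    "heel_10":  ([0.10, 0.20], {"size": 10}),
    "sandal_9": ([0.50, 0.40], {"size": 9}),
}

def pre_filter_search(query, size, k=1):
    # Narrow by metadata first, THEN rank the survivors by similarity
    pool = {i: v for i, (v, m) in catalog.items() if m["size"] == size}
    return sorted(pool, key=lambda i: dist(query, pool[i]))[:k]

def post_filter_search(query, size, k=1, fetch=2):
    # Rank by similarity first, THEN discard mismatched metadata - this
    # can return fewer than k results if the top hits get filtered away
    top = sorted(catalog, key=lambda i: dist(query, catalog[i][0]))[:fetch]
    return [i for i in top if catalog[i][1]["size"] == size][:k]

query = [0.9, 0.8]  # "something like this boot"
print(pre_filter_search(query, size=10))
print(post_filter_search(query, size=10))
```

Both return the same answer here, but post-filtering had to fetch extra candidates and throw one away - at scale, that waste (or an empty result set) is the failure mode filtered-HNSW indexes are built to avoid.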


Why We Need This - Real-World Use Cases#

Let's move from theory to practice. Here's where vector databases are making a real impact:

 

1. RAG - Retrieval Augmented Generation#

This is the killer app for vector databases, and it's the reason most teams are adopting them today. LLMs like ChatGPT are powerful, but they have two critical weaknesses: they hallucinate (make up facts), and they don't know your private data.

How RAG solves this:

  1. Ingestion: We chunk your company's PDFs, wikis, Slack threads, and docs into passages, embed each chunk, and store them in a vector database.
  2. Retrieval: When a user asks a question, we embed the query and perform a similarity search to find the most relevant chunks.
  3. Generation: We inject only those relevant chunks into the LLM's context window as grounding evidence.
  4. Response: The LLM answers based on your facts, not its training data, dramatically reducing hallucinations.

 

📌 Important

Chunking is the process of splitting large documents into smaller, overlapping segments (typically 200-500 tokens). This is critical because embedding models have a fixed context window, and smaller chunks produce more precise embeddings. The overlap ensures we don't lose context at boundary points.
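Here's a minimal word-based chunker sketch. Production pipelines usually count model tokens and respect sentence boundaries, but the sliding-window idea is the same:

```python
def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping word-based chunks. The overlap means
    each chunk repeats the tail of the previous one, so no sentence is
    stranded at a boundary."""
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

doc = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(doc, chunk_size=50, overlap=10)
print(len(chunks), "chunks")
print("chunk 1 ends:  ", chunks[0].split()[-1])
print("chunk 2 starts:", chunks[1].split()[0])  # overlaps chunk 1's tail
```

Each chunk would then be embedded and upserted individually, carrying metadata (source file, page) in its payload.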

RAG Pipeline

 

2. Recommendation Engines#

Netflix and Amazon don't just guess. They turn your behavioral history (watch history, purchase history, click patterns) into a user embedding vector. They also embed every movie, product, or song.

 

  • Query: "Find product vectors closest to User X's behavior vector."
  • Result: "You bought a tent? You might like this camping stove." (Because they are mathematically close in the learned preference space.)
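The ranking step is just a nearest-neighbor query against the user vector. A toy sketch with invented 2-dimensional preference vectors (real systems learn hundreds of latent dimensions from behavior data):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy learned preference space: imagine the axes as latent factors like
# "outdoorsy" and "tech-oriented" (the values are made up for illustration)
user_x = [0.90, 0.10]  # strongly outdoorsy shopping history

items = {
    "camping_stove": [0.85, 0.05],
    "tent_upgrade":  [0.90, 0.10],
    "gaming_mouse":  [0.05, 0.95],
}

# Recommend items whose vectors score highest against the user vector;
# dot product is used so strong preferences amplify relevant matches
ranked = sorted(items, key=lambda i: dot(user_x, items[i]), reverse=True)
print(ranked)
```

Swap the three toy items for a billion-row product catalog and this sorted scan becomes exactly the ANN query a vector database answers.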

 

ℹ️ Note

This approach is called collaborative filtering via embeddings. Unlike traditional item-to-item similarity, embeddings capture latent factors - hidden patterns like "this user prefers minimalist design" even if they never explicitly said so.

 

3. Anomaly Detection#

In a vector space, "normal" transactions form tight clusters. If a credit card transaction appears way out in the empty space of the vector map - far from your usual spending patterns - the database flags it immediately.

 

This is called out-of-distribution detection: we're looking for data points that don't fit any known cluster. Vector databases are uniquely suited for this because they can answer the question "how far is this point from the nearest known pattern?" in milliseconds.

 

4. Multi-Modal Search#

With models like CLIP (Contrastive Language-Image Pretraining), we can embed text and images into the same vector space. This means you can search for images using text queries or find similar images across a catalog - powering visual search features in e-commerce, stock photo libraries, and content moderation.


The Toolbox#

If you're ready to build, here's a practical comparison of the leading vector database solutions:

| Database | Language | Open Source | Hosting | Best For |
| --- | --- | --- | --- | --- |
| Pinecone | — | ❌ | Fully managed | Getting started fast, zero-ops teams |
| Milvus / Zilliz | Go + C++ | ✅ | Self-hosted + Cloud | Enterprise scale (billions of vectors) |
| Weaviate | Go | ✅ | Self-hosted + Cloud | Object-aware search with built-in vectorizers |
| Qdrant | Rust | ✅ | Self-hosted + Cloud | High-performance filtering, edge deployment |
| Chroma | Python | ✅ | Self-hosted + Cloud | Quick prototyping, LangChain/LlamaIndex integration |
| pgvector | C (Postgres ext) | ✅ | Any Postgres host | Teams already using Postgres, moderate scale |
💡 Tip

Don't sleep on pgvector. If your application already runs on PostgreSQL, adding pgvector lets you keep your stack simple without spinning up a separate database. It supports HNSW and IVF indexing and handles millions of vectors comfortably. You sacrifice some performance at massive scale, but operational simplicity is a feature.


Getting Started: A Practical Example#

Let's make this concrete. Here's what a minimal RAG ingestion pipeline looks like using Python, OpenAI embeddings, and Qdrant:

from qdrant_client import QdrantClient
from qdrant_client.models import VectorParams, Distance, PointStruct
import openai

# Initialize the vector database
client = QdrantClient(url="http://localhost:6333")
client.create_collection(
    collection_name="company_docs",
    vectors_config=VectorParams(
        size=1536,             # Matches text-embedding-3-small output
        distance=Distance.COSINE  # Cosine similarity for text
    )
)

# Embed and store a document chunk
text_chunk = "Our refund policy allows returns within 30 days of purchase."
response = openai.embeddings.create(
    input=text_chunk,
    model="text-embedding-3-small"
)
embedding = response.data[0].embedding  # 1536-dim float vector

client.upsert(
    collection_name="company_docs",
    points=[PointStruct(
        id=1,
        vector=embedding,
        payload={"text": text_chunk, "source": "policy.pdf", "page": 12}
    )]
)

# Query: find relevant chunks for a user question
query = "Can I return a product?"
query_vector = openai.embeddings.create(
    input=query,
    model="text-embedding-3-small"
).data[0].embedding

results = client.query_points(
    collection_name="company_docs",
    query=query_vector,
    limit=3  # Top-3 nearest neighbors
)

for point in results.points:
    print(f"Score: {point.score:.4f} | {point.payload['text']}")

What's happening here: We create a collection (like a table) with cosine distance, embed a chunk of text, store it with metadata (payload), then query it with a natural language question. The database returns the most semantically similar chunks — even though "return a product" and "refund policy" share zero keywords. That's the power of semantic search.


Key Takeaways#

Let's recap the critical concepts we covered on this journey:

  1. Vector Embeddings are the foundation. They translate human-readable data into numerical representations where geometric proximity equals semantic similarity. Without good embeddings, nothing else works.

  2. ANN indexing makes vector search viable at scale. Algorithms like HNSW (graph-based, O(log N) traversal) and IVF (cluster partitioning) trade negligible accuracy for orders-of-magnitude speed improvements.

  3. Quantization is how we fit the world in memory. Techniques like Product Quantization compress vectors by 32x–64x, making billion-scale deployments feasible without requiring terabytes of RAM.

  4. Choose your distance metric wisely. Cosine similarity is the safe default for text. Euclidean for spatial data. Dot product for recommendations. Always match the metric to your embedding model's training objective.

  5. Hybrid Search is the production standard. Combining dense (semantic) and sparse (keyword/BM25) retrieval with fusion strategies like RRF gives you the best of both worlds.

  6. RAG is the killer app. Vector databases + LLMs = AI that answers from your data, not the internet. This pattern is rapidly becoming the standard architecture for enterprise AI.

  7. Pick the right tool for the job. Prototyping? Use Chroma or FAISS. Already on Postgres? Try pgvector. Need billion-scale production? Look at Milvus or Qdrant.



Support my work

If this post was useful, consider supporting my open source work and independent writing.

Sponsor on GitHub · Buy me a coffee
