Query Your Entire Email History with AI: Building RAG-Powered Email Memory

You know the feeling. You vaguely remember your accountant mentioning something about 2024 tax deductions in an email last spring, but searching your inbox for “tax” returns 847 unrelated results. You spend twenty minutes scrolling before giving up. This is the fundamental failure of keyword search — and it’s a problem that Retrieval-Augmented Generation (RAG) can finally solve.

In this guide, you’ll learn how to build a conversational AI layer over your entire email archive, enabling natural language queries that actually understand what you mean, not just what you typed.


Why Keyword Search Fails Your Inbox

Traditional inbox search is lexical: it matches the characters you type against the characters in your emails. Ask for “tax advice” and it finds emails containing those exact words. But human communication doesn’t work that way. Your accountant may have written about “deductible home office expenses” or “Schedule C items” — neither of which contains the word “tax.”

Natural language queries, powered by semantic embeddings, understand meaning. A query like “What did my accountant say about saving money on my 2024 return?” can surface that email because the underlying vector representation captures conceptual similarity, not character overlap. This is the core advantage RAG brings to email archives.


The RAG Pipeline Architecture

Building semantic email search requires a four-stage pipeline:

1. Email Ingestion

Pull emails from Gmail using the Gmail API (or IMAP for other providers). For each email, extract the sender, date, subject, and plain-text body. Strip HTML, quoted reply chains, and signatures to reduce noise.
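The noise-stripping step is mostly string hygiene. A minimal, hypothetical sketch for plain-text bodies — the `>` quote prefix, “On … wrote:” attribution lines, and the `-- ` signature delimiter are common conventions, not guarantees, so a real archive will need more rules:

```python
import re

def clean_email_body(body: str) -> str:
    """Strip quoted reply chains and trailing signatures from a plain-text body."""
    lines = []
    for line in body.splitlines():
        # Stop at the conventional signature delimiter "-- ".
        if line.rstrip() == "--":
            break
        # Drop quoted reply lines.
        if line.lstrip().startswith(">"):
            continue
        # Drop "On <date>, <sender> wrote:" attribution lines.
        if re.match(r"^On .+ wrote:\s*$", line.strip()):
            continue
        lines.append(line)
    return "\n".join(lines).strip()

raw = "Thanks for the docs.\n\nOn Mon, Jan 6, Sam wrote:\n> See attached.\n-- \nAlex"
print(clean_email_body(raw))  # "Thanks for the docs."
```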

2. Chunking

Long email threads need to be split into semantically coherent chunks — typically 300–500 tokens each — with overlapping windows to preserve context across chunk boundaries. A single thread might yield 3–5 chunks.
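A sketch of that sliding-window split, using whitespace-separated words as a stand-in for tokens (a real pipeline would count tokens with the embedding model’s tokenizer; `chunk_text` and its defaults are illustrative, not canonical):

```python
def chunk_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into ~chunk_size-word windows, each sharing `overlap` words
    with the previous window so context survives the chunk boundary."""
    words = text.split()
    if not words:
        return []
    chunks = []
    step = chunk_size - overlap  # advance by less than the window to overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

thread = " ".join(f"w{i}" for i in range(1000))
print(len(chunk_text(thread)))  # 3 overlapping chunks for a 1000-word thread
```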

3. Embedding

Pass each chunk through an embedding model to generate a dense vector representation. OpenAI’s text-embedding-3-small is a cost-effective hosted option. For privacy-sensitive archives, local models like nomic-embed-text via Ollama keep data entirely on-device.

import openai  # assumes OPENAI_API_KEY is set in the environment

def embed_chunk(text: str) -> list[float]:
    """Return a dense vector representation of one email chunk."""
    response = openai.embeddings.create(
        model="text-embedding-3-small",
        input=text
    )
    return response.data[0].embedding

4. Vector Database Storage

Store each embedding alongside its metadata (email ID, sender, date, subject) in a vector database. Chroma is ideal for local development — zero infrastructure, runs in-process. Pinecone or Qdrant scale better for archives exceeding 100k emails.

import chromadb

client = chromadb.PersistentClient(path="./email_index")
collection = client.get_or_create_collection("emails")

collection.add(
    documents=[chunk_text],
    embeddings=[chunk_embedding],
    metadatas=[{"sender": sender, "date": date, "subject": subject}],
    ids=[chunk_id]
)

Building the Retrieval Layer

When a user submits a query, the retrieval layer does three things:

  • Embed the query using the same model used during ingestion.
  • Run similarity search against the vector DB to retrieve the top-k most semantically relevant chunks (typically k=10–20).
  • Rerank the candidates with a cross-encoder model (e.g., cross-encoder/ms-marco-MiniLM-L-6-v2), which scores each query–chunk pair with richer context awareness, then pass only the top 4–6 to your LLM.

Those retrieved chunks become the context window for answer generation:

context = "\n\n".join(top_chunks)

prompt = f"""You are a helpful assistant with access to the user's email archive.
Answer the question using only the emails provided below.

Emails:
{context}

Question: {user_query}"""

# Pass to Claude or any LLM for final answer generation

This grounds the LLM’s response in real email content, reducing hallucination and providing citable sources.
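Under the hood, the similarity-search step is conceptually simple: score every stored vector against the query embedding and keep the best k. A dependency-free sketch of that idea (vector databases like Chroma do the same thing with optimized indexes; the toy vectors and chunk IDs here are made up):

```python
import math

def top_k(query_vec: list[float], stored: dict[str, list[float]], k: int = 3) -> list[str]:
    """Return the IDs of the k stored vectors most cosine-similar to query_vec."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
    ranked = sorted(stored.items(), key=lambda kv: cos(query_vec, kv[1]), reverse=True)
    return [chunk_id for chunk_id, _ in ranked[:k]]

# Toy 2-d vectors standing in for real embeddings (hypothetical values):
index = {
    "msg1-chunk0": [0.9, 0.1],
    "msg2-chunk0": [0.1, 0.9],
    "msg3-chunk1": [0.8, 0.3],
}
print(top_k([1.0, 0.0], index, k=2))  # ['msg1-chunk0', 'msg3-chunk1']
```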


Practical Implementation: Incremental Sync with Gmail API

Indexing your entire archive once is straightforward, but keeping it current requires delta syncing. Gmail’s API supports this via historyId — a cursor that tracks changes since your last sync.

def sync_new_emails(service, last_history_id: str) -> str:
    """Index messages added since last_history_id; return the new cursor."""
    results = service.users().history().list(
        userId='me',
        startHistoryId=last_history_id,
        historyTypes=['messageAdded']
    ).execute()

    for record in results.get('history', []):
        for msg in record.get('messagesAdded', []):
            process_and_index_email(msg['message']['id'])

    # Persist this cursor for the next run; reuse the old one if nothing changed.
    return results.get('historyId', last_history_id)

Run this sync on a schedule (e.g., every 15 minutes via a cron job) to keep your index fresh without re-processing the entire archive each time. Store the returned historyId so the next run picks up exactly where this one left off.


Example Queries and Results

Here’s where the magic becomes tangible. Compare these two experiences:

  • “What did my accountant say about 2024 taxes?”
    Keyword search: returns emails containing the literal word “tax.”
    RAG email memory: surfaces the March email about Schedule C deductions.
  • “Did anyone mention the rooftop venue for the wedding?”
    Keyword search: fails unless “rooftop” appears verbatim.
    RAG email memory: finds the email calling it the “outdoor terrace at the top floor.”
  • “What’s the status of the Johnson contract?”
    Keyword search: scatters results across 6 threads.
    RAG email memory: synthesizes a timeline from 3 relevant threads.

The LLM doesn’t just retrieve — it synthesizes. Ask “Summarize all feedback I received on the Q3 report” and it compiles insights across multiple email threads into a coherent answer, complete with sender attribution.


Getting Started

The full stack is accessible to any developer comfortable with Python:

  • Gmail API — free, with OAuth 2.0 authentication
  • Chroma — pip install chromadb, runs locally
  • OpenAI embeddings — fractions of a cent per email chunk
  • Any capable LLM — Claude, GPT-4, or a local model via Ollama

Your email archive is one of the richest personal knowledge bases you own. It contains years of decisions, relationships, and institutional memory — locked behind a search box that can’t understand you. RAG changes that. Build this once, and you’ll never lose an email insight again.
