Cherry is an AI-powered knowledge engine that captures, embeds, clusters, and graphs every page you save. The model runs in your browser. The algorithms are visible.
Cherry's search isn't a black box. Every stage is documented, benchmarked, and inspectable on the /search?debug=1 page.
384-dim MiniLM embeddings indexed via pgvector's HNSW (m=16, ef_construction=64). Approximate kNN at near-logarithmic cost — p95 search latency under 100 ms over 10K memories.
Reciprocal Rank Fusion merges the BM25 and vector rankings into a single list. k=60, no learned parameters.
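A minimal sketch of Reciprocal Rank Fusion with k=60 — each document scores 1/(k + rank) per ranking, summed across rankings. The toy `bm25` and `vector` lists are illustrative, not Cherry's data:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over rankings of 1/(k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

bm25   = ["a", "b", "c"]   # lexical ranking
vector = ["c", "a", "d"]   # semantic ranking
fused = rrf_fuse([bm25, vector])  # → ["a", "c", "b", "d"]
```

"a" wins because both rankings place it high; no training data or tuned weights are involved, which is exactly why RRF is a good default fuser.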
Postgres tsvector with weighted A/B/C fields, queried via websearch_to_tsquery.
Time-decay rank boost using the Ebbinghaus forgetting curve. Decay constant λ tunable per user.
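One common form of the Ebbinghaus curve is exponential retention, exp(−t/λ). A sketch under that assumption — the exact decay formula and default λ here are illustrative, not Cherry's documented values:

```python
import math

def decay_boost(base_score, age_days, lam=30.0):
    """Scale a relevance score by Ebbinghaus-style retention exp(-t/λ).

    lam (λ) is the per-user decay constant in days: larger λ means
    older memories fade more slowly in the ranking.
    """
    return base_score * math.exp(-age_days / lam)
```

With λ = 30, a memory saved today keeps its full score, while a 30-day-old one is scaled by e⁻¹ ≈ 0.37.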
DBSCAN run over multiple ε values and scored by stability; the most stable clustering wins. Cluster labels come from top TF-IDF terms.
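A pure-Python sketch of the ε sweep on 1-D toy points. The stability score used here — Jaccard overlap of co-clustered pairs between adjacent ε values — is one plausible proxy, an assumption rather than Cherry's documented metric, and TF-IDF labelling is omitted:

```python
def dbscan(points, eps, min_pts=2):
    """Minimal DBSCAN over 1-D points; returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    cluster = -1
    def neighbors(i):
        return [j for j in range(len(points)) if abs(points[i] - points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise absorbed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # core point: keep expanding
                queue.extend(jn)
    return labels

def comembership(labels):
    """Set of index pairs that share a (non-noise) cluster."""
    n = len(labels)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if labels[i] == labels[j] and labels[i] != -1}

def pick_eps(points, eps_values):
    """Pick the ε whose clustering agrees most with the next ε on the grid."""
    runs = [dbscan(points, e) for e in eps_values]
    best_eps, best_score = None, -1.0
    for idx in range(len(eps_values) - 1):
        a, b = comembership(runs[idx]), comembership(runs[idx + 1])
        score = len(a & b) / (len(a | b) or 1)  # Jaccard overlap of cluster pairs
        if score > best_score:
            best_eps, best_score = eps_values[idx], score
    return best_eps

points = [0.0, 0.1, 0.2, 0.9, 5.0, 5.1]
chosen = pick_eps(points, [0.15, 0.3, 0.7])  # → 0.15 (clusters identical at 0.15 and 0.3)
```

Sweeping ε and keeping the most stable result is the same intuition behind HDBSCAN, just made explicit.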
Edges precomputed by querying HNSW for top-8 nearest neighbours of each memory and keeping pairs with cosine similarity ≥ 0.55. Force-directed 2D layout, color-coded by cluster.
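The edge rule above can be sketched as follows — brute-force cosine search stands in for the HNSW query, and the 2-D toy vectors are illustrative:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def build_edges(vectors, top_k=8, threshold=0.55):
    """Take each memory's top_k nearest neighbours by cosine similarity
    (brute force here, in place of the HNSW index) and keep pairs >= threshold."""
    edges = set()
    for i, v in enumerate(vectors):
        sims = sorted(((cosine(v, w), j) for j, w in enumerate(vectors) if j != i),
                      reverse=True)[:top_k]
        for sim, j in sims:
            if sim >= threshold:
                edges.add((min(i, j), max(i, j)))  # undirected, deduplicated
    return edges

vecs = [[1, 0], [0.9, 0.1], [0, 1], [-1, 0]]
edges = build_edges(vecs)  # → {(0, 1)}: only the near-parallel pair clears 0.55
```

Storing edges as sorted index pairs keeps the graph undirected, so each relationship is drawn once in the force-directed layout.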
Paste a URL, drag the bookmarklet, upload your browser history, or install the 50-LOC companion extension. All four routes hit the same /api/capture endpoint.
A Web Worker loads Xenova/all-MiniLM-L6-v2 (~25 MB, cached forever) and computes a 384-dim vector. Your text never leaves your device unless you opt in.
Postgres runs HNSW + BM25 in parallel. Cherry fuses the rankings via Reciprocal Rank Fusion, then boosts recent memories with the Ebbinghaus curve.
HDBSCAN-like density clustering auto-derives topics. The knowledge graph wires up related memories so you can browse your second brain visually.
The demo account is pre-loaded with memories so you can search, cluster, and graph immediately.