Upload what you know. Let the system map it, synthesize it, and surface what you missed.
MindGraph is an iterative research system: ingest documents into an inspectable knowledge graph, chat with hybrid retrieval, browse auto-compiled wiki articles, and let proactive synthesis find cross-document connections you didn't know existed.
An iterative cycle that builds understanding over time
Not a one-shot upload-and-search tool. MindGraph runs a continuous loop — each cycle deepens your graph, compiles new articles, and surfaces what deserves your attention.
Upload documents, PDFs, and transcripts. Content is auto-chunked, embedded, and run through a six-pass LLM extraction pipeline that builds a typed knowledge graph — persons, organizations, claims, evidence, hypotheses — all linked back to source text.
Chat with AI that retrieves from text chunks, graph structure, and wiki articles simultaneously. Or browse your knowledge as an interactive force-directed graph — click any node to inspect properties, trace connections, and expand neighborhoods.
After ingestion, MindGraph auto-compiles wiki articles — synthesized markdown summaries for every document and key entity in your graph. Articles link to each other with [[wikilinks]], creating a browsable knowledge base that grows with every upload.
Proactive synthesis scans your project for cross-document patterns: entity bridges that connect separate works, claim pairs that reinforce or contradict each other, concept clusters with no covering article, and theories lacking supporting evidence. Candidates are ranked and turned into wiki articles tied back to the graph.
What happens at each stage
Every step produces inspectable, traceable artifacts in your knowledge graph. Nothing is a black box.
Ingest
Drop in a PDF, research paper, transcript, or article. MindGraph splits it into overlapping chunks, generates embeddings, and runs a six-pass LLM extraction pipeline, one pass per cognitive layer (a chunking sketch follows the list below).
- Auto-chunking with configurable size and overlap
- Embedding generation for semantic search
- Six-pass extraction: Reality, Epistemic, Intent, Action, Memory, Agent
- Every extracted node links back to its source chunk
- 60 node types: Person, Org, Claim, Evidence, Hypothesis, Goal, and more
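A minimal sketch of the chunking step, assuming character-based windows with illustrative defaults; MindGraph's actual chunker, tokenizer, and embedding model are configurable and not shown, and every name here is hypothetical rather than MindGraph's API.

```python
def chunk_text(text: str, size: int = 1200, overlap: int = 200) -> list[dict]:
    """Split text into overlapping windows, keeping offsets for source provenance."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + size, len(text))
        chunks.append({"text": text[start:end], "start": start, "end": end})
        if end == len(text):
            break
        start = end - overlap  # overlap so entities spanning a boundary survive intact
    return chunks

doc = " ".join(f"This is sentence {i}." for i in range(200))
chunks = chunk_text(doc)
print(len(chunks), chunks[1]["start"] < chunks[0]["end"])  # windows overlap: True
```

Because each chunk keeps its character offsets, any node extracted from it can point back to an exact span, which is what makes "every node links back to its source chunk" cheap to verify.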
Explore
Ask questions in natural language. The AI retrieves from three sources simultaneously: semantically matched text chunks, structured graph context, and wiki articles synthesized from your documents (see the retrieval sketch after this list).
- Hybrid retrieval: chunks + graph + wiki articles in one query
- Every answer traces back to specific source documents
- Interactive graph explorer with force-directed visualization
- Click any node to inspect properties and expand neighborhoods
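A hedged sketch of the fan-out-and-merge shape of hybrid retrieval. The toy lexical score stands in for real embedding similarity, and none of these names are MindGraph's internals; they only illustrate how three sources can merge into one ranked context.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    source: str   # "chunk" | "graph" | "wiki"
    text: str
    score: float

def overlap_score(query: str, text: str) -> float:
    """Toy lexical score standing in for embedding similarity."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def hybrid_retrieve(query: str, chunks: list[str], graph_facts: list[str],
                    wiki: list[str], k: int = 8) -> list[Hit]:
    """Fan the query out to all three stores, then merge into one ranked context."""
    hits  = [Hit("chunk", t, overlap_score(query, t)) for t in chunks]       # semantic
    hits += [Hit("graph", t, overlap_score(query, t)) for t in graph_facts]  # structured
    hits += [Hit("wiki",  t, overlap_score(query, t)) for t in wiki]         # synthesized
    hits.sort(key=lambda h: h.score, reverse=True)
    return hits[:k]   # each hit keeps its source, so answers stay traceable

ctx = hybrid_retrieve(
    "who approved the budget",
    chunks=["The budget was approved in March by the board."],
    graph_facts=["Board approved 2024 Budget in March"],
    wiki=["Budget 2024: approved in March, see the Board Minutes article."],
)
print([(h.source, round(h.score, 2)) for h in ctx])
```

Keeping the source tag on every hit is what lets each answer cite the specific chunk, graph node, or article it drew from.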
Synthesize
After ingestion, MindGraph automatically compiles wiki articles: one per document and one per major entity. Each article is a synthesized markdown summary that connects to related articles via [[wikilinks]] (see the sketch after this list).
- Auto-compiled after every document ingestion
- Document articles summarize the full source with key findings
- Entity articles synthesize everything known about a person, org, or concept
- [[Wikilinks]] create a browsable, interconnected knowledge base
- Edit any article inline, or recompile to incorporate new sources
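A small sketch of wikilink resolution, assuming articles reference each other by exact title; the regex and title matching are illustrative guesses, not MindGraph's compiler.

```python
import re

WIKILINK = re.compile(r"\[\[([^\]]+)\]\]")

def resolve_links(body: str, known_titles: set[str]) -> list[str]:
    """Find [[wikilinks]] in an article and keep those that match existing articles."""
    return [t for t in WIKILINK.findall(body) if t in known_titles]

article = "Ada Lovelace collaborated with [[Charles Babbage]] on the [[Analytical Engine]]."
print(resolve_links(article, {"Charles Babbage", "Analytical Engine"}))
# ['Charles Babbage', 'Analytical Engine'] -> each match becomes a browsable edge
```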
Discover
Proactive synthesis runs batch queries against your project's graph to surface non-obvious connections (one such query is sketched after this list). Candidates are ranked by an LLM and become wiki articles covering each idea cluster, backed by the full epistemic provenance of the source graph.
- Entity bridges: people, orgs, or events that connect separate documents
- Claim pairs: assertions that reinforce or contradict each other across sources
- Concept clusters: ideas referenced across many documents with no covering article
- Theory gaps: hypotheses with downstream claims but weak supporting evidence
- Every article links back to the source claims in the graph — nothing is ungrounded
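As an example of one signal type, here is what an entity-bridge query can reduce to: find entities mentioned in two or more documents. The data shape and threshold are assumptions, and the LLM ranking step that follows is not shown.

```python
from collections import defaultdict

def entity_bridges(mentions: list[tuple[str, str]]) -> dict[str, set[str]]:
    """mentions: (entity, document) pairs; return entities spanning 2+ documents."""
    docs_by_entity = defaultdict(set)
    for entity, doc in mentions:
        docs_by_entity[entity].add(doc)
    return {e: d for e, d in docs_by_entity.items() if len(d) >= 2}

mentions = [("DARPA", "grant_report.pdf"), ("DARPA", "interview.txt"),
            ("J. Smith", "interview.txt")]
print(entity_bridges(mentions))
# {'DARPA': {'grant_report.pdf', 'interview.txt'}} -> a bridge candidate for ranking
```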
Static document stores vs. iterative research
Most tools stop at “upload and search.” MindGraph builds understanding that compounds over time.
Static document stores
- Upload documents and search. That's it.
- Returns text chunks ranked by similarity — no structure
- No awareness of contradictions across documents
- No mechanism to synthesize knowledge across sources
- You do all the synthesis work manually
MindGraph iterative research
- Upload, extract, explore, synthesize, discover — a continuous cycle
- Returns text chunks AND typed graph context AND wiki articles
- Auto-compiled wiki articles synthesize knowledge across documents
- Proactive signals surface contradictions, bridges, and gaps
- Every claim traces back to its source with confidence scores
Inspectable graph
Every entity, claim, and relationship is visible in a force-directed explorer. No opaque embeddings — you see what the system knows.
Source provenance
Every claim links to the chunk it was extracted from, which links to the document it came from. Trace any assertion to its origin.
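The provenance chain in miniature, with hypothetical types: a claim points at the chunk it was extracted from, and the chunk points at its document and character offsets.

```python
from dataclasses import dataclass

@dataclass
class Document:
    path: str

@dataclass
class Chunk:
    doc: Document
    start: int
    end: int

@dataclass
class Claim:
    text: str
    chunk: Chunk

def trace(claim: Claim) -> str:
    """Walk claim -> chunk -> document: the chain behind every traceable answer."""
    c = claim.chunk
    return f"{claim.text!r} <- chars {c.start}-{c.end} of {c.doc.path}"

doc = Document("minutes_2024.pdf")
print(trace(Claim("The budget was approved in March.", Chunk(doc, 410, 1610))))
```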
Auto-compiled wiki
Documents and entities get synthesized markdown articles with [[wikilinks]] — a browsable knowledge base that grows with every upload.
Confidence tracking
Claims carry confidence scores that reflect evidence strength. Scores are capped by evidence quality — you can't have high confidence on thin evidence.
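In the simplest reading, the cap is just a min(); MindGraph's real scoring model may weigh more factors, so treat this as an assumption-laden sketch of the rule, not the implementation.

```python
def capped_confidence(model_prior: float, evidence_strength: float) -> float:
    """A claim's confidence can never exceed the strength of its supporting evidence."""
    return min(model_prior, evidence_strength)

# an LLM that is 95% sure, backed by a single weak source, still reports 0.40
print(capped_confidence(model_prior=0.95, evidence_strength=0.40))
```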
Cross-document discovery
Proactive synthesis finds entity bridges, dialectical pairs, and concept clusters across your documents — connections you didn't know existed.
Hybrid retrieval
Chat retrieves from chunks (semantic), graph (structured), and wiki articles (synthesized) simultaneously. Three retrieval modes in one query.