Upload what you know. Let the system map it, synthesize it, and surface what you missed.

MindGraph is an iterative research system: ingest documents into an inspectable knowledge graph, chat with hybrid retrieval, browse auto-compiled wiki articles, and let proactive synthesis find cross-document connections you didn't know existed.

Inspectable knowledge graph · Auto-compiled wiki articles · Cross-document synthesis · Source provenance on every claim

An iterative cycle that builds understanding over time

Not a one-shot upload-and-search tool. MindGraph runs a continuous loop — each cycle deepens your graph, compiles new articles, and surfaces what deserves your attention.

1. Ingest

Upload documents, PDFs, and transcripts. Content is auto-chunked, embedded, and run through a six-pass LLM extraction pipeline that builds a typed knowledge graph — persons, organizations, claims, evidence, hypotheses — all linked back to source text.

2. Explore

Chat with AI that retrieves from text chunks, graph structure, and wiki articles simultaneously. Or browse your knowledge as an interactive force-directed graph — click any node to inspect properties, trace connections, and expand neighborhoods.

3. Synthesize

After ingestion, MindGraph auto-compiles wiki articles — synthesized markdown summaries for every document and key entity in your graph. Articles link to each other with [[wikilinks]], creating a browsable knowledge base that grows with every upload.

4. Discover

Proactive synthesis scans your project for cross-document patterns: entity bridges that connect separate works, claim pairs that reinforce or contradict each other, concept clusters with no covering article, and theories lacking supporting evidence. Candidates are ranked and turned into wiki articles tied back to the graph.

The cycle repeats — each iteration deepens your understanding

What happens at each stage

Every step produces inspectable, traceable artifacts in your knowledge graph. Nothing is a black box.

Ingest

Drop in a PDF, research paper, transcript, or article. MindGraph splits it into overlapping chunks, generates embeddings, and runs a six-pass LLM extraction pipeline across all cognitive layers.

  • Auto-chunking with configurable size and overlap
  • Embedding generation for semantic search
  • Six-pass extraction: Reality, Epistemic, Intent, Action, Memory, Agent
  • Every extracted node links back to its source chunk
  • 60 node types: Person, Org, Claim, Evidence, Hypothesis, Goal, and more
What gets extracted

  • Persons & Orgs: named entities with aliases
  • Claims: testable assertions with confidence
  • Evidence: supporting data points
  • Hypotheses: proposed explanations
  • Goals & Decisions: intent and commitments
  • Theories & Patterns: epistemic structures
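The chunking step above can be sketched in a few lines. The chunk size and overlap values below are illustrative, not MindGraph's actual defaults:

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping chunks so an entity spanning a boundary
    still appears intact in at least one chunk. Sizes are illustrative."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded and fed through the extraction passes, with its chunk ID attached to every node so extracted entities stay linked to their source text.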

Explore

Ask questions in natural language. The AI retrieves from three sources simultaneously: semantically matched text chunks, structured graph context, and wiki articles synthesized from your documents.

  • Hybrid retrieval: chunks + graph + wiki articles in one query
  • Every answer traces back to specific source documents
  • Interactive graph explorer with force-directed visualization
  • Click any node to inspect properties and expand neighborhoods
Retrieval sources

  • Text chunks: semantic similarity search over document segments, scored and ranked
  • Graph structure: entities, claims, evidence, and their typed relationships
  • Wiki articles: synthesized summaries for documents and entities, with wikilinks
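One standard way to merge ranked results from several retrievers is reciprocal rank fusion. The sketch below uses RRF purely as an illustration of combining the three sources in one query; MindGraph's actual scoring may differ:

```python
from collections import defaultdict

def fuse_results(ranked_lists: dict[str, list[str]], k: int = 60) -> list[str]:
    """Reciprocal-rank fusion: each retriever contributes 1/(k + rank)
    per result, so items surfaced by multiple sources rise to the top."""
    scores: dict[str, float] = defaultdict(float)
    for source, results in ranked_lists.items():
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

merged = fuse_results({
    "chunks": ["c1", "c2", "c3"],   # semantic similarity hits
    "graph": ["c2", "c4"],          # graph-structure matches
    "wiki": ["c2", "c1"],           # wiki-article matches
})
# "c2" ranks first because all three retrievers surfaced it
```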

Synthesize

After ingestion, MindGraph automatically compiles wiki articles — one per document and one per major entity. Each article is a synthesized markdown summary that connects to related articles via [[wikilinks]].

  • Auto-compiled after every document ingestion
  • Document articles summarize the full source with key findings
  • Entity articles synthesize everything known about a person, org, or concept
  • [[Wikilinks]] create a browsable, interconnected knowledge base
  • Edit any article inline, or recompile to incorporate new sources
What synthesis produces

  • Document articles: a synthesized summary of "Q1 Research Report" with key findings, extracted entities, and follow-up questions
  • Entity articles: everything known about "Deng Xiaoping" across all ingested documents — events, claims, relationships, timeline
  • Wikilink graph: articles link to each other — click [[Cultural Revolution]] in Deng's article to jump to that entity's synthesized page
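The [[wikilink]] syntax can be parsed with a small regex. A minimal sketch; support for the `[[Target|display text]]` alias form is an assumption borrowed from common wiki conventions, not a documented MindGraph feature:

```python
import re

# matches [[Target]] and [[Target|display text]], capturing only the target
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]+)?\]\]")

def extract_wikilinks(markdown: str) -> list[str]:
    """Return the link targets referenced by a markdown article."""
    return WIKILINK.findall(markdown)

article = "Deng led reforms after the [[Cultural Revolution]]; see also [[Zhou Enlai]]."
extract_wikilinks(article)
# → ['Cultural Revolution', 'Zhou Enlai']
```

Collecting these targets across all articles yields the wikilink graph that makes the knowledge base browsable.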

Discover


Proactive synthesis runs batch queries against your project's graph to surface non-obvious connections. Candidates are ranked by an LLM and become wiki articles covering each idea cluster — backed by the full epistemic provenance of the source graph.

  • Entity bridges: people, orgs, or events that connect separate documents
  • Claim pairs: assertions that reinforce or contradict each other across sources
  • Concept clusters: ideas referenced across many documents with no covering article
  • Theory gaps: hypotheses with downstream claims but weak supporting evidence
  • Every article links back to the source claims in the graph — nothing is ungrounded
Signal types

  • Entity bridges: "Henry Kissinger" appears in 9 of 10 project documents — a key cross-document hub worth a deep article
  • Dialectical pairs: "Economic interdependence promotes peace" (Yergin) contradicts "Interdependence creates vulnerability" (Kissinger)
  • Concept clusters: "Deterrence" referenced across 6 documents with no covering article — a synthesis candidate
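At its core, the entity-bridge signal is a document-frequency count over extracted entities. A simplified sketch; the actual system reportedly ranks candidates with an LLM on top of queries like this:

```python
from collections import defaultdict

def find_entity_bridges(doc_entities: dict[str, set[str]],
                        min_docs: int = 2) -> list[tuple[str, int]]:
    """Rank entities by how many distinct documents mention them.
    Entities above min_docs are candidate cross-document bridges."""
    counts: dict[str, int] = defaultdict(int)
    for doc, entities in doc_entities.items():
        for entity in entities:
            counts[entity] += 1
    bridges = [(e, n) for e, n in counts.items() if n >= min_docs]
    return sorted(bridges, key=lambda pair: pair[1], reverse=True)

find_entity_bridges({
    "doc1": {"Kissinger", "Nixon"},
    "doc2": {"Kissinger", "Deng Xiaoping"},
    "doc3": {"Kissinger"},
})
# → [('Kissinger', 3)] — the only entity spanning multiple documents
```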

Static document stores vs. iterative research

Most tools stop at "upload and search." MindGraph builds understanding that compounds over time.

Static document stores

  • Upload documents and search. That's it.
  • Returns text chunks ranked by similarity — no structure
  • No awareness of contradictions across documents
  • No mechanism to synthesize knowledge across sources
  • You do all the synthesis work manually

MindGraph iterative research

  • Upload, extract, explore, synthesize, discover — a continuous cycle
  • Returns text chunks AND typed graph context AND wiki articles
  • Auto-compiled wiki articles synthesize knowledge across documents
  • Proactive signals surface contradictions, bridges, and gaps
  • Every claim traces back to its source with confidence scores

Inspectable graph

Every entity, claim, and relationship is visible in a force-directed explorer. No opaque embeddings — you see what the system knows.

Source provenance

Every claim links to the chunk it was extracted from, which links to the document it came from. Trace any assertion to its origin.
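That provenance chain can be modeled as three linked records. The field names here are hypothetical, chosen for illustration rather than taken from MindGraph's schema:

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    title: str

@dataclass
class Chunk:
    chunk_id: str
    document: Document   # which document this chunk was cut from
    text: str

@dataclass
class Claim:
    claim_id: str
    statement: str
    source_chunk: Chunk  # provenance link back to the extracted text

def trace_to_origin(claim: Claim) -> str:
    """Walk the chain: claim -> chunk -> document."""
    chunk = claim.source_chunk
    return f'"{claim.statement}" <- chunk {chunk.chunk_id} <- "{chunk.document.title}"'

doc = Document("d1", "Q1 Research Report")
chunk = Chunk("c7", doc, "Revenue grew 12% quarter over quarter.")
claim = Claim("cl1", "Revenue grew 12%", chunk)
trace_to_origin(claim)
```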

Auto-compiled wiki

Documents and entities get synthesized markdown articles with [[wikilinks]] — a browsable knowledge base that grows with every upload.

Confidence tracking

Claims carry confidence scores that reflect evidence strength. Scores are capped by evidence quality — you can't have high confidence on thin evidence.
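A cap rule like this can be expressed in one line. The 0-to-1 scale and the min-based cap are assumptions about how such a rule might work, not MindGraph's documented formula:

```python
def capped_confidence(raw_confidence: float, evidence_strength: float) -> float:
    """Bound a claim's confidence by its supporting evidence (both 0..1).
    A bold claim over thin evidence gets pulled down to the evidence's level."""
    return min(raw_confidence, evidence_strength)

capped_confidence(0.95, 0.4)  # thin evidence caps a bold claim at 0.4
```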

Cross-document discovery

Proactive synthesis finds entity bridges, dialectical pairs, and concept clusters across your documents — connections you didn't know existed.

Hybrid retrieval

Chat retrieves from chunks (semantic), graph (structured), and wiki articles (synthesized) simultaneously. Three retrieval modes in one query.

Iterative research, grounded in an inspectable graph.

Ingestion, AI chat, wiki articles, graph exploration, project-scoped synthesis, and autonomous research agents are all available today.

Free tier: 500 credits/month, unlimited queries. No credit card required.