See your knowledge graph come alive
Chat with an AI that reasons over both text and graph structure, or explore entities and relationships in an interactive visual explorer.
What did we learn about customer retention from the Q1 report?
Based on your knowledge graph, the Q1 report identifies 3 key findings about customer retention:
1. NPS scores improved by 12 points after onboarding redesign
2. Churn rate dropped to 4.2% from 6.8% in Q4
3. User survey shows 89% satisfaction with new features
The AI uses hybrid retrieval — combining semantically matched text chunks with structured graph context — to give grounded, traceable answers.
Structure memory the way agents actually reason
MindGraph organizes agent knowledge into deterministic layers, mirroring how cognitive systems reason from raw observation to final action.
Vector stores retrieve text. MindGraph retrieves knowledge.
Most agent memory stops at embeddings and similarity search. MindGraph ingests your documents, extracts structured knowledge, and returns both chunks and typed graph context.
Typical vector memory
- Returns text chunks ranked by semantic similarity
- Great for basic context, but lacks ontological structure
- Requires the LLM to parse raw text and guess relationships
- Context strings can conflict without clear versioning or lineage
- Every new agent has to parse raw text logs to infer previous state
MindGraph
- Ingest documents and PDFs — auto-chunked, embedded, and extracted into typed nodes
- Retrieve chunks + the structured knowledge graph extracted from them in one call
- Every fact explicitly links to its source chunk and supporting evidence
- Traverse specific relationships (e.g., fetch tasks blocking a goal)
- Agents cleanly load context, avoiding prompt bloat and hallucination
[ "The user mentioned wanting to finish the report", "Weekly goals include shipping v2 and fixing bugs", "She said her goal is to improve onboarding flow", ]
{
  chunks: [{ content: "...", score: 0.94 }],
  graph: {
    nodes: [
      { type: "Goal", label: "Ship v2", status: "active" },
      { type: "Evidence", label: "3 PRs merged this week" },
    ],
    edges: [{ type: "Supports", from: "Evidence", to: "Goal" }]
  }
}
How an agent uses structured memory
A typical write → read → query cycle. This is what a single task looks like.
graph.retrieve(action="active_goals")
Before doing anything, the agent loads what it already knows: goals, open questions, recent findings.
graph.session(action="open", label="Research task")
Everything the agent writes during this task is grouped together for later review.
graph.argue(claim={...}, evidence=[...])
The agent stores a conclusion with the evidence that supports it, not just raw text.
graph.deliberate(action="open_decision", label="Option A or B?")
Instead of burying a choice in a chat log, the agent records it as a queryable decision.
graph.distill(label="Session summary", sources=[...])
The session is compressed into a summary that any future agent can pick up and continue from.
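The five steps above can be sketched end to end. This is a minimal in-memory stand-in for the client, so the cycle actually runs in isolation; the method names and option shapes (`retrieve`, `session`, `argue`, `deliberate`, `distill`) are assumptions inferred from the calls shown here, not the SDK's documented signatures.

```typescript
// In-memory stand-in for the MindGraph client, used only to illustrate
// the write → read → query cycle; real calls go over HTTP.
type MemNode = { type: string; label: string; session?: string };

class StubGraph {
  nodes: MemNode[] = [];
  private current = "";

  // Step 1: load prior state before acting
  retrieve(opts: { action: string }): MemNode[] {
    return this.nodes.filter(n => n.type === "Goal");
  }

  // Step 2: group this task's writes under one session
  session(opts: { action: "open"; label: string }): void {
    this.current = opts.label;
  }

  // Step 3: store a conclusion together with its supporting evidence
  argue(opts: { claim: string; evidence: string[] }): void {
    this.nodes.push({ type: "Claim", label: opts.claim, session: this.current });
    for (const e of opts.evidence)
      this.nodes.push({ type: "Evidence", label: e, session: this.current });
  }

  // Step 4: record a choice as a queryable decision node
  deliberate(opts: { action: "open_decision"; label: string }): void {
    this.nodes.push({ type: "Decision", label: opts.label, session: this.current });
  }

  // Step 5: compress the session into a summary node
  distill(opts: { label: string; sources: string[] }): MemNode {
    const summary = { type: "Summary", label: opts.label, session: this.current };
    this.nodes.push(summary);
    return summary;
  }
}

const graph = new StubGraph();
graph.retrieve({ action: "active_goals" });
graph.session({ action: "open", label: "Research task" });
graph.argue({ claim: "Onboarding drives retention", evidence: ["NPS +12 after redesign"] });
graph.deliberate({ action: "open_decision", label: "Option A or B?" });
const summary = graph.distill({ label: "Session summary", sources: ["Research task"] });
```

Because every write is tagged with the open session, the distilled summary and its supporting nodes stay queryable as one unit.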
Outcomes that transform agent reliability
These aren't hypothetical use cases. They are standard patterns unlocked immediately by having a distinct layer for persistent agent state.
AI Chat with Hybrid Retrieval
Ask questions in natural language. The AI retrieves both semantically matched text chunks and structured graph context — entities, claims, evidence — to give grounded, traceable answers.
Visual Knowledge Explorer
Browse your knowledge graph as an interactive force-directed visualization. Nodes colored by type, edges showing relationships. Click any node to inspect properties, expand neighborhoods, and trace connections.
Automatic Knowledge Extraction
Upload a PDF, article, or transcript. MindGraph chunks the text, embeds it, and runs a six-pass LLM pipeline to extract entities, claims, goals, and more — all linked back to source chunks.
Auto-Compiled Wiki Articles
After ingestion, MindGraph compiles wiki articles from your documents and entities — synthesized markdown summaries with wikilinks that connect your knowledge into a browsable, human-readable layer.
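A consumer can index those connections directly. This sketch assumes wikilinks use the common `[[Page Title]]` syntax; the actual compiled markdown format isn't specified here, so treat the pattern as an assumption.

```typescript
// Extract [[wikilink]] targets from compiled markdown, assuming the
// common [[Page Title]] syntax.
function extractWikilinks(markdown: string): string[] {
  const links: string[] = [];
  const pattern = /\[\[([^\]]+)\]\]/g;
  let match: RegExpExecArray | null;
  while ((match = pattern.exec(markdown)) !== null) {
    links.push(match[1]); // captured title without the brackets
  }
  return links;
}

const article = "Churn fell after the [[Onboarding Redesign]]; see [[Q1 Research Report]].";
const links = extractWikilinks(article);
// links → ["Onboarding Redesign", "Q1 Research Report"]
```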
Autonomous Research Agents
Spin up scheduled research agents that read the web, reason over your graph, and write structured knowledge back — every node stamped with its author. Pause, budget, and audit them from the dashboard.
Projects & Cross-Document Synthesis
Group documents into projects, then let MindGraph mine cross-document signals — entity bridges, dialectical pairs, idea clusters — and auto-generate synthesis articles covering the most important threads.
Explainable Decisions
If an agent deletes a feature or buys a stock, you need to know why. MindGraph traces every decision node back to the exact evidence that supported it in the epistemic layer.
Built-in Hallucination Resistance
Agents are forced to pair claims with evidence nodes. By separating facts (Reality) from beliefs (Epistemic), your system maintains a verified source of truth.
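The claim-evidence pairing can be checked mechanically. A sketch of that check, using the `Claim`/`Evidence`/`Supports` shapes from the example response earlier on this page; the validation rule itself is an illustration, not a documented API.

```typescript
type GraphNode = { id: string; type: string; label: string };
type GraphEdge = { type: string; from: string; to: string };

// Return every Claim node that lacks at least one Supports edge from an
// Evidence node: the hallucination-resistance check, sketched locally.
function unsupportedClaims(nodes: GraphNode[], edges: GraphEdge[]): GraphNode[] {
  const evidenceIds = new Set(nodes.filter(n => n.type === "Evidence").map(n => n.id));
  return nodes
    .filter(n => n.type === "Claim")
    .filter(claim =>
      !edges.some(e => e.type === "Supports" && e.to === claim.id && evidenceIds.has(e.from))
    );
}

const nodes: GraphNode[] = [
  { id: "c1", type: "Claim", label: "Onboarding drives retention" },
  { id: "c2", type: "Claim", label: "Pricing drives churn" },
  { id: "e1", type: "Evidence", label: "NPS +12 after redesign" },
];
const edges: GraphEdge[] = [{ type: "Supports", from: "e1", to: "c1" }];
const flagged = unsupportedClaims(nodes, edges);
// flagged contains only c2, the claim with no supporting evidence
```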
Ingest documents. Retrieve structured knowledge.
Upload PDFs, articles, or transcripts. MindGraph chunks, embeds, and runs a six-pass extraction pipeline to build a typed knowledge graph automatically. Then retrieve both raw chunks and structured context in a single call.
- PDF, text, and transcript ingestion with async job tracking
- Six-pass LLM extraction: entities, claims, goals, and more
- Retrieve chunks + connected graph nodes in one call
- 60 node types, 95 edge types, semantic search, and graph traversal built in
import { MindGraph } from "mindgraph"

const graph = new MindGraph({
  baseUrl: "https://api.mindgraph.cloud",
  apiKey: process.env.MINDGRAPH_API_KEY!,
})

// Upload a document — auto-chunked, embedded, and extracted
const { job_id } = await graph.ingestDocument({
  content: pdfText,
  title: "Q1 Research Report",
  layers: ["reality", "epistemic"],
})

// Poll until processing completes
let job = await graph.getJob(job_id)
while (job.status === "pending" || job.status === "processing") {
  await new Promise(r => setTimeout(r, 2000))
  job = await graph.getJob(job_id)
}
console.log(`Extracted ${job.progress.nodes_created} nodes`)
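Once extraction finishes, a single retrieval returns chunks plus graph context. This sketch shows one way to fold that response into an LLM prompt; the response shape is inferred from the example payload earlier on this page, so the field names are assumptions rather than the SDK's published types.

```typescript
// Shape of a hybrid-retrieval response, inferred from the example
// payload above; field names are assumptions, not published types.
type RetrieveResponse = {
  chunks: { content: string; score: number }[];
  graph: {
    nodes: { type: string; label: string; status?: string }[];
    edges: { type: string; from: string; to: string }[];
  };
};

// Build a compact context string for an LLM prompt: top-scoring chunks
// plus typed graph facts, so the model gets both text and structure.
function toPromptContext(res: RetrieveResponse, maxChunks = 3): string {
  const chunkLines = res.chunks
    .slice()
    .sort((a, b) => b.score - a.score)
    .slice(0, maxChunks)
    .map(c => `- ${c.content}`);
  const factLines = res.graph.edges.map(e => `- ${e.from} -[${e.type}]-> ${e.to}`);
  return ["Text context:", ...chunkLines, "Graph facts:", ...factLines].join("\n");
}

const ctx = toPromptContext({
  chunks: [{ content: "Ship v2 is the active goal", score: 0.94 }],
  graph: {
    nodes: [{ type: "Goal", label: "Ship v2", status: "active" }],
    edges: [{ type: "Supports", from: "Evidence", to: "Goal" }],
  },
});
```

Feeding both layers to the model is what makes answers traceable: every graph fact in the prompt links back to a source chunk.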
Hand this prompt to your AI coding assistant.
Click to copy an implementation prompt that gives Claude, Cursor, or Copilot exactly what it needs to integrate MindGraph.
Simple, credit-based pricing
All plans include unlimited CRUD, queries, retrieval, and embeddings. Credits are only consumed by ingestion, chat, and wiki compilation.
Free
- 500 credits/month
- 100 MB storage
- 10,000 API calls/month
- Unlimited CRUD & queries
- AI chat, graph explorer, wiki
- Projects & synthesis
- TypeScript & Python SDKs
Pro
- 2,000 credits/month
- 5 GB storage
- 1M API calls/month
- Autonomous research agents
- Credit top-ups ($10 / 500 cr)
- Up to 5 team members
- Up to 3 graphs
Credit costs: document ingestion 2/page, transcripts 4/page, chat 3/message, wiki articles 6/article. Unused credits roll over (up to 2x your monthly allowance).
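With those rates, estimating a monthly bill is simple arithmetic. A sketch, using the per-unit costs quoted above; the function itself is illustrative, not part of the SDK.

```typescript
// Credit rates from the pricing note above.
const RATES = { docPage: 2, transcriptPage: 4, chatMessage: 3, wikiArticle: 6 };

function estimateCredits(usage: {
  docPages?: number;
  transcriptPages?: number;
  chatMessages?: number;
  wikiArticles?: number;
}): number {
  return (
    (usage.docPages ?? 0) * RATES.docPage +
    (usage.transcriptPages ?? 0) * RATES.transcriptPage +
    (usage.chatMessages ?? 0) * RATES.chatMessage +
    (usage.wikiArticles ?? 0) * RATES.wikiArticle
  );
}

// Example: a 20-page PDF, 50 chat messages, and 5 wiki articles
// costs 20*2 + 50*3 + 5*6 = 220 credits, within the Free tier's 500.
const monthly = estimateCredits({ docPages: 20, chatMessages: 50, wikiArticles: 5 });
```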