Turn your documents into a knowledge graph that thinks with you.

Upload PDFs and papers, chat with AI that reasons over your knowledge, and explore connections visually — or build on it with a full API and TypeScript & Python SDKs.

Patent-pending cognitive knowledge graph. Cloud API available.

See your knowledge graph come alive

Chat with an AI that reasons over both text and graph structure, or explore entities and relationships in an interactive visual explorer.

MindGraph Chat

What did we learn about customer retention from the Q1 report?

Based on your knowledge graph, the Q1 report identifies 3 key findings about customer retention:

1. NPS scores improved by 12 points after onboarding redesign

2. Churn rate dropped to 4.2% from 6.8% in Q4

3. User survey shows 89% satisfaction with new features

Retrieved Graph Context
Nodes: Q1 Report · Retention ↑ · Reduce Churn · NPS +12 · Churn 4.2% · Onboarding
Node types: Person · Claim · Evidence · Goal · Concept

The AI uses hybrid retrieval — combining semantically matched text chunks with structured graph context — to give grounded, traceable answers.
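To make the idea concrete, here is a minimal sketch of how an agent might assemble a grounded prompt from a hybrid retrieval result. The type names and the `buildPromptContext` helper are illustrative assumptions, not the SDK's actual schema; only the chunks-plus-graph shape mirrors the response shown further down this page.

```typescript
// Assumed shape of a hybrid retrieval result: text chunks plus typed graph context.
interface Chunk { content: string; score: number }
interface GraphNode { type: string; label: string }
interface GraphEdge { type: string; from: string; to: string }
interface HybridContext { chunks: Chunk[]; graph: { nodes: GraphNode[]; edges: GraphEdge[] } }

// Flatten both retrieval channels into one grounded prompt context for the LLM.
function buildPromptContext(ctx: HybridContext): string {
  const text = ctx.chunks
    .map(c => `[chunk score=${c.score}] ${c.content}`)
    .join("\n");
  const facts = ctx.graph.edges
    .map(e => `${e.from} -[${e.type}]-> ${e.to}`)
    .join("\n");
  return `Text evidence:\n${text}\n\nGraph facts:\n${facts}`;
}
```

Because the graph facts arrive as typed triples rather than free text, the model can cite them directly instead of guessing relationships from prose.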

Structure memory the way agents actually reason

MindGraph organizes agent knowledge into deterministic layers, mirroring how cognitive systems reason from raw observation to final action.

Reality · ScraperAgent
Collects incoming facts and ground-truth observations.
Node types: Source, Observation, Entity

Epistemic · PlanningAgent
Synthesizes facts into testable arguments and hypotheses.
Node types: Claim, Evidence, Hypothesis

Action · DevAgent
Executes concrete flows explicitly derived from intents.
Node types: Flow, Control, Task

Memory · ReviewAgent
Distills raw execution traces into permanent summaries.
Node types: Session, Summary
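The layer-to-node-type mapping above can be sketched as a small routing table. This is an illustrative model only: the names mirror the cards on this page, and the real SDK's enums and type lists may differ.

```typescript
// Illustrative map of the four layers to the node types shown above.
const LAYERS = {
  reality:   ["Source", "Observation", "Entity"],
  epistemic: ["Claim", "Evidence", "Hypothesis"],
  action:    ["Flow", "Control", "Task"],
  memory:    ["Session", "Summary"],
} as const;

type Layer = keyof typeof LAYERS;

// Route a node type to its layer, so writes land in the right place.
function layerOf(nodeType: string): Layer | undefined {
  return (Object.keys(LAYERS) as Layer[]).find(l =>
    (LAYERS[l] as readonly string[]).includes(nodeType)
  );
}
```

Deterministic routing like this is what keeps raw observations (Reality) from being mistaken for beliefs (Epistemic) downstream.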

Vector stores retrieve text. MindGraph retrieves knowledge.

Most agent memory stops at embeddings and similarity search. MindGraph ingests your documents, extracts structured knowledge, and returns both chunks and typed graph context.

Typical vector memory

  • Returns text chunks ranked by semantic similarity
  • Great for basic context, but lacks ontological structure
  • Requires the LLM to parse raw text and guess relationships
  • Context strings can conflict without clear versioning or lineage
  • Every new agent has to parse raw text logs to infer previous state

MindGraph

  • Ingest documents and PDFs — auto-chunked, embedded, and extracted into typed nodes
  • Retrieve chunks + the structured knowledge graph extracted from them in one call
  • Every fact explicitly links to its source chunk and supporting evidence
  • Traverse specific relationships (e.g., fetch tasks blocking a goal)
  • Agents cleanly load context, avoiding prompt bloat and hallucination
vector_store.query("goals") →
[
  "The user mentioned wanting to finish the report",
  "Weekly goals include shipping v2 and fixing bugs",
  "She said her goal is to improve onboarding flow",
]
LLM Output (Uncertain)
Agent attempts to pursue all 3 vague goals simultaneously, guessing which is active, causing workflow drift.
graph.retrieveContext("goals") →
{ chunks: [{ content: "...", score: 0.94 }],
  graph: {
    nodes: [
      { type: "Goal", label: "Ship v2", status: "active" },
      { type: "Evidence", label: "3 PRs merged this week" },
    ],
    edges: [{ type: "Supports", from: "Evidence", to: "Goal" }]
  }
}
LLM Output (Deterministic)
Agent gets matched text chunks AND typed graph context — goals with supporting evidence and relationships — in a single call.

How an agent uses structured memory

A typical read → write → distill cycle. This is what a single task looks like.

Load context
graph.retrieve(action="active_goals")

Before doing anything, the agent loads what it already knows: goals, open questions, recent findings

Open session
graph.session(action="open", label="Research task")

Everything the agent writes during this task is grouped together for later review

Record findings
graph.argue(claim={...}, evidence=[...])

The agent stores a conclusion with the evidence that supports it — not just raw text

Make a decision
graph.deliberate(action="open_decision", label="Option A or B?")

Instead of burying a choice in a chat log, the agent records it as a queryable decision

Distill & close
graph.distill(label="Session summary", sources=[...])

The session is compressed into a summary that any future agent can pick up and continue from

Next time any agent picks up this project, it can start from the distilled summary — not from zero.
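Here is the whole cycle in one place, using an in-memory stand-in for the client so the flow is self-contained. The real SDK sends these operations to the API; the method names follow the steps above, but the signatures and internals here are assumptions for illustration.

```typescript
// In-memory sketch of one task cycle: load context, open a session,
// record findings, record a decision, then distill and close.
type MemNode = { type: string; label: string; session?: string };

class GraphSketch {
  nodes: MemNode[] = [];
  current?: string;

  retrieve(_action: string): MemNode[] {                // 1. load context
    return this.nodes.filter(n => n.type === "Goal");
  }
  session(action: "open" | "close", label: string) {    // 2. open session
    this.current = action === "open" ? label : undefined;
  }
  argue(claim: string, evidence: string[]) {            // 3. record findings
    this.nodes.push({ type: "Claim", label: claim, session: this.current });
    for (const e of evidence) {
      this.nodes.push({ type: "Evidence", label: e, session: this.current });
    }
  }
  deliberate(label: string) {                           // 4. record a decision
    this.nodes.push({ type: "Decision", label, session: this.current });
  }
  distill(label: string) {                              // 5. compress the session
    const n = this.nodes.filter(x => x.session === this.current).length;
    this.nodes.push({ type: "Summary", label: `${label} (${n} nodes)` });
    this.session("close", "");
  }
}
```

The point of the sketch: every write during the task carries the session label, so the closing summary can account for exactly what was produced, and the next agent can start from that summary.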

Outcomes that transform agent reliability

These aren't hypothetical use cases. They are standard patterns unlocked immediately by having a distinct layer for persistent agent state.

AI Chat with Hybrid Retrieval

Ask questions in natural language. The AI retrieves both semantically matched text chunks and structured graph context — entities, claims, evidence — to give grounded, traceable answers.

Visual Knowledge Explorer

Browse your knowledge graph as an interactive force-directed visualization. Nodes colored by type, edges showing relationships. Click any node to inspect properties, expand neighborhoods, and trace connections.

Automatic Knowledge Extraction

Upload a PDF, article, or transcript. MindGraph chunks the text, embeds it, and runs a six-pass LLM pipeline to extract entities, claims, goals, and more — all linked back to source chunks.

Auto-Compiled Wiki Articles

After ingestion, MindGraph compiles wiki articles from your documents and entities — synthesized markdown summaries with wikilinks that connect your knowledge into a browsable, human-readable layer.

Autonomous Research Agents

Spin up scheduled research agents that read the web, reason over your graph, and write structured knowledge back — every node stamped with its author. Pause, budget, and audit them from the dashboard.

Projects & Cross-Document Synthesis

Group documents into projects, then let MindGraph mine cross-document signals — entity bridges, dialectical pairs, idea clusters — and auto-generate synthesis articles covering the most important threads.

Explainable Decisions

If an agent deletes a feature or buys a stock, you need to know why. MindGraph traces every decision node back to the exact evidence that supported it in the epistemic layer.
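A minimal sketch of that trace, assuming a "Supports" edge type and an acyclic graph: walk edges backwards from the decision until you reach the evidence that grounds it. The edge schema and function name here are illustrative, not the product's actual API.

```typescript
// Walk "Supports" edges backwards from a decision node to collect
// the claims and evidence that justified it. Assumes an acyclic graph.
type TraceEdge = { type: string; from: string; to: string };

function whyDecision(decision: string, edges: TraceEdge[]): string[] {
  const trace: string[] = [];
  const stack = [decision];
  while (stack.length) {
    const node = stack.pop()!;
    for (const e of edges) {
      if (e.to === node && e.type === "Supports") {
        trace.push(e.from);
        stack.push(e.from); // keep walking toward the root evidence
      }
    }
  }
  return trace;
}
```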

Built-in Hallucination Resistance

Agents are forced to pair claims with evidence nodes. By separating facts (Reality) from beliefs (Epistemic), your system maintains a verified source of truth.

Ingest documents. Retrieve structured knowledge.

Upload PDFs, articles, or transcripts. MindGraph chunks, embeds, and runs a six-pass extraction pipeline to build a typed knowledge graph automatically. Then retrieve both raw chunks and structured context in a single call.

  • PDF, text, and transcript ingestion with async job tracking
  • Six-pass LLM extraction: entities, claims, goals, and more
  • Retrieve chunks + connected graph nodes in one call
  • 60 node types, 95 edge types, semantic search, and graph traversal built in
import { MindGraph } from "mindgraph"

const graph = new MindGraph({
  baseUrl: "https://api.mindgraph.cloud",
  apiKey: process.env.MINDGRAPH_API_KEY!,
})

// Upload a document — auto-chunked, embedded, and extracted
const { job_id } = await graph.ingestDocument({
  content: pdfText,
  title: "Q1 Research Report",
  layers: ["reality", "epistemic"],
})

// Poll until processing completes
let job = await graph.getJob(job_id)
while (job.status === "pending" || job.status === "processing") {
  await new Promise(r => setTimeout(r, 2000))
  job = await graph.getJob(job_id)
}

console.log(`Extracted ${job.progress.nodes_created} nodes`)

Hand this prompt to your AI coding assistant.

Click to copy an implementation prompt that gives Claude, Cursor, or Copilot exactly what it needs to integrate MindGraph.

Simple, credit-based pricing

All plans include unlimited CRUD, queries, retrieval, and embeddings. Credits are only consumed by ingestion, chat, and wiki compilation.

Free

$0 forever
500 credits/mo
  • 500 credits/month
  • 100 MB storage
  • 10,000 API calls/month
  • Unlimited CRUD & queries
  • AI chat, graph explorer, wiki
  • Projects & synthesis
  • TypeScript & Python SDKs
Most popular

Pro

$27/month
2,000 credits/mo
  • 2,000 credits/month
  • 5 GB storage
  • 1M API calls/month
  • Autonomous research agents
  • Credit top-ups ($10 / 500 cr)
  • Up to 5 team members
  • Up to 3 graphs

Team

$79/month
10,000 credits/mo
  • 10,000 credits/month
  • 50 GB storage
  • Unlimited API calls
  • Agents with scheduling
  • Credit top-ups ($10 / 600 cr)
  • Up to 25 team members
  • Unlimited graphs

Enterprise

Custom
Volume pricing
  • Custom credit allocation
  • Unlimited storage
  • Unlimited everything
  • On-premise deployment
  • Dedicated support
  • Custom SLA

Credit costs: document ingestion 2 credits/page, transcripts 4 credits/page, chat 3 credits/message, wiki articles 6 credits/article. Unused credits roll over (up to 2x your monthly allowance).

Your agents deserve to remember. You deserve to see what they know.

Start building in under 5 minutes. Ingest documents, chat with your knowledge graph, and explore everything visually. Free tier: 500 credits/month, unlimited queries. No credit card required.