Agent Memory Patterns

Practical patterns for using MindGraph as an AI agent's primary memory: reading and writing everything from quick unstructured notes to richly connected knowledge graphs.

Every pattern below is shown as an SDK call. Start with journals (lowest friction), then layer in structured knowledge as your agent matures.

Writing Memory

Pattern 1: Quick Notes (Journal)

The lowest-friction way to write memory. Think of a journal entry as writing to a scratchpad — just record what you learned. Journal entries are searchable via full-text and semantic search, so you can always find them later.

When to use: Capturing a user preference, noting an observation, recording a conversation takeaway, logging a decision rationale — anything you want to remember but don't need to structure yet.

// Quick note — just label + content
await graph.journal("User prefers dark mode", {
  content: "User explicitly asked for dark theme across all devices",
})

// With metadata and session linking
await graph.journal("Deployment decision", {
  content: "Team chose Vercel over AWS for frontend hosting due to DX",
  journal_type: "decision",
  tags: ["infrastructure", "deployment"],
}, {
  session_uid: "ses_current",
  confidence: 0.95,
  agent_id: "project-agent",
})
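Under the hood, journal() is a thin wrapper over HTTP. As a rough sketch of the raw request (the /journal path and the payload field names here are assumptions, not the confirmed wire contract), it might look like:

```typescript
// Hypothetical raw-HTTP equivalent of graph.journal(). The /journal path
// and payload field names are assumptions; check the API reference.
function buildJournalPayload(
  label: string,
  content: string,
  tags: string[] = [],
) {
  return { label, content, tags }
}

async function journalRaw(
  baseUrl: string,
  apiKey: string,
  label: string,
  content: string,
) {
  const res = await fetch(`${baseUrl}/journal`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildJournalPayload(label, content)),
  })
  if (!res.ok) throw new Error(`journal write failed: ${res.status}`)
  return res.json()
}
```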

Pattern 2: Ingesting Conversations

To ingest a conversation, treat the full conversation as a Source, extract key passages as Snippets, then create structured knowledge (entities, claims, observations) from those snippets. This gives you both the raw record and the structured graph.

// 1. Create a source for the conversation
const source = await graph.capture({
  action: "source",
  label: "Onboarding call with Acme Corp",
  props: {
    source_type: "conversation",
    content_hash: "sha256:...",
    mime_type: "text/plain",
  },
  agent_id: "ingestion-agent",
})

// 2. Extract key snippets from the conversation
const snippet1 = await graph.capture({
  action: "snippet",
  label: "Acme needs real-time sync",
  source_uid: source.uid,  // auto-creates DerivedFrom edge
  props: {
    content: "CTO mentioned they need sub-100ms sync between services",
    snippet_type: "requirement",
  },
})

const snippet2 = await graph.capture({
  action: "snippet",
  label: "Currently using Redis for caching",
  source_uid: source.uid,
  props: {
    content: "Their stack: Redis for caching, Postgres for persistence",
    snippet_type: "technical_context",
  },
})

// 3. Extract entities using first-class type methods
const acme = await graph.findOrCreateOrganization("Acme Corp", {
  description: "Enterprise client, Series B startup",
})

// 4. Create structured claims from the conversation
await graph.argue({
  claim: {
    label: "Acme needs real-time data sync",
    props: { content: "Sub-100ms sync is a hard requirement for Acme" },
  },
  evidence: [{
    label: "CTO statement",
    props: { description: "Direct requirement from CTO during onboarding call" },
  }],
  source_uids: [source.uid],
  agent_id: "ingestion-agent",
})

Pattern 3: Ingesting Scraped Content

Same pattern as conversations: create a Source for the page, extract Snippets for key passages, then structure the knowledge.

// 1. Register the source
const page = await graph.capture({
  action: "source",
  label: "CozoDB: Architecture Overview",
  props: {
    source_type: "webpage",
    uri: "https://docs.cozodb.org/en/latest/architecture.html",
    title: "CozoDB Architecture",
    domain: "docs.cozodb.org",
    fetched_at: Date.now() / 1000,
  },
})

// 2. Extract key passages
await graph.capture({
  action: "snippet",
  label: "CozoDB uses Datalog",
  source_uid: page.uid,
  props: {
    content: "CozoDB uses Datalog as its query language, compiled to relational algebra",
    snippet_type: "technical_fact",
  },
})

// 3. Create an observation from what you read
await graph.capture({
  action: "observation",
  label: "Datalog enables recursive queries natively",
  props: {
    content: "Unlike SQL, Datalog handles recursive graph traversal without CTEs",
    observation_type: "technical_insight",
    timestamp: Date.now() / 1000,
  },
  agent_id: "research-agent",
})

Pattern 4: Ingesting Documents

For bulk or long-form content, use the dedicated document ingestion endpoint. It automatically chunks, embeds, and extracts structured knowledge from the text in the background. You get a job ID to track progress.

When to use: Research papers, articles, meeting transcripts, documentation, or any text longer than a few paragraphs. Unlike the manual capture() approach (Patterns 2–3), this handles chunking, embedding, and multi-layer extraction automatically.

// 1. Start document ingestion
const { job_id, document_uid } = await graph.ingestDocument({
  content: documentText,
  title: "Research Paper",
  layers: ["reality", "epistemic"],
})

// 2. Poll until complete
let job = await graph.getJob(job_id)
while (job.status === "pending" || job.status === "processing") {
  await new Promise(r => setTimeout(r, 2000))
  job = await graph.getJob(job_id)
  console.log(`${job.progress.processed_chunks}/${job.progress.total_chunks} chunks`)
}

console.log(`Created ${job.progress.nodes_created} nodes`)
console.log(`Created ${job.progress.edges_created} edges`)

// The graph now contains entities, claims, observations, etc.
// extracted from the document, all linked back to source chunks
// via ExtractedFrom edges for provenance.
Note: The layers parameter controls which extraction passes run. Documents default to Reality + Epistemic (entities, claims, evidence). Add more layers like "intent" or "agent" if the content contains goals, decisions, or task planning. See the Ingestion & Retrieval guide for the full layer reference.
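The poll loop in step 2 can be factored into a reusable helper with a timeout guard. This is a sketch assuming only what the example above shows (getJob and the "pending"/"processing" status values); the interval and timeout defaults are arbitrary:

```typescript
// Reusable polling helper for ingestion jobs. The Job shape and status
// values mirror the example above; the defaults are arbitrary choices.
type Job = {
  status: string
  progress: { processed_chunks: number; total_chunks: number }
}

async function waitForJob(
  getJob: (id: string) => Promise<Job>,
  jobId: string,
  { intervalMs = 2000, timeoutMs = 10 * 60_000 } = {},
): Promise<Job> {
  const deadline = Date.now() + timeoutMs
  let job = await getJob(jobId)
  while (job.status === "pending" || job.status === "processing") {
    if (Date.now() > deadline) throw new Error(`Job ${jobId} timed out`)
    await new Promise(r => setTimeout(r, intervalMs))
    job = await getJob(jobId)
  }
  return job
}
```

Call it as `await waitForJob(id => graph.getJob(id), job_id)` and read the final progress counters off the returned job.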

Pattern 5: Context Retrieval for RAG

After ingesting documents, use retrieveContext() to build rich context for RAG pipelines. It combines semantic chunk search with graph traversal — you get both the relevant text passages and the structured knowledge extracted from them.

When to use: Building context windows for LLM calls, answering questions over ingested documents, or any time you need both raw text and structured knowledge for a query.

const ctx = await graph.retrieveContext({
  query: "What are the key findings?",
  node_limit: 10,
  article_limit: 3,
})

// ctx.articles — synthesized wiki articles from your knowledge base
for (const article of ctx.articles ?? []) {
  console.log(`Article: ${article.label}`)
}

// ctx.graph.nodes — extracted entities, claims, etc. with source provenance
console.log(`${ctx.graph.nodes.length} related nodes`)

// ctx.graph.edges — relationships between extracted nodes
console.log(`${ctx.graph.edges.length} edges`)

// Feed articles and graph context to your LLM
const articles = (ctx.articles ?? []).map(a => a.content).join("\n\n---\n\n")
const nodes = ctx.graph.nodes.map(n => {
  const sources = (n.source_documents ?? []).map(d => d.title).join(", ")
  return `- ${n.label} (${n.node_type})${sources ? ` [from: ${sources}]` : ""}`
}).join("\n")

const prompt = `Knowledge articles:\n${articles}

Structured knowledge:\n${nodes}

Question: What are the key findings?`
Note: Use node_limit to control how many graph nodes are returned, and article_limit for wiki articles. Set chunk_limit > 0 to also include raw source text chunks. Each node includes source_documents for provenance.
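A minimal sketch of enabling raw chunks, factored as a helper so the request shape is explicit. The chunk_limit field comes from the note above; the ctx.chunks array and chunk.content field are assumed response names:

```typescript
// retrieveContext with raw source chunks enabled. chunk_limit > 0 is from
// the note above; ctx.chunks / chunk.content are assumed field names.
type RetrievedChunk = { content: string }
type RetrievedContext = { chunks?: RetrievedChunk[] }

async function contextWithChunks(graph: {
  retrieveContext(req: object): Promise<RetrievedContext>
}): Promise<string> {
  const ctx = await graph.retrieveContext({
    query: "What are the key findings?",
    node_limit: 10,
    article_limit: 3,
    chunk_limit: 5, // > 0 includes raw source text alongside nodes/articles
  })
  // Raw passages can be appended to the prompt just like articles
  return (ctx.chunks ?? []).map(c => c.content).join("\n\n---\n\n")
}
```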

Pattern 6: Building Structured Knowledge

Start with journals, then graduate to cognitive endpoints as patterns emerge. You don't have to structure everything upfront — capture first, structure later.

// Step 1: Quick journal note (day 1)
await graph.journal("Redis might be wrong tool for Acme", {
  content: "Redis pub/sub has no delivery guarantees. Acme needs exactly-once.",
})

// Step 2: Journal gains confidence (day 3, after research)
await graph.journal("NATS JetStream fits Acme requirements", {
  content: "JetStream provides exactly-once delivery, persistent streams, and sub-10ms latency",
  tags: ["acme", "architecture"],
})

// Step 3: Structure the knowledge (day 5, pattern is clear)
// Create a formal claim with evidence
await graph.argue({
  claim: {
    label: "NATS JetStream is better than Redis for Acme",
    confidence: 0.85,
    props: { content: "JetStream provides exactly-once delivery that Redis pub/sub lacks" },
  },
  evidence: [
    { label: "Redis pub/sub limitations", props: { description: "No delivery guarantees, at-most-once semantics" } },
    { label: "JetStream benchmarks", props: { description: "Sub-10ms p99 latency with exactly-once delivery" } },
  ],
  agent_id: "architecture-agent",
})

// Create a goal based on the decision
await graph.commit({
  action: "goal",
  label: "Migrate Acme to NATS JetStream",
  props: {
    description: "Replace Redis pub/sub with NATS JetStream for real-time sync",
    priority: "high",
    success_criteria: ["Sub-100ms sync latency", "Exactly-once delivery verified"],
  },
  agent_id: "project-agent",
})

Reading Memory

Pattern 7: Retrieving Context at Task Start

Before starting work, an agent should gather context: What goals are active? What questions are open? What does the graph already know about this topic? Use multiple retrieval calls to build a context window.

// 1. What am I working on?
const goals = await graph.retrieve({ action: "active_goals" })

// 2. What's unresolved?
const questions = await graph.retrieve({ action: "open_questions" })

// 3. Search for relevant knowledge about the current topic
const relevant = await graph.hybridSearch("NATS JetStream migration", {
  k: 10,
})

// 4. Explore the graph around the current goal (if any are active)
const goalContext = goals.length > 0
  ? await graph.neighborhood(goals[0].uid, 2)
  : null

// 5. Any claims that need verification?
const weakClaims = await graph.retrieve({
  action: "weak_claims",
  threshold: 0.6,
})

// 6. What happened recently?
const recent = await graph.retrieve({
  action: "recent",
  limit: 20,
  salience_min: 0.5,
})

// Now you have a rich context window:
// - Active goals and their connected nodes
// - Open questions to address
// - Related knowledge from past sessions
// - Weak claims that need strengthening
// - Recent activity for continuity

Pattern 8: Searching Memory

All nodes — including journal entries — are searchable by meaning. Use full-text search for keyword matching, semantic search for meaning-based retrieval, or hybrid search for the best of both.

// Full-text search (keyword matching)
const results = await graph.search("deployment preferences")

// Hybrid search (FTS + semantic similarity)
const hybrid = await graph.hybridSearch("what infrastructure does the user prefer?", {
  k: 10,
  node_types: ["Journal", "Claim", "Preference"],
})

// Filter by layer — only Memory layer nodes
const memories = await graph.retrieve({
  action: "text",
  query: "user preferences",
  layer: "memory",
  limit: 20,
})

// Narrow recent activity with filters (minimum confidence here;
// the "recent" action itself bounds results to recent nodes)
const recentConfident = await graph.retrieve({
  action: "recent",
  confidence_min: 0.7,
  limit: 50,
})

Pattern 9: Exploring the Knowledge Graph

Go beyond search — follow reasoning chains, explore neighborhoods, find connections between ideas, and extract subgraphs for context windows.

// Follow a reasoning chain from a claim
// (follows Supports, Refutes, HasPremise, etc.)
const chain = await graph.reasoningChain("clm_101", 5)
console.log("Reasoning:", chain.map(s => s.label).join(" -> "))

// Explore everything connected to a node (2 hops)
const neighbors = await graph.neighborhood("ent_acme", 2)

// Find the shortest path between two ideas
const path = await graph.traverse({
  action: "path",
  start_uid: "clm_101",
  end_uid: "goal_migrate",
  max_depth: 6,
})

// Extract a subgraph — all nodes and edges reachable from a starting point
// Great for building a focused context window
const subgraph = await graph.traverse({
  action: "subgraph",
  start_uid: "goal_migrate",
  max_depth: 3,
  edge_types: ["Supports", "MotivatedBy", "DependsOn", "RelevantTo"],
})
// Returns { nodes: [...], edges: [...] } — a self-contained knowledge graph

Session Management

Pattern 10: Session Lifecycle

Sessions group related work together. Open a session at the start of a task, record traces and journals during work, then close the session and distill key learnings into a summary.

// 1. Open a session
const session = await graph.session({
  action: "open",
  label: "Migrate Acme to JetStream",
  props: { focus_summary: "Planning and executing the NATS migration" },
  agent_id: "project-agent",
})
const sid = session.uid

// 2. Record execution traces (links to relevant nodes)
await graph.session({
  action: "trace",
  session_uid: sid,
  label: "Researched NATS JetStream docs",
  relevant_node_uids: ["src_nats_docs", "ent_acme"],
  props: { trace_type: "research" },
})

// 3. Write journal notes during the session
await graph.journal("JetStream consumer groups solve multi-service sync", {
  content: "Each service gets its own consumer group, enabling parallel processing",
  tags: ["nats", "architecture"],
}, { session_uid: sid })

// 4. Close the session
await graph.session({
  action: "close",
  session_uid: sid,
})

// 5. Distill the session into a summary
await graph.distill({
  label: "Acme JetStream migration - session summary",
  props: {
    content: "Confirmed JetStream consumer groups as solution for multi-service sync. Key finding: each service needs its own consumer group.",
    summary_type: "session_distillation",
  },
  session_uid: sid,
  summarizes_uids: [sid],
})

Complete Agent Loop

Here's a realistic agent loop that combines reading and writing. This is how an agent might use MindGraph as its primary memory across a multi-step task.

agent-loop.ts
import { MindGraph } from "mindgraph"

const graph = new MindGraph({
  baseUrl: "https://api.mindgraph.cloud",
  apiKey: process.env.MINDGRAPH_API_KEY!,
})

async function agentLoop(taskDescription: string) {
  // ── Phase 1: Retrieve context ──────────────────────────────
  const goals = await graph.retrieve({ action: "active_goals" })
  const questions = await graph.retrieve({ action: "open_questions" })
  const relevant = await graph.hybridSearch(taskDescription, { k: 10 })
  const weak = await graph.retrieve({ action: "weak_claims", threshold: 0.6 })

  // Build context from results
  const context = {
    activeGoals: goals,
    openQuestions: questions,
    relevantKnowledge: relevant,
    needsVerification: weak,
  }

  // ── Phase 2: Open a session ────────────────────────────────
  const session = await graph.session({
    action: "open",
    label: taskDescription,
    agent_id: "main-agent",
  })

  // ── Phase 3: Do work (your agent logic here) ──────────────
  // ... use context to inform decisions ...
  // ... call tools, make API calls, process data ...

  // ── Phase 4: Write what you learned ────────────────────────
  // Quick notes for observations
  await graph.journal("Discovered API rate limit is 1000 req/min", {
    content: "The external API has a 1000 req/min rate limit per API key",
    tags: ["api", "rate-limit"],
  }, { session_uid: session.uid })

  // Structured knowledge for confirmed facts
  await graph.argue({
    claim: {
      label: "External API rate limit is 1000/min",
      confidence: 0.99,
      props: { content: "Confirmed via API docs and testing" },
    },
    evidence: [{
      label: "API documentation",
      props: { description: "Rate limit section states 1000 requests per minute per key" },
    }],
    agent_id: "main-agent",
  })

  // Record a decision
  const decision = await graph.deliberate({
    action: "open_decision",
    label: "How to handle rate limiting?",
    props: { question: "Should we use request queuing or multiple API keys?" },
  })

  await graph.deliberate({
    action: "add_option",
    label: "Request queue with exponential backoff",
    decision_uid: decision.uid,
    props: {
      description: "Queue requests and retry with exponential backoff",
      pros: ["Simple to implement", "Respects rate limits"],
      cons: ["Adds latency", "Queue can grow unbounded"],
    },
  })

  // ── Phase 5: Close and distill ─────────────────────────────
  await graph.session({ action: "close", session_uid: session.uid })

  await graph.distill({
    label: `Session summary: ${taskDescription}`,
    props: {
      content: "Discovered API rate limit. Opened decision on handling strategy.",
      summary_type: "session_distillation",
    },
    session_uid: session.uid,
    summarizes_uids: [session.uid],
  })
}

When to Use What

Situation | Endpoint | Why
Just learned something | journal() | Lowest friction — capture now, structure later
Have a confirmed fact with evidence | argue() | Creates typed Claim + Evidence + Warrant with edges
Forming a hypothesis | inquire() | Track status, link to tests and anomalies
Setting a goal | commit() | Track goals, projects, milestones with progress
Making a decision | deliberate() | Open decision → options → constraints → resolve
Ingesting a long document | ingestDocument() | Auto-chunks, embeds, and extracts structured knowledge
Ingesting a short passage manually | capture() | Source → Snippet chain preserves provenance
Building RAG context | retrieveContext() | Semantic chunks + connected graph nodes in one call
Starting a work session | session() | Group traces and journals under one session
Need to find relevant context | search() / hybridSearch() | FTS or semantic similarity across all nodes
Need current status | retrieve() | active_goals, open_questions, weak_claims, pending_approvals
Need to explore connections | traverse() | Reasoning chains, neighborhoods, paths, subgraphs
Note: All cognitive endpoints create typed nodes with automatic edges. Use generic POST /node and POST /link only when you need a node type or edge type that isn't covered by a cognitive endpoint.
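When a cognitive endpoint doesn't fit, the generic endpoints take a plain node or link description. The request-body field names below (node_type, label, props, from_uid, to_uid, edge_type) are assumptions for illustration; check the API reference for the exact schema:

```typescript
// Hypothetical request builders for the generic POST /node and POST /link
// endpoints. Field names are assumptions, not the confirmed schema.
function buildNodeRequest(
  nodeType: string,
  label: string,
  props: Record<string, unknown> = {},
) {
  return { node_type: nodeType, label, props }
}

function buildLinkRequest(fromUid: string, toUid: string, edgeType: string) {
  return { from_uid: fromUid, to_uid: toUid, edge_type: edgeType }
}

// POST the result to /node or /link with your usual auth headers, e.g.
// fetch(`${baseUrl}/node`, { method: "POST", body: JSON.stringify(...) })
```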