Cognitive Endpoints
Cognitive endpoints are higher-level compositions of CRUD operations, designed as MCP tool targets for agentic workflows: they let an agent perform a complex multi-step graph mutation with a single POST request. Each endpoint accepts a JSON body whose action field selects the operation to perform.
All endpoints below use POST and accept and return JSON. Two of the 18 endpoints (argument and distill) are monolithic and do not take an action field.
API Design
Every cognitive endpoint follows a universal request shape. Top-level fields are either universal node metadata or operational controls. All node-type-specific properties go in props.
{
"action": "...",
"label": "Human-readable node name",
"summary": "Optional (auto-derived from props if omitted)",
"confidence": 0.8,
"salience": 0.9,
"props": { /* ALL node-type-specific properties */ },
"<edge_trigger>_uid": "target_node_uid",
"agent_id": "my-agent"
}
Top-level fields (universal node metadata): label, summary, confidence, salience.
Operational fields: action, edge-trigger UID fields (e.g. supersedes_uid), agent_id.
Node-type properties (everything specific to the node type): props. When summary is omitted, it is auto-derived from common props fields like content, description, or statement.
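The props-primary split above can be enforced client-side. Below is a minimal Python sketch: the helper names (`build_payload`, `post`) and the base URL are assumptions for illustration, not part of the API; only the field names and the top-level/props split come from this page.

```python
import json
import urllib.request

# Universal node metadata allowed at the top level per the request shape above.
UNIVERSAL = {"label", "summary", "confidence", "salience"}

def build_payload(action, *, props=None, agent_id=None, **fields):
    """Assemble a cognitive-endpoint body: universal metadata and edge-trigger
    uids stay top-level; anything node-type-specific must go in props."""
    payload = {"action": action}
    for key, value in fields.items():
        if key in UNIVERSAL or key.endswith("_uid") or key.endswith("_uids"):
            payload[key] = value  # universal metadata or an edge trigger
        else:
            raise ValueError(f"{key!r} is node-type-specific; put it in props")
    if props:
        payload["props"] = props
    if agent_id:
        payload["agent_id"] = agent_id
    return payload

def post(base_url, path, payload):
    """POST JSON and decode the JSON response (base_url is an assumption)."""
    req = urllib.request.Request(
        base_url + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

goal = build_payload(
    "goal",
    label="Ship MindGraph v1.0",
    confidence=0.9,
    props={"description": "Release the first stable version", "priority": "high"},
    agent_id="planning-agent",
)
```

Passing a node-type field like `priority` at the top level raises instead of silently producing a body the server would reject.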
Props-primary example
// Create a goal — all goal-specific fields go in props
{
"action": "goal",
"label": "Ship MindGraph v1.0",
"confidence": 0.9,
"props": {
"description": "Release the first stable version",
"priority": "high",
"goal_type": "product_launch",
"success_criteria": ["All 18 cognitive endpoints stable", "SDK published", "Docs complete"],
"progress": 0.65,
"deadline": 1712016000
}
}
// Create a hypothesis — statement, status, predictions all in props
{
"action": "hypothesis",
"label": "Structured memory improves factual accuracy",
"confidence": 0.7,
"props": {
"statement": "LLMs with graph-structured memory produce fewer hallucinations",
"status": "proposed",
"hypothesis_type": "empirical",
"testability_score": 0.85,
"novelty": 0.6,
"predicted_observations": ["Lower hallucination rate on factual QA", "Higher citation accuracy"]
}
}
See the Node Props Reference for every available field per node type. For practical usage patterns, see Agent Memory Patterns.
Reality Layer
POST /reality/capture
Capture raw information into the Reality layer as a source, snippet, or observation.
| Action | Description |
|---|---|
| source | Register an external information source (webpage, book, API, etc.) |
| snippet | Extract a snippet from an existing source node |
| observation | Record a direct observation or fact |
source - Request / Response
// Request
{
"action": "source",
"label": "Attention Is All You Need",
"summary": "Foundational transformer architecture paper by Vaswani et al.",
"confidence": 0.99,
"salience": 0.95,
"props": {
"content": "Foundational transformer architecture paper by Vaswani et al.",
"medium": "research_paper",
"url": "https://arxiv.org/abs/1706.03762",
"timestamp": "2017-06-12T00:00:00Z"
},
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "src_001",
"action": "source",
"label": "Attention Is All You Need",
"edges_created": [],
"version": 1
}
snippet - Request / Response
// Request
{
"action": "snippet",
"label": "Multi-head attention mechanism",
"source_uid": "src_001",
"confidence": 0.95,
"salience": 0.85,
"props": {
"content": "Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions."
},
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "snp_001",
"action": "snippet",
"label": "Multi-head attention mechanism",
"edges_created": ["extracted_from_src_001"],
"version": 1
}
observation - Request / Response
// Request
{
"action": "observation",
"label": "Transformer inference latency scales quadratically",
"confidence": 0.92,
"salience": 0.8,
"props": {
"content": "Self-attention has O(n^2) complexity in sequence length, observed 4x latency increase when doubling context from 4k to 8k tokens",
"timestamp": "2026-03-05T14:30:00Z"
},
"agent_id": "benchmark-agent"
}
// Response
{
"uid": "obs_001",
"action": "observation",
"label": "Transformer inference latency scales quadratically",
"edges_created": [],
"version": 1
}
Auto-created edges: ExtractedFrom (snippet → source).
Fields: action (required), label (required), summary, source_uid (required for snippet), confidence (0-1), salience (0-1), props (object — all node-type properties: content, medium, url, timestamp, etc.), agent_id.
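Since snippet requires a source_uid, captures naturally chain: the uid from a source response feeds the next request. A sketch, assuming hypothetical helper names and a stand-in response dict in place of a live server call:

```python
def capture_source(label, content, medium, url, agent_id):
    """Body for POST /reality/capture, action "source"."""
    return {
        "action": "source",
        "label": label,
        "props": {"content": content, "medium": medium, "url": url},
        "agent_id": agent_id,
    }

def capture_snippet(source_uid, label, content, agent_id):
    """Body for action "snippet". source_uid must come from a prior source
    response, since the server links the snippet back to its source."""
    return {
        "action": "snippet",
        "label": label,
        "source_uid": source_uid,
        "props": {"content": content},
        "agent_id": agent_id,
    }

# Chain the two calls: the uid in the source response feeds the snippet body.
src_resp = {"uid": "src_001"}  # stand-in for a real /reality/capture response
snippet = capture_snippet(
    src_resp["uid"],
    "Multi-head attention mechanism",
    "Multi-head attention allows the model to jointly attend to information "
    "from different representation subspaces at different positions.",
    "ingestion-agent",
)
```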
POST /reality/entity
Create, alias, resolve, fuzzy-resolve, merge, or relate entities in the graph. The Reality layer includes first-class types for Person, Organization, Nation, Event, Place, and Concept. When you pass an entity_type of person, organization, nation, event, place, or concept, the server automatically routes to the correct first-class type. Entity serves as a fallback for other named things (technologies, products, etc.).
| Action | Description |
|---|---|
| create | Create a new entity node (routes to Person, Organization, etc. based on entity_type) |
| alias | Register an alias string pointing to a canonical entity UID |
| resolve | Exact-match resolve a text string to a canonical entity UID |
| fuzzy_resolve | Fuzzy-match resolve text to a ranked list of entity candidates |
| merge | Merge two entities, retargeting edges and tombstoning the duplicate |
| relate | Create a typed edge between two entities |
create (Organization) - Request / Response
// Request — entity_type "organization" routes to Organization node type
{
"action": "create",
"label": "Acme Corp",
"props": {
"entity_type": "organization",
"org_type": "company",
"description": "Enterprise SaaS startup",
"identifiers": { "website": "acme.com" }
},
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "org_001",
"label": "Acme Corp",
"created": true
}
create (Person) - Request / Response
// Request — entity_type "person" routes to Person node type
{
"action": "create",
"label": "Ada Lovelace",
"props": {
"entity_type": "person",
"full_name": "Ada Lovelace",
"description": "Mathematician and first computer programmer"
},
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "per_001",
"label": "Ada Lovelace",
"created": true
}
create (fallback Entity) - Request / Response
// Request — entity_type "technology" stays as generic Entity
{
"action": "create",
"label": "TypeScript",
"props": {
"entity_type": "technology",
"description": "A typed superset of JavaScript",
"identifiers": { "website": "typescriptlang.org" },
"attributes": { "paradigm": "multi-paradigm", "first_appeared": 2012 }
},
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "ent_001",
"label": "TypeScript",
"created": true
}
alias - Request / Response
// Request
{
"action": "alias",
"text": "TS",
"canonical_uid": "ent_001",
"alias_score": 0.9
}
// Response
{
"status": "ok"
}
resolve - Request / Response
// Request
{
"action": "resolve",
"text": "TS"
}
// Response
{
"uid": "ent_001"
}
fuzzy_resolve - Request / Response
// Request
{
"action": "fuzzy_resolve",
"text": "Typescript",
"limit": 3
}
// Response
{
"matches": [
{ "uid": "ent_001", "label": "TypeScript", "score": 0.95 },
{ "uid": "ent_042", "label": "TypeSpec", "score": 0.61 }
]
}
merge - Request / Response
// Request
{
"action": "merge",
"keep_uid": "ent_001",
"merge_uid": "ent_099",
"agent_id": "dedup-agent"
}
// Response
{
"kept": "ent_001",
"merged": "ent_099",
"edges_retargeted": 5
}
relate - Request / Response
// Request
{
"action": "relate",
"source_uid": "ent_001",
"target_uid": "ent_002",
"edge_type": "RelatedTo",
"agent_id": "ingestion-agent"
}
// Response
{
"uid": "edge_501",
"edge_type": "RelatedTo",
"source_uid": "ent_001",
"target_uid": "ent_002"
}
Auto-created edges: Custom edge types (via "relate" action).
Fields: action (required), label, text, canonical_uid, keep_uid, merge_uid, source_uid, target_uid, edge_type, props (object — entity_type routes to first-class types; see Person, Organization, Nation, Event, Place, Concept, Entity props), agent_id.
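A common pattern is to cascade the actions above: try an exact resolve, fall back to fuzzy_resolve, and create only as a last resort. The cascade and the score threshold below are a client-side usage pattern, not server behaviour; `post` is any callable that sends a body to /reality/entity and returns the decoded response.

```python
def ensure_entity(post, text, entity_type, agent_id, fuzzy_threshold=0.85):
    """Resolve text to a canonical entity uid, creating one only if neither
    exact nor fuzzy resolution finds a confident match."""
    # 1. Exact match against registered labels/aliases.
    resp = post({"action": "resolve", "text": text})
    if resp.get("uid"):
        return resp["uid"]
    # 2. Fuzzy match; accept the top candidate only above the threshold.
    resp = post({"action": "fuzzy_resolve", "text": text, "limit": 3})
    matches = resp.get("matches", [])
    if matches and matches[0]["score"] >= fuzzy_threshold:
        return matches[0]["uid"]
    # 3. No match: create a fresh entity node.
    resp = post({
        "action": "create",
        "label": text,
        "props": {"entity_type": entity_type},
        "agent_id": agent_id,
    })
    return resp["uid"]
```

This keeps entity creation idempotent from the agent's point of view: repeated mentions of "Typescript" converge on one canonical node instead of spawning duplicates that later need a merge.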
Epistemic Layer
POST /epistemic/argument
Construct a full argument in a single call: claim + evidence + warrant + linking edges. This is a monolithic endpoint (no action field).
New argument - Request / Response
// Request
{
"claim": {
"label": "TypeScript reduces runtime errors",
"confidence": 0.85,
"props": {
"content": "Static type checking catches type errors at compile time, preventing entire classes of bugs from reaching production"
}
},
"evidence": [
{
"label": "Airbnb migration study",
"props": {
"description": "38% reduction in production bugs after TS adoption across 2M+ lines of code",
"evidence_type": "empirical",
"sample_size": 2000000,
"statistical_significance": 0.01
}
},
{
"label": "Bloomberg engineering report",
"props": {
"description": "Type-related incidents dropped 60% post-TypeScript migration",
"evidence_type": "empirical",
"quantitative_value": 0.6,
"unit": "reduction_fraction"
}
}
],
"warrant": {
"label": "Static analysis principle",
"props": {
"principle": "Compile-time checks prevent classes of runtime failures by catching type mismatches before execution"
}
},
"argument": {
"label": "TS type safety argument",
"props": {
"summary": "TypeScript reduces runtime errors via static type checking, with strong empirical evidence from industry migrations"
}
},
"source_uids": ["src_airbnb_blog", "src_bloomberg_eng"],
"agent_id": "research-agent"
}
// Response
{
"claim_uid": "clm_101",
"evidence_uids": ["ev_201", "ev_202"],
"warrant_uid": "war_301",
"argument_uid": "arg_401"
}
Extending an argument - Request / Response
// Request
{
"claim": {
"label": "Gradual typing maximizes adoption",
"confidence": 0.8,
"props": {
"content": "TypeScript's gradual type system allows incremental migration, reducing adoption friction"
}
},
"evidence": [
{
"label": "GitHub language trends 2025",
"props": {
"description": "TypeScript overtook Java as the third most-used language on GitHub",
"evidence_type": "statistical"
}
}
],
"warrant": {
"label": "Adoption friction principle",
"props": { "principle": "Technologies with lower migration barriers achieve broader adoption" }
},
"argument": {
"label": "Gradual typing adoption argument",
"props": { "summary": "TypeScript's gradual type system lowers migration barriers, driving widespread adoption" }
},
"extends_uid": "arg_401",
"agent_id": "research-agent"
}
// Response
{
"claim_uid": "clm_102",
"evidence_uids": ["ev_203"],
"warrant_uid": "war_302",
"argument_uid": "arg_402"
}
Refuting an argument - Request / Response
// Request
{
"claim": {
"label": "Type systems add development overhead",
"confidence": 0.6,
"props": {
"content": "Complex type annotations slow down initial development velocity and increase cognitive load"
}
},
"evidence": [
{
"label": "Startup velocity study",
"props": {
"description": "Teams reported 20% slower feature delivery in first 3 months of TS adoption",
"evidence_type": "empirical"
}
}
],
"warrant": {
"label": "Productivity tradeoff principle",
"props": { "principle": "Additional abstraction layers consume developer attention that could be spent on feature work" }
},
"argument": {
"label": "TS overhead counterargument",
"props": { "summary": "TypeScript's type system imposes meaningful development overhead that may offset safety benefits" }
},
"refutes_uid": "arg_401",
"agent_id": "devils-advocate-agent"
}
// Response
{
"claim_uid": "clm_103",
"evidence_uids": ["ev_204"],
"warrant_uid": "war_303",
"argument_uid": "arg_403"
}
Auto-created edges: Supports (evidence → claim), HasWarrant (claim → warrant), HasConclusion (argument → claim), HasPremise (argument → evidence), Refutes (claim → target), Extends (claim → target), Supersedes (claim → target), Contradicts (claim → target), ExtractedFrom (claim → sources).
Fields: claim (label, confidence?, props?), evidence (array; label, props?), warrant (label, props?), argument (label, props?), refutes_uid, extends_uid, supersedes_uid, contradicts_uid, source_uids, props (overrides Claim node properties), agent_id. Each sub-item (claim, evidence, warrant, argument) accepts its own props for all node-type-specific properties.
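Because the argument endpoint is monolithic, the whole claim/evidence/warrant/argument bundle travels in one body. A small builder (hypothetical helper, not part of any SDK) can validate the edge-trigger field before sending, matching the fields listed above:

```python
# Edge-trigger uid fields the argument endpoint documents.
EDGE_TRIGGERS = {"refutes_uid", "extends_uid", "supersedes_uid", "contradicts_uid"}

def build_argument(claim, evidence, warrant, argument,
                   source_uids=None, agent_id=None, **edge_uids):
    """Body for the monolithic POST /epistemic/argument. Any of the documented
    edge-trigger uids may be passed to position the new claim against an
    existing argument; unknown keys are rejected early."""
    unknown = set(edge_uids) - EDGE_TRIGGERS
    if unknown:
        raise ValueError(f"unknown edge trigger(s): {sorted(unknown)}")
    body = {"claim": claim, "evidence": evidence,
            "warrant": warrant, "argument": argument, **edge_uids}
    if source_uids:
        body["source_uids"] = source_uids
    if agent_id:
        body["agent_id"] = agent_id
    return body

# Rebuild the refutation example above with the builder.
rebuttal = build_argument(
    claim={"label": "Type systems add development overhead",
           "confidence": 0.6,
           "props": {"content": "Complex annotations slow initial velocity"}},
    evidence=[{"label": "Startup velocity study",
               "props": {"evidence_type": "empirical"}}],
    warrant={"label": "Productivity tradeoff principle",
             "props": {"principle": "Abstraction layers consume attention"}},
    argument={"label": "TS overhead counterargument",
              "props": {"summary": "Type overhead may offset safety benefits"}},
    refutes_uid="arg_401",
    agent_id="devils-advocate-agent",
)
```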
POST /epistemic/inquiry
Add hypotheses, theories, paradigms, anomalies, assumptions, questions, and open questions to the Epistemic layer.
| Action | Description |
|---|---|
| hypothesis | Propose a testable hypothesis |
| theory | Register a theory that organizes multiple hypotheses/claims |
| paradigm | Define a paradigm or overarching framework |
| anomaly | Record an anomaly that contradicts an existing theory/paradigm |
| assumption | Declare an assumption that underlies reasoning |
| question | Pose a specific question linked to existing nodes |
| open_question | Register a broad open question for future investigation |
hypothesis - Request / Response
// Request
{
"action": "hypothesis",
"label": "LLMs benefit from structured memory",
"confidence": 0.7,
"salience": 0.9,
"props": {
"statement": "Providing LLMs with a structured knowledge graph improves factual accuracy by at least 15% on benchmark tasks.",
"status": "proposed",
"hypothesis_type": "empirical",
"testability_score": 0.85,
"predicted_observations": ["RAG with graph context outperforms flat document retrieval", "Multi-hop questions see the largest improvement"]
},
"tests_uid": "clm_101",
"related_uids": ["ent_001", "con_801"],
"agent_id": "research-agent"
}
// Response
{
"uid": "hyp_501",
"action": "hypothesis",
"label": "LLMs benefit from structured memory",
"created_edges": 3
}
theory - Request / Response
// Request
{
"action": "theory",
"label": "Cognitive graph theory of LLM memory",
"confidence": 0.65,
"salience": 0.95,
"props": {
"content": "LLMs equipped with a layered cognitive graph can maintain coherent long-term reasoning by grounding generation in structured, version-tracked knowledge rather than raw context windows.",
"status": "developing",
"theory_type": "computational"
},
"related_uids": ["hyp_501", "clm_101", "clm_102"],
"agent_id": "research-agent"
}
// Response
{
"uid": "thy_001",
"action": "theory",
"label": "Cognitive graph theory of LLM memory",
"created_edges": 3
}
paradigm - Request / Response
// Request
{
"action": "paradigm",
"label": "Neuro-symbolic AI",
"confidence": 0.8,
"salience": 0.9,
"props": {
"content": "The paradigm that combines neural network learning with symbolic reasoning structures. Neural components handle perception and pattern recognition while symbolic components handle logical inference, planning, and explainability."
},
"related_uids": ["thy_001", "con_801"],
"agent_id": "research-agent"
}
// Response
{
"uid": "par_001",
"action": "paradigm",
"label": "Neuro-symbolic AI",
"created_edges": 2
}
anomaly - Request / Response
// Request
{
"action": "anomaly",
"label": "GPT-4 hallucination despite RAG",
"confidence": 0.88,
"salience": 0.9,
"props": {
"description": "Model hallucinated facts even with relevant context in prompt window -- contradicts the hypothesis that structured retrieval eliminates confabulation"
},
"anomalous_to_uid": "hyp_501",
"agent_id": "eval-agent"
}
// Response
{
"uid": "ano_601",
"action": "anomaly",
"label": "GPT-4 hallucination despite RAG",
"created_edges": 1
}
assumption - Request / Response
// Request
{
"action": "assumption",
"label": "Embedding similarity approximates semantic relevance",
"confidence": 0.75,
"salience": 0.8,
"props": {
"content": "We assume that cosine similarity in the embedding space is a reliable proxy for semantic relevance when retrieving context for LLM generation. This assumption underlies all hybrid search ranking."
},
"assumes_uid": ["hyp_501", "thy_001"],
"agent_id": "research-agent"
}
// Response
{
"uid": "asm_001",
"action": "assumption",
"label": "Embedding similarity approximates semantic relevance",
"created_edges": 2
}
open_question - Request / Response
// Request
{
"action": "open_question",
"label": "How to measure memory retrieval quality?",
"salience": 0.85,
"props": {
"text": "What metrics best capture whether retrieved context improves generation? F1 on downstream QA, human preference ratings, or faithfulness scores?"
},
"addresses_uid": "hyp_501",
"related_uids": ["ano_601"],
"agent_id": "research-agent"
}
// Response
{
"uid": "oq_701",
"action": "open_question",
"label": "How to measure memory retrieval quality?",
"created_edges": 2
}
Auto-created edges: AnomalousTo (anomaly → target), Tests (hypothesis → target), Assumes (assumption → targets), Addresses (question → target), Supersedes (node → target), Produces (node → target), RelevantTo (node → related).
Fields: action (required), label, summary, confidence, salience, anomalous_to_uid, assumes_uid (array), tests_uid, addresses_uid, supersedes_uid, produces_uid, related_uids (array), props (object — statement, status, content, text, description, etc.), agent_id.
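The hypothesis and anomaly actions pair naturally: propose a hypothesis, then log contradicting evidence against its uid so the AnomalousTo edge is created. A sketch with hypothetical helper names and a stand-in response uid:

```python
def propose_hypothesis(label, statement, tests_uid=None, agent_id=None):
    """Body for POST /epistemic/inquiry, action "hypothesis"."""
    body = {"action": "hypothesis", "label": label,
            "props": {"statement": statement, "status": "proposed"}}
    if tests_uid:
        body["tests_uid"] = tests_uid
    if agent_id:
        body["agent_id"] = agent_id
    return body

def record_anomaly(label, description, anomalous_to_uid, agent_id=None):
    """Body for action "anomaly", linked back to the node it strains via
    anomalous_to_uid (here, the uid from a prior hypothesis response)."""
    body = {"action": "anomaly", "label": label,
            "props": {"description": description},
            "anomalous_to_uid": anomalous_to_uid}
    if agent_id:
        body["agent_id"] = agent_id
    return body

hyp_resp = {"uid": "hyp_501"}  # stand-in for the hypothesis response
anomaly = record_anomaly(
    "GPT-4 hallucination despite RAG",
    "Model hallucinated facts even with relevant context in the prompt window",
    hyp_resp["uid"],
    agent_id="eval-agent",
)
```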
POST /epistemic/structure
Add structural knowledge elements: concepts, patterns, mechanisms, models, analogies, inference chains, theorems, and more.
| Action | Description |
|---|---|
| concept | Define a concept node |
| pattern | Record a recurring pattern |
| mechanism | Describe a causal mechanism |
| model | Register a formal or mental model |
| model_evaluation | Evaluate a model against criteria |
| analogy | Draw an analogy between two domains |
| inference_chain | Build a chain of inference steps |
| reasoning_strategy | Document a reasoning strategy or heuristic |
| sensitivity_analysis | Record sensitivity analysis results |
| theorem | State a theorem with optional proof reference |
| equation | Capture a mathematical equation or formula |
| method | Document a research or analytical method |
| experiment | Record an experiment with design, variables, and controls |
concept - Request / Response
// Request
{
"action": "concept",
"label": "Knowledge Graph",
"confidence": 0.95,
"props": {
"definition": "A graph-structured knowledge base that stores entities as nodes and relationships as typed, weighted edges, enabling multi-hop traversal and structured reasoning"
},
"related_uids": ["ent_001"],
"agent_id": "research-agent"
}
// Response
{
"uid": "con_801",
"action": "concept",
"label": "Knowledge Graph",
"created_edges": 1
}
pattern - Request / Response
// Request
{
"action": "pattern",
"label": "Retrieval-then-generate",
"confidence": 0.88,
"props": {
"description": "A recurring architectural pattern in LLM systems: first retrieve relevant context from an external store, then condition generation on that context. Observed in RAG, REALM, and Atlas architectures."
},
"related_uids": ["con_801", "hyp_501"],
"agent_id": "research-agent"
}
// Response
{
"uid": "pat_001",
"action": "pattern",
"label": "Retrieval-then-generate",
"created_edges": 2
}
mechanism - Request / Response
// Request
{
"action": "mechanism",
"label": "Attention-based context integration",
"confidence": 0.78,
"props": {
"description": "The causal mechanism by which retrieved knowledge improves generation: retrieved passages occupy positions in the attention window, allowing cross-attention heads to attend to factual content rather than relying solely on parametric memory."
},
"related_uids": ["pat_001", "snp_001"],
"agent_id": "research-agent"
}
// Response
{
"uid": "mech_001",
"action": "mechanism",
"label": "Attention-based context integration",
"created_edges": 2
}
model - Request / Response
// Request
{
"action": "model",
"label": "Cognitive layer stack model",
"summary": "Five-layer cognitive architecture for agent knowledge management",
"confidence": 0.82,
"props": {
"content": "A five-layer model of agent cognition: Reality (raw data) -> Epistemic (knowledge and reasoning) -> Intent (goals and decisions) -> Action (procedures and execution) -> Memory (sessions and summaries). Information flows upward through refinement and downward through grounding."
},
"related_uids": ["thy_001", "par_001"],
"agent_id": "architect-agent"
}
// Response
{
"uid": "mod_001",
"action": "model",
"label": "Cognitive layer stack model",
"created_edges": 2
}
analogy - Request / Response
// Request
{
"action": "analogy",
"label": "Knowledge graph as library catalog",
"props": {
"content": "Structured memory for LLMs is analogous to a library catalog system: nodes are books, edges are cross-references, layers are sections, and retrieval is the act of a librarian navigating the catalog to find relevant sources"
},
"analogous_to_uid": "con_801",
"transfers_to_uids": ["mod_001", "pat_001"],
"confidence": 0.7,
"agent_id": "research-agent"
}
// Response
{
"uid": "ana_901",
"action": "analogy",
"label": "Knowledge graph as library catalog",
"created_edges": 3
}
inference_chain - Request / Response
// Request
{
"action": "inference_chain",
"label": "RAG improves accuracy chain",
"props": {
"content": "Step-by-step reasoning: (1) LLMs hallucinate due to stale parametric memory, (2) retrieval provides fresh factual context, (3) attention mechanisms integrate retrieved facts, (4) therefore RAG reduces hallucination rates"
},
"chain_steps": ["clm_101", "ev_201", "mech_001", "hyp_501"],
"confidence": 0.75,
"derived_from_uids": ["thy_001"],
"agent_id": "reasoning-agent"
}
// Response
{
"uid": "ic_1001",
"action": "inference_chain",
"label": "RAG improves accuracy chain",
"created_edges": 5
}
theorem - Request / Response
// Request
{
"action": "theorem",
"label": "Graph completeness theorem",
"props": {
"content": "A knowledge graph with N entity types requires at most N*(N-1)/2 edge types to be fully expressive, assuming edges are undirected and no self-loops"
},
"proven_by_uid": "arg_401",
"derived_from_uids": ["con_801"],
"confidence": 0.9,
"agent_id": "math-agent"
}
// Response
{
"uid": "thm_1101",
"action": "theorem",
"label": "Graph completeness theorem",
"created_edges": 2
}
equation - Request / Response
// Request
{
"action": "equation",
"label": "Salience decay function",
"summary": "Exponential decay formula used by the /evolve decay action",
"props": {
"expression": "s(t) = s_0 * 0.5^(t / t_half), where s_0 is initial salience, t is elapsed time in seconds, and t_half is the configurable half-life"
},
"derived_from_uids": ["mod_001"],
"confidence": 1.0,
"agent_id": "math-agent"
}
// Response
{
"uid": "eq_001",
"action": "equation",
"label": "Salience decay function",
"created_edges": 1
}
method - Request / Response
// Request
{
"action": "method",
"label": "Retrieval-Augmented Generation",
"props": {
"description": "Augment LLM generation with retrieved context from an external knowledge base",
"method_type": "generation",
"domain": "NLP",
"limitations": ["Retrieval quality bottleneck", "Context window limits"],
"validity_conditions": ["Relevant documents exist in corpus", "Embedding model matches domain"],
"parameters": ["chunk_size", "top_k", "reranking_model"]
},
"agent_id": "research-agent"
}
// Response
{
"uid": "mth_001",
"action": "method",
"label": "Retrieval-Augmented Generation",
"created_edges": 0
}
experiment - Request / Response
// Request
{
"action": "experiment",
"label": "RAG vs fine-tuning comparison",
"props": {
"description": "Compare RAG and fine-tuning approaches for domain-specific QA accuracy",
"design_type": "controlled_comparison",
"variables_manipulated": ["knowledge_source_method"],
"variables_measured": ["accuracy", "hallucination_rate", "latency"],
"controls": ["Same base model", "Same evaluation dataset", "Same compute budget"],
"sample_description": "1000 domain-specific questions from medical literature"
},
"related_uids": ["mth_001"],
"agent_id": "research-agent"
}
// Response
{
"uid": "exp_001",
"action": "experiment",
"label": "RAG vs fine-tuning comparison",
"created_edges": 1
}
Auto-created edges: AnalogousTo (analogy → target), TransfersTo (analogy → targets), Evaluates (eval → model), Outperforms (eval → model), HasChainStep (chain → steps), DerivedFrom (theorem/equation → sources), ProvenBy (theorem → proof), UsesMethod (experiment → method), Describes (concept/model → target), PartOf (component → parent), Supersedes (node → target), Produces (experiment → result), RelevantTo (node → related).
Fields: action (required), label, summary, confidence, salience, analogous_to_uid, transfers_to_uids (array), evaluates_uid, outperforms_uid, chain_steps (array), derived_from_uids (array), proven_by_uid, method_uid, describes_uid, part_of_uid, supersedes_uid, produces_uid, related_uids (array), props (object — definition, content, description, expression, etc.), agent_id.
Intent Layer
POST /intent/commitment
Create goals, projects, and milestones with optional parent hierarchy and motivation edges.
| Action | Description |
|---|---|
| goal | Create a goal node with priority and status tracking |
| project | Create a project that decomposes into milestones/tasks |
| milestone | Create a milestone within a project with an optional due date |
goal - Request / Response
// Request
{
"action": "goal",
"label": "Ship MindGraph v1.0",
"confidence": 0.9,
"props": {
"description": "Complete and release the first stable version of MindGraph with all 18 cognitive endpoints, full test coverage, and production-ready documentation",
"priority": "high",
"status": "active",
"due_date": "2026-06-01",
"goal_type": "product_launch"
},
"motivated_by_uids": ["clm_101", "thy_001"],
"agent_id": "planning-agent"
}
// Response
{
"uid": "goal_001",
"action": "goal",
"label": "Ship MindGraph v1.0"
}
project - Request / Response
// Request
{
"action": "project",
"label": "Cognitive API implementation",
"props": {
"description": "Build all cognitive endpoint handlers with full request validation, edge creation, and embedding generation",
"priority": "high",
"status": "active"
},
"parent_uid": "goal_001",
"motivated_by_uids": ["goal_001"],
"agent_id": "planning-agent"
}
// Response
{
"uid": "proj_001",
"action": "project",
"label": "Cognitive API implementation"
}
milestone - Request / Response
// Request
{
"action": "milestone",
"label": "API feature-complete",
"props": {
"description": "All 18 cognitive endpoints implemented and tested with >90% coverage",
"due_date": "2026-04-01",
"status": "active",
"priority": "high"
},
"parent_uid": "proj_001",
"agent_id": "planning-agent"
}
// Response
{
"uid": "ms_001",
"action": "milestone",
"label": "API feature-complete"
}
Auto-created edges: DecomposesInto (parent → child), MotivatedBy (node → motivators).
Fields: action (required), label, summary, confidence, salience, parent_uid, motivated_by_uids (array), props (object — description, priority, status, due_date, goal_type, etc.), agent_id.
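The goal/project/milestone hierarchy is built by threading each response's uid into the next request's parent_uid. A sketch, with a hypothetical helper and stand-in uids where a real client would read them from responses:

```python
def commitment(action, label, parent_uid=None, agent_id=None, **props):
    """Body for POST /intent/commitment. parent_uid triggers the
    DecomposesInto edge from the parent down to the new node."""
    body = {"action": action, "label": label, "props": props}
    if parent_uid:
        body["parent_uid"] = parent_uid
    if agent_id:
        body["agent_id"] = agent_id
    return body

# Each response uid becomes the next request's parent_uid; the uids here
# are stand-ins for real server responses.
goal_uid = "goal_001"
project = commitment("project", "Cognitive API implementation",
                     parent_uid=goal_uid, agent_id="planning-agent",
                     description="Build all cognitive endpoint handlers",
                     priority="high", status="active")
milestone = commitment("milestone", "API feature-complete",
                       parent_uid="proj_001",
                       description="All endpoints implemented and tested",
                       due_date="2026-04-01", status="active")
```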
POST /intent/deliberation
Structured decision-making: open a decision, add options and constraints, then resolve it.
| Action | Description |
|---|---|
| open_decision | Open a new decision that needs to be resolved |
| add_option | Add a candidate option to an open decision |
| add_constraint | Add a constraint that limits available options |
| resolve | Resolve a decision by choosing an option |
| get_open | List all currently open (unresolved) decisions |
open_decision - Request / Response
// Request
{
"action": "open_decision",
"label": "Choose embedding provider",
"props": {
"description": "Select the vector embedding provider for semantic search across all cognitive layers"
},
"agent_id": "architect-agent"
}
// Response
{
"uid": "dec_001",
"action": "open_decision",
"label": "Choose embedding provider"
}
add_option - Request / Response
// Request
{
"action": "add_option",
"decision_uid": "dec_001",
"label": "OpenAI text-embedding-3-large",
"props": {
"description": "3072-dim embeddings, highest benchmark quality, $0.13/1M tokens, API dependency"
},
"informs_uids": ["goal_001", "proj_001"],
"agent_id": "architect-agent"
}
// Response
{
"uid": "opt_001",
"action": "add_option",
"label": "OpenAI text-embedding-3-large"
}
add_constraint - Request / Response
// Request
{
"action": "add_constraint",
"decision_uid": "dec_001",
"label": "Budget limit $50/month",
"props": {
"description": "Monthly embedding API costs must stay under $50 to remain within infrastructure budget",
"constraint_type": "budget"
},
"blocks_uid": "opt_003",
"agent_id": "architect-agent"
}
// Response
{
"uid": "cst_001",
"action": "add_constraint",
"label": "Budget limit $50/month"
}
resolve - Request / Response
// Request
{
"action": "resolve",
"decision_uid": "dec_001",
"chosen_option_uid": "opt_001",
"props": {
"decision_rationale": "Best quality-to-cost ratio for our scale. At ~1M tokens/month, cost is well within budget and quality scores are 12% higher than alternatives."
},
"agent_id": "architect-agent"
}
// Response
{
"uid": "dec_001",
"action": "resolve",
"version": 2
}
get_open - Request / Response
// Request
{
"action": "get_open"
}
// Response
[
{
"uid": "dec_002",
"label": "Choose database backend",
"status": "open",
"options_count": 3
}
]
Auto-created edges: HasOption (decision → option), Informs (option → targets), ConstrainedBy (decision → constraint), Blocks (constraint → option), DecidedOn (decision → chosen option).
Fields: action (required), label, summary, confidence, salience, decision_uid, informs_uids (array), blocks_uid, chosen_option_uid, props (object — description, constraint_type, decision_rationale, etc.), agent_id.
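A full deliberation is a short sequence of calls: open_decision, one add_option per candidate, then resolve. The sketch below (hypothetical helper) emits the ordered request bodies; the placeholder uid strings mark where values from intermediate responses must be spliced in before sending, since decision_uid and chosen_option_uid are only known after those calls return.

```python
def decision_lifecycle(decision_label, options, rationale, agent_id=None):
    """Ordered request bodies for one POST /intent/deliberation cycle.
    Placeholders mark uids that come from intermediate responses."""
    bodies = [{"action": "open_decision", "label": decision_label}]
    for label, description in options:
        bodies.append({
            "action": "add_option",
            "decision_uid": "<uid from open_decision response>",
            "label": label,
            "props": {"description": description},
        })
    bodies.append({
        "action": "resolve",
        "decision_uid": "<uid from open_decision response>",
        "chosen_option_uid": "<uid of the winning add_option response>",
        "props": {"decision_rationale": rationale},
    })
    if agent_id:
        for body in bodies:
            body["agent_id"] = agent_id
    return bodies

bodies = decision_lifecycle(
    "Choose embedding provider",
    [("OpenAI text-embedding-3-large", "3072-dim, highest benchmark quality"),
     ("Local sentence-transformers", "No API dependency, lower quality")],
    "Best quality-to-cost ratio at our scale",
    agent_id="architect-agent",
)
```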
Action Layer
POST /action/procedure
Build procedural flows: create workflows, add ordered steps, register affordances, and define control nodes.
| Action | Description |
|---|---|
| create_flow | Create a new procedural flow (workflow) |
| add_step | Add a step to an existing flow |
| add_affordance | Register an affordance (tool, capability, or resource) |
| add_control | Add a control node (conditional, loop, etc.) to a flow |
create_flow - Request / Response
// Request
{
"action": "create_flow",
"label": "Knowledge ingestion pipeline",
"props": {
"description": "End-to-end pipeline for ingesting research papers: fetch, extract, chunk, embed, and store in the cognitive graph"
},
"goal_uid": "goal_001",
"agent_id": "architect-agent"
}
// Response
{
"uid": "flow_001",
"action": "create_flow",
"label": "Knowledge ingestion pipeline"
}
add_step - Request / Response
// Request
{
"action": "add_step",
"label": "Extract entities and claims",
"flow_uid": "flow_001",
"previous_step_uid": "step_001",
"props": {
"description": "Use LLM to extract named entities, claims, and relationships from chunked text",
"order": 2
},
"uses_affordance_uids": ["aff_001", "aff_002"],
"agent_id": "architect-agent"
}
// Response
{
"uid": "step_002",
"action": "add_step",
"order": 2
}
add_affordance - Request / Response
// Request
{
"action": "add_affordance",
"label": "Claude API (Sonnet)",
"props": {
"description": "LLM inference endpoint for entity extraction and claim identification",
"affordance_type": "api"
},
"agent_id": "architect-agent"
}
// Response
{
"uid": "aff_001",
"action": "add_affordance",
"label": "Claude API (Sonnet)"
}
add_control - Request / Response
// Request
{
"action": "add_control",
"label": "Retry on rate limit",
"flow_uid": "flow_001",
"props": {
"control_type": "loop",
"description": "Retry the current step up to 3 times with exponential backoff if the embedding API returns 429"
},
"agent_id": "architect-agent"
}
// Response
{
"uid": "ctrl_001",
"action": "add_control",
"label": "Retry on rate limit"
}
Auto-created edges: RelevantTo (flow → goal), ComposedOf (flow → step), DependsOn (prev step → step), Follows (prev step → step), StepUses (step → affordances), Controls (flow → control).
Fields: action (required), label, summary, confidence, salience, flow_uid, previous_step_uid, uses_affordance_uids (array), goal_uid, props (object — description, order, affordance_type, control_type, etc.), agent_id.
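Ordered steps are chained the same way as other uid triggers: each add_step after the first carries the previous step's uid so the server can create the Follows/DependsOn edges. A sketch with a hypothetical helper; the placeholder uids stand in for values a real client would take from each add_step response.

```python
def flow_steps(flow_uid, step_specs, agent_id=None):
    """Bodies for a sequence of POST /action/procedure "add_step" calls.
    previous_step_uid placeholders must be replaced with the uid returned
    by the preceding add_step response before the next call is sent."""
    bodies = []
    prev_uid = None
    for order, (label, description) in enumerate(step_specs, start=1):
        body = {"action": "add_step", "label": label, "flow_uid": flow_uid,
                "props": {"description": description, "order": order}}
        if prev_uid:
            body["previous_step_uid"] = prev_uid
        if agent_id:
            body["agent_id"] = agent_id
        bodies.append(body)
        prev_uid = f"<uid from step {order} response>"
    return bodies

steps = flow_steps("flow_001", [
    ("Fetch paper", "Download and cache the PDF"),
    ("Extract entities and claims", "LLM extraction from chunked text"),
    ("Embed and store", "Generate embeddings and write graph nodes"),
], agent_id="architect-agent")
```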
POST /action/risk
Assess risks associated with nodes or retrieve existing risk assessments.
| Action | Description |
|---|---|
| assess | Create a risk assessment for a node |
| get_assessments | Retrieve risk assessments, optionally filtered by node |
assess - Request / Response
// Request
{
"action": "assess",
"label": "Data loss during migration",
"assessed_uid": "flow_001",
"props": {
"description": "Risk of losing graph data during database migration from SQLite to PostgreSQL, including edge metadata and embedding vectors",
"severity": "high",
"likelihood": 0.15,
"mitigations": [
"Full backup before migration",
"Staged rollout: migrate read replicas first",
"Checksum verification on all migrated nodes"
],
"residual_risk": 0.03
},
"agent_id": "risk-agent"
}
// Response
{
"uid": "risk_001",
"action": "assess",
"label": "Data loss during migration"
}
get_assessments - Request / Response
// Request
{
"action": "get_assessments",
"filter_uid": "flow_001"
}
// Response
[
{
"uid": "risk_001",
"label": "Data loss during migration",
"severity": "high",
"likelihood": 0.15,
"residual_risk": 0.03
}
]
Auto-created edges: RiskAssessedBy (target → assessment).
Fields: action (required), label, summary, confidence, salience, assessed_uid, filter_uid, props (object — description, severity, likelihood, mitigations, residual_risk, etc.), agent_id.
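Because all risk-specific fields live in props, a thin client only needs to keep universal metadata at the top level and pass everything else through. A minimal sketch in Python — the helper name and BASE_URL are illustrative, not part of the API:

```python
import json

BASE_URL = "http://localhost:8000"  # assumed local MindGraph server, illustrative

def build_assess_payload(label, assessed_uid, agent_id="risk-agent", **props):
    """Assemble an 'assess' body: universal fields top-level, risk fields in props."""
    return {
        "action": "assess",
        "label": label,
        "assessed_uid": assessed_uid,
        "props": props,  # severity, likelihood, mitigations, residual_risk, ...
        "agent_id": agent_id,
    }

payload = build_assess_payload(
    "Data loss during migration",
    "flow_001",
    severity="high",
    likelihood=0.15,
    residual_risk=0.03,
)
body = json.dumps(payload)  # POST this to f"{BASE_URL}/action/risk"
```

Keeping node-type fields in `**props` means the helper never needs updating when new risk properties are added.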
Memory Layer
POST /memory/session
Manage agent sessions, record traces, write journal entries, and close sessions. Use journal for narrative, unstructured memory — like notes, reflections, or context that doesn't map cleanly to structured graph nodes.
| Action | Description |
|---|---|
| open | Open a new session with an optional focus area |
| trace | Record a trace entry within an active session |
| journal | Write a narrative memory entry (notes, reflections, context) |
| close | Close a session, finalizing its trace |
open - Request / Response
// Request
{
"action": "open",
"label": "Embedding strategy research",
"props": {
"focus_summary": "Investigating embedding providers and indexing strategies for MindGraph semantic search"
},
"relevant_node_uids": ["goal_001", "dec_001", "con_801"],
"agent_id": "research-agent"
}
// Response
{
"uid": "sess_001",
"action": "open",
"label": "Embedding strategy research"
}
trace - Request / Response
// Request
{
"action": "trace",
"label": "HNSW vs IVF-PQ benchmark",
"session_uid": "sess_001",
"props": {
"content": "Benchmarked HNSW vs IVF-PQ indexing on 100k node graph. HNSW: 99.2% recall at 2ms p50 latency. IVF-PQ: 94.1% recall at 0.8ms p50 latency. Conclusion: HNSW is the right choice given our scale and accuracy requirements.",
"trace_type": "finding"
},
"relevant_node_uids": ["con_801", "hyp_501", "opt_001"],
"agent_id": "research-agent"
}
// Response
{
"uid": "trace_001",
"action": "trace"
}
journal - Request / Response
// Request
{
"action": "journal",
"label": "Embedding provider evaluation notes",
"session_uid": "sess_001",
"props": {
"content": "After testing OpenAI, Cohere, and Voyage embeddings on our dataset:\n\n- OpenAI text-embedding-3-large gives best recall (0.94) but costs 3x more\n- Voyage-3 is nearly as good (0.91 recall) at 1/3 the cost\n- Cohere v3 has the fastest inference but lower recall (0.85)\n\nLeaning toward Voyage-3 as the default, with OpenAI as an option for users who need maximum quality.",
"journal_type": "research_notes",
"tags": ["embeddings", "evaluation", "decision-pending"]
},
"relevant_node_uids": ["dec_001", "hyp_501"],
"agent_id": "research-agent"
}
// Response
{
"uid": "jrn_001",
"action": "journal",
"label": "Embedding provider evaluation notes"
}
close - Request / Response
// Request
{
"action": "close",
"session_uid": "sess_001",
"agent_id": "research-agent"
}
// Response
{
"uid": "sess_001",
"action": "close",
"version": 2
}
Auto-created edges: CapturedIn (trace/journal → session), TraceEntry (trace → relevant nodes), RelevantTo (journal → relevant nodes).
Fields: action (required), label, summary, confidence, salience, session_uid, relevant_node_uids (array), props (object — focus_summary, content, trace_type, journal_type, tags, etc.), agent_id.
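The four actions compose into a simple lifecycle: open returns a session UID, trace and journal write against it, close finalizes it. A sketch of the request sequence (the helper and the hard-coded "sess_001" are illustrative — the real UID comes back from the open response):

```python
def session_request(action, session_uid=None, **fields):
    """Build one /memory/session request body."""
    body = {"action": action, "agent_id": "research-agent", **fields}
    if session_uid:
        body["session_uid"] = session_uid
    return body

# 1. open — server returns the session uid (e.g. "sess_001")
open_req = session_request("open", label="Embedding strategy research",
                           props={"focus_summary": "Embedding provider comparison"})
# 2. trace findings (and journal notes) against that uid
trace_req = session_request("trace", "sess_001", label="HNSW benchmark",
                            props={"content": "HNSW: 99.2% recall at 2ms p50",
                                   "trace_type": "finding"})
# 3. close finalizes the session's trace
close_req = session_request("close", "sess_001")
```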
POST /memory/distill
Create a summary node that distills multiple source nodes into a concise form. This is a monolithic endpoint (no action field).
Request / Response
// Request
{
"label": "Embedding research summary",
"summary": "HNSW + OpenAI optimal for MindGraph scale within $50/month budget",
"summarizes_uids": ["trace_001", "con_801", "hyp_501", "opt_001"],
"session_uid": "sess_001",
"props": {
"content": "HNSW provides the best recall/speed tradeoff for graphs under 1M nodes. OpenAI text-embedding-3-large leads in quality but costs $0.13/1M tokens. For MindGraph's scale, HNSW + OpenAI is optimal and fits within the $50/month budget at projected usage.",
"importance": 0.85
},
"agent_id": "research-agent"
}
// Response
{
"uid": "sum_001",
"label": "Embedding research summary"
}
Auto-created edges: CapturedIn (summary → session), Summarizes (summary → source nodes).
Fields: label (required), summary, confidence, salience, summarizes_uids (array), session_uid, props (object — content, importance, etc.), agent_id.
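Unlike the action-based endpoints, distill is monolithic: the request body itself describes the summary node, with no action key. A minimal payload builder under that shape (field names from the doc, values illustrative):

```python
def build_distill_payload(label, summarizes_uids, session_uid=None, **props):
    """Distill bodies are monolithic: node fields plus source UIDs, no 'action'."""
    payload = {"label": label, "summarizes_uids": list(summarizes_uids), "props": props}
    if session_uid:
        payload["session_uid"] = session_uid
    return payload

req = build_distill_payload(
    "Embedding research summary",
    ["trace_001", "con_801"],
    session_uid="sess_001",
    content="HNSW + OpenAI optimal at current scale.",
    importance=0.85,
)
assert "action" not in req  # the monolithic endpoints never carry an action field
```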
POST /memory/config
Manage agent preferences and memory policies.
| Action | Description |
|---|---|
| set_preference | Set a key-value preference for the agent |
| get_preferences | Retrieve all stored preferences |
| set_policy | Define a memory policy (retention, decay rules, etc.) |
| get_policies | Retrieve all stored memory policies |
set_preference - Request / Response
// Request
{
"action": "set_preference",
"label": "Response verbosity",
"props": {
"key": "verbosity",
"value": "concise"
},
"agent_id": "research-agent"
}
// Response
{
"uid": "pref_001",
"label": "Response verbosity"
}
set_policy - Request / Response
// Request
{
"action": "set_policy",
"label": "Session retention policy",
"props": {
"policy_content": "Auto-tombstone sessions older than 30 days with salience below 0.2. Preserve sessions linked to active goals regardless of age."
},
"agent_id": "admin-agent"
}
// Response
{
"uid": "pol_001",
"label": "Session retention policy"
}
get_preferences - Request / Response
// Request
{
"action": "get_preferences"
}
// Response
[
{
"uid": "pref_001",
"label": "Response verbosity",
"key": "verbosity",
"value": "concise"
}
]
get_policies - Request / Response
// Request
{
"action": "get_policies"
}
// Response
[
{
"uid": "pol_001",
"label": "Session retention policy",
"policy_content": "Auto-tombstone sessions older than 30 days with salience below 0.2. Preserve sessions linked to active goals regardless of age."
}
]
Auto-created edges: None.
Fields: action (required), label, summary, confidence, salience, props (object — key, value, policy_content, etc.), agent_id.
Agent Layer
These endpoints create graph primitives for planning, governance, and execution — usable by any agent. For managed autonomous agents hosted by MindGraph Cloud, see the Agents guide.
POST /agent/plan
Create tasks, build plans with ordered steps, update statuses, and query plan details.
| Action | Description |
|---|---|
| create_task | Create a task node, optionally linked to a goal |
| create_plan | Create a plan for a task or goal |
| add_step | Add an ordered step to a plan |
| update_status | Update the status of a task, plan, or step |
| get_plan | Retrieve a plan and all its steps |
create_task - Request / Response
// Request
{
"action": "create_task",
"label": "Implement /retrieve endpoint",
"props": {
"description": "Build unified retrieval endpoint with text, semantic, and hybrid search modes, including pagination and layer filtering"
},
"goal_uid": "goal_001",
"related_uids": ["con_801", "dec_001"],
"agent_id": "planning-agent"
}
// Response
{
"uid": "task_001",
"action": "create_task",
"label": "Implement /retrieve endpoint",
"created_edges": 3
}
create_plan - Request / Response
// Request
{
"action": "create_plan",
"label": "Retrieve endpoint implementation plan",
"props": {
"description": "Step-by-step plan for building the /retrieve endpoint with all search modes"
},
"task_uid": "task_001",
"goal_uid": "goal_001",
"agent_id": "planning-agent"
}
// Response
{
"uid": "plan_001",
"action": "create_plan",
"label": "Retrieve endpoint implementation plan",
"created_edges": 2
}
add_step - Request / Response
// Request
{
"action": "add_step",
"plan_uid": "plan_001",
"label": "Implement text search handler",
"props": {
"description": "Build BM25 full-text search across labels and summaries with node_type and layer filtering",
"order": 2
},
"depends_on_uids": ["ps_001"],
"agent_id": "planning-agent"
}
// Response
{
"uid": "ps_002",
"action": "add_step",
"order": 2
}
update_status - Request / Response
// Request
{
"action": "update_status",
"target_uid": "task_001",
"status": "in_progress",
"agent_id": "execution-agent"
}
// Response
{
"uid": "task_001",
"status": "in_progress",
"version": 2
}
get_plan - Request / Response
// Request
{
"action": "get_plan",
"plan_uid": "plan_001"
}
// Response
{
"plan": {
"uid": "plan_001",
"label": "Retrieve endpoint implementation plan",
"status": "active"
},
"steps": [
{ "uid": "ps_001", "label": "Define request schema", "order": 1, "status": "complete" },
{ "uid": "ps_002", "label": "Implement text search handler", "order": 2, "status": "in_progress" },
{ "uid": "ps_003", "label": "Implement semantic search handler", "order": 3, "status": "pending" }
]
}
Auto-created edges: Targets (task/plan → goal and related), PlannedBy (plan → task), HasStep (plan → step), DependsOn (step → deps), Follows (dep → step).
Fields: action (required), label, summary, confidence, salience, goal_uid, task_uid, plan_uid, depends_on_uids (array), target_uid, status, related_uids (array), props (object — description, order, etc.), agent_id.
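Because each add_step carries its own order and depends_on_uids, a linear plan can be assembled in a loop, chaining each step to its predecessor. A sketch — in practice each step's UID comes back from the server response; here the "ps_NNN" UIDs are simulated:

```python
def add_step_requests(plan_uid, step_labels, agent_id="planning-agent"):
    """Build a linear chain of add_step bodies, each depending on the previous step."""
    requests, prev_uid = [], None
    for order, label in enumerate(step_labels, start=1):
        body = {
            "action": "add_step",
            "plan_uid": plan_uid,
            "label": label,
            "props": {"order": order},
            "agent_id": agent_id,
        }
        if prev_uid:
            body["depends_on_uids"] = [prev_uid]
        requests.append(body)
        prev_uid = f"ps_{order:03d}"  # simulated: real UIDs come from each response
    return requests

reqs = add_step_requests("plan_001", ["Define request schema",
                                     "Implement text search handler",
                                     "Implement semantic search handler"])
```

The server then materializes the HasStep, DependsOn, and Follows edges from these bodies automatically.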
POST /agent/governance
Govern agent behavior: create policies, set safety budgets, and manage approval workflows.
| Action | Description |
|---|---|
| create_policy | Create a governance policy |
| set_budget | Set a safety budget (token, cost, or time limit) |
| request_approval | Request human or policy approval before proceeding |
| resolve_approval | Approve or reject a pending approval request |
| get_pending | List all pending approval requests |
create_policy - Request / Response
// Request
{
"action": "create_policy",
"label": "No unsupervised deletions",
"props": {
"policy_content": "Agents must request approval before tombstoning more than 10 nodes in a single session. Bulk deletions require a plan with explicit human sign-off."
},
"agent_id": "governance-agent"
}
// Response
{
"uid": "gpol_001",
"label": "No unsupervised deletions"
}
set_budget - Request / Response
// Request
{
"action": "set_budget",
"label": "Daily API token budget",
"governed_uid": "agent_001",
"props": {
"budget_type": "tokens",
"budget_limit": 500000
},
"agent_id": "governance-agent"
}
// Response
{
"uid": "bud_001",
"label": "Daily API token budget"
}
request_approval - Request / Response
// Request
{
"action": "request_approval",
"label": "Approve bulk entity merge",
"governed_uid": "agent_001",
"props": {
"approval_request": "Request to merge 47 duplicate entity nodes identified by the dedup-agent. Affected entities span the Reality and Epistemic layers."
},
"requires_plan_uid": "plan_001",
"agent_id": "dedup-agent"
}
// Response
{
"uid": "apr_001",
"action": "request_approval"
}
resolve_approval - Request / Response
// Request
{
"action": "resolve_approval",
"approval_uid": "apr_001",
"approved": true,
"props": {
"resolution_note": "Reviewed merge plan -- all 47 duplicates confirmed via fuzzy_resolve with score > 0.95. Proceed."
},
"agent_id": "admin"
}
// Response
{
"uid": "apr_001",
"approved": true,
"version": 2
}
get_pending - Request / Response
// Request
{
"action": "get_pending"
}
// Response
[
{
"uid": "apr_002",
"label": "Approve production deployment",
"status": "pending",
"requires_plan_uid": "plan_002"
}
]
Auto-created edges: BudgetFor (budget → governed), RequiresApproval (plan → approval).
Fields: action (required), label, summary, confidence, salience, governed_uid, requires_plan_uid, approval_uid, approved (boolean), props (object — policy_content, budget_type, budget_limit, approval_request, resolution_note, etc.), agent_id.
POST /agent/execution
Track execution lifecycle: start runs, mark them as complete or failed, register agents, and query executions.
| Action | Description |
|---|---|
| start | Start a new execution run |
| complete | Mark an execution as successfully completed |
| fail | Mark an execution as failed with an error description |
| register_agent | Register a new agent identity |
| get_executions | List executions, optionally filtered by plan |
start - Request / Response
// Request
{
"action": "start",
"label": "Execute retrieval pipeline build",
"plan_uid": "plan_001",
"executor_uid": "agent_001",
"related_uids": ["task_001", "goal_001"],
"agent_id": "execution-agent"
}
// Response
{
"uid": "exec_001",
"action": "start",
"created_edges": 4
}
complete - Request / Response
// Request
{
"action": "complete",
"execution_uid": "exec_001",
"produces_node_uid": "sum_001",
"props": {
"outcome": "All three search modes (text, semantic, hybrid) implemented and passing 47/47 tests. Latency: text 12ms p99, semantic 45ms p99, hybrid 52ms p99."
},
"agent_id": "execution-agent"
}
// Response
{
"uid": "exec_001",
"action": "complete",
"version": 2
}
fail - Request / Response
// Request
{
"action": "fail",
"execution_uid": "exec_002",
"props": {
"error_description": "Embedding provider returned 503 during bulk indexing of 10k nodes. 3,247 nodes indexed before failure. Partial state needs cleanup."
},
"agent_id": "execution-agent"
}
// Response
{
"uid": "exec_002",
"action": "fail",
"version": 2
}
register_agent - Request / Response
// Request
{
"action": "register_agent",
"label": "research-agent",
"props": {
"agent_type": "Autonomous research and knowledge extraction from papers, documentation, and web sources"
},
"agent_id": "admin"
}
// Response
{
"uid": "agent_002",
"name": "research-agent"
}
get_executions - Request / Response
// Request
{
"action": "get_executions",
"filter_plan_uid": "plan_001"
}
// Response
[
{
"uid": "exec_001",
"label": "Execute retrieval pipeline build",
"status": "complete",
"executor_uid": "agent_001"
}
]
Auto-created edges: ExecutionOf (execution → plan), ExecutedBy (execution → executor), Targets (execution → related), ProducesNode (execution → output).
Fields: action (required), label, summary, confidence, salience, plan_uid, executor_uid, execution_uid, produces_node_uid, related_uids (array), filter_plan_uid, props (object — outcome, error_description, agent_type, etc.), agent_id.
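An execution run should end in exactly one terminal report: complete on success, fail with an error description otherwise. A sketch of a wrapper that builds the matching body either way (the wrapper itself is illustrative, not part of the API):

```python
def run_reported(execution_uid, step_fn, agent_id="execution-agent"):
    """Run step_fn and build the matching terminal /agent/execution body."""
    try:
        outcome = step_fn()
        return {"action": "complete", "execution_uid": execution_uid,
                "props": {"outcome": outcome}, "agent_id": agent_id}
    except Exception as err:
        return {"action": "fail", "execution_uid": execution_uid,
                "props": {"error_description": str(err)}, "agent_id": agent_id}

def failing_step():
    raise RuntimeError("Embedding provider returned 503 during bulk indexing")

ok = run_reported("exec_001", lambda: "47/47 tests passing")
bad = run_reported("exec_002", failing_step)
```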
Cross-cutting Endpoints
POST /retrieve
Unified retrieval endpoint supporting text search, semantic search, hybrid search, and several pre-built queries for common agent needs.
| Action | Description |
|---|---|
| text | Full-text BM25 search across labels and summaries |
| semantic | Vector similarity search (requires embedding provider) |
| hybrid | Combined BM25 + vector search with reciprocal rank fusion |
| active_goals | Retrieve all goals with an active status |
| open_questions | Retrieve all open questions awaiting answers |
| weak_claims | Retrieve claims below a confidence threshold |
| pending_approvals | Retrieve all pending approval requests |
| unresolved_contradictions | Retrieve pairs of contradictory claims |
| layer | Paginate all nodes in a specific cognitive layer |
| recent | Paginate recently modified nodes with optional filters |
text - Request / Response
// Request
{
"action": "text",
"query": "knowledge graph embedding",
"node_types": ["Concept", "Claim"],
"layer": "epistemic",
"confidence_min": 0.5,
"limit": 10,
"offset": 0
}
// Response
[
{
"uid": "con_801",
"label": "Knowledge Graph",
"score": 0.89,
"node_type": "Concept"
}
]
semantic - Request / Response
// Request
{
"action": "semantic",
"query": "how do transformers integrate external knowledge",
"k": 5,
"threshold": 0.7,
"node_types": ["Hypothesis", "Mechanism", "Pattern"],
"layer": "epistemic"
}
// Response
[
{
"uid": "mech_001",
"label": "Attention-based context integration",
"score": 0.91,
"node_type": "Mechanism"
},
{
"uid": "pat_001",
"label": "Retrieval-then-generate",
"score": 0.87,
"node_type": "Pattern"
}
]
hybrid - Request / Response
// Request
{
"action": "hybrid",
"query": "structured memory for LLMs",
"k": 5,
"node_types": ["Concept", "Hypothesis", "Claim", "Theory"],
"layer": "epistemic",
"confidence_min": 0.5,
"salience_min": 0.3
}
// Response
[
{
"uid": "hyp_501",
"label": "LLMs benefit from structured memory",
"score": 0.92,
"node_type": "Hypothesis"
},
{
"uid": "thy_001",
"label": "Cognitive graph theory of LLM memory",
"score": 0.88,
"node_type": "Theory"
}
]
active_goals - Request / Response
// Request
{
"action": "active_goals"
}
// Response
[
{
"uid": "goal_001",
"label": "Ship MindGraph v1.0",
"status": "active",
"priority": "high"
}
]
weak_claims - Request / Response
// Request
{
"action": "weak_claims",
"threshold": 0.5
}
// Response
[
{
"uid": "clm_042",
"label": "Graph databases outperform relational for traversals",
"confidence": 0.45
}
]
layer - Request / Response
// Request
{
"action": "layer",
"layer": "epistemic",
"limit": 20,
"offset": 0
}
// Response
[
{ "uid": "clm_101", "label": "TypeScript reduces runtime errors", "node_type": "Claim" },
{ "uid": "hyp_501", "label": "LLMs benefit from structured memory", "node_type": "Hypothesis" },
{ "uid": "thy_001", "label": "Cognitive graph theory of LLM memory", "node_type": "Theory" }
]
recent - Request / Response
// Request
{
"action": "recent",
"limit": 10,
"offset": 0,
"node_types": ["Summary", "Observation", "Claim"],
"confidence_min": 0.5,
"salience_min": 0.3
}
// Response
[
{
"uid": "sum_001",
"label": "Embedding research summary",
"node_type": "Summary",
"updated_at": "2026-03-06T12:00:00Z"
},
{
"uid": "obs_001",
"label": "Transformer inference latency scales quadratically",
"node_type": "Observation",
"updated_at": "2026-03-05T14:30:00Z"
}
]
Fields: action (required), query (for text/semantic/hybrid), k, threshold, layer, node_types (array), confidence_min, salience_min, limit, offset.
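The hybrid action fuses the BM25 and vector rankings with reciprocal rank fusion: each ranking contributes 1/(k + rank) per result. The doc doesn't specify the server's fusion constant, so this sketch uses the conventional k = 60 — it illustrates the math, not the server's exact implementation:

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: sum 1/(k + rank) contributions across rankings."""
    scores = {}
    for ranking in rankings:
        for rank, uid in enumerate(ranking, start=1):
            scores[uid] = scores.get(uid, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["hyp_501", "con_801", "clm_101"]    # text ranking
vector = ["thy_001", "hyp_501", "con_801"]  # semantic ranking
fused = rrf_fuse([bm25, vector])            # hyp_501 wins: ranked high in both
```

Note how a node ranked well in both lists outscores one that tops only a single list — the property that makes RRF a robust default for combining heterogeneous scores.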
POST /traverse
Graph traversal operations: follow reasoning chains, explore neighborhoods, find paths between nodes, and extract subgraphs.
| Action | Description |
|---|---|
| chain | Follow epistemic edges from a starting node (reasoning chain) |
| neighborhood | BFS traversal around a node with direction and edge filters |
| path | Find the shortest path between two nodes |
| subgraph | Extract all reachable nodes and edges from a starting point |
chain - Request / Response
// Request
{
"action": "chain",
"start_uid": "clm_101",
"max_depth": 5,
"edge_types": ["Supports", "Justifies", "DerivedFrom"],
"direction": "both"
}
// Response
{
"mode": "chain",
"start_uid": "clm_101",
"steps": [
{ "uid": "ev_201", "label": "Airbnb migration study", "edge_type": "Supports", "depth": 1 },
{ "uid": "arg_401", "label": "TS type safety argument", "edge_type": "Justifies", "depth": 2 },
{ "uid": "ic_1001", "label": "RAG improves accuracy chain", "edge_type": "DerivedFrom", "depth": 3 }
]
}
neighborhood - Request / Response
// Request
{
"action": "neighborhood",
"start_uid": "con_801",
"max_depth": 2,
"direction": "both",
"edge_types": ["RelatedTo", "InstanceOf", "AnalogousTo"],
"weight_threshold": 0.5
}
// Response
{
"mode": "neighborhood",
"start_uid": "con_801",
"steps": [
{ "uid": "ent_001", "label": "TypeScript", "edge_type": "RelatedTo", "depth": 1 },
{ "uid": "ana_901", "label": "Knowledge graph as library catalog", "edge_type": "AnalogousTo", "depth": 1 },
{ "uid": "pat_001", "label": "Retrieval-then-generate", "edge_type": "RelatedTo", "depth": 1 }
]
}
path - Request / Response
// Request
{
"action": "path",
"start_uid": "obs_001",
"end_uid": "goal_001",
"max_depth": 6,
"direction": "both"
}
// Response
{
"mode": "path",
"start_uid": "obs_001",
"end_uid": "goal_001",
"steps": [
{ "uid": "obs_001", "label": "Transformer inference latency scales quadratically", "depth": 0 },
{ "uid": "hyp_501", "label": "LLMs benefit from structured memory", "depth": 1 },
{ "uid": "task_001", "label": "Implement /retrieve endpoint", "depth": 2 },
{ "uid": "goal_001", "label": "Ship MindGraph v1.0", "depth": 3 }
]
}
subgraph - Request / Response
// Request
{
"action": "subgraph",
"start_uid": "arg_401",
"max_depth": 3,
"direction": "outgoing",
"edge_types": ["HasConclusion", "HasPremise", "Supports"]
}
// Response
{
"mode": "subgraph",
"start_uid": "arg_401",
"nodes": [
{ "uid": "arg_401", "label": "TS type safety argument", "node_type": "Argument" },
{ "uid": "clm_101", "label": "TypeScript reduces runtime errors", "node_type": "Claim" },
{ "uid": "ev_201", "label": "Airbnb migration study", "node_type": "Evidence" },
{ "uid": "ev_202", "label": "Bloomberg engineering report", "node_type": "Evidence" }
],
"edges": [
{ "uid": "e_001", "source_uid": "arg_401", "target_uid": "clm_101", "edge_type": "HasConclusion" },
{ "uid": "e_002", "source_uid": "arg_401", "target_uid": "ev_201", "edge_type": "HasPremise" },
{ "uid": "e_003", "source_uid": "arg_401", "target_uid": "ev_202", "edge_type": "HasPremise" }
]
}
Fields: action (required), start_uid, end_uid (for path), max_depth, direction, edge_types (array), weight_threshold.
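The neighborhood action is a depth-bounded BFS with edge-type filtering. A local sketch of the same semantics over an in-memory edge list — toy data standing in for the stored graph, with direction fixed to "both":

```python
from collections import deque

def neighborhood(edges, start_uid, max_depth, edge_types=None):
    """Depth-bounded BFS over (source, target, edge_type) triples, both directions."""
    seen, out, queue = {start_uid}, [], deque([(start_uid, 0)])
    while queue:
        uid, depth = queue.popleft()
        if depth == max_depth:
            continue
        for src, dst, etype in edges:
            if edge_types and etype not in edge_types:
                continue
            for nxt in ((dst,) if src == uid else (src,) if dst == uid else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    out.append({"uid": nxt, "edge_type": etype, "depth": depth + 1})
                    queue.append((nxt, depth + 1))
    return out

edges = [("con_801", "ent_001", "RelatedTo"),
         ("ana_901", "con_801", "AnalogousTo"),
         ("ent_001", "pat_001", "RelatedTo")]
steps = neighborhood(edges, "con_801", max_depth=2,
                     edge_types={"RelatedTo", "AnalogousTo"})
```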
POST /evolve
Lifecycle mutations: update node fields, tombstone or restore nodes, apply salience decay, and access version history.
| Action | Description |
|---|---|
| update | Patch node fields (label, summary, confidence, salience, props) |
| tombstone | Soft-delete a node (optionally cascade to connected edges) |
| restore | Restore a previously tombstoned node |
| decay | Apply exponential salience decay with configurable half-life |
| history | Retrieve the full version history of a node |
| snapshot | Retrieve a node at a specific version number |
update - Request / Response
// Request
{
"action": "update",
"uid": "clm_101",
"label": "TypeScript significantly reduces runtime errors",
"confidence": 0.92,
"salience": 0.85,
"summary": "Strong empirical evidence from Airbnb and Bloomberg confirms 38-60% bug reduction",
"props_patch": {
"reviewed": true,
"reviewer": "senior-research-agent",
"review_date": "2026-03-05"
},
"reason": "Updated confidence after peer review with additional Bloomberg evidence",
"agent_id": "review-agent"
}
// Response
{
"uid": "clm_101",
"label": "TypeScript significantly reduces runtime errors",
"confidence": 0.92,
"salience": 0.85,
"version": 3
}
tombstone - Request / Response
// Request
{
"action": "tombstone",
"uid": "ent_099",
"reason": "Duplicate entity, merged into ent_001 via fuzzy_resolve (score: 0.97)",
"cascade": true,
"agent_id": "cleanup-agent"
}
// Response
{
"uid": "ent_099",
"action": "tombstone",
"edges_tombstoned": 3
}
restore - Request / Response
// Request
{
"action": "restore",
"uid": "ent_099",
"reason": "Merge was incorrect -- entities are distinct after manual review",
"agent_id": "admin"
}
// Response
{
"uid": "ent_099",
"action": "restore"
}
decay - Request / Response
// Request
{
"action": "decay",
"half_life_secs": 86400,
"min_salience": 0.1,
"min_age_secs": 3600,
"agent_id": "maintenance-agent"
}
// Response
{
"nodes_decayed": 142,
"below_threshold": 12,
"auto_tombstoned": 3
}
history - Request / Response
// Request
{
"action": "history",
"uid": "clm_101"
}
// Response
[
{ "version": 1, "changed_by": "research-agent", "changed_at": "2026-03-01T10:00:00Z", "reason": "Created via /epistemic/argument" },
{ "version": 2, "changed_by": "research-agent", "changed_at": "2026-03-03T14:30:00Z", "reason": "Added Bloomberg evidence" },
{ "version": 3, "changed_by": "review-agent", "changed_at": "2026-03-05T09:00:00Z", "reason": "Updated confidence after peer review with additional Bloomberg evidence" }
]
snapshot - Request / Response
// Request
{
"action": "snapshot",
"uid": "clm_101",
"version": 1
}
// Response
{
"uid": "clm_101",
"label": "TypeScript reduces runtime errors",
"confidence": 0.85,
"salience": 1.0,
"version": 1,
"node_type": "Claim",
"props": {
"content": "Static type checking catches type errors at compile time"
},
"created_at": "2026-03-01T10:00:00Z"
}
Fields: action (required), uid, label, summary, confidence, salience, props_patch (object), reason, cascade (boolean), half_life_secs, min_salience, min_age_secs, version (for snapshot), agent_id.
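The decay action applies exponential half-life decay. The doc doesn't expose the server's exact formula, but standard half-life decay gives new_salience = salience · 0.5^(age / half_life); a sketch under that assumption, including the min_age_secs guard:

```python
def decayed_salience(salience, age_secs, half_life_secs, min_age_secs=0):
    """Exponential half-life decay; nodes younger than min_age_secs are untouched."""
    if age_secs < min_age_secs:
        return salience
    return salience * 0.5 ** (age_secs / half_life_secs)

# With half_life_secs=86400 (one day), a day-old node keeps half its salience:
day_old = decayed_salience(0.8, 86400, 86400)          # 0.8 * 0.5 = 0.4
fresh = decayed_salience(0.8, 100, 86400, min_age_secs=3600)  # untouched: 0.8
```

Nodes whose decayed salience drops below min_salience are counted in below_threshold and may be auto-tombstoned, as the response fields above show.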
If agent_id is omitted from a request, it defaults to the value of the server's MINDGRAPH_DEFAULT_AGENT environment variable (typically "system"). Set this variable to change the default agent identity for all requests that omit agent_id.