Learn API
The Learn endpoint triggers the Memory Compiler, which processes accumulated events into structured knowledge artifacts. This is the step where raw interactions become usable memory.
The compiler works without any LLM calls. It uses frequency analysis, co-occurrence detection, and sequence extraction to identify patterns in your agent's event history. The output is deterministic: the same events always produce the same artifacts. Every artifact traces back to its source events, creating a full provenance chain.
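To make the idea concrete, here is a minimal sketch of LLM-free, deterministic pattern extraction with provenance. This is illustrative only: the Memory Compiler's internals are not public, so the function and field names below are assumptions, not the real implementation.

```python
from collections import Counter

def extract_patterns(events, min_strength=0.5):
    """Toy frequency-based extraction (illustrative, not the real compiler).

    Deterministic: the same event list always yields the same patterns.
    Every pattern carries a `sources` list tracing back to its events.
    """
    total = len(events)
    # Frequency analysis: count how often each action occurs.
    freq = Counter(e["action"] for e in events)
    patterns = []
    for action, n in sorted(freq.items()):  # sorted => stable output order
        strength = n / total
        if strength >= min_strength:
            patterns.append({
                "pattern": action,
                "strength": strength,
                # Provenance chain: IDs of the events that support this pattern.
                "sources": [e["id"] for e in events if e["action"] == action],
            })
    return patterns
```

Because the extraction is pure counting with a stable sort, recompiling the same history is a no-op, which is what makes full recompilation safe to run repeatedly.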
How Compilation Works
When you call Learn, the compiler goes through these stages:
- Event retrieval. Fetches unprocessed events (incremental) or all events (full).
- Pattern extraction. Identifies recurring patterns across events.
- Artifact generation. Creates structured knowledge objects from detected patterns.
- Confidence scoring. Assigns confidence based on evidence count, extraction method, and recency.
- Contradiction detection. Identifies and supersedes outdated knowledge automatically.
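The confidence-scoring stage can be pictured with a small model that combines the three inputs named above. The weights, decay curve, and method multipliers here are assumptions for illustration; the compiler's actual scoring formula is not documented.

```python
import math
import time

# Assumed multipliers per extraction method -- illustrative values only.
METHOD_WEIGHT = {"sequence": 1.0, "co_occurrence": 0.9, "frequency": 0.8}

def confidence(evidence_count, method, last_seen_ts, now=None, half_life_days=30):
    """Toy confidence score in (0, 1] from evidence count, method, and recency."""
    now = now if now is not None else time.time()
    # More supporting events -> higher confidence, with diminishing returns.
    evidence = 1 - math.exp(-evidence_count / 5)
    # Recency decay: evidence not seen recently counts for less.
    age_days = (now - last_seen_ts) / 86400
    recency = 0.5 ** (age_days / half_life_days)
    return evidence * METHOD_WEIGHT[method] * recency
```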
The compiler produces four artifact types:
| Type | What It Extracts |
|---|---|
| Task Schema | Step-by-step procedures from successful task completions |
| Failure Playbook | Failure modes with symptoms, root causes, and recovery steps |
| Causal Pattern | Cause-and-effect relationships observed across events |
| Decision Policy | Conditional rules extracted from decision patterns |
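One way to model the four artifact types client-side is a shared base carrying confidence and provenance, with type-specific fields. These field names are assumptions for illustration; the actual wire schema is not shown in this document.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    id: str
    confidence: float
    source_events: list[str]  # provenance back to the raw events

@dataclass
class TaskSchema(Artifact):
    steps: list[str]          # ordered procedure from successful completions

@dataclass
class FailurePlaybook(Artifact):
    symptoms: list[str]
    root_cause: str
    recovery_steps: list[str]

@dataclass
class CausalPattern(Artifact):
    cause: str
    effect: str

@dataclass
class DecisionPolicy(Artifact):
    condition: str            # "when X holds..."
    action: str               # "...take action Y"
```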
POST /v1/learn
```bash
curl -X POST https://api.ynfra.ai/v1/learn \
  -H "Authorization: Bearer hx_live_..." \
  -H "Content-Type: application/json" \
  -d '{
    "scope": "incremental",
    "options": {
      "minPatternStrength": 0.7,
      "artifactTypes": ["task_schema", "failure_playbook"]
    }
  }'
```
Request Body
| Field | Type | Default | Description |
|---|---|---|---|
| scope | string | "incremental" | "full" or "incremental" |
| options.minPatternStrength | number | 0.5 | Minimum pattern strength (0-1) |
| options.artifactTypes | string[] | all types | Filter to specific artifact types |
"incremental" processes only events captured since the last compilation run. It is fast and the recommended default.
"full" reprocesses all events from scratch. Run it periodically (once per day or week) to ensure consistency.
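The recommended cadence can be expressed as a small scheduling helper: stay incremental until the full-recompile interval elapses. The function name and defaults are hypothetical, not part of the API.

```python
from datetime import datetime, timedelta

def choose_scope(last_full_run: datetime, now: datetime,
                 full_interval: timedelta = timedelta(days=1)) -> str:
    """Pick the Learn scope: fall back to a full recompile once the
    configured interval has elapsed, otherwise stay incremental."""
    if now - last_full_run >= full_interval:
        return "full"
    return "incremental"
```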
Response
```json
{
  "ok": true,
  "data": {
    "runId": "run-abc123",
    "status": "completed",
    "artifacts": {
      "created": 5,
      "updated": 3,
      "unchanged": 12,
      "byType": {
        "task_schema": 3,
        "failure_playbook": 2,
        "causal_pattern": 1,
        "decision_policy": 2
      }
    },
    "stats": {
      "memoriesProcessed": 150,
      "patternsFound": 23,
      "compilationMs": 1245
    }
  }
}
```
| Status | Description |
|---|---|
| completed | All events processed, all artifacts generated |
| partial | Some events processed, partial results |
| failed | Compilation failed (check error details) |
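A caller typically reduces a Learn response to a log line or a metric. This sketch assumes only the envelope and field names shown in the response example above; anything else would be an assumption.

```python
def summarize_learn_response(resp: dict) -> str:
    """Turn a Learn response envelope (shape shown above) into a
    one-line log message."""
    if not resp.get("ok"):
        return "learn failed: no data"
    data = resp["data"]
    a = data["artifacts"]
    s = data["stats"]
    return (f"run {data['runId']} {data['status']}: "
            f"{a['created']} created, {a['updated']} updated, "
            f"{a['unchanged']} unchanged "
            f"({s['memoriesProcessed']} memories, {s['compilationMs']} ms)")
```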
Best Practices
- Use incremental mode by default. It is faster and processes only new data.
- Run full compilation periodically. Once per day or week to catch any missed patterns.
- Do not call learn() after every capture(). Batch events first, then compile. A good default is every 100 events or every hour.
- Use artifact type filters when you only need specific knowledge types.
- Monitor compilation time. Large event histories take longer for full recompilation.
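The "batch first, then compile" practice can be sketched as a small client-side batcher that fires on whichever threshold is hit first: event count or elapsed time. `trigger_learn` is a hypothetical callback that would POST to /v1/learn; this class is not part of any SDK.

```python
import time

class LearnBatcher:
    """Trigger compilation every `max_events` captures or `max_age_s`
    seconds, whichever comes first (assumed defaults: 100 events / 1 hour)."""

    def __init__(self, trigger_learn, max_events=100, max_age_s=3600):
        self.trigger_learn = trigger_learn
        self.max_events = max_events
        self.max_age_s = max_age_s
        self.pending = 0
        self.last_run = time.monotonic()

    def on_capture(self):
        """Call after each capture(); fires trigger_learn when a threshold is hit."""
        self.pending += 1
        due = (self.pending >= self.max_events or
               time.monotonic() - self.last_run >= self.max_age_s)
        if due:
            self.trigger_learn()
            self.pending = 0
            self.last_run = time.monotonic()
```

Using a count threshold alongside a time threshold keeps quiet agents from holding events indefinitely while busy agents still avoid compiling after every capture.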