Ynfra - Quickstart Guide

Get your AI agent learning from experience in under 5 minutes.


1. Get Your API Key

  1. Go to dashboard.ynfra.ai
  2. Sign up (free tier, no credit card required)
  3. Copy your API key (hx_test_...)

2. Install the SDK

JavaScript/TypeScript:

npm install @ynfra/sdk

Python:

pip install ynfra

3. Set Up Config (Choose One)

Option A: Environment variable

export YNFRA_API_KEY=hx_test_...

Option B: Manual config file

.ynfra.json (in your project root) -- note that JSON does not allow comments:

{
  "apiKey": "hx_test_your_key_here"
}

4. Add Memory to Your Agent

Choose the method that fits your workflow. All four are production-ready.

Gateway (Fastest, Zero Code)

Change your base URL. No SDK needed. Works with any OpenAI-compatible provider.

Python:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.ynfra.ai/v1",
    api_key="hx_live_...",
    default_headers={
        "X-LLM-API-Key": "sk-...",   # Your OpenAI key
    },
)

# Use exactly as before. Memory is automatic.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)

Works with OpenAI, Anthropic, Groq, Together, Ollama, Mistral, and any OpenAI-compatible endpoint. See the Gateway guide for all provider examples.

Auto-Instrumentation (Easiest, 1 Line)

One import. Every OpenAI/Anthropic call gets memory. Zero config.

TypeScript:

import '@ynfra/sdk/auto'
import OpenAI from 'openai'

const openai = new OpenAI()

// This call now has persistent memory:
// - Past context is synthesized and injected
// - The conversation is captured for future learning
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy the payment service to staging' }]
})

Python:

import ynfra.auto
from openai import OpenAI

client = OpenAI()

# Every call now has memory, automatically
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)

wrap() (Recommended)

Explicit and typed: you choose exactly which clients get memory.

TypeScript:

import { wrap } from '@ynfra/sdk'
import OpenAI from 'openai'

const openai = wrap(new OpenAI())

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Deploy the payment service to staging' }]
})
// Memory is injected and captured transparently

Python:

from ynfra import wrap
from openai import OpenAI

client = wrap(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Deploy the payment service to staging"}]
)

Manual Client (Advanced, Full Control)

Use the Ynfra client directly for custom agent loops.

TypeScript:

import { Ynfra } from '@ynfra/sdk';

const hx = new Ynfra({ apiKey: process.env.YNFRA_API_KEY! });

// Capture a user message
await hx.capture({
  type: 'message',
  sessionId: 'session-001',
  payload: { role: 'user', content: 'Deploy the payment service to staging' }
});

// Capture the agent's tool call
await hx.capture({
  type: 'tool_call',
  sessionId: 'session-001',
  payload: { toolName: 'kubectl', arguments: { action: 'apply', file: 'staging.yaml' } }
});

// Capture the result
await hx.capture({
  type: 'tool_result',
  sessionId: 'session-001',
  payload: { toolName: 'kubectl', output: 'deployment.apps/payments created', success: true }
});

Python (capture() is async, so it must be awaited inside an async function -- top-level await is not valid Python):

import asyncio
import os

from ynfra import Ynfra, CaptureEvent

hx = Ynfra(api_key=os.environ["YNFRA_API_KEY"])

async def main():
    # Capture a user message
    await hx.capture(CaptureEvent(
        type="message",
        session_id="session-001",
        payload={"role": "user", "content": "Deploy the payment service to staging"}
    ))

    # Capture the agent's tool call
    await hx.capture(CaptureEvent(
        type="tool_call",
        session_id="session-001",
        payload={"toolName": "kubectl", "arguments": {"action": "apply", "file": "staging.yaml"}}
    ))

    # Capture the result
    await hx.capture(CaptureEvent(
        type="tool_result",
        session_id="session-001",
        payload={"toolName": "kubectl", "output": "deployment.apps/payments created", "success": True}
    ))

asyncio.run(main())

5. Knowledge Compilation (Automatic)

The compilation pipeline runs automatically after every 10 captured events, with a 5-minute sweep to catch stragglers. You do not need to trigger it manually.

The compiler extracts:

  • Task schemas: recurring procedures ("how to deploy")
  • Failure playbooks: error patterns and resolutions
  • Causal patterns: cause-effect relationships
  • Decision policies: preferences and rules

If you want to trigger compilation manually (for testing or debugging), you can:

TypeScript:

const result = await hx.learn();
console.log(`Found ${result.stats.patternsFound} patterns`);
console.log(`Created ${result.artifacts.created} artifacts`);

Python (inside an async function, since learn() is async):

result = await hx.learn()
print(f"Found {result.stats.patterns_found} patterns")
print(f"Created {result.artifacts.created} artifacts")

CLI:

ynfra compile

6. Synthesize Context

When your agent needs to make a decision, synthesize compressed context:

TypeScript:

const context = await hx.synthesize('deploy payment service to staging', {
  maxTokens: 2000
});

// Use in your LLM prompt
const systemPrompt = `You are a deployment assistant.

Relevant knowledge from past experience:
${context.entries.map(e => `[${e.section}] ${e.content}`).join('\n')}

Respond to the user's request.`;

Python (inside an async function, as with capture() and learn()):

context = await hx.synthesize("deploy payment service to staging", max_tokens=2000)

# Use in your LLM prompt
knowledge = "\n".join(
    f"[{e.section}] {e.content}" for e in context.entries
)
system_prompt = f"""You are a deployment assistant.

Relevant knowledge from past experience:
{knowledge}

Respond to the user's request."""

CLI:

ynfra synthesize "deploy payment service to staging"
ynfra synthesize "deploy payment service" --json  # raw JSON output
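
If you pipe the --json output into a script, the entries can be folded into a prompt block the same way as in the SDK examples above. This assumes the JSON mirrors the SDK's entries shape (a list of objects with "section" and "content" fields); verify against your actual output:

```python
import json


def entries_to_prompt_block(raw_json: str) -> str:
    """Format synthesized entries as '[section] content' lines for an LLM prompt."""
    context = json.loads(raw_json)
    return "\n".join(
        f"[{e['section']}] {e['content']}" for e in context["entries"]
    )
```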

7. That's It!

Your agent now has memory that learns from experience. Every interaction gets captured, patterns get compiled, and context gets synthesized for future decisions.

What happens over time:

Session 1: Agent deploys service. Captures steps.
Session 2: Agent deploys again. Captures variations.
           learn() finds the "deploy" pattern.
Session 3: Agent asked to deploy. synthesize() returns:
           "Procedure: 1) Check CI 2) Run migrations 3) Deploy 4) Smoke test"
           "Known issue: skip migrations leads to 500 errors"

The more experience you capture, the better the synthesized context becomes.


Examples

Working code for common frameworks:

  • OpenAI Agents -- TypeScript deployment assistant with tool calls
  • LangGraph -- Python customer support agent with state graph
  • CrewAI -- Python code review crew with role-based agents
  • Agent Demo -- Internal architecture demo (full memory lifecycle)

Next Steps