API Reference
Complete reference documentation for the Dialetica AI API, with examples for curl and the Python SDK.
Authenticate by providing your API key in the Authorization header:
Authorization: Bearer dai_your_api_key_here

Getting Your API Key:
1. Log in to your dashboard
2. Navigate to the API Keys page
3. Click "Create New API Key"
4. Copy the key (shown only once) and store it securely
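If you are calling the HTTP API directly rather than through the SDK, every request needs the Authorization header shown above. A minimal sketch (the header format matches the documentation above; the `auth_headers` helper is illustrative, not part of the SDK):

```python
API_KEY = "dai_your_api_key_here"

def auth_headers(api_key: str) -> dict:
    """Build the headers required for an authenticated Dialetica AI API call."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers(API_KEY)
```

Pass these headers with any HTTP client of your choice.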
Installation:
pip install dialetica

Initialize Client:
from dialetica import Dialetica
# Using environment variable (recommended)
# Set DIALETICA_AI_API_KEY in your environment
client = Dialetica()
# Or pass API key explicitly
client = Dialetica(api_key="dai_your_api_key_here")
Agents
Agents are AI personalities with unique instructions, model configurations, and tool access.
Create Agent (/v1/agents)

Example Request:
from dialetica import Dialetica, AgentRequest

client = Dialetica()

agent = client.agents.create(AgentRequest(
    name="Scientific Researcher",
    description="Analyzes data and formulates hypotheses",
    instructions=[
        "You are a scientific researcher",
        "Analyze data objectively and propose testable hypotheses",
        "Collaborate with other scientists to advance understanding"
    ],
    model="auto/auto",
    temperature=0.7,
    max_tokens=1200,
    tools=[],
    verbosity="medium"
))

print(f"Created agent: {agent.id}")

Request Body Fields:
- name (required): Agent name
- description: Brief description
- instructions: Array of system prompts
- model: Model ID (default: "auto/auto")
- temperature: 0.0 to 2.0 (default: 0.7)
- max_tokens: Maximum response length (default: 1000)
- tools: Array of tool configuration UUIDs
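The defaults and ranges above can be checked locally before sending a request. A minimal sketch (the `build_agent_payload` helper is illustrative, not part of the SDK; field names and defaults come from the list above):

```python
def build_agent_payload(name, description="", instructions=None,
                        model="auto/auto", temperature=0.7,
                        max_tokens=1000, tools=None):
    """Assemble and sanity-check an agent request body before sending."""
    if not name:
        raise ValueError("name is required")
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be between 0.0 and 2.0")
    return {
        "name": name,
        "description": description,
        "instructions": instructions or [],
        "model": model,
        "temperature": temperature,
        "max_tokens": max_tokens,
        "tools": tools or [],
    }

payload = build_agent_payload("Scientific Researcher")
```

Validating on the client side turns a 400 Bad Request into an immediate, local error.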
Bulk Create Agents (/v1/agents/bulk)

Example Request:
from dialetica import Dialetica, AgentRequest

client = Dialetica()

# Create multiple agents at once
agents = client.agents.bulk_create([
    AgentRequest(
        name="Einstein",
        description="Theoretical physicist exploring spacetime",
        instructions=["You are Einstein. Discuss relativity and unified theories."],
        model="openai/gpt-4o",
        temperature=0.7,
        max_tokens=1500,
        verbosity="high"
    ),
    AgentRequest(
        name="Darwin",
        description="Biologist studying evolution",
        instructions=["You are Darwin. Explain natural selection and adaptation."],
        model="anthropic/claude-3-5-sonnet-20241022",
        temperature=0.6,
        max_tokens=1400,
        verbosity="medium"
    )
])

print(f"✅ Created {len(agents)} agents in bulk")
for agent in agents:
    print(f" - {agent.name} ({agent.id})")

List Agents (/v1/agents)

Example Request:
from dialetica import Dialetica

client = Dialetica()

agents = client.agents.list()
for agent in agents:
    print(f"{agent.name} - {agent.model}")

Get Agent (/v1/agents/{agent_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

agent = client.agents.get("agent_abc123")
if agent:
    print(f"Agent: {agent.name}")
    print(f"Model: {agent.model}")
    print(f"Instructions: {agent.instructions}")

Update Agent (/v1/agents/{agent_id})

Example Request:
from dialetica import Dialetica, AgentRequest

client = Dialetica()

updated_agent = client.agents.update(
    "agent_abc123",
    AgentRequest(
        name="Updated Agent Name",
        description="Updated description",
        instructions=["New instruction 1", "New instruction 2"],
        model="anthropic/claude-3-haiku-20240307",
        temperature=0.8,
        max_tokens=1500,
        tools=[],
        verbosity="low"
    )
)

print(f"Updated: {updated_agent.name}")

Delete Agent (/v1/agents/{agent_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

success = client.agents.delete("agent_abc123")
if success:
    print("Agent deleted successfully")

Contexts
Contexts are conversation environments where multiple agents can interact and collaborate.
Context Window is an intelligent message management system that automatically handles conversation history to optimize performance and cost.
How It Works:
- Monitors total token usage in conversations
- Automatically compresses old messages into summaries
- Maintains recent context while reducing costs
Configuration:
- context_window_size: Max tokens
- Default: 16,000 tokens (~48,000 chars)
- Range: 0 to 200,000 tokens
- Set to 0 to disable storage
💡 Stateless Mode (context_window_size = 0):
When set to 0, messages are not stored in the database. Each request is processed independently without conversation history. This is ideal for:
- Single-turn interactions (no context needed)
- Maximum privacy (no message persistence)
- Reduced storage costs
- Stateless API workflows
Create Context (/v1/contexts)

Example Request (Standard):
from dialetica import Dialetica, ContextRequest

client = Dialetica()

# Create multi-agent scientific research collaboration
context = client.contexts.create(ContextRequest(
    name="Scientific Discovery Lab",
    description="Multi-agent scientific research collaboration",
    instructions=[
        "Agents collaborate on scientific hypotheses",
        "Share insights and build upon each other's theories",
        "Work together to advance scientific understanding"
    ],
    context_window_size=16000,
    agents=["researcher_id", "einstein_id", "darwin_id"],
    users=[],
    is_public=True
))

print(f"Created context: {context.id}")

Example Request (Stateless Mode):
from dialetica import Dialetica, ContextRequest

client = Dialetica()

# Stateless context - no message storage
stateless_context = client.contexts.create(ContextRequest(
    name="Stateless API Context",
    description="No message storage",
    context_window_size=0,  # Disable storage
    agents=["agent_abc123"],
    is_public=False
))

print(f"Created stateless context: {stateless_context.id}")

Request Body Fields:
- name (required): Context name
- description: Brief description
- instructions: Array of orchestration rules
- context_window_size: Max tokens (default: 16000, 0 = stateless)
- agents (required): Array of agent IDs
- users: Array of user names (for multi-user contexts)
- is_public: Boolean (default: false)
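The field constraints above, including the stateless rule for context_window_size, can be enforced before the request leaves your process. A minimal sketch (the `build_context_payload` helper is illustrative, not part of the SDK; field names and ranges come from the documentation above):

```python
def build_context_payload(name, agents, context_window_size=16000,
                          description="", instructions=None,
                          users=None, is_public=False):
    """Assemble and sanity-check a context request body before sending."""
    if not name:
        raise ValueError("name is required")
    if not agents:
        raise ValueError("agents is required")
    if not 0 <= context_window_size <= 200_000:
        raise ValueError("context_window_size must be 0 to 200,000 tokens")
    return {
        "name": name,
        "description": description,
        "instructions": instructions or [],
        "context_window_size": context_window_size,
        "agents": agents,
        "users": users or [],
        "is_public": is_public,
    }

# context_window_size == 0 means stateless: messages will not be stored
payload = build_context_payload("Lab", ["agent_abc123"], context_window_size=0)
```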
Bulk Create Contexts (/v1/contexts/bulk)

Example Request:
from dialetica import Dialetica, ContextRequest

client = Dialetica()

# Create multiple research contexts at once
contexts = client.contexts.bulk_create([
    ContextRequest(
        name="Physics Research",
        description="Einstein explores theoretical physics",
        instructions=["Focus on relativity and quantum mechanics"],
        agents=["einstein_id", "researcher_id"],
        context_window_size=8000,
        is_public=True
    ),
    ContextRequest(
        name="Biology Research",
        description="Darwin investigates evolution",
        instructions=["Explore natural selection and adaptation"],
        agents=["darwin_id", "researcher_id"],
        context_window_size=6000,
        is_public=True
    )
])

print(f"✅ Created {len(contexts)} contexts in bulk")
for ctx in contexts:
    print(f" - {ctx.name} ({ctx.id})")

List Contexts (/v1/contexts)

Example Request:
from dialetica import Dialetica

client = Dialetica()

contexts = client.contexts.list()
for context in contexts:
    print(f"{context.name} - {len(context.agents)} agents")

Get Context (/v1/contexts/{context_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

context = client.contexts.get("ctx_abc123")
if context:
    print(f"Context: {context.name}")
    print(f"Agents: {context.agents}")

Update Context (/v1/contexts/{context_id})

Example Request:
from dialetica import Dialetica, ContextRequest

client = Dialetica()

# Update context and increase window size
updated_context = client.contexts.update(
    "ctx_abc123",
    ContextRequest(
        name="Updated Context",
        description="Updated description",
        instructions=["New rule 1", "New rule 2"],
        context_window_size=32000,  # Increase to 32k tokens
        agents=["agent_abc123", "agent_def456", "agent_ghi789"],
        users=[],
        is_public=True
    )
)

print(f"Updated: {updated_context.name}")

💡 Tip:
You can change the context_window_size at any time. Setting it to 0 will stop storing new messages, but existing messages remain in the database.
Delete Context (/v1/contexts/{context_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

success = client.contexts.delete("ctx_abc123")
if success:
    print("Context deleted successfully")

Messages
Send messages to contexts and get agent responses. Supports both synchronous and streaming modes.
Run Context (/v1/contexts/{context_id}/run)

Example Request:
from dialetica import Dialetica, MessageRequest

client = Dialetica()

# Send a scientific question to multi-agent context
message = MessageRequest(
    role="user",
    sender_name="Student",
    content="How might quantum mechanics and evolution inform our understanding of consciousness?"
)

# ONE API CALL = Multi-agent automatic collaboration!
responses = client.contexts.run("ctx_abc123", [message])

print(f"🔬 {len(responses)} scientists responded:")
for response in responses:
    print(f"\n{response.sender_name}: {response.content[:150]}...")

💡 Response Behavior:
The API returns all agent responses triggered by your message. In multi-agent contexts, multiple agents may respond in sequence.
Run Context with Streaming (/v1/contexts/{context_id}/run/stream)

Example Request:
from dialetica import Dialetica, MessageRequest
import asyncio

async def main():
    client = Dialetica()
    context = client.contexts.get("ctx_abc123")
    message = MessageRequest(
        role="user",
        sender_name="Alice",
        content="Explain quantum computing"
    )
    session_id = None

    # Stream the response with new event structure
    async for event in client.contexts.run_streamed(context, [message]):
        event_type = event.get("type")
        payload = event.get("payload", {})

        # Session lifecycle events
        if event_type == "session_started":
            session_id = payload.get("session_id")
            print(f"🎬 Session started: {session_id}\n")
        # Turn events (context or agent starting/completing)
        elif event_type == "turn_started":
            kind = payload.get("kind")
            name = payload.get("runnable_name")
            if kind == "agent":
                print(f"\n💭 {name} is thinking...\n")
            elif kind == "context":
                print(f"\n🎯 Context orchestrating...\n")
        # Token-by-token streaming (new structure)
        elif event_type == "token_delta" and payload.get("kind") == "agent":
            print(payload.get("delta", ""), end="", flush=True)
        # Turn completion
        elif event_type == "turn_completed":
            kind = payload.get("kind")
            name = payload.get("runnable_name")
            if kind == "agent":
                print(f"\n✅ {name} completed\n")
        # Session completion
        elif event_type == "session_completed":
            print(f"\n🎉 Session completed successfully!")
        elif event_type == "session_failed":
            print(f"\n❌ Session failed: {payload.get('reason')}")
        # Tool events
        elif event_type == "tool_event":
            print(f"\n🔧 Tool: {payload.get('tool_name', 'unknown')}")

    return session_id

session_id = asyncio.run(main())

Event Structure:
All events follow this structure:
{
    "type": "event_type_name",
    "payload": {
        "kind": "agent" or "context",
        "runnable_name": "Agent Name",
        "session_id": "session-uuid"
    }
}

Event Types:
- session_started: Session begins (includes session_id)
- turn_started: Agent or context starts processing (kind: "agent" | "context")
- token_delta: Token-by-token content (payload.kind: "agent", payload.delta: string)
- tool_event: Tool calls and orchestration decisions
- turn_completed: Agent or context finished (kind: "agent" | "context")
- session_completed: Session finished successfully
- session_failed: Session failed (payload.reason: string)
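Instead of printing each event as it arrives, the stream can be folded into a per-agent transcript. A minimal sketch, assuming events are dicts with the documented type/payload shape (the `collect_agent_text` helper is illustrative, not part of the SDK):

```python
def collect_agent_text(events):
    """Concatenate token_delta payloads per agent and record the final session state."""
    text_by_agent = {}
    status = None
    for event in events:
        etype = event.get("type")
        payload = event.get("payload", {})
        if etype == "token_delta" and payload.get("kind") == "agent":
            name = payload.get("runnable_name", "agent")
            text_by_agent[name] = text_by_agent.get(name, "") + payload.get("delta", "")
        elif etype in ("session_completed", "session_failed"):
            status = etype
    return text_by_agent, status

# Worked example with hand-built events in the documented shape
sample = [
    {"type": "session_started", "payload": {"session_id": "s1"}},
    {"type": "token_delta", "payload": {"kind": "agent", "runnable_name": "Einstein", "delta": "Hello "}},
    {"type": "token_delta", "payload": {"kind": "agent", "runnable_name": "Einstein", "delta": "world"}},
    {"type": "session_completed", "payload": {}},
]
texts, status = collect_agent_text(sample)
```

The same function works inside the async loop shown above: append each event to a list, then fold it once the stream ends.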
Get Message History (/v1/contexts/{context_id}/history)

Example Request:
from dialetica import Dialetica

client = Dialetica()

context = client.contexts.get("ctx_abc123")

# Get all messages
history = client.contexts.get_history(context)
for message in history:
    print(f"[{message.timestamp}] {message.sender_name}: {message.content}")

# Filter by sender
alice_messages = client.contexts.get_history(context, sender_name="Alice")
print(f"Alice sent {len(alice_messages)} messages")

Compress Context (/v1/contexts/{context_id}/compress_context)

Example Request:
from dialetica import Dialetica

client = Dialetica()

context_id = "ctx_abc123"

# Compress all messages in the context
resume = client.contexts.compress_context(context_id)
if resume:
    print(f"✅ Context compressed successfully")
    print(f"Compressed summary: {resume.compressed_summary[:200]}...")
    print(f"Last message ID: {resume.last_message_id}")
    print(f"Created at: {resume.created_at}")

💡 Use Case:
Compressing a context reduces token usage for long conversations by summarizing all messages into a single compressed summary. This is useful when your conversation history becomes too large and you want to maintain context while reducing costs.
Response Fields:
- id: ID of the context resume
- context_id: ID of the context
- compressed_summary: The compressed summary text
- last_message_id: ID of the last message considered in the compression
- created_at: Timestamp when the resume was created
Compress Last N Messages (/v1/contexts/{context_id}/compress_last_n_messages)

Example Request:
from dialetica import Dialetica

client = Dialetica()

context_id = "ctx_abc123"

# Compress only the last 10 messages
resume = client.contexts.compress_last_n_messages(context_id, n=10)
if resume:
    print(f"✅ Compressed last 10 messages")
    print(f"Compressed summary: {resume.compressed_summary[:200]}...")
    print(f"Last message ID: {resume.last_message_id}")
    print(f"Created at: {resume.created_at}")

💡 Use Case:
This endpoint compresses only the most recent n messages, which is useful when you want to compress recent conversation history without affecting older messages. This allows you to maintain granular history for older messages while reducing token usage for recent ones.
Query Parameters:
- n (required): Number of messages to compress from the end of the conversation
Response Fields:
- id: ID of the context resume
- context_id: ID of the context
- compressed_summary: The compressed summary text
- last_message_id: ID of the last message considered in the compression
- created_at: Timestamp when the resume was created
Get Context Resumes (/v1/contexts/{context_id}/get_context_resumes)

Example Request:
from dialetica import Dialetica

client = Dialetica()

context_id = "ctx_abc123"

# Get all context resumes (compression history)
resumes = client.contexts.get_context_resumes(context_id)
print(f"✅ Found {len(resumes)} context resumes")
for resume in resumes:
    print(f"\nResume ID: {resume.id}")
    print(f"Summary: {resume.compressed_summary[:200]}...")
    print(f"Last message ID: {resume.last_message_id}")
    print(f"Created at: {resume.created_at}")

💡 Use Case:
This endpoint retrieves all the compressed summaries that have been created for a context. This is useful for viewing the history of context compressions, understanding how your conversation has been summarized over time, or accessing previous compression states.
Response:
Returns an array of ContextResumeResponse objects, ordered by creation date (newest first).
Response Fields:
- id: ID of the context resume
- context_id: ID of the context
- compressed_summary: The compressed summary text
- last_message_id: ID of the last message considered in the compression
- created_at: Timestamp when the resume was created
Session Management
Manage active execution sessions. Sessions are temporary execution instances created when running contexts. You can cancel active sessions or list all sessions for a context.
Cancel Session (/v1/contexts/{context_id}/sessions/{session_id}/cancel)

Example Request:
from dialetica import Dialetica, MessageRequest
import asyncio

client = Dialetica()
context_id = "ctx_abc123"

async def stream_and_cancel():
    messages = [MessageRequest(
        role="user",
        sender_name="User",
        content="Explain quantum mechanics in detail"
    )]
    session_id = None

    # Start streaming
    async for event in client.contexts.run_streamed(context_id, messages):
        event_type = event.get("type")
        payload = event.get("payload", {})
        if event_type == "session_started":
            session_id = payload.get("session_id")
            print(f"Session started: {session_id}")
        elif event_type == "token_delta" and payload.get("kind") == "agent":
            print(payload.get("delta", ""), end="", flush=True)
            # Example: Cancel after 100 characters
            if len(payload.get("delta", "")) > 100:
                print("\n\n⚠️ Cancelling session...")
                success = client.contexts.cancel_session(context_id, session_id)
                if success:
                    print("✅ Session cancelled")
                break

asyncio.run(stream_and_cancel())

💡 Getting Session ID:
The session_id is provided in the session_started event when you start streaming. Store it if you need to cancel the session later.
List Sessions (/v1/contexts/{context_id}/sessions)

Example Request:
from dialetica import Dialetica

client = Dialetica()
context_id = "ctx_abc123"

# List all active sessions
sessions = client.contexts.list_sessions(context_id)
if not sessions:
    print("No active sessions")
else:
    print(f"Active sessions: {len(sessions)}")
    for session in sessions:
        print(f" - {session['session_id']}")
        print(f" Depth: {session['depth']}/{session['max_depth']}")
        print(f" User: {session['user_id']}")

# Cancel all active sessions
for session in sessions:
    success = client.contexts.cancel_session(context_id, session['session_id'])
    if success:
        print(f"✅ Cancelled {session['session_id']}")

Session Response Fields:
- session_id: Unique session identifier
- context_id: Context this session belongs to
- user_id: User who started the session
- depth: Current orchestration depth
- max_depth: Maximum allowed depth
- cancelled: Whether session is cancelled
Knowledge
Store and retrieve information using semantic search. Knowledge can be scoped to users, contexts, or specific agents.
Create Knowledge (/v1/knowledge)

Example Request:
from dialetica import Dialetica, KnowledgeRequest

client = Dialetica()

# Store private experimental results for Einstein to reference
knowledge = client.knowledge.create(KnowledgeRequest(
    knowledge="Experiment #2847: Novel compound X-772 maintained quantum coherence for 47 seconds at room temperature. Breakthrough result - 10x previous record. Proprietary data.",
    context_id="ctx_abc123",
    agent_id="einstein_id",  # Only Einstein can access this
    metadata={
        "type": "experimental_result",
        "experiment_id": "2847",
        "date": "2024-02-15",
        "confidential": True
    }
))

print(f"Created knowledge: {knowledge.id}")

Knowledge Scopes:
- context_id=null, agent_id=null: User-level (global)
- context_id=X, agent_id=null: Context-level
- context_id=X, agent_id=Y: Agent-specific
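The three scoping rules above reduce to a simple decision on which IDs are set. A minimal sketch (the `knowledge_scope` helper is illustrative, not part of the SDK):

```python
def knowledge_scope(context_id, agent_id):
    """Classify a knowledge entry by the scoping rules documented above."""
    if context_id is None and agent_id is None:
        return "user"     # user-level (global)
    if agent_id is None:
        return "context"  # visible within one context
    return "agent"        # visible only to one agent in that context

print(knowledge_scope("ctx_abc123", "einstein_id"))  # agent
```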
Bulk Create Knowledge (/v1/knowledge/bulk)

Example Request:
from dialetica import Dialetica, KnowledgeRequest

client = Dialetica()

# Store private research data for each scientist
knowledge_entries = client.knowledge.bulk_create([
    KnowledgeRequest(
        knowledge="Lab Result #2847: Compound X-772 achieved 47-second quantum coherence. Temperature: 23°C. Repeatability: 94% over 50 trials.",
        context_id="ctx_abc123",
        agent_id="einstein_id",
        metadata={"type": "lab_result", "experiment": "2847"}
    ),
    KnowledgeRequest(
        knowledge="Field Study #891: New species in Amazon (3.4532°S, 62.2163°W) shows unique bioluminescence. DNA sample stored as BIO-891-A.",
        context_id="ctx_abc123",
        agent_id="darwin_id",
        metadata={"type": "field_observation", "study": "891"}
    ),
    KnowledgeRequest(
        knowledge="Patient Cohort #445: 1,200 subjects with genetic marker YZ-443 show 89% disease resistance. IRB approved, anonymized data.",
        context_id="ctx_abc123",
        agent_id="researcher_id",
        metadata={"type": "clinical_study", "cohort": "445"}
    )
])

print(f"✅ Stored {len(knowledge_entries)} private research results")
for entry in knowledge_entries:
    print(f" - {entry.knowledge[:60]}...")

List Knowledge (/v1/knowledge)

Example Request:
from dialetica import Dialetica

client = Dialetica()

knowledge_entries = client.knowledge.list()
for entry in knowledge_entries:
    print(f"{entry.knowledge[:50]}... (ID: {entry.id})")

Get Knowledge (/v1/knowledge/{knowledge_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

knowledge = client.knowledge.get("know_abc123")
if knowledge:
    print(f"Knowledge: {knowledge.knowledge}")
    print(f"Context: {knowledge.context_id}")
    print(f"Metadata: {knowledge.metadata}")

Update Knowledge (/v1/knowledge/{knowledge_id})

Example Request:
from dialetica import Dialetica, KnowledgeRequest

client = Dialetica()

updated = client.knowledge.update(
    "know_abc123",
    KnowledgeRequest(
        knowledge="Patient Cohort #445: 1,200 subjects with genetic marker YZ-443 show 80% disease resistance. IRB approved, anonymized data.",
        context_id="ctx_abc123",
        agent_id=None,
        metadata={
            "category": "clinical_study",
            "cohort": "445"
        }
    )
)

print(f"Updated: {updated.knowledge}")

Delete Knowledge (/v1/knowledge/{knowledge_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

success = client.knowledge.delete("know_abc123")
if success:
    print("Knowledge deleted successfully")

Query Knowledge (/v1/knowledge/query)

Example Request:
from dialetica import Dialetica

client = Dialetica()

# Search all knowledge in a context
results = client.knowledge.query(
    context="ctx_abc123",
    query="disease resistance",
    limit=5
)
for result in results:
    print(f"Match: {result.knowledge[:100]}...")

# Search knowledge visible to specific agent
agent_results = client.knowledge.query(
    context="ctx_abc123",
    query="disease resistance",
    agent_id="agent_abc123",
    limit=5
)

💡 Semantic Search:
Results are ranked by semantic similarity, not exact keyword matching. The search understands context and finds relevant information even with different wording.
MCP Tools
Configure external tool integrations via Model Context Protocol (MCP). Connect agents to services like Notion, GitHub, databases, and more.
Create Tool Config (/v1/tool-configs)

Example Request:
from dialetica import Dialetica, ToolConfigRequest

client = Dialetica()

tool_config = client.tools.create(ToolConfigRequest(
    name="Exa.ai MCP",
    description="Access Exa.ai MCP for AI search engine",
    endpoint="https://mcp.exa.ai/mcp",
    auth_token=None,
    type="streamable_http"
))

print(f"Created tool config: {tool_config.id}")
print(f"Name: {tool_config.name}")
print(f"Endpoint: {tool_config.endpoint}")

Connection Types:
- streamable_http: Standard HTTP with streaming support
- sse: Server-Sent Events
List Tool Configs (/v1/tool-configs)

Example Request:
from dialetica import Dialetica

client = Dialetica()

tools = client.tools.list()
for tool in tools:
    print(f"{tool.name}: {tool.endpoint}")
    print(f" Has auth: {tool.has_auth_token}")
    print(f" Excluded tools: {tool.excluded_tools}")

Get Tool Config (/v1/tool-configs/{tool_config_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

tool = client.tools.get("tool_abc123")
if tool:
    print(f"Tool: {tool.name}")
    print(f"Endpoint: {tool.endpoint}")
    print(f"Type: {tool.type}")

Update Tool Config (/v1/tool-configs/{tool_config_id})

Example Request:
from dialetica import Dialetica, ToolConfigRequest

client = Dialetica()

updated_tool = client.tools.update(
    "tool_abc123",
    ToolConfigRequest(
        name="Updated Exa Integration",
        description="Updated description",
        endpoint="https://mcp.exa.ai/mcp/v2",
        auth_token="exa_secret_new_token...",
        type="streamable_http"
    )
)

print(f"Updated: {updated_tool.name}")

Delete Tool Config (/v1/tool-configs/{tool_config_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

success = client.tools.delete("tool_abc123")
if success:
    print("Tool config deleted successfully")

Cron Jobs
Schedule automated tasks that execute prompts within contexts at specific times or intervals.
Create Cron (/v1/crons)

Example Request:
from dialetica import Dialetica, CronRequest
from datetime import datetime, timedelta

client = Dialetica()

# Create recurring cron (daily at 9 AM)
daily_cron = client.crons.create(CronRequest(
    name="Daily Report",
    prompt="Generate a summary of today's activities",
    context_id="ctx_abc123",
    cron_expression="0 9 * * *"
))

print(f"Created cron: {daily_cron.id}")
print(f"Next run: {daily_cron.cron_next_run}")

# Create one-time execution
one_time = client.crons.create(CronRequest(
    name="One-time Task",
    prompt="Send reminder about meeting",
    context_id="ctx_abc123",
    scheduled_time=datetime.now() + timedelta(days=1)
))

print(f"One-time task scheduled for: {one_time.scheduled_time}")

Common Cron Expressions:
- 0 * * * * : Every hour
- 0 9 * * * : Daily at 9 AM
- 0 9 * * 1 : Every Monday at 9 AM
- 0 0 1 * * : First day of each month
- */15 * * * * : Every 15 minutes
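The five space-separated fields of a cron expression are, in order: minute, hour, day of month, month, and day of week. A minimal sketch that labels them (the `parse_cron` helper is illustrative, not part of the SDK, and does not validate field values):

```python
CRON_FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")

def parse_cron(expression: str) -> dict:
    """Split a cron expression into its five named fields."""
    parts = expression.split()
    if len(parts) != 5:
        raise ValueError("expected 5 fields: minute hour day-of-month month day-of-week")
    return dict(zip(CRON_FIELDS, parts))

parse_cron("0 9 * * 1")  # Monday at 9 AM: minute=0, hour=9, day_of_week=1
```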
Bulk Create Crons (/v1/crons/bulk)

Example Request:
from dialetica import Dialetica, CronRequest

client = Dialetica()

# Schedule automated scientific discussions
crons = client.crons.bulk_create([
    CronRequest(
        name="Daily Physics Discussion",
        prompt="Einstein, share insights on quantum mechanics and relativity",
        context_id="ctx_abc123",
        cron_expression="0 9 * * *"  # Daily at 9 AM
    ),
    CronRequest(
        name="Weekly Evolution Research",
        prompt="Darwin and Researcher, collaborate on evolutionary biology findings",
        context_id="ctx_def456",
        cron_expression="0 10 * * 1"  # Every Monday at 10 AM
    )
])

print(f"✅ Scheduled {len(crons)} automated research sessions")
for cron in crons:
    print(f" - {cron.name}: {cron.cron_expression}")

List Crons (/v1/crons)

Example Request:
from dialetica import Dialetica

client = Dialetica()

crons = client.crons.list()
for cron in crons:
    print(f"{cron.name}")
    print(f" Status: {cron.cron_status}")
    print(f" Next run: {cron.cron_next_run}")
    if cron.cron_last_run:
        print(f" Last run: {cron.cron_last_run}")

Get Cron (/v1/crons/{cron_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

cron = client.crons.get("cron_abc123")
if cron:
    print(f"Cron: {cron.name}")
    print(f"Prompt: {cron.prompt}")
    print(f"Schedule: {cron.cron_expression}")
    print(f"Status: {cron.cron_status}")

Update Cron (/v1/crons/{cron_id})

Example Request:
from dialetica import Dialetica, CronRequest

client = Dialetica()

updated_cron = client.crons.update(
    "cron_abc123",
    CronRequest(
        name="Updated Report",
        prompt="Generate detailed summary",
        context_id="ctx_abc123",
        cron_expression="0 10 * * *"
    )
)

print(f"Updated: {updated_cron.name}")
print(f"New schedule: {updated_cron.cron_expression}")

Delete Cron (/v1/crons/{cron_id})

Example Request:
from dialetica import Dialetica

client = Dialetica()

success = client.crons.delete("cron_abc123")
if success:
    print("Cron job deleted successfully")

Routing
Use the routing engine to determine which agent should respond next in a multi-agent conversation.
Route (/v1/contexts/{context_id}/route)

Example Request:
from dialetica import Dialetica, MessageRequest

client = Dialetica()

messages = [
    MessageRequest(
        role="user",
        sender_name="Alice",
        content="I need help with my billing"
    )
]

route_result = client.contexts.route("ctx_abc123", messages)
if route_result:
    print(f"Next speaker: {route_result.next_speaker}")
    # Output: "Next speaker: Billing Agent"

💡 Use Case:
This endpoint is useful for implementing custom orchestration logic or debugging your multi-agent system. It analyzes the conversation and returns the name of the agent (or "user") who should speak next, without actually generating a response.
Models
Discover available AI models and their metadata. Authentication is not required, but providing an API key is recommended for rate-limiting purposes.
List Models (/v1/agents/models)

Example Request:
from dialetica import Dialetica, AgentRequest

client = Dialetica()

# Get all available models
models = client.models.list()

print("Available AI Models:")
print("=" * 70)
for model in models:
    tier_badge = "🆓 FREE" if model['tier'] == 'free' else "💎 PRO"
    print(f"{tier_badge:10} {model['label']:30} {model['model']}")

# Filter by tier
free_models = [m for m in models if m['tier'] == 'free']
pro_models = [m for m in models if m['tier'] == 'pro']

print(f"\n📊 Summary:")
print(f" Free models: {len(free_models)}")
print(f" Pro models: {len(pro_models)}")
print(f" Total: {len(models)}")

# Use a model in agent creation
agent = client.agents.create(AgentRequest(
    name="My Agent",
    description="Uses a free tier model",
    instructions=["You are helpful"],
    model=free_models[0]['model']  # Use first free model
))

Response Fields:
- label: Human-readable model name (e.g., "GPT-4o Mini")
- model: Model identifier (e.g., "auto/auto")
- tier: Pricing tier ("free" or "pro")
Completions
OpenAI-compatible completions endpoint that allows you to use Dialetica AI contexts as if they were standard language models. Perfect for evaluation frameworks, testing tools, and any OpenAI-compatible application.
The Completions endpoint provides full OpenAI API compatibility, enabling you to use any Dialetica AI context as a "model" by passing the context ID. This unlocks the power of multi-agent systems through a simple, standard API interface.
Key Features:
- Full OpenAI API compatibility
- Use contexts as models
- Multi-agent support
- Tool and knowledge integration

Use Cases:
- Evaluation frameworks (LM Eval)
- OpenAI-compatible tools
- Testing multi-agent contexts
- Standard API integrations
💡 How It Works:
When you send a prompt to the completions endpoint with a context ID as the "model", the system:
1. Resolves the context ID to your Dialetica AI context
2. Converts the prompt into a user message
3. Runs the context with all its agents, tools, and knowledge
4. Returns the combined response in OpenAI format
Create Completion (/v1/completions)

Example Request:
# Using OpenAI client library (recommended)
import openai

# Initialize OpenAI client with Dialetica AI endpoint
client = openai.OpenAI(
    base_url="https://api.dialetica.ai/v1",
    api_key="dai_your_api_key"
)

# Use a context as a model
response = client.completions.create(
    model="550e8400-e29b-41d4-a716-446655440000",  # Your context ID
    prompt="What is the capital of France?"
)
print(response.choices[0].text)

# Alternative: Direct HTTP request
import requests

response = requests.post(
    "https://api.dialetica.ai/v1/completions",
    headers={
        "Authorization": "Bearer dai_your_api_key",
        "Content-Type": "application/json"
    },
    json={
        "model": "550e8400-e29b-41d4-a716-446655440000",
        "prompt": "Explain quantum computing"
    }
)
result = response.json()
print(result["choices"][0]["text"])

Request Body Fields:
- model (required): The context ID to use as the "model" (UUID format)
- prompt (required): The text prompt to complete
📝 Note:
Other OpenAI parameters (like max_tokens, temperature, top_p) are currently ignored, as these are controlled by the context and agent configurations.
Example Response:

{
    "id": "cmpl-1234567890",
    "object": "text_completion",
    "created": 1704067200,
    "model": "550e8400-e29b-41d4-a716-446655440000",
    "choices": [
        {
            "text": "The capital of France is Paris.",
            "index": 0,
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 0,
        "completion_tokens": 0,
        "total_tokens": 0
    }
}

Response Fields:
- id: Unique identifier (format: cmpl-{timestamp})
- object: Always "text_completion"
- created: Unix timestamp
- model: The context ID that was used
- choices: Array with completion result
  - text: Combined text from all assistant messages
  - index: Always 0
  - finish_reason: Always "stop"
- usage: Token usage (currently returns zeros)
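Because the response follows the OpenAI completion shape above, extracting the text is the same as for any OpenAI-compatible endpoint. A minimal sketch (the `completion_text` helper is illustrative):

```python
def completion_text(response: dict) -> str:
    """Pull the completed text out of an OpenAI-format response body."""
    return response["choices"][0]["text"]

body = {
    "id": "cmpl-1234567890",
    "object": "text_completion",
    "choices": [
        {"text": "The capital of France is Paris.", "index": 0, "finish_reason": "stop"}
    ],
}
print(completion_text(body))
```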
You can use the completions endpoint with evaluation frameworks like LM Evaluation Harness to benchmark your contexts:
lm_eval --model local-completions \
  --model_args model=550e8400-e29b-41d4-a716-446655440000,base_url=https://api.dialetica.ai/v1,tokenizer=gpt2 \
  --tasks hellaswag

Parameters:
- model: Your context ID
- base_url: Dialetica AI API base URL
- tokenizer: Tokenizer to use for evaluation
Access Rules:
- Private Contexts: Only the context creator can use them
- Public Contexts: Anyone with a valid API key can use them
- Error Responses:
  - 404: Context doesn't exist
  - 403: User doesn't have access
💡 Tip:
If you want others to use your context via this endpoint, make sure to set is_public: true when creating the context.
| Feature | OpenAI API | Dialetica AI Completions |
|---|---|---|
| Model Selection | Pre-trained model name | Context ID (custom multi-agent system) |
| Multi-Agent Support | ❌ | ✅ |
| Tool Integration | Limited | ✅ Full tool support |
| Knowledge Base | ❌ | ✅ Semantic search |
| Custom Instructions | Limited | ✅ Full context instructions |
| Response Format | OpenAI standard | OpenAI standard (same) |
Usage Tracking
Monitor your API usage, spending, and token consumption. Track usage over time and filter by date, model, or context.
Get Usage Summary (/v1/usage/summary)

Example Request:
from dialetica import Dialetica

client = Dialetica()

# Get usage summary for last 30 days (default)
summary = client.usage.get_summary(days=30)
if summary:
    print("📊 Usage Summary (Last 30 days):")
    print(" Total Spend: $" + str(summary['total_spend']))
    print(" Previous Period: $" + str(summary['previous_period_spend']))
    print(" Total Tokens: " + str(summary['total_tokens']))
    print(" Total Requests: " + str(summary['total_requests']))

    print("\n📈 Daily Usage:")
    for day in summary['daily_usage'][:7]:  # Last 7 days
        date = day.get('date')
        spend = day.get('spend', 0)
        tokens = day.get('tokens', 0)
        print(" " + str(date) + ": $" + str(spend) + " (" + str(tokens) + " tokens)")

    print("\n🔧 Capabilities:")
    for cap in summary['capabilities']:
        model = cap.get('model')
        spend = cap.get('spend', 0)
        print(" " + str(model) + ": $" + str(spend))

Response Fields:
- total_spend: Total spending in the period
- previous_period_spend: Spending in previous period (for comparison)
- total_tokens: Total tokens used
- total_requests: Total number of requests
- daily_usage: List of daily usage records
- capabilities: Breakdown by model/capability
Get Detailed Usage (/v1/usage)

Example Request:
from dialetica import Dialetica

client = Dialetica()

# Get all usage records
all_usage = client.usage.get_detailed()

# Get usage for specific date range
usage_jan = client.usage.get_detailed(
    start_date="2024-01-01",
    end_date="2024-01-31"
)

# Get usage for specific model
gpt4_usage = client.usage.get_detailed(
    model="openai/gpt-4o"
)

# Get usage for specific context
context_usage = client.usage.get_detailed(
    context_id="ctx_abc123"
)

# Print usage details
for record in usage_jan[:10]:  # First 10 records
    created_at = record.get('created_at')
    model = record.get('model')
    tokens = record.get('total_tokens', 0)
    cost = record.get('total_cost', 0)
    print(str(created_at) + ": " + str(model))
    print(" Tokens: " + str(tokens))
    print(" Cost: $" + str(cost))
    print()

Query Parameters:
- start_date: Start date in ISO format (e.g., "2024-01-01")
- end_date: End date in ISO format (e.g., "2024-01-31")
- model: Filter by model (e.g., "openai/gpt-4o")
- context_id: Filter by context ID
Error Response Format:
{
    "detail": "Error message describing what went wrong",
    "status_code": 400
}

Common HTTP Status Codes:
- 200: Success
- 400: Bad Request (invalid input)
- 401: Unauthorized (invalid API key)
- 402: Insufficient Credits
- 403: Forbidden (access denied)
- 404: Not Found
- 500: Internal Server Error
Best Practices:
- Always check status codes
- Log error details for debugging
- Implement retry logic for 5xx errors
- Handle 402 (insufficient credits) gracefully
- Validate input before sending requests
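The practices above can be encoded in a small retry policy. A minimal sketch, assuming you are calling the HTTP API directly (the `handle_status` helper is illustrative; the status codes are the ones documented above):

```python
RETRYABLE = {500, 502, 503, 504}  # server-side errors worth retrying

def handle_status(status_code: int, attempt: int, max_attempts: int = 3) -> str:
    """Decide what to do with a response: 'ok', 'retry', or 'fail'."""
    if status_code == 200:
        return "ok"
    if status_code == 402:
        return "fail"  # insufficient credits: retrying will not help
    if status_code in RETRYABLE and attempt < max_attempts:
        return "retry"  # back off before retrying, e.g. sleep 2**attempt seconds
    return "fail"
```

Client errors (400, 401, 403, 404) fall through to "fail", since resending the same request cannot fix them.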
Current Limits:
- Free Tier: 1 USD/month
- Pro Tier: Free Tier + Top Up Credits
Monitor your usage in the Usage Dashboard