Memory Plugin
Production-ready semantic memory for Daita agents with automatic local/cloud detection and intelligent curation.
# Quick Start

```python
from daita import Agent
from daita.plugins import MemoryPlugin

# Create the plugin (automatically project-scoped and isolated per agent)
memory = MemoryPlugin()

# Add to an agent - memory is now persistent across runs
agent = Agent(
    name="Research Assistant",
    prompt="You are a research assistant. Use memory to track important findings.",
    tools=[memory]
)

await agent.start()

# The agent can now remember and recall information autonomously
result = await agent.run("Remember that the user prefers Python over JavaScript")

# Later...
result = await agent.run("What programming language does the user prefer?")

await agent.stop()
```

# Direct Usage
The plugin can be used directly without agents for programmatic memory management. However, the main value is agent integration - enabling LLMs to autonomously store and retrieve context across conversations using semantic search.
# Configuration Parameters

```python
MemoryPlugin(
    workspace: Optional[str] = None,
    scope: str = "project",
    auto_curate: str = "on_stop",
    curation_provider: Optional[str] = None,
    curation_model: Optional[str] = None,
    curation_api_key: Optional[str] = None,
    embedding_provider: str = "openai",
    embedding_model: str = "text-embedding-3-small"
)
```

# Parameters

- workspace (str): Workspace name for memory isolation. Default: auto-generated from the agent name for stable persistence across runs
- scope (str): Memory scope - "project" (default, stored in .daita/memory/) or "global" (stored in ~/.daita/memory/)
- auto_curate (str): Curation trigger mode - "on_stop" (default) or "manual"
- curation_provider (str): LLM provider for curation ("openai", "anthropic", etc.). Default: "openai"
- curation_model (str): LLM model for curation. Default: "gpt-4o-mini"
- curation_api_key (str): API key for the curation LLM. Default: uses global settings
- embedding_provider (str): Provider for semantic embeddings. Default: "openai"
- embedding_model (str): Model for embeddings. Default: "text-embedding-3-small"
# Memory Scopes & Workspaces

Scope controls where memory is stored:

```python
# Project-scoped (default) - memory stays with this project
memory = MemoryPlugin(scope="project")
# Location: .daita/memory/workspaces/{workspace}/

# Global - memory accessible across all projects
memory = MemoryPlugin(scope="global")
# Location: ~/.daita/memory/workspaces/{workspace}/
```

Workspace controls memory isolation:

```python
# Isolated (default) - each agent has its own memory
agent = Agent("Researcher", tools=[MemoryPlugin()])
# Workspace: "researcher" (auto-generated from agent name)

# Shared - multiple agents share the same memory
shared_memory = MemoryPlugin(workspace="research_team")
agent1 = Agent("Researcher", tools=[shared_memory])
agent2 = Agent("Analyst", tools=[shared_memory])
# Both agents access workspace: "research_team"
```

Cloud Deployment:
- Automatically detects the cloud environment
- Uses AWS storage for persistence across serverless invocations
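For cloud deployments, the environment variables referenced in the Troubleshooting section below need to be set before the agent starts. The values here are placeholders, not real identifiers:

```shell
# Placeholder values - substitute your own organization and project
export DAITA_ORG_ID="your-org-id"
export DAITA_PROJECT_NAME="your-project-name"
```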
# Using with Agents

# Tool-Based Integration (Recommended)

The Memory plugin exposes semantic memory operations as tools that agents can use autonomously:

```python
from daita import Agent
from daita.plugins import MemoryPlugin

# Create the memory plugin with custom configuration
memory = MemoryPlugin(
    auto_curate="on_stop"  # Curate when the agent stops
)

# Pass the plugin to an agent - the agent can now use memory tools autonomously
agent = Agent(
    name="Personal Assistant",
    prompt="""You are a personal assistant. Use your memory to:
- Remember user preferences and important facts
- Recall relevant context from past conversations
- Build a knowledge base over time""",
    llm_provider="openai",
    model="gpt-4",
    tools=[memory]
)

await agent.start()

# The agent autonomously uses memory tools
result = await agent.run("Remember: I'm allergic to peanuts and prefer dark mode")

# Later conversation...
result = await agent.run("What are my dietary restrictions?")
# The agent uses recall() to find relevant memories

await agent.stop()
```

# Available Tools
The Memory plugin exposes these tools to LLM agents:
| Tool | Description | Parameters |
|---|---|---|
| remember | Store information in long-term memory | content (required), importance (float: 0.5), category (optional) |
| recall | Search memories semantically | query (required), limit (int: 5), score_threshold (float: 0.6), importance filters |
| list_by_category | Enumerate all memories in a category | category (required), min_importance (float: 0.0), limit (int: 100) |
| update_memory | Replace an existing memory | query (required), new_content (required), importance (float: 0.5) |
| read_memory | Read complete memory file | file (default: "MEMORY.md", or "today") |
| list_memories | List all memory files | None |
Tool Categories: memory
Tool Source: plugin
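The recall tool's score_threshold filters results by semantic similarity. The doc does not specify Daita's internal scoring, but a minimal, self-contained illustration of threshold-based filtering over embedding vectors (using cosine similarity, a common choice; the toy 2-D vectors are stand-ins for real embeddings) might look like this:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def recall_sketch(query_vec, memories, score_threshold=0.6, limit=5):
    """Return up to `limit` memories scoring at or above `score_threshold`."""
    scored = [
        (cosine_similarity(query_vec, vec), content)
        for content, vec in memories
    ]
    scored.sort(reverse=True)  # highest similarity first
    return [(s, c) for s, c in scored if s >= score_threshold][:limit]

# Toy 2-D "embeddings" for illustration only
memories = [
    ("user prefers Python", [0.9, 0.1]),
    ("meeting at 3pm", [0.1, 0.9]),
]
results = recall_sketch([1.0, 0.0], memories)
# Only the first memory clears the 0.6 threshold
```

Raising score_threshold trades recall for precision: fewer but more relevant memories reach the agent's context.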
# Tool Usage Example

```python
from daita import Agent
from daita.plugins import MemoryPlugin

# Set up memory with custom curation
memory = MemoryPlugin(
    workspace="project_alpha",  # Shared workspace
    auto_curate="on_stop"       # Curate when the agent stops
)

agent = Agent(
    name="Project Manager",
    prompt="You are a project manager. Track decisions, tasks, and key information.",
    llm_provider="openai",
    model="gpt-4",
    tools=[memory]
)

await agent.start()

# The agent uses memory tools autonomously
result = await agent.run("""
Store project information:
- Client prefers weekly status updates on Mondays
- Budget approved: $50,000
- Deadline: March 15, 2024
- Tech stack: Python, FastAPI, PostgreSQL
""")

# Later, retrieve context
result = await agent.run("What's our project deadline and budget?")
# The agent uses recall() to find relevant information

# Check a specific memory file
result = await agent.run("Show me the long-term memory file")
# The agent uses read_memory() to display the full content

await agent.stop()
```

# Direct Memory Operations (Scripts)
For scripts that need memory operations, use a lightweight agent:

```python
import asyncio

from daita import Agent
from daita.plugins import MemoryPlugin

async def main():
    memory = MemoryPlugin(workspace="analytics", auto_curate="manual")
    agent = Agent(
        name="Memory Manager",
        model="gpt-4o-mini",
        prompt="You are a memory manager. Store and retrieve information as instructed.",
        tools=[memory]
    )
    await agent.start()

    # Store information
    await agent.run(
        "Remember with importance 0.8 and category 'financial': "
        "Q4 revenue exceeded projections by 15%"
    )

    # Search memories
    result = await agent.run("What do you know about revenue projections?")
    print(result)

    await agent.stop()

asyncio.run(main())
```

# Advanced Memory Management
# Programmatic Curation

```python
from daita import Agent
from daita.plugins import MemoryPlugin

memory = MemoryPlugin(auto_curate="manual")
agent = Agent("Analyst", tools=[memory])

await agent.start()

# Run agent interactions...
result = await agent.run("Analyze today's data...")

# Manually trigger curation
curation_result = await memory.curate()
print(f"Added {curation_result.facts_added} facts")
print(f"Cost: ${curation_result.cost_usd:.4f}")

await agent.stop()
```

# Importance Scoring
```python
# Mark specific memories as important
result = await memory.mark_important(
    query="project deadline",
    importance=0.9,
    source="user_explicit"
)

# Pin critical memories (never pruned)
result = await memory.pin(query="weekly status update schedule")
print(f"Pinned {result['updated']} memories")

# Remove outdated memories
result = await memory.forget(query="superseded project requirements")
print(f"Deleted {result['deleted']} memories")
```

# Runtime Configuration

```python
# Update configuration dynamically
memory.configure(auto_curate="manual")   # Switch to manual curation
memory.configure(auto_curate="on_stop")  # Switch back to automatic
```

# Curation System
The Memory Plugin includes intelligent curation that extracts important facts from daily logs and stores them in long-term memory.
Curation Process:
- Analyzes daily conversation logs
- Extracts key facts, preferences, and decisions using LLM
- Assigns importance scores (0.0-1.0) to each fact
- Merges similar facts to prevent redundancy
- Stores in long-term memory with semantic embeddings
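Daita's curation internals are not specified here, but the merge-and-score steps above can be sketched in simplified form. Everything below is an illustrative assumption, not the plugin's actual logic: `_similar` is a crude word-overlap stand-in for the semantic comparison an LLM or embedding model would perform, and the 0.5 threshold is arbitrary:

```python
def _similar(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude word-overlap heuristic standing in for semantic similarity."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    overlap = len(wa & wb) / max(len(wa | wb), 1)
    return overlap >= threshold

def merge_facts(existing, extracted):
    """Merge newly extracted (content, importance) facts into existing ones,
    keeping a single copy with the higher importance for near-duplicates."""
    merged = list(existing)
    for content, importance in extracted:
        for i, (old_content, old_importance) in enumerate(merged):
            if _similar(content, old_content):
                # Redundant fact: keep the existing copy, raise its score
                merged[i] = (old_content, max(importance, old_importance))
                break
        else:
            # Genuinely new fact: add it
            merged.append((content, importance))
    return merged

facts = merge_facts(
    [("user prefers python", 0.6)],
    [("the user prefers python", 0.8), ("deadline is March 15", 0.7)],
)
```

The real system additionally attaches semantic embeddings to each stored fact so they can be retrieved later via recall().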
Curation Modes:

```python
# Automatic on agent stop (default)
MemoryPlugin(auto_curate="on_stop")

# Manual trigger only
MemoryPlugin(auto_curate="manual")
```

Curation Result:

```python
curation_result = await memory.curate()

# Access results
print(f"Success: {curation_result.success}")
print(f"Facts extracted: {curation_result.facts_extracted}")
print(f"Facts added: {curation_result.facts_added}")
print(f"Memories updated: {curation_result.memories_updated}")
print(f"Memories pruned: {curation_result.memories_pruned}")
print(f"Tokens used: {curation_result.tokens_used}")
print(f"Cost: ${curation_result.cost_usd:.4f}")
```

# Best Practices
Memory Organization:
- Use project scope for project-specific context (default)
- Use global scope for cross-project knowledge (user preferences, general facts)
- Create shared workspaces for team collaboration across agents
- Keep isolated workspaces (default) for independent agent tasks
Performance:
- Let auto-curation run on agent stop (default) - balances freshness and cost
- Use auto_curate="manual" for long-running agents where you control timing
- Set score_threshold in recall() to filter low-relevance results (default: 0.6)
- Use importance filters to focus on high-value memories

Cost Management:
- Use gpt-4o-mini for curation (default) - balances quality and cost
- Manual curation mode gives full control over when LLM calls occur
- Monitor curation costs via CurationResult.cost_usd
Security:
- Never store credentials or API keys in memory
- Use memory for context, decisions, and preferences only
- Pin critical business rules to prevent accidental pruning
# Common Patterns

Long-Running Agents with Shared Memory:

```python
# Multiple agents share the same memory workspace
shared_memory = MemoryPlugin(workspace="support_team")
agent1 = Agent("Support Agent A", tools=[shared_memory])
agent2 = Agent("Support Agent B", tools=[shared_memory])

# Agent A stores customer context
await agent1.start()
await agent1.run("Customer prefers email communication over phone")
await agent1.stop()

# Agent B can recall that context later
await agent2.start()
result = await agent2.run("How does this customer prefer to be contacted?")
# Agent B finds the information stored by Agent A
await agent2.stop()
```

Research Assistant with Global Knowledge:
```python
# Global scope for persistent knowledge across all projects
memory = MemoryPlugin(
    scope="global",
    workspace="research_knowledge",
    auto_curate="on_stop"
)

agent = Agent(
    name="Research Assistant",
    prompt="You are a research assistant. Build a knowledge base over time.",
    tools=[memory]
)

await agent.start()

# Store research findings
await agent.run("Remember: The Pythagorean theorem applies to right triangles")
await agent.run("Remember: Python uses 0-based indexing for lists")

# Knowledge persists across projects and sessions
await agent.stop()
```

Workflow Integration:
```python
from daita import Agent
from daita.core import Workflow
from daita.plugins import MemoryPlugin

# Shared memory across workflow agents
memory = MemoryPlugin(workspace="data_pipeline")

# Each agent in the workflow uses shared memory
data_agent = Agent("Data Collector", tools=[memory])
analyst_agent = Agent("Data Analyst", tools=[memory])
reporter_agent = Agent("Report Generator", tools=[memory])

workflow = Workflow("Analytics Pipeline")
workflow.add_agent(data_agent)
workflow.add_agent(analyst_agent)
workflow.add_agent(reporter_agent)

# Agents share context through memory as the workflow executes
await workflow.run()
```

# Error Handling
```python
from daita import Agent
from daita.plugins import MemoryPlugin

agent = None
try:
    memory = MemoryPlugin(
        workspace="my_workspace",
        curation_provider="openai"
    )
    agent = Agent("Assistant", tools=[memory])
    await agent.start()
    result = await agent.run("Remember important information")
except RuntimeError as e:
    if "Missing required environment variables" in str(e):
        print("Set DAITA_ORG_ID and DAITA_PROJECT_NAME for cloud memory")
    elif "not installed" in str(e):
        print("Install embedding provider: pip install openai")
    else:
        print(f"Memory error: {e}")
finally:
    # Guard against errors raised before the agent was created
    if agent is not None:
        await agent.stop()
```

# Troubleshooting
| Issue | Solution |
|---|---|
| openai not installed | pip install openai (or anthropic, for embeddings) |
| Cloud memory initialization fails | Set DAITA_ORG_ID and DAITA_PROJECT_NAME env vars |
| Empty recall results | Lower score_threshold or check if memories exist |
| High curation costs | Use auto_curate="manual" to control when curation runs |
| Memories not persisting | Check workspace and scope configuration |
| Shared memory not working | Ensure same workspace parameter across agents |
| Curation not running | Check auto_curate setting, verify LLM provider configured |
# Next Steps
- Agent Basics - Learn how to create agents with memory
- Workflows - Use shared memory in multi-agent workflows
- PostgreSQL Plugin - Combine memory with database access
- Plugin Overview - Explore other plugins