# Overview
Learn how to build sophisticated agents by combining multiple plugins and custom tools. This example demonstrates advanced architectural patterns for creating agents with diverse capabilities - from database operations to cloud storage to custom business logic - all working together seamlessly.
# What You'll Learn
- Combining multiple plugins in a single agent
- Mixing PostgreSQL, S3, and custom tools
- Tool organization by category and source
- Building comprehensive multi-capability agents
- Managing tool namespaces and conflicts
- Architecting complex agent systems
# Prerequisites
- Understanding of PostgreSQL Tools basics
- Familiarity with custom tool creation
- Experience with multiple data sources
- Advanced Python knowledge
# Why Multi-Plugin Architecture?
Real-world agents need diverse capabilities:
- Database operations: Query, analyze, and update data
- Cloud storage: Read/write files, manage assets
- Custom logic: Business rules, calculations, integrations
- API integrations: External services and webhooks
- Processing tools: Data transformation, validation
Combining plugins creates powerful, versatile agents that can handle complex workflows autonomously.
# Step 1: Understanding the Architecture
Multi-plugin architecture combines tools from different sources:
```
Agent
├── PostgreSQL Plugin
│   ├── list_tables
│   ├── get_table_schema
│   ├── query_database
│   └── execute_sql
├── S3 Plugin
│   ├── list_s3_objects
│   ├── get_s3_object
│   ├── put_s3_object
│   └── delete_s3_object
└── Custom Tools
    ├── calculate_metrics
    ├── send_notification
    └── validate_data
```

Key concepts:
- Tool sources: Plugins vs custom tools
- Categories: Logical grouping (database, storage, analytics)
- Namespacing: Avoiding name conflicts
- Tool registry: Central management of all tools
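To make the registry and namespacing concepts concrete, here is a minimal, framework-free sketch. The `ToolRegistry` and `ToolEntry` names are hypothetical illustrations, not daita's actual internals:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ToolEntry:
    """Metadata the registry keeps for each tool (hypothetical sketch)."""
    func: Callable
    category: str
    source: str                    # "plugin" or "custom"
    plugin_name: Optional[str] = None

class ToolRegistry:
    """Central store for tools from every source."""
    def __init__(self):
        self._tools: dict[str, ToolEntry] = {}

    def register(self, name: str, entry: ToolEntry):
        # Enforce unique names across all sources
        if name in self._tools:
            raise ValueError(f"Tool name conflict: {name!r}")
        self._tools[name] = entry

    def get(self, name: str) -> ToolEntry:
        return self._tools[name]

    @property
    def names(self) -> list[str]:
        return list(self._tools)

registry = ToolRegistry()
registry.register("list_tables", ToolEntry(
    func=lambda args: {}, category="database",
    source="plugin", plugin_name="postgresql"))
registry.register("calculate_metrics", ToolEntry(
    func=lambda args: {}, category="analytics", source="custom"))
print(registry.names)  # ['list_tables', 'calculate_metrics']
```

The key design choice is a single flat namespace with per-tool metadata, so lookup stays a plain dict access while source and category remain queryable.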
# Step 2: Create Multiple Plugins
Set up PostgreSQL and S3 plugins:
```python
from daita.agents.agent import Agent
from daita.plugins.postgresql import postgresql
from daita.plugins.s3 import s3
import asyncio

# Create PostgreSQL plugin
db = postgresql(
    host="localhost",
    database="app_db",
    user="app_user",
    password="app_pass"
)

# Create S3 plugin
storage = s3(
    bucket="my-app-bucket",
    region="us-east-1"
)

# Create agent with both plugins
agent = Agent(
    name="multi_tool_agent",
    tools=[db, storage]  # Multiple plugins
)

print(f"Agent has {len(agent.tool_names)} tools")
```

What happens:
- Each plugin registers its tools independently
- Tools from different plugins coexist in agent's tool registry
- Agent can call any tool from any plugin
- No conflicts if tools have unique names
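If two sources did try to register the same name, one common policy is to disambiguate by prefixing with the plugin name. A standalone sketch of that idea (an assumption for illustration, not daita's documented behavior):

```python
def resolve_name(name: str, plugin_name: str, taken: set[str]) -> str:
    """Return a unique tool name, prefixing with the plugin name on conflict.

    Hypothetical policy sketch; real frameworks may instead raise an error.
    """
    if name not in taken:
        return name
    candidate = f"{plugin_name}_{name}"
    # Fall back to numeric suffixes if even the prefixed name is taken
    i = 2
    unique = candidate
    while unique in taken:
        unique = f"{candidate}_{i}"
        i += 1
    return unique

taken = {"get"}                           # a generic name already registered
print(resolve_name("get", "s3", taken))   # s3_get
print(resolve_name("put", "s3", taken))   # put
```

Whether a framework renames, raises, or silently overwrites on conflict matters; the safest default is to fail loudly and let the author rename, which is why clear, source-specific tool names are recommended below.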
# Step 3: Add Custom Tools
Mix plugins with custom tools:
```python
from daita import tool

# Define the custom tool function
async def calculate_metrics(args):
    """Calculate business metrics from data"""
    metric_type = args.get("metric_type")
    time_period = args.get("time_period", "7d")

    # Simulated calculation
    value = 12345.67
    trend = "up"
    change_percent = 15.2

    return {
        "metric": metric_type,
        "period": time_period,
        "value": value,
        "trend": trend,
        "change_percent": change_percent
    }

# Wrap the function with tool() to register it with metadata
metrics_tool = tool(
    calculate_metrics,
    name="calculate_business_metrics",
    description="Calculate business metrics for a given time period",
    parameters={
        "metric_type": {
            "type": "string",
            "description": "Type of metric: revenue, users, engagement, retention",
            "required": True
        },
        "time_period": {
            "type": "string",
            "description": "Time period: 7d, 30d, 90d",
            "required": False
        }
    },
    category="analytics"
)

# Create agent with all tool sources
agent = Agent(
    name="multi_tool_agent",
    tools=[db, storage, metrics_tool]  # Mix plugins and custom tools
)
```

# Step 4: Inspect Multi-Source Tools
See how tools are organized by source and category:
```python
async def main():
    db = postgresql(host="localhost", database="app_db")
    storage = s3(bucket="my-app-bucket")

    async def calculate_metrics(args):
        metric_type = args.get("metric_type")
        time_period = args.get("time_period", "7d")
        return {
            "metric": metric_type,
            "value": 12345.67,
            "trend": "up"
        }

    metrics_tool = tool(
        calculate_metrics,
        name="calculate_business_metrics",
        description="Calculate business metrics",
        parameters={
            "metric_type": {"type": "string", "required": True},
            "time_period": {"type": "string", "required": False}
        },
        category="analytics"
    )

    agent = Agent(
        name="multi_tool_agent",
        tools=[db, storage, metrics_tool]
    )

    print(f"\nAgent: {agent.name}")
    print(f"Total tools: {len(agent.tool_names)}\n")

    # Show each tool's category, source, and plugin metadata
    for tool_name in agent.tool_names:
        tool_obj = agent.tool_registry.get(tool_name)
        print(f"  - {tool_name}")
        print(f"    Category: {tool_obj.category}")
        print(f"    Source: {tool_obj.source}")
        print(f"    Plugin: {tool_obj.plugin_name or 'N/A'}")

    await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

Output:
```
Agent: multi_tool_agent
Total tools: 9

  - list_tables
    Category: database
    Source: plugin
    Plugin: postgresql
  - get_table_schema
    Category: database
    Source: plugin
    Plugin: postgresql
  - query_database
    Category: database
    Source: plugin
    Plugin: postgresql
  - execute_sql
    Category: database
    Source: plugin
    Plugin: postgresql
  - list_s3_objects
    Category: storage
    Source: plugin
    Plugin: s3
  - get_s3_object
    Category: storage
    Source: plugin
    Plugin: s3
  - put_s3_object
    Category: storage
    Source: plugin
    Plugin: s3
  - delete_s3_object
    Category: storage
    Source: plugin
    Plugin: s3
  - calculate_business_metrics
    Category: analytics
    Source: custom
    Plugin: N/A
```

# Step 5: Use Tools from Different Sources
Call tools from any plugin or custom source:
```python
async def main():
    db = postgresql(host="localhost", database="app_db")
    storage = s3(bucket="my-app-bucket")

    async def calculate_metrics(args):
        return {
            "metric": args.get("metric_type"),
            "value": 12345.67,
            "trend": "up",
            "change_percent": 15.2
        }

    metrics_tool = tool(
        calculate_metrics,
        name="calculate_business_metrics",
        description="Calculate business metrics",
        parameters={
            "metric_type": {"type": "string", "required": True},
            "time_period": {"type": "string", "required": False}
        },
        category="analytics"
    )

    agent = Agent(
        name="multi_tool_agent",
        tools=[db, storage, metrics_tool]
    )

    # 1. Use database tool
    print("\n1. Database Tool - List tables:")
    db_result = await agent.call_tool("list_tables", {})
    print(f"   Tables: {db_result.get('tables', [])}")

    # 2. Use storage tool
    print("\n2. Storage Tool - List S3 objects:")
    s3_result = await agent.call_tool("list_s3_objects", {
        "prefix": "data/"
    })
    print(f"   Objects: {s3_result.get('objects', [])[:3]}")  # First 3

    # 3. Use custom tool
    print("\n3. Custom Tool - Calculate metrics:")
    metrics_result = await agent.call_tool("calculate_business_metrics", {
        "metric_type": "revenue",
        "time_period": "30d"
    })
    print(f"   Metric: {metrics_result['metric']}")
    print(f"   Value: ${metrics_result['value']:.2f}")
    print(f"   Trend: {metrics_result['trend']} ({metrics_result['change_percent']:+.1f}%)")

    await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

# Step 6: Build a Complex Workflow Handler
Create handlers that orchestrate tools from multiple sources:
```python
async def generate_analytics_report(data, context, agent):
    """
    Complex handler that uses database, storage, and analytics tools
    to generate a comprehensive report
    """
    report_date = data.get("date", "2026-01-08")

    # Step 1: Query database for raw data
    print("📊 Fetching data from database...")
    sales_result = await agent.call_tool("query_database", {
        "sql": """
            SELECT category, SUM(revenue) as total_revenue, COUNT(*) as order_count
            FROM sales
            WHERE sale_date = $1
            GROUP BY category
            ORDER BY total_revenue DESC
        """,
        "params": [report_date]
    })

    if not sales_result["success"]:
        return {"error": "Failed to fetch sales data"}

    sales_data = sales_result["rows"]

    # Step 2: Calculate metrics using custom tool
    print("📈 Calculating business metrics...")
    metrics = {}
    for category_data in sales_data:
        metric_result = await agent.call_tool("calculate_business_metrics", {
            "metric_type": "revenue",
            "time_period": "1d"
        })
        metrics[category_data["category"]] = metric_result

    # Step 3: Generate report document
    report_content = f"""
# Daily Analytics Report - {report_date}

## Sales Summary
"""
    for category_data in sales_data:
        category = category_data["category"]
        revenue = category_data["total_revenue"]
        orders = category_data["order_count"]
        trend = metrics.get(category, {}).get("trend", "neutral")
        report_content += f"""
### {category}
- Total Revenue: ${revenue:,.2f}
- Orders: {orders}
- Trend: {trend} {metrics.get(category, {}).get('change_percent', 0):.1f}%
"""

    # Step 4: Save report to S3
    print("💾 Saving report to S3...")
    report_key = f"reports/daily/{report_date}/analytics.md"
    save_result = await agent.call_tool("put_s3_object", {
        "key": report_key,
        "body": report_content,
        "content_type": "text/markdown"
    })

    if not save_result["success"]:
        return {"error": "Failed to save report"}

    # Step 5: Return summary
    return {
        "success": True,
        "report_date": report_date,
        "categories_analyzed": len(sales_data),
        "total_revenue": sum(row["total_revenue"] for row in sales_data),
        "report_location": f"s3://my-app-bucket/{report_key}",
        "metrics_calculated": len(metrics)
    }
```

# Step 7: Tool Categories and Organization
Organize tools by category for better management:
```python
async def main():
    db = postgresql(host="localhost", database="app_db")
    storage = s3(bucket="my-app-bucket")

    async def calculate_metrics(args):
        return {"metric": args.get("metric_type"), "value": 12345.67}

    async def validate_data(args):
        return {"valid": True, "errors": []}

    async def send_notification(args):
        return {"sent": True, "message_id": "msg_123"}

    metrics_tool = tool(calculate_metrics, name="calculate_metrics", category="analytics")
    validation_tool = tool(validate_data, name="validate_data", category="processing")
    notification_tool = tool(send_notification, name="send_notification", category="communication")

    agent = Agent(
        name="organized_agent",
        tools=[db, storage, metrics_tool, validation_tool, notification_tool]
    )

    # Group tools by category
    categories = {}
    for tool_name in agent.tool_names:
        tool_obj = agent.tool_registry.get(tool_name)
        category = tool_obj.category
        if category not in categories:
            categories[category] = []
        categories[category].append(tool_name)

    print("\nTools organized by category:\n")
    for category, tool_list in sorted(categories.items()):
        print(f"{category.upper()}:")
        for tool_name in tool_list:
            print(f"  - {tool_name}")
        print()

    await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

Output:

```
Tools organized by category:

ANALYTICS:
  - calculate_metrics

COMMUNICATION:
  - send_notification

DATABASE:
  - list_tables
  - get_table_schema
  - query_database
  - execute_sql

PROCESSING:
  - validate_data

STORAGE:
  - list_s3_objects
  - get_s3_object
  - put_s3_object
  - delete_s3_object
```

# Complete Example
Full multi-plugin agent with complex workflow:
```python
from daita.agents.agent import Agent
from daita.plugins.postgresql import postgresql
from daita.plugins.s3 import s3
from daita import tool
import asyncio

async def calculate_metrics(args):
    """Calculate business metrics"""
    metric_type = args.get("metric_type")
    time_period = args.get("time_period", "7d")
    return {
        "metric": metric_type,
        "period": time_period,
        "value": 12345.67,
        "trend": "up",
        "change_percent": 15.2
    }

async def generate_report(data, context, agent):
    """Handler that uses multiple tool sources"""
    print("\n🔄 Generating comprehensive report...\n")

    # Database query
    print("1️⃣ Querying database...")
    db_result = await agent.call_tool("list_tables", {})
    table_count = db_result.get("count", 0) if db_result["success"] else 0
    print(f"   Found {table_count} tables")

    # S3 operations
    print("2️⃣ Checking S3 storage...")
    s3_result = await agent.call_tool("list_s3_objects", {"prefix": "data/"})
    object_count = len(s3_result.get("objects", [])) if s3_result["success"] else 0
    print(f"   Found {object_count} objects")

    # Custom metrics
    print("3️⃣ Calculating metrics...")
    metrics_result = await agent.call_tool("calculate_business_metrics", {
        "metric_type": "revenue",
        "time_period": "30d"
    })
    print(f"   Revenue: ${metrics_result['value']:.2f} ({metrics_result['trend']})")

    # Compile report
    report = {
        "database": {"tables": table_count},
        "storage": {"objects": object_count},
        "metrics": metrics_result,
        "summary": f"Analyzed {table_count} tables and {object_count} storage objects"
    }

    print("\n✅ Report generated successfully!\n")
    return report

async def main():
    print("="*70)
    print("MULTI-PLUGIN AGENT ARCHITECTURE")
    print("="*70)

    # Create plugins
    db = postgresql(
        host="localhost",
        database="app_db",
        user="app_user",
        password="app_pass"
    )
    storage = s3(bucket="my-app-bucket")

    # Create custom tool
    metrics_tool = tool(
        calculate_metrics,
        name="calculate_business_metrics",
        description="Calculate business metrics for a given time period",
        parameters={
            "metric_type": {
                "type": "string",
                "description": "Type of metric: revenue, users, engagement",
                "required": True
            },
            "time_period": {
                "type": "string",
                "description": "Time period: 7d, 30d, 90d",
                "required": False
            }
        },
        category="analytics"
    )

    # Create agent with all tool sources
    agent = Agent(
        name="multi_tool_agent",
        tools=[db, storage, metrics_tool],
        handlers={
            "generate_report": generate_report
        }
    )

    print(f"\nAgent created with {len(agent.tool_names)} tools from multiple sources:")
    for tool_name in agent.tool_names:
        tool_obj = agent.tool_registry.get(tool_name)
        print(f"  - {tool_name} (category: {tool_obj.category}, source: {tool_obj.source})")

    # Use different tools
    print("\n" + "="*70)
    print("TESTING INDIVIDUAL TOOLS")
    print("="*70)

    print("\n1. Database Tool - List tables:")
    db_result = await agent.call_tool("list_tables", {})
    print(f"   Result: {db_result}")

    print("\n2. Storage Tool - List S3 objects:")
    s3_result = await agent.call_tool("list_s3_objects", {"prefix": "data/"})
    print(f"   Result: {s3_result}")

    print("\n3. Custom Tool - Calculate metrics:")
    metrics_result = await agent.call_tool("calculate_business_metrics", {
        "metric_type": "revenue",
        "time_period": "30d"
    })
    print(f"   Metric: {metrics_result['metric']}")
    print(f"   Value: ${metrics_result['value']:.2f}")
    print(f"   Trend: {metrics_result['trend']} ({metrics_result['change_percent']:+.1f}%)")

    # Use handler that orchestrates multiple tools
    print("\n" + "="*70)
    print("TESTING COMPLEX WORKFLOW")
    print("="*70)

    result = await agent.process("generate_report", data={})
    report = result["result"]

    print("Report Summary:")
    print(f"   Database Tables: {report['database']['tables']}")
    print(f"   Storage Objects: {report['storage']['objects']}")
    print(f"   Revenue Metric: ${report['metrics']['value']:.2f}")
    print(f"   Summary: {report['summary']}")

    print("\n" + "="*70)
    print("DONE")
    print("="*70 + "\n")

    await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

# Architecture Best Practices
# 1. Tool Naming Conventions
Avoid conflicts with clear naming:
```
# Good - clear, unique names
list_database_tables
get_s3_object
calculate_revenue_metrics

# Avoid - generic names that might conflict
list
get
calculate
```

# 2. Category Organization
Use consistent categories:
```python
categories = {
    "database": ["list_tables", "query_database"],
    "storage": ["get_s3_object", "put_s3_object"],
    "analytics": ["calculate_metrics"],
    "communication": ["send_email", "post_webhook"],
    "processing": ["validate_data", "transform_data"]
}
```

# 3. Tool Source Tracking
Always track where tools come from:
```python
for tool_name in agent.tool_names:
    # Use tool_obj so we don't shadow daita's tool() helper
    tool_obj = agent.tool_registry.get(tool_name)
    print(f"{tool_name}: {tool_obj.source} ({tool_obj.plugin_name or 'custom'})")
```

# 4. Error Handling
Handle failures gracefully across sources:
```python
async def robust_workflow(data, context, agent):
    results = {}

    # Try database operation
    try:
        db_result = await agent.call_tool("query_database", {...})
        results["database"] = db_result
    except Exception as e:
        results["database"] = {"error": str(e)}

    # Try storage operation
    try:
        s3_result = await agent.call_tool("get_s3_object", {...})
        results["storage"] = s3_result
    except Exception as e:
        results["storage"] = {"error": str(e)}

    return results
```

# Framework Internals
How the framework supports multi-plugin agents:
- Tool Registry: Single registry holds tools from all sources
- Plugin Lifecycle: Each plugin manages its own connections
- Tool Namespacing: Framework ensures unique tool names
- Category System: Logical grouping independent of source
- Source Tracking: Metadata tracks origin of each tool
Tool resolution order:
1. Agent receives tool call request
2. Looks up tool in registry by name
3. Retrieves tool from source (plugin or custom)
4. Executes tool with provided arguments
5. Returns result with source metadata
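The resolution steps above can be sketched as a single dispatch function. This is a hypothetical, framework-free illustration; the `call_tool` function and `TOOLS` table are stand-ins for the agent's real registry, not daita's actual code:

```python
import asyncio

# name -> (handler, metadata): a stand-in for the agent's tool registry
TOOLS = {
    "list_tables": (lambda args: {"tables": ["users", "orders"]},
                    {"source": "plugin", "plugin": "postgresql"}),
}

async def call_tool(name: str, args: dict) -> dict:
    # Steps 1-2: look up the tool in the registry by name
    if name not in TOOLS:
        return {"success": False, "error": f"Unknown tool: {name}"}
    handler, meta = TOOLS[name]
    # Steps 3-4: retrieve the handler and execute it with the arguments
    result = handler(args)
    # Step 5: return the result together with source metadata
    return {"success": True, "result": result, "meta": meta}

out = asyncio.run(call_tool("list_tables", {}))
print(out["result"]["tables"])  # ['users', 'orders']
```

Returning metadata alongside the result is what makes the source-tracking and debugging practices above possible without extra lookups.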
# Key Takeaways
- Mix plugins and custom tools freely - they coexist seamlessly
- Categories organize tools logically independent of source
- Each plugin manages its own lifecycle and connections
- Tool registry provides unified access to all capabilities
- Complex workflows combine tools from multiple sources
- Source tracking aids debugging and monitoring
# Next Steps
- PostgreSQL Custom Handlers to build complex logic
- Tool Introspection to explore capabilities
- Database Query Agent for LLM-powered operations