Tools
The Daita tools system provides a universal abstraction for LLM-callable functions. Tools can come from plugins, MCP servers, or custom Python functions, and are automatically integrated into agents for autonomous use.
#Overview
Tools are functions that agents can discover and execute to interact with external systems, process data, or perform operations. The unified tool system allows agents to:
- Discover tools from multiple sources (plugins, MCP, custom functions)
- Understand tool capabilities through structured schemas
- Execute tools with type-safe parameter validation
- Handle results consistently across all tool types
Key Features:
- Universal tool abstraction works with any source
- LLM-compatible schemas (OpenAI, Anthropic, etc.)
- Automatic type conversion and validation
- Async execution with timeout support
- Tool discovery and registration
- Provider-agnostic function calling format
#Core Concepts
#AgentTool Class
The AgentTool dataclass represents any callable function:
from daita.core.tools import AgentTool
tool = AgentTool(
    name="search_database",
    description="Search for records in the database",
    parameters={
        "query": {
            "type": "string",
            "description": "SQL query to execute",
            "required": True
        },
        "limit": {
            "type": "integer",
            "description": "Maximum results to return",
            "required": False
        }
    },
    handler=async_search_function,
    category="database",
    source="plugin",
    timeout_seconds=30
)
#Tool Fields
| Field | Type | Required | Description |
|---|---|---|---|
| name | str | Yes | Unique tool name |
| description | str | Yes | Human-readable tool description |
| parameters | Dict[str, Any] | Yes | JSON Schema parameter definition |
| handler | Callable | Yes | Async function that executes the tool |
| category | str | No | Tool category (database, storage, api, etc.) |
| source | str | No | Source of tool (plugin, mcp, custom); default: "custom" |
| plugin_name | str | No | Name of the plugin that provides this tool |
| timeout_seconds | int | No | Execution timeout in seconds |
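To make the handler and timeout_seconds fields concrete, here is a hypothetical sketch (not Daita's actual implementation) of how a framework could run a tool's async handler under its timeout and surface expiry as the RuntimeError behavior this page describes:

```python
import asyncio

# Hypothetical runner: await the tool's async handler, converting a timeout
# into a RuntimeError with a descriptive message. Illustrative only.
async def run_with_timeout(handler, arguments, timeout_seconds=None, name="tool"):
    try:
        return await asyncio.wait_for(handler(**arguments), timeout=timeout_seconds)
    except asyncio.TimeoutError:
        raise RuntimeError(f"Tool '{name}' execution timed out after {timeout_seconds}s")

async def add(a: int, b: int) -> int:
    return a + b

result = asyncio.run(run_with_timeout(add, {"a": 2, "b": 3}, timeout_seconds=1))
print(result)  # 5
```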
#Creating Tools
#Using the @tool Decorator
Create tools from any Python function using the @tool decorator:
from daita.core.tools import tool
# Simple function
@tool
async def calculate_total(price: float, quantity: int) -> float:
    """Calculate total cost of items."""
    return price * quantity

# Sync functions work too
@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Execute tool
result = await calculate_total.execute({"price": 19.99, "quantity": 3})
print(result)  # 59.97
The @tool decorator automatically:
- Extracts parameter schemas from type hints and docstrings
- Handles both sync and async functions
- Converts functions to AgentTool instances
#With Decorator Options
Customize tool metadata using decorator parameters:
from daita.core.tools import tool
@tool(
    name="product_search",
    description="Search the product catalog with advanced filters",
    timeout_seconds=10,
    category="search"
)
async def search_products(query: str, category: str = "all", max_results: int = 10):
    """
    Search for products in the catalog.

    Args:
        query: Search keywords or product name
        category: Product category to filter by (all, electronics, clothing, etc.)
        max_results: Maximum number of results to return (1-100)
    """
    # Search implementation (placeholder)
    results = []
    return results

# Parameter schema is auto-extracted from type hints and docstring
The decorator automatically extracts parameter schemas from:
- Type hints (query: str, max_results: int)
- Default values (category: str = "all" makes it optional)
- Docstring (Args section provides descriptions)
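For the search_products example above, the auto-extracted schema would look roughly like this (the exact key layout is an assumption, modeled on the AgentTool parameters format shown in Core Concepts):

```python
# Approximate auto-extracted schema for search_products (assumed layout,
# mirroring the AgentTool parameters format shown earlier on this page)
extracted_parameters = {
    "query": {
        "type": "string",
        "description": "Search keywords or product name",
        "required": True,   # no default value -> required
    },
    "category": {
        "type": "string",
        "description": "Product category to filter by (all, electronics, clothing, etc.)",
        "required": False,  # default "all" -> optional
    },
    "max_results": {
        "type": "integer",
        "description": "Maximum number of results to return (1-100)",
        "required": False,  # default 10 -> optional
    },
}
```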
#Sync Functions
Sync functions are automatically wrapped for async execution:
from daita.core.tools import tool
@tool
def simple_calculation(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y

# Automatically wrapped for async use
# Can be called with await
result = await simple_calculation.execute({"x": 5, "y": 3})
#Tool Execution
#Basic Execution
Execute tools by calling the execute() method:
from daita.core.tools import tool
@tool
async def my_function(param1: str, param2: int) -> str:
    """Example function."""
    return f"{param1}: {param2}"

# Execute with arguments
result = await my_function.execute({
    "param1": "value1",
    "param2": 42
})
#Timeout Handling
Tools with timeouts raise RuntimeError if execution exceeds the limit:
import asyncio
from daita.core.tools import tool
@tool(timeout_seconds=5)
async def slow_operation(data: str) -> str:
    """Potentially slow operation."""
    await asyncio.sleep(10)  # Long operation
    return "done"

try:
    result = await slow_operation.execute({"data": "test"})
except RuntimeError as e:
    print(f"Timeout: {e}")
    # "Tool 'slow_operation' execution timed out after 5s"
#Error Handling
Tool execution errors are propagated with context:
from daita.core.tools import tool
@tool
async def risky_operation(value: int) -> int:
    """Operation that might fail."""
    if value < 0:
        raise ValueError("Value must be positive")
    return value * 2

try:
    result = await risky_operation.execute({"value": -5})
except ValueError as e:
    print(f"Validation error: {e}")
#LLM Integration
Tools are automatically converted to the correct format for your LLM provider (OpenAI, Anthropic, etc.). Agents handle this conversion internally, so you don't need to worry about format differences.
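As a rough illustration, the calculate_total tool from earlier might be presented to an OpenAI-style provider in the standard function-calling shape below. This is a sketch of the provider-side format, not the exact payload Daita builds internally:

```python
# Sketch of an OpenAI-style function-calling schema for calculate_total.
# The wrapper keys follow the standard OpenAI tools format; Daita's exact
# internal payload is an assumption here.
openai_style_tool = {
    "type": "function",
    "function": {
        "name": "calculate_total",
        "description": "Calculate total cost of items.",
        "parameters": {
            "type": "object",
            "properties": {
                "price": {"type": "number"},
                "quantity": {"type": "integer"},
            },
            "required": ["price", "quantity"],
        },
    },
}
```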
#Agent Integration
#Automatic Tool Registration
Tools from plugins and MCP servers are automatically registered:
from daita import Agent
from daita.plugins import PostgreSQLPlugin
# Tools from multiple sources
db_plugin = PostgreSQLPlugin(host="localhost", database="mydb")
agent = Agent(
    name="multi_tool_agent",
    tools=[db_plugin],  # Plugin tools
    mcp={  # MCP tools
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    }
)

# Start to initialize tools
await agent.start()

# All tools are now available
print(f"Available tools: {agent.tool_names}")

# Agent autonomously uses tools
answer = await agent.run("How many users are in the database?")
#Manual Tool Registration
Register custom tools with an agent:
from daita import Agent
from daita.core.tools import tool
# Create custom tool
@tool
async def custom_operation(data: str) -> str:
    """Custom business logic."""
    return data.upper()

@tool
async def another_operation(x: int, y: int) -> int:
    """Another custom operation."""
    return x * y

agent = Agent(name="my_agent")

# Register single tool
agent.register_tool(custom_operation)

# Or register multiple
agent.register_tools([custom_operation, another_operation])
#Autonomous Tool Usage
Once tools are registered, the agent autonomously decides when and how to use them:
from daita import Agent
from daita.core.tools import tool
# Create custom tools
@tool
async def fetch_data(source: str) -> dict:
    """Fetch data from external source."""
    return {"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}

@tool
async def analyze_data(data: dict) -> dict:
    """Analyze data and return insights."""
    return {"count": len(data.get("users", [])), "insights": "Data looks good"}

agent = Agent(name="agent")
agent.register_tools([fetch_data, analyze_data])
await agent.start()

# Agent autonomously chains tools to answer the question
answer = await agent.run("Fetch data from 'api/users' and analyze it")

# Agent will:
# 1. Call fetch_data with source='api/users'
# 2. Call analyze_data with the fetched data
# 3. Provide a natural language answer
print(answer)
#Streaming Tool Execution
Monitor tool execution in real-time using streaming events:
from daita.core.streaming import AgentEvent, EventType
def monitor_tools(event: AgentEvent):
    if event.type == EventType.TOOL_CALL:
        print(f"🔧 Calling: {event.tool_name}")
        print(f"   Args: {event.tool_args}")
    elif event.type == EventType.TOOL_RESULT:
        print(f"   ✅ Result: {event.result}")

# Get real-time visibility into tool usage
answer = await agent.run(
    "Fetch and analyze user data",
    on_event=monitor_tools  # See tools being called in real-time
)
This provides transparency into which tools the agent is using, what arguments it's passing, and what results it receives, which is essential for debugging and understanding agent behavior.
#Tool Discovery
Discover what tools are available to an agent:
from daita.plugins import PostgreSQLPlugin
db_plugin = PostgreSQLPlugin(host="localhost", database="mydb")
agent = Agent(
    name="agent",
    tools=[db_plugin],
    mcp={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/data"]
    }
)

# Initialize tools
await agent.start()

# List all available tools
tools = agent.available_tools
for tool in tools:
    print(f"Tool: {tool.name}")
    print(f"  Source: {tool.source}")
    print(f"  Category: {tool.category}")
    print(f"  Description: {tool.description}")
    print()

# Get just the names
tool_names = agent.tool_names
print(f"Tools: {', '.join(tool_names)}")
#Plugin Tools
Plugins automatically expose their capabilities as tools when added to an agent. See Plugins documentation for creating custom plugins with tools.
#MCP Tools
MCP server tools are automatically discovered and registered when you configure MCP servers with an agent. See MCP documentation for details on using MCP tools.
#Advanced Example
Create a tool with validation and detailed schema:
from daita.core.tools import tool
@tool(timeout_seconds=30, category="business")
async def process_order(
    order_id: str,
    items: list,
    customer_email: str,
    priority: str = "normal"
) -> dict:
    """
    Process a customer order.

    Args:
        order_id: Unique order identifier
        items: List of items in the order
        customer_email: Customer email for notifications
        priority: Order priority level (low, normal, high)
    """
    # Validation
    if not order_id:
        raise ValueError("order_id is required")
    if not items or len(items) == 0:
        raise ValueError("items cannot be empty")
    if priority not in ["low", "normal", "high"]:
        raise ValueError("priority must be low, normal, or high")

    # Processing logic
    return {
        "order_id": order_id,
        "status": "processing",
        "item_count": len(items),
        "priority": priority
    }

# Register with agent
agent.register_tool(process_order)
#Best Practices
Tool Design:
- Use clear, action-oriented names (e.g., `search_users`, `calculate_total`)
- Write detailed descriptions to help LLMs understand when to use the tool
- Validate parameters early with clear error messages
- Design tools to be idempotent (safe to retry)
- Set appropriate timeouts based on expected operation duration
Parameter Schemas:
- Use JSON Schema types: `string`, `integer`, `number`, `boolean`, `array`, `object`
- Mark required fields explicitly with `"required": True`
- Include example values in descriptions
- Use `enum` for fixed option sets
- Document valid ranges for numeric values
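Putting those guidelines together, a well-specified parameter block might look like this (illustrative only; the field layout follows the AgentTool example earlier on this page):

```python
# Illustrative parameter schema applying the guidelines above: explicit
# types, required flags, an enum for fixed options, documented ranges.
parameters = {
    "status": {
        "type": "string",
        "description": "Order status filter (e.g. 'shipped')",
        "enum": ["pending", "shipped", "delivered"],
        "required": True,
    },
    "limit": {
        "type": "integer",
        "description": "Max results to return, between 1 and 100",
        "required": False,
    },
}
```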
Error Handling:
- Validate inputs before expensive operations
- Provide clear error messages explaining what went wrong
- Use `ValueError` for validation errors, `RuntimeError` for execution failures
- Log errors for debugging without exposing internals to the LLM
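For example, a hypothetical validation helper can fail fast with an actionable ValueError for the LLM while logging the details separately for debugging:

```python
import logging

# Hypothetical helper: validate early, give the LLM a clear error message,
# and keep diagnostic logging out of the raised exception text.
logger = logging.getLogger("tools")

def validate_priority(priority: str) -> str:
    allowed = ("low", "normal", "high")
    if priority not in allowed:
        logger.warning("rejected priority value: %r", priority)
        raise ValueError(f"priority must be one of {allowed}, got {priority!r}")
    return priority
```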
Performance:
- Set timeouts to prevent hanging operations
- Use async/await for I/O-bound operations
- Cache expensive computations when appropriate
- Batch operations when possible to reduce overhead
#Troubleshooting
Tool Not Found:
- Verify tool is registered: `agent.register_tool(tool)`
- Check available tools: `print(agent.tool_names)`
- Ensure `await agent.start()` was called before using tools
Parameter Validation Errors:
- Ensure parameter names in schema match function arguments exactly
- Check that required parameters are marked correctly
- Verify type hints match parameter schema types
- Test tool execution directly: `await tool.execute({...})`
Timeout Issues:
- Increase timeout: `@tool(timeout_seconds=60)`
- Remove timeout for variable-duration operations: `timeout_seconds=None`
- Check if operation is actually hanging (review implementation)
- Consider breaking long operations into smaller tools