# Overview
This example demonstrates how to create custom tools that agents can use autonomously. You'll build a sentiment analysis agent that can analyze text and categorize it as positive, negative, or neutral.
# What You'll Learn
- Creating custom tools with the `@tool` decorator
- How agents autonomously decide when to use tools
- Tool parameters and return values
- Agent-tool interaction patterns
# Prerequisites
- Understanding of Python async functions
- API key for your LLM provider
# Step 1: Define a Custom Tool
Tools are Python functions that agents can call. Let's create a sentiment analysis tool:
```python
from daita import Agent
from daita.core.tools import tool
import asyncio

@tool
async def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text and return a score.

    Args:
        text: The text to analyze

    Returns:
        Dictionary with sentiment classification and confidence
    """
    # Simple keyword-based sentiment analysis
    positive_words = ['love', 'great', 'awesome', 'excellent', 'amazing']
    negative_words = ['hate', 'terrible', 'awful', 'bad', 'horrible']

    text_lower = text.lower()
    positive_count = sum(1 for word in positive_words if word in text_lower)
    negative_count = sum(1 for word in negative_words if word in text_lower)

    if positive_count > negative_count:
        sentiment = "positive"
        confidence = min(positive_count / (positive_count + negative_count + 1), 0.95)
    elif negative_count > positive_count:
        sentiment = "negative"
        confidence = min(negative_count / (positive_count + negative_count + 1), 0.95)
    else:
        sentiment = "neutral"
        confidence = 0.5

    return {
        "text": text,
        "sentiment": sentiment,
        "confidence": confidence,
        "positive_indicators": positive_count,
        "negative_indicators": negative_count
    }
```

Key points:
- The `@tool` decorator registers the function as a tool
- Docstring is crucial: the LLM reads it to understand when to use the tool
- Type hints help the framework validate parameters
- Return structured data (dicts) for consistent results
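Because the tool body is plain Python, the scoring logic can be exercised on its own to see what the confidence formula actually produces. Below, the same logic is copied into a standalone function (without the `@tool` decorator) purely for illustration:

```python
# Standalone copy of the scoring logic from analyze_sentiment, without the
# @tool decorator, so it runs anywhere. Returns (sentiment, confidence).
def score(text: str) -> tuple[str, float]:
    positive_words = ['love', 'great', 'awesome', 'excellent', 'amazing']
    negative_words = ['hate', 'terrible', 'awful', 'bad', 'horrible']

    text_lower = text.lower()
    pos = sum(1 for word in positive_words if word in text_lower)
    neg = sum(1 for word in negative_words if word in text_lower)

    if pos > neg:
        # e.g. 2 positive hits, 0 negative: 2 / (2 + 0 + 1) ≈ 0.67
        return "positive", min(pos / (pos + neg + 1), 0.95)
    if neg > pos:
        return "negative", min(neg / (pos + neg + 1), 0.95)
    return "neutral", 0.5

print(score("I love this, it's great"))  # positive, confidence 2/3
print(score("nothing special here"))     # neutral, confidence 0.5
```

Note that the `+ 1` in the denominator keeps confidence below 1.0 even for strongly one-sided text, and the `min(..., 0.95)` cap guards against overconfidence. Also note the matching is substring-based, so "badge" would count as a hit for "bad"; a production tool would tokenize first.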
# Step 2: Register the Tool with Your Agent
Now let's create an agent and give it access to the tool:
```python
async def main():
    # Create the agent
    agent = Agent(
        name="Sentiment Analyzer",
        prompt="""You are a sentiment analysis expert. When given text,
        analyze its emotional tone and categorize it as positive, negative,
        or neutral. Provide confidence scores and reasoning.""",
        llm_provider="openai",
        model="gpt-4"
    )

    # Register the tool
    agent.register_tool(analyze_sentiment)
    await agent.start()
```

What happens here:
- `register_tool()` adds the function to the agent's available tools
- The agent can now see the tool's name, description, and parameters
- The LLM will autonomously decide when to call it
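Conceptually, registration just stores the callable by name so the agent can look it up when the LLM requests a call. A minimal sketch of the idea (illustrative only, not daita's actual implementation; `MiniAgent` is a made-up name):

```python
# Illustrative sketch: registration maps the function's name to the callable,
# and the name + docstring are what the LLM gets to "see" about each tool.
class MiniAgent:
    def __init__(self):
        self._tools = {}

    def register_tool(self, func):
        self._tools[func.__name__] = func

    def available_tools(self):
        # What a framework might describe to the LLM: name and docstring.
        return {name: (f.__doc__ or "") for name, f in self._tools.items()}

async def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text."""
    return {"text": text}

agent = MiniAgent()
agent.register_tool(analyze_sentiment)
print(agent.available_tools())  # {'analyze_sentiment': 'Analyze the sentiment of text.'}
```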
# Step 3: Let the Agent Use the Tool Autonomously
Here's where the magic happens: the agent decides when to use the tool.
```python
async def main():
    agent = Agent(
        name="Sentiment Analyzer",
        prompt="You are a sentiment analysis expert.",
        llm_provider="openai",
        model="gpt-4"
    )
    agent.register_tool(analyze_sentiment)
    await agent.start()

    try:
        # The agent will autonomously call analyze_sentiment()
        result = await agent.run(
            "Analyze the sentiment: I love Daita! It's an amazing framework."
        )
        print(result)
        # The agent uses the tool, gets results, and provides analysis
    finally:
        await agent.stop()
```

Autonomous tool calling:
- Agent receives the task
- LLM recognizes it needs sentiment analysis
- Agent calls `analyze_sentiment()` automatically
- Agent receives tool results
- Agent formulates a final response with the data
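The loop behind those steps can be sketched without any LLM at all. Here a scripted stand-in plays the model's part so the decide-call-respond cycle is visible; all names (`fake_llm`, `TOOLS`, `run`) are illustrative, not daita APIs:

```python
import asyncio

async def analyze_sentiment(text: str) -> dict:
    """Toy stand-in for the real tool."""
    return {"sentiment": "positive" if "love" in text.lower() else "neutral"}

# Registered tools, looked up by name when the "model" requests a call.
TOOLS = {"analyze_sentiment": analyze_sentiment}

def fake_llm(task: str, tool_result=None) -> dict:
    # First pass: the "model" asks for a tool call.
    if tool_result is None:
        return {"tool": "analyze_sentiment", "args": {"text": task}}
    # Second pass: with the tool result in context, it answers.
    return {"answer": f"Sentiment is {tool_result['sentiment']}."}

async def run(task: str) -> str:
    decision = fake_llm(task)                                    # steps 1-2
    result = await TOOLS[decision["tool"]](**decision["args"])   # step 3
    return fake_llm(task, tool_result=result)["answer"]          # steps 4-5

print(asyncio.run(run("I love Daita!")))  # Sentiment is positive.
```

The real framework differs in that the LLM genuinely chooses whether and which tool to call, and may loop more than once, but the shape of the exchange is the same.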
# Step 4: Multiple Tool Calls
Agents can call tools multiple times in one task:
```python
async def main():
    agent = Agent(
        name="Sentiment Analyzer",
        prompt="You are a sentiment analysis expert.",
        llm_provider="openai",
        model="gpt-4"
    )
    agent.register_tool(analyze_sentiment)
    await agent.start()

    try:
        # Agent will call analyze_sentiment() for each text
        result = await agent.run("""
        Compare the sentiment of these two reviews:
        1. "This product is absolutely terrible and I hate it."
        2. "I love this product! It's excellent and works great."
        """)
        print(result)
        # Agent calls the tool twice, compares results, provides analysis
    finally:
        await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

# Step 5: Monitoring Tool Usage
Use `run_detailed()` to see exactly which tools were called:
```python
async def main():
    agent = Agent(
        name="Sentiment Analyzer",
        prompt="You are a sentiment analysis expert.",
        llm_provider="openai",
        model="gpt-4"
    )
    agent.register_tool(analyze_sentiment)
    await agent.start()

    try:
        result = await agent.run_detailed(
            "Analyze: I'm really excited about this new framework!"
        )

        print(f"Final answer: {result['result']}\n")
        print(f"Tools called: {len(result['tool_calls'])}")
        for call in result['tool_calls']:
            print(f"  - {call['tool']}: {call['args']}")
            print(f"    Result: {call['result']}")
    finally:
        await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

# Complete Example
```python
from daita import Agent
from daita.core.tools import tool
import asyncio

@tool
async def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text and return a score."""
    positive_words = ['love', 'great', 'awesome', 'excellent', 'amazing']
    negative_words = ['hate', 'terrible', 'awful', 'bad', 'horrible']

    text_lower = text.lower()
    positive_count = sum(1 for word in positive_words if word in text_lower)
    negative_count = sum(1 for word in negative_words if word in text_lower)

    if positive_count > negative_count:
        sentiment = "positive"
        confidence = min(positive_count / (positive_count + negative_count + 1), 0.95)
    elif negative_count > positive_count:
        sentiment = "negative"
        confidence = min(negative_count / (positive_count + negative_count + 1), 0.95)
    else:
        sentiment = "neutral"
        confidence = 0.5

    return {
        "text": text,
        "sentiment": sentiment,
        "confidence": confidence
    }

async def main():
    agent = Agent(
        name="Sentiment Analyzer",
        prompt="You are a sentiment analysis expert.",
        llm_provider="openai",
        model="gpt-4"
    )
    agent.register_tool(analyze_sentiment)
    await agent.start()

    try:
        result = await agent.run(
            "Analyze the sentiment: I love using Daita!"
        )
        print(result)
    finally:
        await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```

# Framework Internals
How tool calling works:
- Tool Registration: The `@tool` decorator extracts function metadata (name, params, docstring)
- LLM Context: Framework sends tool definitions to the LLM in the system prompt
- Decision Making: LLM decides based on the task whether to call a tool
- Execution: Framework executes the tool with validated parameters
- Result Handling: Tool results are sent back to the LLM for final response
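The first item, metadata extraction, can be approximated with the standard `inspect` and `typing` modules. The JSON-like shape below mirrors the common function-calling format; daita's actual wire format is internal, so treat this as a sketch:

```python
import inspect
from typing import get_type_hints

def tool_definition(func) -> dict:
    """Build a function-calling style definition from a Python function.

    The schema shape is illustrative, not daita's actual format.
    """
    hints = get_type_hints(func)
    # One entry per parameter, with the type hint's name (default: str).
    params = {
        name: {"type": hints.get(name, str).__name__}
        for name in inspect.signature(func).parameters
    }
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": params,
    }

async def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text and return a score."""
    ...

print(tool_definition(analyze_sentiment))
# {'name': 'analyze_sentiment',
#  'description': 'Analyze the sentiment of text and return a score.',
#  'parameters': {'text': {'type': 'str'}}}
```

This is also why docstrings and type hints matter: they are the only information the LLM receives about when and how to call the tool.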
# Key Takeaways
- Simple decoration: Just use `@tool` to make any function available
- Autonomous execution: Agents decide when to call tools
- Good docstrings matter: They help the LLM understand when to use tools
- Structured returns: Return dicts or objects for consistent handling
- Multiple calls: Agents can call tools multiple times per task
# Next Steps
- Database integration with PostgreSQL plugin
- Multi-agent workflows for complex orchestration
- Streaming tool execution for real-time updates