Beginner

Hello World Agent

Your first agent - a simple introduction to creating and running agents with the Daita framework

Agents · Getting Started

#Overview

This example walks you through creating your first agent with Daita. You'll learn the basic structure of an agent, how to configure it, and how to run simple tasks.

#What You'll Learn

  • How to import and create an Agent
  • Basic agent configuration
  • Running simple tasks
  • Understanding agent responses

#Prerequisites

  • Python 3.8+
  • Daita framework installed (pip install daita)
  • LLM API key
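
Most providers read the API key from an environment variable. Assuming Daita follows the usual OpenAI convention here (an assumption — check your provider's documentation for the exact variable name), you would export it before running your script:

```shell
# Assumed: Daita's OpenAI provider reads the standard environment variable.
# The key value below is a placeholder, not a real key.
export OPENAI_API_KEY="sk-..."
```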

#Step 1: Import and Configure

First, let's import the Agent class and set up a basic agent:

python
from daita import Agent
import asyncio
 
# Create a simple agent with a clear identity
agent = Agent(
    name="Hello Agent",
    prompt="You are a friendly assistant that helps users get started with Daita.",
    llm_provider="openai",
    model="gpt-4"
)

What's happening here:

  • name: Identifies your agent in logs and traces
  • prompt: Defines the agent's role and behavior
  • llm_provider: Specifies which LLM to use (OpenAI, Anthropic, etc.)
  • model: The specific model to use
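
Switching providers should only require changing llm_provider and model. The exact provider string and model name below are illustrative assumptions, not values verified against Daita's provider documentation:

```python
from daita import Agent

# Hypothetical Anthropic-backed agent; provider/model strings are assumed
agent = Agent(
    name="Hello Agent",
    prompt="You are a friendly assistant that helps users get started with Daita.",
    llm_provider="anthropic",
    model="claude-3-5-sonnet-20241022",
)
```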

#Step 2: Start the Agent

Before running tasks, you need to start the agent to initialize connections:

python
async def main():
    # Start the agent
    await agent.start()
 
    # Agent is now ready to process tasks
    print("Agent started successfully!")

Framework internals:

  • start() initializes the LLM connection
  • Sets up automatic tracing
  • Registers any tools or plugins
  • Prepares the agent for execution

#Step 3: Run Your First Task

Now let's ask the agent to do something:

python
async def main():
    await agent.start()
 
    # Run a simple task
    response = await agent.run("Say hello and introduce yourself!")
 
    print(response)
    # Output: "Hello! I'm Hello Agent, a friendly assistant..."

How it works:

  • run() sends your prompt to the LLM
  • The agent uses its configured identity to respond
  • Returns the final answer as a string
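
Because run() is a coroutine, independent tasks can also be awaited concurrently rather than one after another. The sketch below uses a stub in place of a real agent so it runs standalone; with a started Daita agent you would pass agent.run(...) coroutines to asyncio.gather in the same way:

```python
import asyncio

class StubAgent:
    """Stand-in for a started agent; run() just echoes the task."""
    async def run(self, task):
        await asyncio.sleep(0)  # yield control, as a real network call would
        return f"response to: {task}"

async def main():
    agent = StubAgent()
    # Both tasks are in flight at once instead of running sequentially
    return await asyncio.gather(
        agent.run("Say hello!"),
        agent.run("What is 2 + 2?"),
    )

responses = asyncio.run(main())
print(responses)
```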

#Step 4: Get Detailed Information

Want to see more than just the response? Use run_detailed():

python
async def main():
    await agent.start()
 
    # Get detailed execution information
    result = await agent.run_detailed("What is 2 + 2?")
 
    print(f"Answer: {result['result']}")
    print(f"Processing time: {result['processing_time_ms']}ms")
    print(f"Cost: ${result['cost']}")
    print(f"Tokens used: {result['tokens']['total_tokens']}")

Result contains:

  • result: The agent's answer
  • processing_time_ms: How long it took
  • cost: Estimated API cost
  • tokens: Token usage breakdown
  • iterations: Number of reasoning loops
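
A small formatting helper makes this metadata easier to scan. The sample dict below is illustrative only, built from the keys listed above; in practice the values come straight from run_detailed():

```python
def summarize(result):
    """Render run_detailed() metadata as a one-line summary."""
    return (
        f"{result['result']!r} "
        f"({result['processing_time_ms']}ms, "
        f"${result['cost']:.4f}, "
        f"{result['tokens']['total_tokens']} tokens, "
        f"{result['iterations']} iteration(s))"
    )

# Illustrative values; the shape matches the keys documented above
sample = {
    "result": "4",
    "processing_time_ms": 812,
    "cost": 0.0021,
    "tokens": {"total_tokens": 57},
    "iterations": 1,
}
print(summarize(sample))
```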

#Step 5: Clean Up

Always stop the agent when you're done:

python
async def main():
    await agent.start()
 
    try:
        response = await agent.run("Hello!")
        print(response)
    finally:
        # Clean up resources
        await agent.stop()
 
# Run the async function
if __name__ == "__main__":
    asyncio.run(main())

#Complete Example

Here's the full working code:

python
from daita import Agent
import asyncio
 
async def main():
    # Create agent
    agent = Agent(
        name="Hello Agent",
        prompt="You are a friendly assistant.",
        llm_provider="openai",
        model="gpt-4"
    )
 
    # Start agent
    await agent.start()
 
    try:
        # Run task
        response = await agent.run("Say hello!")
        print(response)
 
        # Get detailed info
        result = await agent.run_detailed("What is the capital of France?")
        print(f"\nAnswer: {result['result']}")
        print(f"Tokens: {result['tokens']['total_tokens']}")
    finally:
        # Clean up
        await agent.stop()
 
if __name__ == "__main__":
    asyncio.run(main())

#Key Takeaways

  1. Agent creation is simple: Just specify name, prompt, and LLM
  2. Always start/stop: Use start() before tasks, stop() for cleanup
  3. Two run methods: run() for simple responses, run_detailed() for metadata
  4. Automatic tracing: All operations are automatically traced for observability
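
The start/try/finally/stop pattern from Step 5 is easy to factor into a reusable helper. This is a sketch using a stub agent so it runs standalone; a real Daita agent exposing start(), run(), and stop() should drop in the same way:

```python
import asyncio

async def with_agent(agent, task):
    """Start the agent, run one task, and always clean up."""
    await agent.start()
    try:
        return await agent.run(task)
    finally:
        await agent.stop()

class StubAgent:
    """Minimal stand-in implementing the start/run/stop lifecycle."""
    def __init__(self):
        self.stopped = False
    async def start(self):
        pass
    async def run(self, task):
        return f"handled: {task}"
    async def stop(self):
        self.stopped = True  # proves cleanup ran even on success

stub = StubAgent()
answer = asyncio.run(with_agent(stub, "Hello!"))
print(answer, stub.stopped)
```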

#Next Steps