Getting Started

This guide will walk you through creating your first AI agent project from scratch. In just a few minutes, you'll have a working agent that you can customize and deploy.

#Prerequisites

Before you begin, make sure you have:

  • Python 3.11+ installed on your system
  • pip package manager
  • An OpenAI API key (or another supported LLM provider)
  • Basic familiarity with Python and command line
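
If you're not sure which Python version is on your PATH, you can check it from Python itself:

python
import sys
 
# The guide assumes Python 3.11 or newer.
print(sys.version)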

#Installation

Install Daita using pip:

bash
pip install daita-agents

Verify the installation:

bash
daita --version

You should see the Daita CLI version information.
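
If the daita command isn't found (for example, because pip's script directory isn't on your PATH), you can still confirm the package is installed using Python's standard library:

python
from importlib.metadata import version
 
# "daita-agents" is the distribution name used with pip above.
print(version("daita-agents"))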

#Step 1: Initialize Your First Project

Create a new Daita project using the CLI:

bash
# Create a new project
daita init my-first-agent
 
# Navigate to your project
cd my-first-agent

#Step 2: Set Up Your Environment

#Configure Your API Key

Set your OpenAI API key as an environment variable:

bash
# On macOS/Linux
export OPENAI_API_KEY=your_api_key_here
 
# On Windows (Command Prompt)
set OPENAI_API_KEY=your_api_key_here
 
# On Windows (PowerShell)
$env:OPENAI_API_KEY = "your_api_key_here"

For permanent setup, add the export line to your shell profile (~/.bashrc, ~/.zshrc, etc.).
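
To confirm the variable is actually visible to processes launched from your shell, a quick Python check (this only reports whether the key is set; it never prints the secret):

python
import os
 
# True if OPENAI_API_KEY is set in this environment.
print("OPENAI_API_KEY set:", os.getenv("OPENAI_API_KEY") is not None)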

#Install Dependencies

Install the project dependencies:

bash
pip install -r requirements.txt

#Step 3: Create Your First Custom Agent

Let's create an agent that analyzes text sentiment using a custom tool:

bash
daita create agent sentiment_analyzer

This creates agents/sentiment_analyzer.py. Edit it to add a custom tool:

python
from daita import Agent
from daita.core.tools import tool
import asyncio
 
# Define a custom tool for sentiment analysis
@tool
async def analyze_sentiment(text: str) -> dict:
    # You can add custom logic here, like calling an API
    return {
        "text": text,
        "length": len(text),
        "ready_for_analysis": True
    }
 
# Create the agent with a clear identity
agent = Agent(
    name="Sentiment Analyzer",
    prompt="""You are a sentiment analysis expert. When given text, analyze its emotional tone
    and categorize it as positive, negative, or neutral. Provide confidence scores and reasoning.""",
    llm_provider="openai",
    model="gpt-4"
)
 
# Register the custom tool
agent.register_tool(analyze_sentiment)
 
def create_agent():
    return agent
 
if __name__ == "__main__":
    async def main():
        await agent.start()
 
        # Agent autonomously uses tools and provides analysis
        result = await agent.run("Analyze the sentiment: I love Daita!")
        print(result)
 
        await agent.stop()
 
    asyncio.run(main())

Test your agent:

bash
python agents/sentiment_analyzer.py
# Or use the CLI
daita test sentiment_analyzer

What's happening:

  1. The @tool decorator converts your function into an agent tool
  2. register_tool() makes it available to the agent
  3. The LLM autonomously decides when to use your tool
  4. Results are processed and returned in natural language
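
The stub tool above only returns metadata; the "custom logic" comment marks where real analysis would go. As a purely illustrative sketch (the word lists and scoring below are hypothetical, not part of Daita), you could give the LLM a rough signal to reason over:

python
from daita.core.tools import tool
 
# Naive keyword heuristic standing in for real custom logic
# (for example, a call to an external sentiment API).
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "sad"}
 
@tool
async def analyze_sentiment(text: str) -> dict:
    words = {w.strip(".,!?").lower() for w in text.split()}
    positive_hits = len(words & POSITIVE)
    negative_hits = len(words & NEGATIVE)
    if positive_hits > negative_hits:
        label = "positive"
    elif negative_hits > positive_hits:
        label = "negative"
    else:
        label = "neutral"
    return {
        "text": text,
        "positive_hits": positive_hits,
        "negative_hits": negative_hits,
        "rough_label": label,
    }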

#Step 4: Development Workflow

#Testing with Custom Data

Create test data files in the data/ directory:

bash
# Create the data directory (if it doesn't exist yet) and a test data file
mkdir -p data
echo '{"text": "This is a test message for sentiment analysis."}' > data/test_input.json

Test with your data:

bash
daita test --data data/test_input.json
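
If you'd rather exercise the agent from Python than through the CLI, here is a minimal sketch that reuses the Agent API from Step 3 and the test file created above (run it from the project root so agents/ is importable):

python
import asyncio
import json
 
from agents.sentiment_analyzer import create_agent
 
async def main():
    # Load the same payload used with `daita test --data ...`
    with open("data/test_input.json") as f:
        payload = json.load(f)
 
    agent = create_agent()
    await agent.start()
    try:
        result = await agent.run(f"Analyze the sentiment: {payload['text']}")
        print(result)
    finally:
        await agent.stop()
 
asyncio.run(main())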

#Configuration

Customize your project in daita-project.yaml:

yaml
name: my-first-agent
version: 1.0.0
description: My first Daita AI agent project
 
agents:
  - display_name: Sentiment Analyzer
    name: sentiment_analyzer
    type: substrate
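
If a helper script ever needs to read this configuration programmatically, one option is PyYAML (assumed here; install it separately if it isn't already in your environment):

python
import yaml  # PyYAML (assumed; install with: pip install pyyaml)
 
with open("daita-project.yaml") as f:
    config = yaml.safe_load(f)
 
# Print the agents declared in the project configuration.
for agent_cfg in config.get("agents", []):
    print(agent_cfg["name"], "->", agent_cfg.get("display_name"))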

#Step 5: Deploy Your Agents

#Set Up Daita Cloud (Required for Deployment)

To deploy agents to the cloud, you need a Daita API key:

bash
# Set your Daita API key
export DAITA_API_KEY="your-daita-api-key"

Get your API key from the Daita Dashboard.

#Deploy to Cloud

Deploy your agents to the managed cloud environment:

bash
# Deploy to hosted environment
daita push

The deployment process:

  1. Packages your agents, workflows, and configuration
  2. Analyzes imports to determine layers needed
  3. Uploads package to Daita Cloud API
  4. Creates AWS Lambda functions with API Gateway endpoints
  5. Sets up EventBridge schedules (if configured)
  6. Registers deployment with dashboard

#Monitor Deployments

Check deployment status:

bash
# Check overall status
daita status
 
# View deployment history
daita logs --follow
 
# List all deployments
daita deployments list
 
# View specific deployment details
daita deployments show <deployment-id>

#Execute Agents Remotely

Run your deployed agents from anywhere:

bash
# Execute an agent
daita run sentiment_analyzer --data '{"text": "This is great!"}' --follow
 
# Execute with JSON file
daita run sentiment_analyzer --data input.json
 
# View execution history
daita executions --limit 20 --status completed
 
# Get execution logs
daita execution-logs <execution-id> --follow

#Continue Learning

Core Documentation:

  • Agents - Deep dive into Agent features
  • Workflows - Multi-agent orchestration patterns
  • Configuration - Project and agent configuration
  • Tracing - Observability and monitoring system

#Quick Reference

bash
# Project Management
daita init <name> [--type basic|analysis|pipeline]
daita create agent <name>
daita create workflow <name>
 
# Local Development
daita test                          # Test all components
daita test --watch                  # Watch for changes
daita test <component>              # Test specific component
daita test --data data.json         # Test with custom data
 
# Cloud Deployment (requires DAITA_API_KEY)
daita push [--dry-run] [--force]    # Deploy to cloud
daita status                        # Check deployment status
daita logs [--follow] [--lines 10]  # View deployment history
 
# Remote Execution
daita run <target> --data input.json --follow
daita executions [--limit 10] [--status completed]
daita execution-logs <execution-id>
 
# Deployment Management
daita deployments list
daita deployments show <deployment-id>
daita deployments rollback <deployment-id>
daita webhook list

Happy building!