
Getting Started

This guide will walk you through creating your first AI agent project from scratch. In just a few minutes, you'll have a working agent that you can customize and deploy.

Prerequisites

Before you begin, make sure you have:

  • Python 3.11+ installed on your system
  • pip package manager
  • An OpenAI API key (or another supported LLM provider)
  • Basic familiarity with Python and command line

Installation

Install Daita using pip:

pip install daita-agents

Verify the installation:

daita --version

You should see the Daita CLI version information.

Step 1: Initialize Your First Project

Create a new Daita project using the CLI:

# Create a new project
daita init my-first-agent --type basic

# Navigate to your project
cd my-first-agent

This creates a new directory with the following structure:

my-first-agent/
├── .daita/                 # Daita metadata
├── agents/                 # Your AI agents
│   ├── __init__.py
│   └── my_agent.py         # Example agent
├── workflows/              # Workflow orchestration
│   ├── __init__.py
│   └── my_workflow.py      # Example workflow
├── data/                   # Data files
├── tests/                  # Test files
│   ├── __init__.py
│   └── test_basic.py
├── daita-project.yaml      # Project configuration
├── requirements.txt        # Dependencies
├── .gitignore
└── README.md

Step 2: Set Up Your Environment

Configure Your API Key

Set your OpenAI API key as an environment variable:

# On macOS/Linux
export OPENAI_API_KEY=your_api_key_here

# On Windows (Command Prompt)
set OPENAI_API_KEY=your_api_key_here

# On Windows (PowerShell)
$env:OPENAI_API_KEY = "your_api_key_here"

For permanent setup, add this to your shell profile (.bashrc, .zshrc, etc.).
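For example, on macOS/Linux you can append the export line to your profile so every new terminal picks it up (`~/.zshrc` shown here; use `~/.bashrc` for bash):

```shell
# Append the key to your shell profile so new sessions inherit it
# (~/.zshrc shown; use ~/.bashrc for bash)
echo 'export OPENAI_API_KEY=your_api_key_here' >> ~/.zshrc

# Reload the profile in the current session and confirm it took effect
. ~/.zshrc
echo "$OPENAI_API_KEY"
```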

Install Dependencies

Install the project dependencies:

pip install -r requirements.txt

Step 3: Test the Example Agent

Your project comes with a simple example agent. Let's test it:

# Run the example agent directly
python agents/my_agent.py

You should see output like:

Based on your message "Hello, world!", this appears to be a friendly greeting...

Test with the CLI

You can also test using the Daita CLI:

# Test all components
daita test

# Test a specific agent
daita test my_agent

# Watch for changes while developing
daita test --watch

Step 4: Understanding the Example Agent

Let's examine the example agent code in agents/my_agent.py:

from daita import SubstrateAgent
import asyncio

# Create the agent with identity
agent = SubstrateAgent(
    name="My Agent",
    prompt="You are a helpful AI assistant. Provide clear, concise responses.",
    llm_provider="openai",
    model="gpt-4"
)

async def process(message: str):
    """Process a message with the agent."""
    await agent.start()

    # Agent autonomously processes the request
    result = await agent.run(message)

    return result

def create_agent():
    """Factory function for CLI deployment."""
    return agent

if __name__ == "__main__":
    async def main():
        result = await process("Hello, world! Please analyze this message.")
        print(result)

        await agent.stop()

    asyncio.run(main())

Key concepts:

  • SubstrateAgent: The main agent class for autonomous AI agents
  • prompt: Defines the agent's identity and behavior
  • llm_provider: Specify which LLM to use (openai, anthropic, gemini, grok)
  • run method: Execute natural language instructions autonomously
  • start/stop: Agent lifecycle management
  • create_agent function: Factory function required for CLI deployment

Step 5: Create Your First Custom Agent

Let's create a custom agent that analyzes text sentiment with custom tools:

daita create agent sentiment_analyzer

This creates agents/sentiment_analyzer.py. Edit it to add custom tools:

from daita import SubstrateAgent
from daita.core.tools import tool
import asyncio

# Define a custom tool for sentiment analysis
@tool
async def analyze_sentiment(text: str) -> dict:
    """Analyze the sentiment of text and return detailed analysis.

    Args:
        text: The text to analyze for sentiment
    """
    # You can add custom logic here, like calling an API
    # For this example, we'll return structured data for the LLM to analyze
    return {
        "text": text,
        "length": len(text),
        "ready_for_analysis": True
    }

# Create the agent with a clear identity
agent = SubstrateAgent(
    name="Sentiment Analyzer",
    prompt="""You are a sentiment analysis expert. When given text, analyze its emotional tone
    and categorize it as positive, negative, or neutral. Provide confidence scores and reasoning.""",
    llm_provider="openai",
    model="gpt-4"
)

# Register the custom tool
agent.register_tool(analyze_sentiment)

def create_agent():
    return agent

if __name__ == "__main__":
    async def main():
        await agent.start()

        # Agent autonomously uses tools and provides analysis
        result = await agent.run("Analyze the sentiment: I love Daita!")
        print(result)

        await agent.stop()

    asyncio.run(main())

Test your agent:

python agents/sentiment_analyzer.py
# Or use the CLI
daita test sentiment_analyzer

What's happening:

  1. The @tool decorator converts your function into an agent tool
  2. register_tool() makes it available to the agent
  3. The LLM autonomously decides when to use your tool
  4. Results are processed and returned in natural language

Step 6: Create a Workflow

Workflows orchestrate multiple agents using relay-based communication:

daita create workflow text_processor

Edit workflows/text_processor.py:

from daita import SubstrateAgent
from daita.core.workflow import Workflow
import asyncio

# Create agents with relay channels for automatic communication
summarizer = SubstrateAgent(
    name="Summarizer",
    prompt="You are a text summarizer. Create concise, accurate summaries.",
    llm_provider="openai",
    model="gpt-4",
    relay="summary_channel"
)

analyzer = SubstrateAgent(
    name="Analyzer",
    prompt="You are a text analyzer. Analyze summaries and extract key insights.",
    llm_provider="openai",
    model="gpt-4",
    relay="analysis_channel"
)

# Create and configure workflow
workflow = Workflow("Text Pipeline")
workflow.add_agent("summarizer", summarizer)
workflow.add_agent("analyzer", analyzer)
workflow.connect("summarizer", "summary_channel", "analyzer")

async def process_text(text):
    """Process text through the pipeline."""
    await workflow.start()

    # Trigger the pipeline by having the first agent process the text
    result = await summarizer.run(f"Summarize this text: {text}")
    # Data flows automatically: summarizer → summary_channel → analyzer

    await workflow.stop()
    return {"status": "completed", "summary": result}

def create_workflow():
    return workflow

if __name__ == "__main__":
    async def main():
        result = await process_text("AI is transforming how we work and build applications.")
        print(f"Pipeline completed! Result: {result}")

    asyncio.run(main())

Key concepts:

  • Agents have clear prompts defining their roles
  • Agents publish results to relay channels automatically
  • Workflow connects agents via channels
  • Data flows through the pipeline autonomously
  • The first agent is triggered with run(); downstream agents receive input automatically via relay channels
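At its core, the relay pattern is publish/subscribe over named channels. A minimal framework-agnostic sketch (the `Channel` class and stand-in agents below are illustrations, not Daita classes):

```python
import asyncio

# Minimal publish/subscribe sketch of relay-style wiring -- an illustration
# of the pattern, not Daita's implementation.
class Channel:
    def __init__(self):
        self._queue = asyncio.Queue()

    async def publish(self, item):
        await self._queue.put(item)

    async def receive(self):
        return await self._queue.get()

async def main():
    summary_channel = Channel()

    async def summarizer(text: str):
        # Stand-in for an LLM call; publishes its result to the relay channel
        await summary_channel.publish(f"summary: {text[:24]}...")

    async def analyzer():
        # The downstream agent consumes whatever arrives on the channel
        summary = await summary_channel.receive()
        return f"insights derived from [{summary}]"

    await summarizer("AI is transforming how we work and build applications.")
    return await analyzer()

print(asyncio.run(main()))
```

Each agent only knows its channel name, not the other agents, which is what lets the workflow rewire pipelines without changing agent code.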

Test your workflow:

python workflows/text_processor.py

Step 7: Development Workflow

Testing with Custom Data

Create test data files in the data/ directory:

# Create a test data file
echo '{"text": "This is a test message for sentiment analysis."}' > data/test_input.json

Test with your data:

daita test --data data/test_input.json
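If you prefer generating fixtures programmatically, here is a small stdlib-only sketch. The `{"text": ...}` payload shape is an assumption based on the example above; match whatever input your agent actually expects:

```python
import json
from pathlib import Path

# Build the same fixture in Python; the {"text": ...} shape is an assumption --
# match your agent's expected input.
payload = {"text": "This is a test message for sentiment analysis."}

Path("data").mkdir(exist_ok=True)
Path("data/test_input.json").write_text(json.dumps(payload, indent=2))

# Round-trip check: the file must parse back as valid JSON
assert json.loads(Path("data/test_input.json").read_text()) == payload
```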

Configuration

Customize your project in daita-project.yaml:

name: my-first-agent
version: 1.0.0
description: My first Daita AI agent project

agents:
  - name: Sentiment Analyzer
    type: substrate
    enabled: true
    preset: analysis

  - name: Text Summarizer
    type: substrate
    enabled: true
    preset: analysis

workflows:
  - name: Text Processing Pipeline
    agents: [summarizer, sentiment]

Step 8: Deploy Your Agents

Set Up Daita Cloud (Required for Deployment)

To deploy agents to the cloud, you need a Daita API key:

# Set your Daita API key
export DAITA_API_KEY="your-daita-api-key"

Get your API key from the Daita Dashboard.

Deploy to Cloud

Deploy your agents to the managed cloud environment:

# Preview deployment without executing
daita push --dry-run

# Deploy to production
daita push

# Force deployment without confirmation
daita push --force

The deployment process:

  1. ✅ Packages your agents, workflows, and configuration
  2. ✅ Analyzes imports to determine layers needed
  3. ✅ Uploads package to Daita Cloud API
  4. ✅ Creates AWS Lambda functions with API Gateway endpoints
  5. ✅ Sets up EventBridge schedules (if configured)
  6. ✅ Registers deployment with dashboard

Monitor Deployments

Check deployment status:

# Check overall status
daita status

# View deployment history
daita logs --follow

# List all deployments
daita deployments list

# View specific deployment details
daita deployments show <deployment-id>

Execute Agents Remotely

Run your deployed agents from anywhere:

# Execute an agent
daita run sentiment_analyzer --data '{"text": "This is great!"}' --follow

# Execute with JSON file
daita run sentiment_analyzer --data input.json --env production

# View execution history
daita executions --limit 20 --status completed

# Get execution logs
daita execution-logs <execution-id> --follow

Next Steps

Congratulations! You've successfully:

✅ Created your first Daita project
✅ Built a custom AI agent
✅ Created a multi-agent workflow
✅ Learned the development workflow
✅ Deployed your agents

Continue Learning

Core Documentation:

  • Agents - Deep dive into SubstrateAgent features
  • Workflows - Multi-agent orchestration patterns
  • Configuration - Project and agent configuration
  • Tracing - Observability and monitoring system


Quick Reference

# Project Management
daita init <name> [--type basic|analysis|pipeline]
daita create agent <name>
daita create workflow <name>

# Local Development
daita test # Test all components
daita test --watch # Watch for changes
daita test <component> # Test specific component
daita test --data data.json # Test with custom data

# Cloud Deployment (requires DAITA_API_KEY)
daita push [--dry-run] [--force] # Deploy to cloud
daita status [--env production] # Check deployment status
daita logs [--follow] [--lines 10] # View deployment history

# Remote Execution
daita run <target> --data input.json --follow
daita executions [--limit 10] [--status completed]
daita execution-logs <execution-id>

# Deployment Management
daita deployments list [--env production]
daita deployments show <deployment-id>
daita deployments rollback <deployment-id>
daita webhook list

Happy building!