# Daita Init

Initialize a new Daita project with a minimal structure and example components. This command creates everything you need to start building AI agents.

## Syntax

```bash
daita init [PROJECT_NAME] [OPTIONS]
```

## Arguments

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| `PROJECT_NAME` | string | No | Name of the project directory to create. If not provided, prompts for a name or uses the current directory. |

## Options

| Option | Description |
| --- | --- |
| `--force` | Overwrite the project directory if it already exists |
| `--verbose` | Show detailed output during initialization |

## Examples

### Basic Usage

```bash
# Create a new project in a new directory
daita init my-agent

# Create a project in the current directory (prompts for a name)
daita init

# Force overwrite an existing directory
daita init existing-project --force

# Show detailed output during creation
daita init my-agent --verbose
```

## Generated Project Structure

When you run `daita init`, a minimal project structure is created:

```text
my-project/
├── .daita/                  # Daita metadata (auto-generated)
├── agents/                  # Your AI agents
│   ├── __init__.py
│   └── my_agent.py          # Example agent
├── workflows/               # Your workflows
│   ├── __init__.py
│   └── my_workflow.py       # Example workflow
├── data/                    # Data files
│   └── .gitkeep
├── tests/                   # Test files
│   ├── __init__.py
│   └── test_basic.py        # Example tests
├── daita-project.yaml       # Project configuration
├── requirements.txt         # Python dependencies
├── .gitignore               # Git ignore patterns
└── README.md                # Project documentation
```
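To sanity-check that the scaffold landed correctly, you can compare the directory against the tree above. The snippet below is an illustrative helper, not part of the Daita CLI, and the `EXPECTED` list is a subset that may differ between versions:

```python
from pathlib import Path

# A subset of the paths `daita init` generates, per the tree above
# (assumed here; the exact list may vary between versions).
EXPECTED = [
    "agents/__init__.py",
    "workflows/__init__.py",
    "tests/__init__.py",
    "daita-project.yaml",
    "requirements.txt",
]

def missing_paths(root):
    """Return the expected paths that are absent under the project root."""
    root = Path(root)
    return [p for p in EXPECTED if not (root / p).exists()]
```

Running `missing_paths("my-project")` right after initialization should return an empty list.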

## Configuration Files

### daita-project.yaml

The initial configuration file is minimal:

```yaml
name: my-project
version: 1.0.0
description: A Daita AI agent project
created_at: '2025-01-15T10:00:00'

agents: []     # Populated as you create agents
workflows: []  # Populated as you create workflows

environments:
  development: {}
  staging: {}
  production: {}
```

### requirements.txt

Minimal dependencies for getting started:

```text
# Daita Agents Framework
daita-agents>=0.1.0

# LLM provider (choose one)
openai>=1.0.0

# Development
pytest>=7.0.0
pytest-asyncio>=0.21.0
```

## Example Agent Template

Generated in `agents/my_agent.py`:

```python
"""
My Agent

A simple starter agent. Replace this with your own logic.
"""
from daita import Agent

def create_agent():
    """Create the agent instance using the direct Agent pattern."""
    # Option 1: Simple instantiation (uses defaults)
    agent = Agent(name="My Agent")

    # Option 2: Direct LLM configuration (uncomment and modify as needed)
    # import os
    # agent = Agent(
    #     name="My Agent",
    #     llm_provider="openai",
    #     model="gpt-4",
    #     api_key=os.getenv("OPENAI_API_KEY")
    # )

    # Optional: Add plugins
    # from daita.plugins import postgresql
    # agent.add_plugin(postgresql(host="localhost", database="mydb"))

    return agent

if __name__ == "__main__":
    import asyncio

    async def main():
        agent = create_agent()
        result = await agent.process("test_task", "Hello, world!")
        print(result)

    asyncio.run(main())
```

## Example Workflow Template

Generated in `workflows/my_workflow.py`:

```python
"""
My Workflow

A simple starter workflow. Replace this with your own logic.
"""
from daita import Agent, Workflow

def create_workflow():
    """Create the workflow instance using the direct Workflow pattern."""
    workflow = Workflow("My Workflow")

    # Add your agents here
    # agent = Agent(name="Agent")
    # workflow.add_agent("agent", agent)

    return workflow

async def run_workflow(data=None):
    """Run the workflow with the direct pattern."""
    workflow = create_workflow()

    try:
        await workflow.start()

        # Your workflow logic here
        result = f"Workflow processed: {data}"

        return {
            'status': 'success',
            'result': result
        }

    finally:
        await workflow.stop()

if __name__ == "__main__":
    import asyncio

    async def main():
        result = await run_workflow("Hello, workflow!")
        print(result)

    asyncio.run(main())
```

## Post-Initialization Steps

After running `daita init`, follow these steps:

### 1. Navigate to the Project

```bash
cd my-project
```

### 2. Install Dependencies

```bash
# Install dependencies
pip install -r requirements.txt

# Or use a virtual environment (recommended)
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```

### 3. Set the LLM API Key

```bash
# Set your LLM provider key
export OPENAI_API_KEY="your-openai-key"

# Or use a .env file
echo "OPENAI_API_KEY=your-key" > .env
```
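If you use a `.env` file, something has to load it; `python-dotenv` is the usual choice, but for simple `KEY=value` files a minimal loader is only a few lines. This is an illustrative helper, not part of the Daita CLI:

```python
import os

def load_env(path=".env"):
    """Load simple KEY=value pairs from a .env file into os.environ.

    Skips blank lines and comments, and never overrides variables that
    are already set. Does not handle quoting or multi-line values
    (use python-dotenv for those cases).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

Call `load_env()` at the top of your entry point before creating any agents that read `OPENAI_API_KEY`.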

### 4. Test the Example Agent

```bash
# Test directly
python agents/my_agent.py

# Or use the test command
daita test
```

## Common Issues and Solutions

### Permission Denied

```text
Permission denied: Cannot create directory 'my-project'
```

**Solution:** Ensure you have write permissions in the current directory:

```bash
# Check permissions
ls -la

# Create with sudo if needed (not recommended)
sudo daita init my-project
sudo chown -R $USER:$USER my-project
```

### Directory Already Exists

```text
Error: Directory 'my-project' already exists
```

**Solutions:**

```bash
# Use --force to overwrite
daita init my-project --force

# Or choose a different name
daita init my-project-v2

# Or remove the existing directory
rm -rf my-project
daita init my-project
```

### Missing Dependencies

```text
Missing CLI dependencies
```

**Solution:** Install the CLI dependencies:

```bash
pip install daita-agents[cli]
```

## Next Steps

After initializing your project:

1. **Create Components** - Add more agents and workflows
2. **Test Your Project** - Test and debug your agents
3. **Configure Settings** - Customize the project configuration
4. **Deploy** - Deploy to production

## Best Practices

### Project Naming

- Use descriptive, kebab-case names: `data-processor`, `sentiment-analyzer`
- Avoid spaces and special characters
- Keep names concise but meaningful
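The kebab-case convention above can be checked with a small regular expression. This validator is an illustrative helper, not part of the Daita CLI:

```python
import re

# Lowercase letters and digits, with groups separated by single hyphens.
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

def is_valid_project_name(name):
    """Check that a project name follows the kebab-case guideline."""
    return bool(KEBAB.match(name))
```

For example, `is_valid_project_name("data-processor")` passes, while names with spaces, underscores, or leading hyphens do not.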

### Version Control

```bash
# Initialize a git repository
cd my-project
git init
git add .
git commit -m "Initial Daita project setup"
```

The generated `.gitignore` includes:

- The `.daita/` cache directory
- Python bytecode (`.pyc`, `__pycache__`)
- `.env` files and API keys
- Virtual environments
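The exact file varies by version, but a `.gitignore` covering those categories typically looks something like this sketch:

```gitignore
# Daita metadata/cache
.daita/

# Python bytecode
__pycache__/
*.pyc

# Secrets
.env

# Virtual environments
venv/
.venv/
```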