# Daita Test

Test agents and workflows locally by running them with test data. This command validates that your components can be loaded and executed successfully.

## Syntax

```bash
daita test [TARGET] [OPTIONS]
```

## Arguments

| Argument | Type | Required | Description |
| --- | --- | --- | --- |
| `TARGET` | string | No | Specific agent or workflow to test (name without the `.py` extension). If not provided, tests all components. |

## Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `--data` | string | None | Path to test data file (JSON or text) |
| `--watch` | flag | false | Keep running and watch for changes (basic mode) |
| `--verbose` | flag | false | Show detailed output, including result types |

## Requirements

- Must be run inside a Daita project directory (one containing a `.daita/` folder)
- Agents must have a `create_agent()` function
- Workflows must have a `create_workflow()` function
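The agent factory is shown later in this document; a workflow factory follows the same pattern. The sketch below is hypothetical: the `Workflow` class is assumed to mirror the documented `Agent` constructor, and the inline fallback stub exists only so the sketch runs even without `daita` installed.

```python
# workflows/data_pipeline.py (sketch -- `Workflow` is an assumed mirror of
# the documented `Agent` class; adjust to your actual daita API)
try:
    from daita import Workflow
except ImportError:
    # Fallback stub so this sketch is importable without daita installed
    class Workflow:
        def __init__(self, name, description):
            self.name = name
            self.description = description

def create_workflow():
    """Factory function the test runner looks for."""
    return Workflow(
        name="Data Pipeline",
        description="Example workflow for daita test",
    )
```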

## Examples

### Basic Testing

```bash
# Test all agents and workflows
daita test

# Test specific agent
daita test data_processor

# Test specific workflow
daita test data_pipeline

# Test with verbose output
daita test --verbose
```

### Testing with Custom Data

```bash
# Test with JSON data file
daita test --data ./test_data.json

# Test specific agent with custom data
daita test data_processor --data ./sample_data.json

# Test with verbose output and custom data
daita test data_processor --data ./test_data.json --verbose
```

### Watch Mode

```bash
# Watch mode - keeps running (manual re-testing)
daita test --watch

# Watch specific component
daita test data_processor --watch
```

## How It Works

The test command performs these steps:

1. **Discovery**: Finds Python files in the `agents/` and `workflows/` directories
2. **Loading**: Imports each file and looks for the factory function (`create_agent()` or `create_workflow()`)
3. **Instantiation**: Calls the factory function to create an instance
4. **Execution**:
   - For agents: calls `agent.process("process_data", test_data)`
   - For workflows: calls `workflow.run(test_data)`
5. **Validation**: Checks that a result is returned (ideally a dict with a `status` key)

## Test Data

### Default Test Data

If no `--data` file is provided, the test runner uses this default:

```json
{
  "test": true,
  "message": "Default test data"
}
```

### Custom Test Data

You can provide custom test data via the `--data` option:

#### JSON Format

```json
{
  "input": "Sample text to process",
  "metadata": {
    "source": "test",
    "priority": "high"
  }
}
```

#### Plain Text Format

```text
Sample text data for testing
```

The test data is passed directly to your agent's `process()` method or your workflow's `run()` method.
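That hand-off can be shown with a minimal sketch. The `EchoAgent` class here is hypothetical; only the `process("process_data", data)` call shape comes from this document.

```python
import asyncio

class EchoAgent:
    """Hypothetical agent that echoes whatever test data it receives."""
    async def process(self, action, data):
        # `action` is "process_data"; `data` is the parsed --data file,
        # or the default payload shown above
        return {"status": "success", "echo": data}

result = asyncio.run(
    EchoAgent().process("process_data", {"test": True, "message": "Default test data"})
)
# The "status" key is what the validation step looks for
```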

## Output Format

### Successful Test

```bash
$ daita test data_processor

 Testing: data_processor
 data_processor: OK
```

### With Verbose Flag

```bash
$ daita test data_processor --verbose

 Testing: data_processor
 data_processor: OK
   Status: success
   Result type: dict
```

### Testing All Components

```bash
$ daita test

 Testing 2 agents and 1 workflows
 data_processor: OK
 text_analyzer: OK
 data_pipeline: OK
```

### Failed Test

```bash
$ daita test broken_agent

 Testing: broken_agent
 broken_agent: Failed to create agent instance - missing API key
```

### With Verbose Error Details

```bash
$ daita test broken_agent --verbose

 Testing: broken_agent
 broken_agent: Processing failed - 'NoneType' object has no attribute 'process'
Traceback (most recent call last):
  File "/path/to/test.py", line 92, in _test_agent
    result = await agent_instance.process("process_data", test_data)
AttributeError: 'NoneType' object has no attribute 'process'
```

## Common Issues

### Missing Factory Function

```bash
 data_processor: Failed to load agent - No create_agent() function found
```

**Solution**: Ensure your agent file has a `create_agent()` function:

```python
# agents/data_processor.py
from daita import Agent

def create_agent():
    """Factory function to create the agent."""
    return Agent(
        name="Data Processor",
        description="Processes data"
    )
```

### Import Errors

```bash
 data_processor: Failed to load agent - No module named 'some_package'
```

**Solution**: Install the missing dependency:

```bash
pip install some_package
```

### Agent Returns Wrong Type

```bash
⚠️ data_processor: Warning - agent returned str instead of dict
```

**Best Practice**: Return a dictionary with a `status` key:

```python
async def process(self, action, data):
    result = ...  # your processing logic
    return {
        "status": "success",
        "result": result
    }
```

## Use Cases

### Quick Smoke Test

Verify that all components can be loaded and executed:

```bash
daita test
```

### Testing During Development

Test a specific component while developing:

```bash
# Test with default data
daita test my_agent --verbose

# Test with custom data
daita test my_agent --data test_inputs.json --verbose
```

### Pre-Deployment Validation

Before deploying to production:

```bash
# Test all components
daita test

# If any fail, investigate with verbose mode
daita test failing_component --verbose
```