Daita Test
Test agents and workflows locally by running them with test data. This command validates that your components can be loaded and executed successfully.
Syntax
daita test [TARGET] [OPTIONS]
Arguments
| Argument | Type | Required | Description |
|---|---|---|---|
| TARGET | string | No | Specific agent or workflow to test (name without .py extension). If not provided, tests all agents and workflows. |
Options
| Option | Type | Default | Description |
|---|---|---|---|
| --data | string | None | Path to test data file (JSON or text) |
| --watch | flag | false | Keep running and watch for changes (basic mode) |
| --verbose | flag | false | Show detailed output including result types |
Requirements
- Must be run inside a Daita project directory (one that contains a .daita/ folder)
- Agents must have a create_agent() function
- Workflows must have a create_workflow() function
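For reference, a project that meets these requirements typically looks like the layout below. The project and file names are just examples drawn from this page; only the .daita/, agents/, and workflows/ directories matter to the test command.
my_project/
├── .daita/                  # marks the Daita project root
├── agents/
│   └── data_processor.py    # defines create_agent()
└── workflows/
    └── data_pipeline.py     # defines create_workflow()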
Examples
Basic Testing
# Test all agents and workflows
daita test
# Test specific agent
daita test data_processor
# Test specific workflow
daita test data_pipeline
# Test with verbose output
daita test --verbose
Testing with Custom Data
# Test with JSON data file
daita test --data ./test_data.json
# Test specific agent with custom data
daita test data_processor --data ./sample_data.json
# Test with verbose output and custom data
daita test data_processor --data ./test_data.json --verbose
Watch Mode
# Watch mode - keeps running (manual re-testing)
daita test --watch
# Watch specific component
daita test data_processor --watch
How It Works
The test command performs these steps:
- Discovery: Finds Python files in the agents/ and workflows/ directories
- Loading: Imports each file and looks for the factory function (create_agent() or create_workflow())
- Instantiation: Calls the factory function to create an instance
- Execution (see the sketch below):
  - For agents: Calls agent.process("process_data", test_data)
  - For workflows: Calls workflow.run(test_data)
- Validation: Checks that a result is returned (ideally a dict with a status)
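Put together, the runner only needs each file to expose the matching factory function and an object with the method named above. The sketch below shows that contract for a workflow file; EchoWorkflow is a plain-Python placeholder rather than a class from the daita SDK, and run() is assumed to be async to mirror agent.process().
# workflows/data_pipeline.py
# Minimal sketch of the contract the test runner exercises.
class EchoWorkflow:
    """Placeholder workflow; not a daita SDK class."""
    def __init__(self, name):
        self.name = name

    async def run(self, data):
        # Returning a dict with a status lets the validation step report it.
        return {"status": "success", "echo": data}

def create_workflow():
    """Factory function the test runner looks for in workflows/*.py."""
    return EchoWorkflow(name="Data Pipeline")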
Test Data
Default Test Data
If no --data file is provided, the test runner uses this default:
{
  "test": true,
  "message": "Default test data"
}
Custom Test Data
You can provide custom test data via the --data option:
JSON Format
{
  "input": "Sample text to process",
  "metadata": {
    "source": "test",
    "priority": "high"
  }
}
Plain Text Format
Sample text data for testing
The test data is passed directly to your agent's process() method or workflow's run() method.
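For example, with the JSON file above, the payload arrives as a parsed dict. The handler below is a hypothetical illustration of reading those fields inside an agent's process() method; it is not part of the daita SDK.
# Hypothetical agent method showing how a custom --data payload arrives.
async def process(self, action, data):
    # With the JSON example above, data is a dict parsed from the file.
    text = data.get("input", "")
    priority = data.get("metadata", {}).get("priority", "normal")
    return {
        "status": "success",
        "result": f"processed {len(text)} characters at {priority} priority"
    }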
Output Format
Successful Test
$ daita test data_processor
✅ Testing: data_processor
✅ data_processor: OK
With Verbose Flag
$ daita test data_processor --verbose
✅ Testing: data_processor
✅ data_processor: OK
Status: success
Result type: dict
Testing All Components
$ daita test
✅ Testing 2 agents and 1 workflows
✅ data_processor: OK
✅ text_analyzer: OK
✅ data_pipeline: OK
Failed Test
$ daita test broken_agent
✅ Testing: broken_agent
❌ broken_agent: Failed to create agent instance - missing API key
With Verbose Error Details
$ daita test broken_agent --verbose
✅ Testing: broken_agent
❌ broken_agent: Processing failed - 'NoneType' object has no attribute 'process'
Traceback (most recent call last):
  File "/path/to/test.py", line 92, in _test_agent
    result = await agent_instance.process("process_data", test_data)
AttributeError: 'NoneType' object has no attribute 'process'
Common Issues
Missing Factory Function
❌ data_processor: Failed to load agent - No create_agent() function found
Solution: Ensure your agent file has a create_agent() function:
# agents/data_processor.py
from daita import SubstrateAgent

def create_agent():
    """Factory function to create the agent."""
    return SubstrateAgent(
        name="Data Processor",
        description="Processes data"
    )
Import Errors
❌ data_processor: Failed to load agent - No module named 'some_package'
Solution: Install missing dependencies:
pip install some_package
Agent Returns Wrong Type
⚠️ data_processor: Warning - agent returned str instead of dict
Best Practice: Return a dictionary with status:
async def process(self, action, data):
    result = ...  # your processing logic here
    return {
        "status": "success",
        "result": result
    }
Use Cases
Quick Smoke Test
Verify all components can be loaded and executed:
daita test
Testing During Development
Test a specific component while developing:
# Test with default data
daita test my_agent --verbose
# Test with custom data
daita test my_agent --data test_inputs.json --verbose
Pre-Deployment Validation
Before deploying to production:
# Test all components
daita test
# If any fail, investigate with verbose mode
daita test failing_component --verbose
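If you gate deployments in CI, the same check can be wrapped in a small script. The sketch below assumes daita test exits with a nonzero status code when a component fails, which this page does not explicitly document; confirm the behavior for your CLI version.
# ci_check.py -- hypothetical pre-deployment gate around `daita test`.
import subprocess
import sys

result = subprocess.run(["daita", "test"], capture_output=True, text=True)
print(result.stdout)

# Fail the pipeline if the command reports an error (assumes a nonzero exit
# code and/or a ❌ marker in the output on failure).
if result.returncode != 0 or "❌" in result.stdout:
    print("Component tests failed; aborting deployment.", file=sys.stderr)
    sys.exit(1)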
Related Commands
- daita create - Create new agents and workflows
- daita push - Deploy tested components to production
- daita status - Check deployment status