Remote Execution
Execute your deployed agents and workflows programmatically from Python code or the CLI. The remote execution system allows you to trigger agent executions from your applications, scripts, or automation workflows.
#Overview
Daita provides two ways to execute deployed agents remotely:
- Python SDK - `DaitaClient` for programmatic execution from Python code
- CLI Commands - Execute from the command line with `daita run`
Both methods use the same underlying API and support:
- Synchronous and asynchronous execution
- Real-time status monitoring
- Execution history and logs
- Error handling and retries
#Python SDK
#Installation
The `DaitaClient` is included in the `daita-agents` package:

```bash
pip install daita-agents
```

#Basic Usage
```python
from daita import DaitaClient

# Initialize client with your API key
client = DaitaClient(api_key="your_daita_api_key")

# Execute an agent
result = client.execute_agent(
    "data_processor",
    data={"input": "process this data"}
)

# Check result
if result.is_success:
    print(f"Agent output: {result.result}")
else:
    print(f"Error: {result.error}")
```

#Execute and Wait
```python
# Execute and wait for completion (synchronous)
result = client.execute_agent(
    "sentiment_analyzer",
    data={"text": "This product is amazing!"},
    wait=True  # Wait for completion
)

print(f"Status: {result.status}")
print(f"Result: {result.result}")
print(f"Duration: {result.duration_seconds}s")
```

#Execute Workflows
```python
# Execute a workflow
result = client.execute_workflow(
    "data_pipeline",
    data={"source": "s3://bucket/data.csv"},
    wait=True
)

if result.is_success:
    print(f"Pipeline completed: {result.result}")
```

#Check Execution Status
```python
# Start execution without waiting
result = client.execute_agent("long_running_agent", data={...})

# Check status later
status = client.get_execution(result.execution_id)
print(f"Current status: {status.status}")

# Wait for completion when ready
final_result = client.wait_for_execution(
    result.execution_id,
    timeout=600  # 10 minutes
)
```

#List Recent Executions
```python
# Get recent executions
executions = client.list_executions(limit=10)

for execution in executions:
    print(f"{execution.target_name}: {execution.status}")

# Filter executions
completed = client.list_executions(
    status="completed",
    target_type="agent",
    limit=20
)
```

#Get Latest Execution
```python
# Get most recent execution for an agent
latest = client.get_latest_execution(agent_name="my_agent")

if latest:
    print(f"Latest execution: {latest.status}")
    print(f"Result: {latest.result}")
```

#Cancel Execution
```python
# Cancel a running execution
success = client.cancel_execution(execution_id)

if success:
    print("Execution cancelled")
```

#Advanced Python Usage
#Async Execution
Use async/await for concurrent executions:
```python
import asyncio
from daita import DaitaClient

async def process_batch():
    async with DaitaClient(api_key="your_key") as client:
        # Execute multiple agents concurrently
        tasks = [
            client.execute_agent_async("agent1", data={"id": 1}),
            client.execute_agent_async("agent2", data={"id": 2}),
            client.execute_agent_async("agent3", data={"id": 3})
        ]
        results = await asyncio.gather(*tasks)

        for result in results:
            print(f"{result.target_name}: {result.status}")

asyncio.run(process_batch())
```

#Custom Configuration
Configure client timeout, retries, and environment:
```python
client = DaitaClient(
    api_key="your_key",
    timeout=600,      # Request timeout in seconds
    max_retries=5,    # Number of retries
    retry_delay=2.0   # Base delay between retries
)

# Execute with specific environment
result = client.execute_agent(
    "my_agent",
    data={...},
    environment="staging",  # or "production"
    wait=True
)
```

#CLI Commands
#Execute Agent
```bash
# Basic execution
daita run my_agent

# Execute with data from file
daita run my_agent --data input.json

# Execute with inline JSON data
daita run my_agent --data-json '{"input": "test data"}'

# Execute and follow progress
daita run my_agent --data input.json --follow

# Verbose output
daita run my_agent --data input.json --verbose
```

#Execute Workflow
```bash
# Execute workflow
daita run data_pipeline --type workflow --data input.json

# With environment specification
daita run data_pipeline --type workflow --env production
```

#Execute with Options
```bash
# Specify task for agent
daita run my_agent --task analyze --data input.json

# Set timeout
daita run long_agent --timeout 600 --data input.json

# Execute in staging
daita run my_agent --env staging --data input.json
```

#View Execution History
```bash
# List recent executions
daita executions

# Limit number of results
daita executions --limit 20

# Filter by status
daita executions --status completed

# Filter by type
daita executions --type agent

# Filter by environment
daita executions --env production
```

#View Execution Logs
```bash
# Get logs for specific execution
daita execution-logs exec_abc123

# Follow execution progress
daita execution-logs exec_abc123 --follow
```

#ExecutionResult Object
The `ExecutionResult` object contains all information about an execution:
#Properties
```python
result = client.execute_agent("my_agent", data={...})

# Execution identifiers
result.execution_id      # Unique execution ID
result.target_name       # Agent/workflow name
result.target_type       # "agent" or "workflow"

# Status and results
result.status            # "queued", "running", "completed", "failed", "cancelled"
result.result            # Execution output (dict)
result.error             # Error message if failed

# Timing information
result.created_at        # When execution was created
result.started_at        # When execution started
result.completed_at      # When execution completed
result.duration_ms       # Duration in milliseconds
result.duration_seconds  # Duration in seconds (property)

# Resource usage
result.memory_used_mb    # Memory used in MB
result.cost_estimate     # Estimated cost

# Monitoring
result.trace_id          # Trace ID for debugging
result.dashboard_url     # Link to dashboard (if available)

# Helper properties
result.is_complete       # True if completed/failed/cancelled
result.is_success        # True if completed successfully
result.is_running        # True if queued/running
```

#Example Usage
```python
result = client.execute_agent("my_agent", data={...}, wait=True)

# Check status
if result.is_success:
    print("✅ Success!")
    print(f"Result: {result.result}")
    print(f"Duration: {result.duration_seconds:.2f}s")
elif result.is_running:
    print("⏳ Still running...")
else:
    print(f"❌ Failed: {result.error}")

# Access specific result fields
if result.result:
    output = result.result.get('output')
    metadata = result.result.get('metadata')
```

#Use Cases
#Scheduled Execution
Execute agents on a schedule:
```python
from daita import DaitaClient
import time

client = DaitaClient(api_key="your_key")

def run_daily_pipeline():
    result = client.execute_workflow(
        "daily_pipeline",
        data={"date": time.strftime("%Y-%m-%d")},
        wait=True
    )

    if result.is_success:
        print(f"Completed: {result.result}")
    else:
        print(f"Failed: {result.error}")

# Run with a scheduler such as cron, schedule, or APScheduler
```

#API Integration
Call agents from your API endpoints:
```python
from daita import DaitaClient

client = DaitaClient(api_key="your_key")

def analyze_endpoint(text: str):
    """API handler that uses a Daita agent."""
    result = client.execute_agent(
        "sentiment_analyzer",
        data={"text": text},
        wait=True
    )

    return result.result if result.is_success else {"error": result.error}
```

#Batch Processing
Process multiple items efficiently:
```python
from daita import DaitaClient

client = DaitaClient(api_key="your_key")

for item in items:
    result = client.execute_agent(
        "item_processor",
        data=item,
        wait=True
    )
    print(f"{item['id']}: {result.status}")
```

#Best Practices
Client Management:
- Reuse `DaitaClient` instances across multiple executions
- Use async context managers for automatic cleanup: `async with DaitaClient(...)`
- Call `client.close()` when done if not using context managers

Error Handling:
- Always check `result.is_success` before accessing `result.result`
- Wrap calls in try-except to catch `ExecutionError` and `AuthenticationError`
- Log failures and implement retry logic for critical operations
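The error-handling guidance above can be sketched as a small generic wrapper. `ExecutionError` and `AuthenticationError` are stubbed here so the sketch is self-contained; in real code you would import them from the SDK instead. The `execute` argument is any zero-argument callable, e.g. `lambda: client.execute_agent("my_agent", data={...}, wait=True)`.

```python
import time

# Stand-ins so the sketch runs on its own; replace with the SDK's
# actual exception imports in real code.
class ExecutionError(Exception): ...
class AuthenticationError(Exception): ...

def execute_with_retries(execute, attempts=3, base_delay=1.0):
    """Call execute(), retrying transient ExecutionErrors with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return execute()
        except AuthenticationError:
            raise  # not transient: retrying will not fix a bad API key
        except ExecutionError:
            if attempt == attempts:
                raise  # out of attempts; surface the failure to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Keeping the retry policy in one helper means every critical call site gets the same backoff behavior without duplicating try-except blocks.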
Timeouts:
- Set appropriate timeouts based on expected execution time
- Use shorter timeouts for quick agents (the default is usually fine)
- Increase the timeout for long-running workflows: `wait_for_execution(id, timeout=1800)`
Monitoring:
- Use `wait=True` for synchronous execution with automatic waiting
- For async execution, poll with `get_execution()` until complete
- Check `result.is_running` to monitor progress
- Use execution logs to debug failures
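The polling pattern in the monitoring notes can be sketched generically. `get_status` stands in for a call such as `lambda: client.get_execution(execution_id)`, and the sketch assumes the returned object exposes the `is_complete` property described above.

```python
import time

def poll_until_complete(get_status, interval=2.0, timeout=300):
    """Poll until the execution reaches a terminal state or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()   # e.g. client.get_execution(execution_id)
        if status.is_complete:  # completed, failed, or cancelled
            return status
        time.sleep(interval)    # back off between polls
    raise TimeoutError(f"execution still running after {timeout}s")
```

This is effectively what `wait_for_execution()` does for you; a hand-rolled loop is only worth it when you need to do other work between polls.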
Performance:
- Use async execution for concurrent agent calls
- Batch multiple requests with `asyncio.gather()`
- Avoid creating new clients in loops
#Environment Variables
The execution system uses these environment variables:
```bash
# Required for all operations
export DAITA_API_KEY="your_api_key"

# Optional - customize the API endpoint
export DAITA_API_ENDPOINT="https://api.daita-tech.io"
```

#Troubleshooting
Authentication Errors:
- Verify the `DAITA_API_KEY` environment variable is set correctly
- Check that the API key is valid and has the proper permissions
- Ensure you're using the correct API endpoint
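One way to catch these problems early is to validate the environment before constructing the client. This is a minimal sketch; the default endpoint is the one listed in the Environment Variables section above.

```python
import os

def load_daita_config():
    """Fail fast with a clear message instead of a later, less obvious auth error."""
    api_key = os.environ.get("DAITA_API_KEY")
    if not api_key:
        raise RuntimeError("DAITA_API_KEY is not set; export it before running")
    # Optional override, falling back to the documented default endpoint
    endpoint = os.environ.get("DAITA_API_ENDPOINT", "https://api.daita-tech.io")
    return api_key, endpoint
```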
Agent Not Found:
- Confirm the agent is deployed: `daita status`
- Deploy if needed: `daita push production`
- Verify the agent name matches the deployment exactly

Execution Timeout:
- Increase the timeout: `client.wait_for_execution(id, timeout=1800)`
- Check that the agent's execution time is reasonable
- Review agent logs for bottlenecks

Connection Errors:
- Configure retries: `DaitaClient(max_retries=5, retry_delay=2.0)`
- Check network connectivity to the API endpoint
- Verify firewall settings allow outbound HTTPS
#Next Steps
- Deployment - Deploy agents to production
- Agent - Learn about creating agents
- Workflows - Learn about creating workflows
- CLI Reference - Complete CLI command reference