Daita Status

Display project status and deployment information, similar to git status. Shows your local project components, cloud deployment history, and any issues that need attention.

Syntax

daita status [OPTIONS]

Options

Option       Type    Default  Description
--env        string  None     Show status for a specific environment only
--verbose    flag    false    Show detailed information (functions, etc.)

Requirements

  • Must be run inside a Daita project directory (contains .daita/ folder)
  • DAITA_API_KEY environment variable for cloud deployment status (optional)
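The two requirements above can be checked programmatically before invoking the CLI. The sketch below is a hypothetical helper, not part of the Daita tooling; it only mirrors the documented preconditions (a .daita/ folder and an optional DAITA_API_KEY).

```python
import os
from pathlib import Path

def check_prerequisites(project_dir="."):
    """Hypothetical sketch of the preconditions daita status documents.

    Returns a list of human-readable problems; an empty list means
    both documented requirements are satisfied.
    """
    issues = []
    # Must be run inside a Daita project directory (contains .daita/)
    if not (Path(project_dir) / ".daita").is_dir():
        issues.append("not a Daita project (no .daita/ folder)")
    # DAITA_API_KEY is optional, but without it cloud status is unavailable
    if not os.environ.get("DAITA_API_KEY"):
        issues.append("DAITA_API_KEY not set (cloud status unavailable)")
    return issues
```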

Examples

Basic Usage

# Show project status and deployments
daita status

# Show status for specific environment
daita status --env staging

# Show detailed status with function list
daita status --env staging --verbose

Status Output

Basic Project Status

$ daita status

📊 Project: my-ai-agents (v1.0.0)
📁 Location: /home/user/projects/my-ai-agents

🔧 Components:
Agents (3):
✅ data_processor → 'Data Processing Agent'
✅ sentiment_analyzer → 'Sentiment Analysis'
✅ report_generator → 'Report Generator'

Workflows (2):
✅ data_pipeline → 'Data Processing Pipeline'
✅ analysis_workflow → 'Analysis Workflow'

☁️ Cloud Deployments (5):
● staging: v1.0.5 (2024-12-01 14:30) (current)
○ staging: v1.0.4 (2024-11-30 16:20)
○ staging: v1.0.3 (2024-11-29 10:15)
○ staging: v1.0.2 (2024-11-28 12:00)
○ staging: v1.0.1 (2024-11-27 09:30)

✅ No issues found

📋 Quick commands:
daita create agent my_agent # Create new agent (free)
daita test # Test all components (free)
daita test --watch # Development mode (free)
daita push staging # Deploy to cloud
daita logs staging # View cloud logs

Environment-Specific Status

$ daita status --env staging

📊 Environment: staging
✅ Status: active
Last deployed: 2024-12-01 14:30:45
Version: 1.0.5

With Verbose Flag

$ daita status --env staging --verbose

📊 Environment: staging
✅ Status: active
Last deployed: 2024-12-01 14:30:45
Version: 1.0.5
Functions: 5
• data_processor
• sentiment_analyzer
• report_generator
• data_pipeline
• analysis_workflow

Project With Issues

$ daita status

📊 Project: my-ai-agents (v1.0.0)
📁 Location: /home/user/projects/my-ai-agents

🔧 Components:
Agents (2):
✅ sentiment_analyzer → 'Sentiment Analysis'
❌ data_processor (file not found)

Workflows (1):
✅ data_pipeline → 'Data Processing Pipeline'

☁️ Cloud Deployments: Upgrade required
Get your API key at daita-tech.io
Local deployment history:
No local deployment history

⚠️ Issues:
❌ Missing agent file: data_processor.py
❌ Missing requirements.txt
❌ No LLM API key found (set OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.)

📋 Quick commands:
daita create agent my_agent # Create new agent (free)
daita test # Test all components (free)
daita test --watch # Development mode (free)

Ready for cloud deployment?
Get your API key at daita-tech.io

What Gets Shown

The status command displays:

  1. Project Information: Name, version, and location
  2. Components: Lists agents and workflows from daita-project.yaml (or discovered on the filesystem if not listed in the config)
  3. Cloud Deployments: Recent deployment history from the Daita API (requires DAITA_API_KEY)
  4. Issues: Missing files, missing API keys, configuration problems
  5. Quick Commands: Helpful commands based on your setup

Status Indicators

Component Status

  • ✅ - Component file exists
  • ❌ - Component file missing

Deployment Status

  • ● (filled) - Active deployment
  • ○ (empty) - Previous deployment

Common Issues Detected

The status command checks for:

  • Missing component files: Agents or workflows defined in the config whose files don't exist
  • Missing configuration files: daita-project.yaml or requirements.txt
  • Missing API keys: No LLM provider API key set (OPENAI_API_KEY, ANTHROPIC_API_KEY, GEMINI_API_KEY)
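These checks could be approximated with a short script. The sketch below is illustrative only: it assumes daita-project.yaml has already been parsed into a dict with an agents list, and that agent files live at the project root as <name>.py (both assumptions, not confirmed by this page).

```python
import os
from pathlib import Path

# The set of provider keys this page documents
LLM_KEY_VARS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY")

def detect_issues(config, project_dir="."):
    """Approximate the documented issue checks.

    config: daita-project.yaml parsed into a dict (schema assumed).
    """
    root = Path(project_dir)
    issues = []
    # Missing component files: defined in config but file doesn't exist
    for agent in config.get("agents", []):
        if not (root / f"{agent}.py").exists():
            issues.append(f"Missing agent file: {agent}.py")
    # Missing configuration files
    if not (root / "requirements.txt").exists():
        issues.append("Missing requirements.txt")
    # Missing API keys: no LLM provider key set
    if not any(os.environ.get(v) for v in LLM_KEY_VARS):
        issues.append("No LLM API key found")
    return issues
```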

Cloud vs Local Mode

With DAITA_API_KEY Set

Shows cloud deployment history from the Daita platform API.

Without DAITA_API_KEY

Shows message encouraging cloud deployment and displays local deployment history from .daita/deployments.json (if any).
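For local mode, the history file can be read directly. The sketch below only assumes what this page states: that history lives in .daita/deployments.json; the JSON structure itself is undocumented here, so the function returns whatever the file contains.

```python
import json
from pathlib import Path

def local_deployments(project_dir="."):
    """Read local deployment history from .daita/deployments.json.

    The file's internal schema is an assumption; this simply returns
    the parsed JSON, or an empty list when no history exists.
    """
    path = Path(project_dir) / ".daita" / "deployments.json"
    if not path.exists():
        return []  # "No local deployment history"
    return json.loads(path.read_text())
```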