# Authentication

The Daita framework uses two types of authentication:

- LLM Provider Authentication - API keys for AI providers (OpenAI, Anthropic, Google, xAI)
- Daita API Authentication - API key for cloud-hosted services (deployment, monitoring, webhooks)
## Quick Start

Set up authentication with environment variables:

```bash
# LLM Provider Keys (choose your provider)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AIza..."
export XAI_API_KEY="xai-..."

# Daita Cloud Services (optional - for deployment and monitoring)
export DAITA_API_KEY="daita_..."
```
Or create a .env file in your project:

```bash
# LLM Provider (required for local development)
OPENAI_API_KEY=sk-your-key-here

# Daita Cloud (required only for cloud deployment)
DAITA_API_KEY=daita_your_key_here
```
## LLM Provider Authentication
LLM provider keys are used for local development and allow your agents to make AI model calls. The framework automatically detects and loads API keys from environment variables.
### Supported Providers
| Provider | Environment Variable | Key Prefix | Models |
|---|---|---|---|
| OpenAI | OPENAI_API_KEY | sk-... | GPT-4, GPT-3.5-turbo |
| Anthropic | ANTHROPIC_API_KEY | sk-ant-... | Claude 3 Sonnet, Opus, Haiku |
| Google Gemini | GOOGLE_API_KEY or GEMINI_API_KEY | AIza... | Gemini 1.5 Pro, Flash |
| xAI (Grok) | XAI_API_KEY or GROK_API_KEY | xai-... | Grok Beta, Grok Vision |
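For the two providers above that accept alternate variable names, a minimal fallback check like this sketch (hypothetical helpers, not framework internals) matches the lookup described in the table:

```python
import os

# Hypothetical helpers (not framework code) showing how either
# variable name from the table can satisfy a provider's key lookup.
def resolve_gemini_key():
    return os.getenv("GOOGLE_API_KEY") or os.getenv("GEMINI_API_KEY")

def resolve_grok_key():
    return os.getenv("XAI_API_KEY") or os.getenv("GROK_API_KEY")
```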
### Basic Setup

The framework automatically loads API keys from environment variables when you create an agent:
```python
import asyncio
import os

from daita.agents import SubstrateAgent

# Set the environment variable here (or use a .env file)
os.environ["OPENAI_API_KEY"] = "sk-your-openai-key"

# Create agent - API key loaded automatically
agent = SubstrateAgent(
    name="my_agent",
    llm_provider="openai",
    model="gpt-4"
)

async def main():
    await agent.start()
    # LLM calls are automatically authenticated
    result = await agent.run("Explain machine learning")
    print(result)

asyncio.run(main())
```
### Using Multiple Providers

Configure multiple providers and choose which one to use per agent:
```python
import os

from daita.agents import SubstrateAgent

# Configure multiple providers
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
os.environ["GOOGLE_API_KEY"] = "AIza..."

# Use different providers for different agents
openai_agent = SubstrateAgent(
    name="gpt_analyzer",
    llm_provider="openai",
    model="gpt-4"
)

anthropic_agent = SubstrateAgent(
    name="claude_writer",
    llm_provider="anthropic",
    model="claude-3-sonnet-20240229"
)

gemini_agent = SubstrateAgent(
    name="gemini_processor",
    llm_provider="gemini",
    model="gemini-1.5-flash"
)
```
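Continuing from the snippet above, each agent is started and used independently; for example, the three agents could answer the same prompt concurrently using the `start`/`run` API shown in Basic Setup:

```python
import asyncio

async def main():
    # Start each agent, then fan the same prompt out to all providers
    for agent in (openai_agent, anthropic_agent, gemini_agent):
        await agent.start()

    results = await asyncio.gather(
        openai_agent.run("Summarize Q3 sales in one sentence"),
        anthropic_agent.run("Summarize Q3 sales in one sentence"),
        gemini_agent.run("Summarize Q3 sales in one sentence"),
    )
    print(results)

asyncio.run(main())
```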
### Getting API Keys

- OpenAI: Get your API key at platform.openai.com/api-keys
- Anthropic: Get your API key at console.anthropic.com/settings/keys
- Google Gemini: Get your API key at aistudio.google.com/app/apikey
- xAI (Grok): Get your API key at console.x.ai
## Daita API Authentication
The Daita API key (DAITA_API_KEY) is used for cloud-hosted services including deployment, monitoring, execution logs, and webhooks. This key is optional for local development but required for cloud operations.
### What Requires DAITA_API_KEY?

Local Commands (No API key needed):

- `daita init` - Create projects
- `daita create agent/workflow` - Generate agent files
- `daita test` - Run tests locally
- `daita test --watch` - Development mode

Cloud Commands (API key required):

- `daita push` - Deploy to production
- `daita status` - Check deployment status
- `daita logs` - View execution logs
- `daita run <agent>` - Execute remotely
- `daita webhook list` - List webhook URLs
- `daita deployments list` - View deployment history
### Getting Your Daita API Key
Visit daita-tech.io to sign up and get your API key for cloud services.
### Setting Up Cloud Authentication

Add your Daita API key to your environment:

```bash
# Method 1: Export in shell
export DAITA_API_KEY="daita_your_key_here"

# Method 2: Add to ~/.bashrc or ~/.zshrc for persistence
echo 'export DAITA_API_KEY="daita_your_key_here"' >> ~/.bashrc
source ~/.bashrc

# Method 3: Use a .env file in your project
echo "DAITA_API_KEY=daita_your_key_here" >> .env
```
### Using Cloud Commands

Once your API key is set, you can use cloud commands:

```bash
# Deploy your agents to the cloud
daita push

# Check deployment status
daita status

# Execute an agent remotely
daita run my_agent --data input.json

# View execution logs
daita logs

# List webhook URLs for your organization
daita webhook list
```
## Verifying Authentication

Check if your API keys are configured:

```python
import os

from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Check LLM provider keys
if os.getenv("OPENAI_API_KEY"):
    print("OpenAI: Configured")
if os.getenv("ANTHROPIC_API_KEY"):
    print("Anthropic: Configured")

# Check Daita cloud key
if os.getenv("DAITA_API_KEY"):
    print("Daita Cloud: Configured")
else:
    print("Daita Cloud: Not configured (local mode only)")
```
## Provider-Specific Details

### OpenAI
OpenAI keys are automatically loaded from the OPENAI_API_KEY environment variable:

```python
from daita.agents import SubstrateAgent

agent = SubstrateAgent(
    name="gpt_agent",
    llm_provider="openai",
    model="gpt-4"  # or "gpt-3.5-turbo"
)
```
The framework uses the OpenAI Python SDK internally and supports all standard OpenAI parameters (temperature, max_tokens, top_p, etc.).
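As a sketch, generation parameters could be supplied when creating the agent; the parameter names are standard OpenAI ones, but passing them via the constructor is an assumption here, so check your framework version:

```python
from daita.agents import SubstrateAgent

# Sketch only: where these parameters are accepted (constructor,
# run call, or config) may differ in your framework version.
agent = SubstrateAgent(
    name="tuned_agent",
    llm_provider="openai",
    model="gpt-4",
    temperature=0.2,  # lower temperature for more deterministic output
    max_tokens=512,   # cap response length
)
```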
### Anthropic (Claude)

Anthropic keys are loaded from ANTHROPIC_API_KEY:

```python
from daita.agents import SubstrateAgent

agent = SubstrateAgent(
    name="claude_agent",
    llm_provider="anthropic",
    model="claude-3-sonnet-20240229"
)
```
Anthropic reports token usage as input_tokens and output_tokens; the framework automatically maps these to the standard prompt_tokens and completion_tokens for consistent tracking.
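Conceptually, the mapping is equivalent to this small sketch (the framework's internal implementation may differ):

```python
def normalize_anthropic_usage(usage: dict) -> dict:
    # Anthropic reports input_tokens/output_tokens; rename them to the
    # prompt_tokens/completion_tokens keys used for other providers.
    return {
        "prompt_tokens": usage["input_tokens"],
        "completion_tokens": usage["output_tokens"],
        "total_tokens": usage["input_tokens"] + usage["output_tokens"],
    }
```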
### Google Gemini

Gemini keys can be set as GOOGLE_API_KEY or GEMINI_API_KEY:

```python
from daita.agents import SubstrateAgent

agent = SubstrateAgent(
    name="gemini_agent",
    llm_provider="gemini",
    model="gemini-1.5-flash"  # or "gemini-1.5-pro"
)
```
The framework uses Google's generativeai package and handles both sync and async generation automatically.
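For reference, the underlying package exposes both a sync and an async entry point; a minimal direct-usage sketch (not framework code):

```python
import os

import google.generativeai as genai

# Configure the package directly with the same key the framework reads
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

# Synchronous generation
response = model.generate_content("Say hello")
print(response.text)

# Asynchronous generation (inside an async function):
#     response = await model.generate_content_async("Say hello")
```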
### xAI (Grok)

Grok keys can be set as XAI_API_KEY or GROK_API_KEY:

```python
from daita.agents import SubstrateAgent

agent = SubstrateAgent(
    name="grok_agent",
    llm_provider="grok",
    model="grok-beta"
)
```
Grok uses an OpenAI-compatible API, so the framework internally uses the OpenAI client with xAI's base URL.
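The framework handles this for you, but it is roughly equivalent to configuring the OpenAI SDK yourself with xAI's base URL:

```python
import os

from openai import OpenAI

# Sketch of what the framework does internally: the standard OpenAI
# client pointed at xAI's OpenAI-compatible endpoint.
client = OpenAI(
    api_key=os.environ["XAI_API_KEY"],
    base_url="https://api.x.ai/v1",
)
response = client.chat.completions.create(
    model="grok-beta",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```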
## Complete Setup Example

Here's a complete example showing both LLM and Daita API authentication.

### Step 1: Create .env File

```bash
# .env file in your project root

# LLM Provider (choose one or more)
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...

# Daita Cloud (optional - only for deployment)
DAITA_API_KEY=daita_...
```
### Step 2: Local Development

```python
# local_agent.py
import asyncio

from daita.agents import SubstrateAgent
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Create agent - API key loaded automatically
agent = SubstrateAgent(
    name="data_analyzer",
    llm_provider="openai",
    model="gpt-4"
)

# Use the agent
async def main():
    await agent.start()
    result = await agent.run(
        "Analyze this sales data and provide insights"
    )
    print(result)
    await agent.stop()

if __name__ == "__main__":
    asyncio.run(main())
```
Test locally (no DAITA_API_KEY needed):

```bash
daita test data_analyzer
```
### Step 3: Cloud Deployment

Once you have your DAITA_API_KEY configured, deploy to the cloud:

```bash
# Deploy to production
export DAITA_API_KEY="daita_your_key"
daita push

# Run remotely
daita run data_analyzer --data-json '{"text": "Sales increased 20%"}'

# View logs
daita logs
```
## Troubleshooting

### Common Issues

Issue: "OpenAI package not installed"

```bash
# Solution: Install the required provider package
pip install openai

# Or install all LLM providers at once
pip install daita-agents[llm]
```
Issue: "Authentication failed" or 401 errors
import os
# Check if your key is set
print(os.getenv("OPENAI_API_KEY")) # Should not be None
# Common causes:
# 1. Key not set in environment
# 2. Wrong key format (OpenAI keys start with 'sk-')
# 3. Expired or invalid key
# 4. Key has insufficient permissions
Issue: "Rate limit exceeded" (429 error)
This means you've exceeded your API provider's rate limits. Wait a few seconds and try again, or upgrade your API plan.
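If you want to retry automatically, a small backoff wrapper along these lines works with the `LLMError` pattern shown later in this guide (a sketch, not a built-in helper):

```python
import asyncio

from daita.core.exceptions import LLMError

async def run_with_retry(agent, prompt, retries=3):
    # Sketch: retry on rate-limit errors with exponential backoff
    for attempt in range(retries):
        try:
            return await agent.run(prompt)
        except LLMError as e:
            if "429" in str(e) and attempt < retries - 1:
                await asyncio.sleep(2 ** attempt)  # 1s, 2s, 4s...
            else:
                raise
```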
Issue: Daita cloud commands not working

```bash
# Check if DAITA_API_KEY is set
echo $DAITA_API_KEY

# If not set, add it:
export DAITA_API_KEY="daita_your_key"

# Or add it to your .env file
echo "DAITA_API_KEY=daita_your_key" >> .env
```
### Debugging Script

Use this script to check your authentication setup:

```python
import os

from dotenv import load_dotenv

# Load .env file
load_dotenv()

def check_auth():
    """Check authentication configuration."""
    print("LLM Provider Keys:")
    print("-" * 40)

    providers = {
        "OpenAI": "OPENAI_API_KEY",
        "Anthropic": "ANTHROPIC_API_KEY",
        "Google Gemini": "GOOGLE_API_KEY",
        "xAI Grok": "XAI_API_KEY",
    }

    for name, env_var in providers.items():
        key = os.getenv(env_var)
        if key:
            # Show first 8 characters for verification
            preview = key[:8] + "..."
            print(f"✓ {name}: {preview}")
        else:
            print(f"✗ {name}: Not configured")

    print("\nDaita Cloud:")
    print("-" * 40)
    daita_key = os.getenv("DAITA_API_KEY")
    if daita_key:
        print("✓ Cloud services: Enabled")
    else:
        print("○ Cloud services: Local mode only")

if __name__ == "__main__":
    check_auth()
```
Save as check_auth.py and run:

```bash
python check_auth.py
```
## Best Practices

### 1. Never Hardcode API Keys

```python
from daita.agents import SubstrateAgent

# BAD - Don't do this
agent = SubstrateAgent(
    name="my_agent",
    api_key="sk-proj-hardcoded-key-here"  # Never hardcode keys!
)

# GOOD - Rely on environment variables set in your shell or .env file;
# the key is read from OPENAI_API_KEY automatically.
agent = SubstrateAgent(
    name="my_agent",
    llm_provider="openai"
)
```
### 2. Use .env Files for Local Development

Create a .env file in your project root:

```bash
# .env (add to .gitignore!)
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
DAITA_API_KEY=daita_...
```

Add .env to your .gitignore:

```bash
echo ".env" >> .gitignore
```
### 3. Use Environment Variables in Production

Set keys in your deployment environment, not in code:

```bash
# For AWS Lambda, set environment variables in the console
# For Docker, use docker-compose.yml or -e flags
# For serverless platforms, use their secrets management
export OPENAI_API_KEY="sk-proj-..."
export DAITA_API_KEY="daita_..."
```
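For example, with Docker you can forward keys from the host environment at run time instead of baking them into the image (image name is illustrative):

```bash
# Pass keys from the host environment into the container at run time;
# -e VAR without a value forwards the host's value of VAR.
docker run \
  -e OPENAI_API_KEY \
  -e DAITA_API_KEY \
  my-daita-app:latest
```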
### 4. Separate Development and Production Keys

Use different API keys for development and production:

```bash
# .env.development
OPENAI_API_KEY=sk-proj-dev-...

# .env.production (set in deployment environment)
OPENAI_API_KEY=sk-proj-prod-...
```
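One way to wire this up with python-dotenv is to pick the file from a hypothetical APP_ENV variable (a convention, not a Daita requirement):

```python
import os

from dotenv import load_dotenv

# APP_ENV is a hypothetical convention for selecting the right file
env = os.getenv("APP_ENV", "development")
load_dotenv(f".env.{env}")  # e.g. .env.development or .env.production
```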
### 5. Handle Authentication Errors

```python
import asyncio

from daita.agents import SubstrateAgent
from daita.core.exceptions import LLMError

async def main():
    try:
        agent = SubstrateAgent(name="my_agent", llm_provider="openai")
        await agent.start()
        result = await agent.run("Hello")
        print(result)
    except LLMError as e:
        if "401" in str(e):
            print("Invalid API key - check OPENAI_API_KEY")
        elif "429" in str(e):
            print("Rate limit exceeded - wait and retry")
        else:
            print(f"LLM error: {e}")

asyncio.run(main())
```
## Security Notes

- Never commit API keys to version control (a quick repository scan is sketched below)
- Rotate keys regularly through your provider's dashboard
- Use separate keys for different environments
- Monitor usage through your provider's dashboard to detect unauthorized use
- Revoke compromised keys immediately if exposed
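As a rough first check before committing, you can grep tracked files for common key prefixes (a heuristic, not a substitute for a real secret scanner):

```bash
# Heuristic scan for key-like strings in tracked files;
# git grep exits non-zero when nothing matches.
git grep -nE "sk-[A-Za-z0-9_-]{16,}|daita_[A-Za-z0-9]+" || echo "No obvious keys found"
```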
## Next Steps

- Create your first agent using your configured authentication
- Deploy to production with secure cloud credentials using `daita push`
- Configure workflows with multiple authenticated agents