# Authentication
Configure API keys for LLM providers (OpenAI, Anthropic, Google, xAI) and Daita cloud services. Learn how to set up environment variables, use multiple providers, and manage authentication for local development and deployment.
The Daita framework uses two types of authentication:
- LLM Provider Authentication - API keys for AI providers (OpenAI, Anthropic, Google, xAI)
- Daita API Authentication - API key for cloud-hosted services (deployment, monitoring, webhooks)
## Quick Start
Set up authentication with environment variables:

```bash
# LLM Provider Keys (choose your provider)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="AIza..."
export XAI_API_KEY="xai-..."

# Daita Cloud Services (optional - for deployment and monitoring)
export DAITA_API_KEY="daita_..."
```

Alternatively, create a `.env` file in your project:
```bash
# LLM Provider (required for local development)
OPENAI_API_KEY=sk-your-key-here

# Daita Cloud (required only for cloud deployment)
DAITA_API_KEY=daita-your-key-here
```

## LLM Provider Authentication
LLM provider keys are used for local development and allow your agents to make AI model calls. The framework automatically detects and loads API keys from environment variables.
### Supported Providers
| Provider | Environment Variable | Key Prefix | Models |
|---|---|---|---|
| OpenAI | OPENAI_API_KEY | sk-... | GPT-4, GPT-3.5-turbo |
| Anthropic | ANTHROPIC_API_KEY | sk-ant-... | Claude 3 Sonnet, Opus, Haiku |
| Google Gemini | GOOGLE_API_KEY or GEMINI_API_KEY | AIza... | Gemini 1.5 Pro, Flash |
| xAI (Grok) | XAI_API_KEY or GROK_API_KEY | xai-... | Grok Beta, Grok Vision |
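If you want to mirror this detection in your own tooling, the mapping in the table can be expressed directly in Python. The `detect_provider` helper below is illustrative only, not part of the framework:

```python
import os
from typing import Optional

# Environment variable checked for each provider (first match wins).
# Illustrative sketch - the framework performs its own detection.
PROVIDER_ENV_VARS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",   # GEMINI_API_KEY is also accepted
    "xai": "XAI_API_KEY",         # GROK_API_KEY is also accepted
}

def detect_provider() -> Optional[str]:
    """Return the first provider whose API key is set, or None."""
    for provider, env_var in PROVIDER_ENV_VARS.items():
        if os.getenv(env_var):
            return provider
    return None
```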
### Basic Setup
The framework automatically loads API keys from environment variables when you create an agent:

```python
from daita import Agent

# Create agent - API key loaded automatically
agent = Agent(
    name="my_agent",
    llm_provider="openai",
    model="gpt-4"
)
await agent.start()

# LLM calls are automatically authenticated
result = await agent.run("Explain machine learning")
```

### Getting API Keys
- OpenAI: Get your API key at platform.openai.com/api-keys
- Anthropic: Get your API key at console.anthropic.com/settings/keys
- Google Gemini: Get your API key at aistudio.google.com/app/apikey
- xAI (Grok): Get your API key at console.x.ai
## Daita API Authentication
The Daita API key (DAITA_API_KEY) is used for cloud-hosted services including deployment, monitoring, execution logs, and webhooks. This key is optional for local development but required for cloud operations.
### What Requires DAITA_API_KEY?
Local Commands (no API key needed):

- `daita init` - Create projects
- `daita create agent/workflow` - Generate agent files
- `daita test` - Run tests locally
- `daita test --watch` - Development mode

Cloud Commands (API key required):

- `daita push` - Deploy to production
- `daita status` - Check deployment status
- `daita logs` - View execution logs
- `daita run <agent>` - Execute remotely
- `daita webhook list` - List webhook URLs
- `daita deployments list` - View deployment history
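If you script around the cloud commands, a small guard can fail fast when the key is missing. `require_daita_key` is a hypothetical helper for illustration, not part of the Daita CLI or SDK:

```python
import os

def require_daita_key() -> str:
    """Return DAITA_API_KEY, or raise with a setup hint."""
    key = os.getenv("DAITA_API_KEY")
    if not key:
        raise RuntimeError(
            "DAITA_API_KEY is not set. Export it or add it to your .env "
            "file before using cloud commands (daita push, daita logs, ...)."
        )
    return key
```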
### Getting Your Daita API Key
Visit daita-tech.io to sign up and get your API key for cloud services.
### Setting Up Cloud Authentication
Add your Daita API key to your environment:

```bash
# Method 1: Export in shell
export DAITA_API_KEY="daita_your_key_here"

# Method 2: Add to ~/.bashrc or ~/.zshrc for persistence
echo 'export DAITA_API_KEY="daita_your_key_here"' >> ~/.bashrc
source ~/.bashrc

# Method 3: Use .env file in your project
echo "DAITA_API_KEY=daita_your_key_here" >> .env
```

### Using Cloud Commands
Once your API key is set, you can use cloud commands:

```bash
# Deploy your agents to the cloud
daita push

# Check deployment status
daita status

# Execute agent remotely
daita run my_agent --data input.json

# View execution logs
daita logs

# List webhook URLs for your organization
daita webhook list
```

## Complete Setup Example
Here's a complete example showing both LLM and Daita API authentication:
### Step 1: Create a .env File
```bash
# .env file in your project root

# LLM Provider (choose one or more)
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=AIza...

# Daita Cloud (optional - only for deployment)
DAITA_API_KEY=daita_...
```

### Step 2: Local Development
```python
# local_agent.py
from daita import Agent

# Create agent - API key loaded automatically
agent = Agent(
    name="data_analyzer",
    llm_provider="openai",
    model="gpt-4"
)

# Use the agent
async def main():
    await agent.start()
    result = await agent.run(
        "Analyze this sales data and provide insights"
    )
    print(result)
    await agent.stop()

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
```

Test locally (no DAITA_API_KEY needed):

```bash
daita test data_analyzer
```

### Step 3: Cloud Deployment
Once you have your DAITA_API_KEY configured, deploy to the cloud:
```bash
# Deploy to production
export DAITA_API_KEY="daita_your_key"
daita push

# Run remotely
daita run data_analyzer --data-json '{"text": "Sales increased 20%"}'

# View logs
daita logs
```

## Troubleshooting
### Common Issues
Issue: "OpenAI package not installed"

```bash
# Solution: Install the required provider package
pip install openai

# Or install all LLM providers at once
pip install daita-agents[llm]
```

Issue: "Authentication failed" or 401 errors
```python
import os

# Check if your key is set
print(os.getenv("OPENAI_API_KEY"))  # Should not be None

# Common causes:
# 1. Key not set in environment
# 2. Wrong key format (OpenAI keys start with 'sk-')
# 3. Expired or invalid key
# 4. Key has insufficient permissions
```

Issue: "Rate limit exceeded" (429 error)
This means you've exceeded your API provider's rate limits. Wait a few seconds and try again, or upgrade your API plan.
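The wait-and-retry advice can be automated with exponential backoff. The `with_backoff` helper below is a generic sketch, not a framework feature; it assumes rate-limit errors contain "429" in their message, as in the troubleshooting examples here:

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as e:
            # Re-raise anything that isn't a rate limit, or the final failure
            if "429" not in str(e) or attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)
```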
Issue: Daita cloud commands not working
```bash
# Check if DAITA_API_KEY is set
echo $DAITA_API_KEY

# If not set, add it:
export DAITA_API_KEY="daita_your_key"

# Or add to .env file
echo "DAITA_API_KEY=daita_your_key" >> .env
```

### Debugging Script
Use this script to check your authentication setup:
```python
import os
from dotenv import load_dotenv

# Load .env file
load_dotenv()

def check_auth():
    """Check authentication configuration."""
    print("LLM Provider Keys:")
    print("-" * 40)

    providers = {
        "OpenAI": "OPENAI_API_KEY",
        "Anthropic": "ANTHROPIC_API_KEY",
        "Google Gemini": "GOOGLE_API_KEY",
        "xAI Grok": "XAI_API_KEY"
    }

    for name, env_var in providers.items():
        key = os.getenv(env_var)
        if key:
            # Show first 8 characters for verification
            preview = key[:8] + "..."
            print(f"✓ {name}: {preview}")
        else:
            print(f"✗ {name}: Not configured")

    print("\nDaita Cloud:")
    print("-" * 40)
    daita_key = os.getenv("DAITA_API_KEY")
    if daita_key:
        print("✓ Cloud services: Enabled")
    else:
        print("○ Cloud services: Local mode only")

if __name__ == "__main__":
    check_auth()
```

Save as check_auth.py and run:

```bash
python check_auth.py
```

## Best Practices
### 1. Never Hardcode API Keys
```python
# BAD - Don't do this
agent = Agent(
    name="my_agent",
    api_key="sk-proj-hardcoded-key-here"  # Never hardcode keys!
)

# GOOD - Set OPENAI_API_KEY in your shell or .env file,
# then let the framework load it from the environment
agent = Agent(
    name="my_agent",
    llm_provider="openai"
)
```

### 2. Use .env Files for Local Development
Create a .env file in your project root:
```bash
# .env (add to .gitignore!)
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
DAITA_API_KEY=daita_...
```

Add .env to your .gitignore:

```bash
echo ".env" >> .gitignore
```

### 3. Use Environment Variables in Production
Set keys in your deployment environment, not in code:
```bash
# For AWS Lambda, set via environment variables in the console
# For Docker, use docker-compose.yml or -e flags
# For serverless platforms, use their secrets management
export OPENAI_API_KEY="sk-proj-..."
export DAITA_API_KEY="daita_..."
```

### 4. Separate Development and Production Keys
Use different API keys for development and production:
```bash
# .env.development
OPENAI_API_KEY=sk-proj-dev-...

# .env.production (set in deployment environment)
OPENAI_API_KEY=sk-proj-prod-...
```

### 5. Handle Authentication Errors
```python
from daita import Agent
from daita.core.exceptions import LLMError

try:
    agent = Agent(name="my_agent", llm_provider="openai")
    await agent.start()
    result = await agent.run("Hello")
except LLMError as e:
    if "401" in str(e):
        print("Invalid API key - check OPENAI_API_KEY")
    elif "429" in str(e):
        print("Rate limit exceeded - wait and retry")
    else:
        print(f"LLM error: {e}")
```

## Security Notes
- Never commit API keys to version control
- Rotate keys regularly through your provider's dashboard
- Use separate keys for different environments
- Monitor usage through your provider's dashboard to detect unauthorized use
- Revoke compromised keys immediately if exposed
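One way to act on these notes is to redact key-like strings before anything is logged. The prefixes below follow the provider table earlier; this is a minimal sketch, not an exhaustive secret scanner:

```python
import re

# Matches the key prefixes used by the supported providers and Daita,
# followed by at least 8 key characters.
SECRET_PATTERN = re.compile(r"\b(?:sk-|daita_|AIza|xai-)[A-Za-z0-9_-]{8,}")

def redact(text: str) -> str:
    """Replace key-like strings with a short masked preview."""
    return SECRET_PATTERN.sub(lambda m: m.group(0)[:8] + "***", text)
```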
## Next Steps
- Create your first agent using your configured authentication
- Deploy to production with secure cloud credentials using `daita push`
- Configure workflows with multiple authenticated agents