## [0.9.0] - 2026-03-07
Daita Agents is now open source under the Apache 2.0 license. This release prepares the framework for public use — cleaning up internals, hardening the plugin layer, adding a full unit test suite, and making the public API consistent and reliable.
### Added

- **Focus DSL System**
  - New pipe-delimited DSL for pre-filtering tool results before they reach the LLM, reducing token consumption without changing agent behaviour
  - Supports filter expressions, `SELECT` projection, `ORDER BY`, `LIMIT`, `GROUP BY`, and aggregate functions (`SUM`, `COUNT`, `AVG`, `MIN`, `MAX`)
  - Apply per-tool via `@tool(focus="...")` or agent-wide via `Agent(focus="...")` and `Agent(focus={"tool_name": "..."})`
  - Focus precedence chain: agent dict → agent string → `@tool` default → no focus
  - Three backends: Python dict/list-of-dicts (universal fallback), Pandas DataFrame (native `df.query`), and SQL (see below)
  - Public API: `from daita import apply_focus, FocusDSLError`
  - 76 unit and real-world tests; live LLM validation against OpenAI
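To make the idea concrete, here is a toy version of what the Python list-of-dicts backend conceptually does with a pipe-delimited focus string. This is an illustrative sketch only: the real entry point is `daita.apply_focus`, its grammar is richer, and `toy_apply_focus` and the exact clause syntax shown are assumptions for demonstration.

```python
import operator

# Toy operator table; the toy filter compares values as strings for simplicity.
OPS = {"==": operator.eq, "!=": operator.ne}

def toy_apply_focus(rows, focus):
    """Apply a tiny pipe-delimited focus string, e.g.
    "status == 'completed' | SELECT id, total | LIMIT 2"."""
    for clause in (c.strip() for c in focus.split("|")):
        upper = clause.upper()
        if upper.startswith("SELECT "):
            # Projection: keep only the named columns
            cols = [c.strip() for c in clause[7:].split(",")]
            rows = [{k: r.get(k) for k in cols} for r in rows]
        elif upper.startswith("LIMIT "):
            rows = rows[: int(clause[6:])]
        else:
            # Bare filter clause: "field op literal"
            field, op, literal = clause.split(None, 2)
            value = literal.strip("'\"")
            rows = [r for r in rows if OPS[op](str(r.get(field)), value)]
    return rows

orders = [
    {"id": 1, "status": "completed", "total": 40},
    {"id": 2, "status": "pending", "total": 10},
    {"id": 3, "status": "completed", "total": 25},
]
focused = toy_apply_focus(orders, "status == 'completed' | SELECT id, total | LIMIT 2")
# → [{"id": 1, "total": 40}, {"id": 3, "total": 25}]
```

The pre-filtered rows are what reach the LLM, which is where the token savings come from.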
- **SQL Focus Pushdown (PostgreSQL, MySQL, Snowflake)**
  - Focus clauses are now compiled into SQL before the query executes — the database does the filtering instead of fetching all rows into Python first
  - A `focus` parameter is available on the `postgres_query`, `mysql_query`, and `snowflake_query` tools; passing a DSL string pushes `WHERE`, `SELECT`, `ORDER BY`, `LIMIT`, and `GROUP BY` into the generated SQL
  - New `daita/core/focus/backends/sql.py` compiler translates `FocusQuery` AST nodes to dialect-specific SQL fragments with correct parameter placeholder styles (`$N` for PostgreSQL, `%s` for MySQL/Snowflake)
  - Unsupported constructs (dot-notation field access, complex expressions) fall back gracefully to Python-side evaluation via the existing `evaluate_remaining` path
  - `BaseDatabasePlugin` gains a `sql_dialect` class attribute and a shared `_run_focus_query()` method — focus integration is written once, not per plugin
  - Measured token reductions of ~90% for typical wide-table queries in integration tests
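The dialect-specific placeholder handling can be sketched as follows. This is a simplified illustration, not the real compiler: `daita/core/focus/backends/sql.py` works on `FocusQuery` AST nodes, whereas `compile_where` here takes plain `(field, op, value)` tuples.

```python
def compile_where(conditions, dialect):
    """Compile [(field, op, value), ...] into a parameterized WHERE fragment.

    PostgreSQL uses numbered placeholders ($1, $2, ...); MySQL and Snowflake
    drivers use the DB-API "format" style (%s).
    """
    placeholder = {
        "postgresql": lambda i: f"${i}",
        "mysql": lambda i: "%s",
        "snowflake": lambda i: "%s",
    }[dialect]
    parts, params = [], []
    for i, (field, op, value) in enumerate(conditions, start=1):
        parts.append(f"{field} {op} {placeholder(i)}")
        params.append(value)
    return " AND ".join(parts), params

conditions = [("status", "=", "completed"), ("total", ">", 20)]
pg_sql, pg_params = compile_where(conditions, "postgresql")
# → "status = $1 AND total > $2", ["completed", 20]
my_sql, my_params = compile_where(conditions, "mysql")
# → "status = %s AND total > %s", ["completed", 20]
```

Emitting parameters rather than interpolating literals keeps the pushdown safe from SQL injection regardless of dialect.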
- **Conversation History**
  - New `ConversationHistory` class for maintaining persistent multi-turn context across agent runs
  - Pass a `ConversationHistory` instance to `agent.run()` to carry prior turns into each new call
  - Workspace isolation ensures conversation state is scoped per agent
  - Exported from the top-level `daita` package: `from daita import ConversationHistory`
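The mechanic behind multi-turn context can be sketched with a toy class: accumulate prior turns and prepend them to each new call. This is not the real `ConversationHistory` API (which also handles workspace isolation); `ToyHistory` and its methods are hypothetical.

```python
class ToyHistory:
    """Minimal stand-in for a conversation-history object."""

    def __init__(self):
        self.turns = []  # list of {"role": ..., "content": ...} messages

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

    def as_messages(self, new_user_message):
        # Prior turns plus the new message, ready to send to an LLM
        return [*self.turns, {"role": "user", "content": new_user_message}]

history = ToyHistory()
history.add("user", "What tables exist?")
history.add("assistant", "orders, customers")
messages = history.as_messages("How many rows in orders?")
# The second call sees both earlier turns plus the new question
```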
- **URI / Connection String Support**
  - PostgreSQL, MySQL, and MongoDB plugins now accept a `uri` parameter as an alternative to individual host/port/database kwargs
  - Simplifies configuration when connection strings are already available (e.g. from environment variables)
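The equivalence between the two configuration styles can be shown with the standard library. The plugins accept the URI directly; `uri_to_kwargs` below is a hypothetical helper that just demonstrates how a connection string decomposes into the individual kwargs.

```python
from urllib.parse import urlparse

def uri_to_kwargs(uri):
    """Split a connection string into the host/port/database-style kwargs."""
    p = urlparse(uri)
    return {
        "host": p.hostname,
        "port": p.port,
        "database": p.path.lstrip("/"),
        "user": p.username,
        "password": p.password,
    }

kwargs = uri_to_kwargs("postgresql://app:s3cret@db.example.com:5432/analytics")
# → {"host": "db.example.com", "port": 5432, "database": "analytics",
#    "user": "app", "password": "s3cret"}
```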
- **Secrets Management CLI**
  - New `daita secrets` command group for managing project secrets in the cloud environment
  - Set, list, and delete secrets without touching config files
- **Unit Test Suite**
  - 12 new test files covering agent execution, agent initialization, tool registration, configuration, exceptions, LLM base, mock LLM, and plugin base
  - ~2,400 lines of tests with a `SequentialMockLLM` helper for deterministic tool-calling scenarios
  - All tests run without API keys or databases: `pytest tests/ -m "not requires_llm and not requires_db"`
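The deterministic-mock pattern that `SequentialMockLLM` enables can be sketched like this: each call pops the next scripted response, so a tool-calling loop can be asserted step by step without an API key. `ToySequentialMock` is an assumption for illustration, not the real helper's API.

```python
class ToySequentialMock:
    """Returns pre-scripted responses in order, one per LLM call."""

    def __init__(self, responses):
        self._responses = list(responses)

    def complete(self, messages):
        if not self._responses:
            raise AssertionError("mock exhausted: more LLM calls than scripted")
        return self._responses.pop(0)

mock = ToySequentialMock([
    # First call: the "LLM" decides to invoke a tool
    {"tool_call": {"name": "postgres_query", "args": {"query": "SELECT 1"}}},
    # Second call: the "LLM" produces the final answer
    {"content": "The answer is 1."},
])
first = mock.complete([{"role": "user", "content": "run a query"}])
second = mock.complete([])  # the agent loop calls again after the tool runs
```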
- **Examples**
  - `examples/basic_agent.py` — minimal tool-calling agent with OpenAI
  - `examples/deployments/csv-data-analyst/` — full deployment example: a CSV analyst agent with tools, tests, sample data, and a `daita-project.yaml`
  - `.env.example` for environment variable reference
### Changed

- **Plugin Layer Overhaul**
  - PostgreSQL, MySQL, MongoDB, S3, Slack, Email, Elasticsearch, Snowflake, REST, Chroma, Pinecone, and Qdrant plugins all received significant internal rewrites
  - Improved error messages, more consistent tool schemas, and better handling of edge cases across all database plugins
  - `BaseDatabasePlugin` interface simplified and made more consistent across implementors
- **Lazy Imports Enforced Across All Modules**
  - Comprehensive audit and cleanup of top-level imports across all 50+ source files
  - All optional dependencies (database drivers, cloud SDKs, etc.) are now strictly imported inside `connect()` or the `client` property — never at module level
  - Fixes `ImportError` on install for users who only have a subset of the optional extras
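The lazy-import rule can be illustrated with a toy plugin: the driver import lives inside `connect()`, so merely importing the plugin module never fails when the optional extra is missing. Here `sqlite3` stands in for an optional driver, and `ToyDatabasePlugin` is a hypothetical class, not the real plugin base.

```python
class ToyDatabasePlugin:
    """Defers the driver import until a connection is actually requested."""

    def __init__(self, path):
        self.path = path
        self._conn = None

    def connect(self):
        import sqlite3  # deferred: only executed once connect() is called
        self._conn = sqlite3.connect(self.path)
        return self._conn

plugin = ToyDatabasePlugin(":memory:")  # no driver import has happened yet
conn = plugin.connect()                 # driver is imported here
```

With the real plugins, the deferred module is the optional dependency (e.g. a database driver or cloud SDK), so users who install only a subset of extras can still import every module.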
- **Tracing and Relay Internals Refactored**
  - `decision_tracing.py`, `tracing.py`, and `relay.py` rewritten for lower overhead
  - Removed ~450 lines of redundant tracing infrastructure while preserving all observable behaviour
  - `tools.py` internal handler dispatch simplified
- **BaseAgent Simplified**
  - Removed redundant process pipeline scaffolding that duplicated logic in `Agent`
  - Retry infrastructure consolidated into `_retry_with_tracing()`, used by all subclasses
- **Workflow Deduplicated**
  - Removed ~50 lines of workflow logic that duplicated relay and agent behaviour
  - Workflow now delegates cleanly to the relay layer without re-implementing message routing
- **`daita-client` Decoupled**
  - `daita-client` is no longer a hard dependency of `daita-agents`
  - It is a separate package for running agents in the Daita cloud environment: `pip install daita-client`
  - The `daita/execution.py` re-export shim was removed; import directly from `daita_client` when needed
### Fixed

- **Memory Plugin**
  - Fixed embedding failures causing silent result drops during memory storage
  - Fixed storage migration to handle schema changes without data loss
- **CLI**
  - `daita run` output and error handling corrected
  - `daita memory` commands overhauled — removed ~300 lines of dead sync utilities and fixed state management
  - `daita deploy` consolidated into `daita managed-deploy`; redundant `deploy.py` removed
- **`ToolRegistry` Duplicate Registration**
  - Re-registering a tool with the same name no longer creates duplicate entries in the tools list
  - Previous behaviour caused `available_tools` to return duplicates and an inflated `tool_count`
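The shape of the fix can be sketched in a few lines: keying the registry by tool name makes re-registration an overwrite instead of an append. The names below are hypothetical; the real `ToolRegistry` has a richer interface.

```python
class ToyRegistry:
    """Name-keyed registry: dict keys are unique by construction."""

    def __init__(self):
        self._tools = {}  # name -> tool callable

    def register(self, name, fn):
        self._tools[name] = fn  # same name replaces, never duplicates

    @property
    def available_tools(self):
        return list(self._tools)

    @property
    def tool_count(self):
        return len(self._tools)

reg = ToyRegistry()
reg.register("search", lambda q: q)
reg.register("search", lambda q: q.lower())  # re-register: still one entry
# reg.available_tools == ["search"], reg.tool_count == 1
```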
- **`FocusedTool` Result Handling**
  - Fixed a silent data-destruction bug where `Agent(focus=...)` applied Python filtering to the plugin result wrapper dict `{"success": True, "rows": [...]}` instead of the `rows` list — filters like `status == 'completed'` would find no matching field on the wrapper and return an empty result to the LLM
  - Focus is now applied to `result["rows"]` for plugin-style results, with `row_count` updated to reflect the filtered count
  - `FocusedTool` now detects SQL plugin tools (via `"focus"` in the tool's parameter schema) and injects the DSL as a tool argument before execution, routing through SQL pushdown instead of Python filtering
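The corrected unwrapping behaviour can be sketched with a hypothetical helper: filter `result["rows"]` rather than the wrapper dict itself, and keep `row_count` in sync.

```python
def apply_focus_to_result(result, predicate):
    """Filter plugin-style results on their rows, not on the wrapper dict."""
    if isinstance(result, dict) and "rows" in result:
        rows = [r for r in result["rows"] if predicate(r)]
        # Preserve wrapper fields like "success"; update the row count
        return {**result, "rows": rows, "row_count": len(rows)}
    # Plain list results are filtered directly
    return [r for r in result if predicate(r)]

wrapped = {
    "success": True,
    "rows": [
        {"id": 1, "status": "completed"},
        {"id": 2, "status": "pending"},
    ],
    "row_count": 2,
}
focused = apply_focus_to_result(wrapped, lambda r: r["status"] == "completed")
# → {"success": True, "rows": [{"id": 1, "status": "completed"}], "row_count": 1}
```

The buggy behaviour was equivalent to applying the predicate to `wrapped` itself, where no `status` key exists, so every filter matched nothing.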
- **SQL Focus NULL Comparisons**
  - `status == None` in a focus DSL previously compiled to `col = NULL` in SQL, which is always `FALSE` — all rows were silently excluded
  - Now correctly emits `col IS NULL` for `==` and `col IS NOT NULL` for `!=` comparisons against `None`
  - Other operators against `None` (`>`, `<`, etc.) fall back to Python evaluation
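The special-casing can be sketched with a hypothetical compiler helper: equality comparisons against `None` become `IS NULL` / `IS NOT NULL` instead of `= NULL` (which is always `FALSE` under SQL three-valued logic), and unsupported operators signal a fallback.

```python
def compile_comparison(field, op, value):
    """Compile one comparison to (sql_fragment, params), %s placeholder style."""
    if value is None:
        if op == "==":
            return f"{field} IS NULL", []
        if op == "!=":
            return f"{field} IS NOT NULL", []
        # e.g. > or < against None: no SQL equivalent, evaluate in Python
        raise ValueError("fall back to Python evaluation")
    sql_op = {"==": "=", "!=": "<>"}.get(op, op)
    return f"{field} {sql_op} %s", [value]

frag, params = compile_comparison("status", "==", None)
# → ("status IS NULL", [])
```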
- **`FocusConfig` Field Mapping**
  - Fixed `FocusedTool` incorrectly accessing non-existent fields (`.columns`, `.path`, `.selector`, `.pattern`) on `FocusConfig`
  - Corrected field references to match the actual model (`include`, `paths`)
  - Updated the `.dict()` call to `.model_dump()` for Pydantic v2 compatibility
### Removed

- **`register_tool()` API**
  - Removed the `agent.register_tool()` method
  - Use the `tools=` constructor parameter or `agent.add_plugin()` instead