v0.9.0
March 7, 2026

Open Source


# [0.9.0] - 2026-03-07

Daita Agents is now open source under the Apache 2.0 license. This release prepares the framework for public use — cleaning up internals, hardening the plugin layer, adding a full unit test suite, and making the public API consistent and reliable.

# Added

  • Focus DSL System

    • New pipe-delimited DSL for pre-filtering tool results before they reach the LLM, reducing token consumption without changing agent behaviour
    • Supports filter expressions, SELECT projection, ORDER BY, LIMIT, GROUP BY, and aggregate functions (SUM, COUNT, AVG, MIN, MAX)
    • Apply per-tool via @tool(focus="...") or agent-wide via Agent(focus="...") and Agent(focus={"tool_name": "..."})
    • Focus precedence chain: agent dict → agent string → @tool default → no focus
    • Three backends: Python dict/list-of-dicts (universal fallback), Pandas DataFrame (native df.query), and SQL (see below)
    • Public API: from daita import apply_focus, FocusDSLError
    • 76 unit and real-world tests; live LLM validation against OpenAI
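The pre-filtering idea can be sketched in plain Python. The function below is a toy stand-in for the shipped `apply_focus` — it handles only one equality filter, `SELECT`, and `LIMIT` from the pipe-delimited syntax, applied to the universal list-of-dicts backend; the real parser and operator set are richer.

```python
# Toy sketch of Focus pre-filtering on a list-of-dicts result.
# Not the shipped implementation; only '==' filters, SELECT, and LIMIT.

def apply_focus_sketch(rows, dsl):
    """Apply a "field == value | SELECT a, b | LIMIT n" expression."""
    result = rows
    for clause in (c.strip() for c in dsl.split("|")):
        upper = clause.upper()
        if upper.startswith("SELECT "):
            cols = [c.strip() for c in clause[7:].split(",")]
            result = [{k: r[k] for k in cols if k in r} for r in result]
        elif upper.startswith("LIMIT "):
            result = result[: int(clause[6:])]
        elif "==" in clause:
            field, value = (p.strip() for p in clause.split("=="))
            value = value.strip("'\"")
            result = [r for r in result if str(r.get(field)) == value]
    return result

orders = [
    {"id": 1, "status": "completed", "total": 40},
    {"id": 2, "status": "pending", "total": 15},
    {"id": 3, "status": "completed", "total": 99},
]
focused = apply_focus_sketch(
    orders, "status == 'completed' | SELECT id, total | LIMIT 1"
)
```

Only the focused rows reach the LLM, which is where the token savings come from.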
  • SQL Focus Pushdown (PostgreSQL, MySQL, Snowflake)

    • Focus clauses are now compiled into SQL before the query executes — the database does the filtering instead of fetching all rows into Python first
    • A focus parameter is available on the postgres_query, mysql_query, and snowflake_query tools; passing a DSL string pushes WHERE, SELECT, ORDER BY, LIMIT, and GROUP BY into the generated SQL
    • New daita/core/focus/backends/sql.py compiler translates FocusQuery AST nodes to dialect-specific SQL fragments with correct parameter placeholder styles ($N for PostgreSQL, %s for MySQL/Snowflake)
    • Unsupported constructs (dot-notation field access, complex expressions) fall back gracefully to Python-side evaluation via the existing evaluate_remaining path
    • BaseDatabasePlugin gains a sql_dialect class attribute and a shared _run_focus_query() method — focus integration is written once, not per plugin
    • Measured token reductions of ~90% for typical wide-table queries in integration tests
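The dialect-specific placeholder handling mentioned above can be illustrated with a minimal compiler for a single equality filter plus `LIMIT`. This is a sketch, not the `daita/core/focus/backends/sql.py` compiler, which walks full `FocusQuery` AST nodes.

```python
# Sketch of dialect-aware placeholder emission for SQL focus pushdown.
# PostgreSQL numbers its parameters ($1, $2, ...); MySQL and Snowflake
# use positional %s. Illustrative only.

def compile_filter(table, field, value, limit, dialect):
    params = [value]
    placeholder = "$1" if dialect == "postgresql" else "%s"
    sql = f"SELECT * FROM {table} WHERE {field} = {placeholder}"
    if limit is not None:
        sql += f" LIMIT {int(limit)}"
    return sql, params

pg_sql, pg_params = compile_filter("orders", "status", "completed", 50, "postgresql")
my_sql, my_params = compile_filter("orders", "status", "completed", 50, "mysql")
```

Because the filter runs inside the database, only the matching rows ever cross the wire into Python.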
  • Conversation History

    • New ConversationHistory class for maintaining persistent multi-turn context across agent runs
    • Pass a ConversationHistory instance to agent.run() to carry prior turns into each new call
    • Workspace isolation ensures conversation state is scoped per agent
    • Exported from top-level daita package: from daita import ConversationHistory
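Conceptually, a conversation history accumulates prior turns and prepends them to each new call. The class below is a hypothetical stand-in that only illustrates that accumulation pattern; the real class is `from daita import ConversationHistory` and its method names may differ.

```python
# Conceptual stand-in for ConversationHistory: prior turns are carried
# into every subsequent run. Not the shipped class.

class HistorySketch:
    def __init__(self):
        self.turns = []  # [{"role": ..., "content": ...}, ...]

    def add(self, role, content):
        self.turns.append({"role": role, "content": content})

    def as_messages(self, new_user_message):
        # Each run sees every prior turn plus the new message.
        return self.turns + [{"role": "user", "content": new_user_message}]

history = HistorySketch()
history.add("user", "What tables exist?")
history.add("assistant", "orders, customers, refunds")
messages = history.as_messages("How many rows are in orders?")
```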
  • URI / Connection String Support

    • PostgreSQL, MySQL, and MongoDB plugins now accept a uri parameter as an alternative to individual host/port/database kwargs
    • Simplifies configuration when connection strings are already available (e.g. from environment variables)
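The convenience is that one string (often already sitting in an environment variable such as `DATABASE_URL`) carries everything the individual kwargs would. A stdlib-only sketch of the split the plugins perform internally, on an assumed example URI:

```python
# How a connection URI maps onto the individual kwargs it replaces.
# Example values are made up; the stdlib urlparse does the work.

from urllib.parse import urlparse

uri = "postgresql://app:s3cret@db.internal:5433/analytics"
parts = urlparse(uri)

config = {
    "host": parts.hostname,              # "db.internal"
    "port": parts.port,                  # 5433
    "database": parts.path.lstrip("/"),  # "analytics"
    "user": parts.username,
    "password": parts.password,
}
```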
  • Secrets Management CLI

    • New daita secrets command group for managing project secrets in the cloud environment
    • Set, list, and delete secrets without touching config files
  • Unit Test Suite

    • 12 new test files covering agent execution, agent initialization, tool registration, configuration, exceptions, LLM base, mock LLM, and plugin base
    • ~2,400 lines of tests with a SequentialMockLLM helper for deterministic tool-calling scenarios
    • All tests run without API keys or databases: pytest tests/ -m "not requires_llm and not requires_db"
  • Examples

    • examples/basic_agent.py — minimal tool-calling agent with OpenAI
    • examples/deployments/csv-data-analyst/ — full deployment example: a CSV analyst agent with tools, tests, sample data, and a daita-project.yaml
    • .env.example for environment variable reference

# Changed

  • Plugin Layer Overhaul

    • PostgreSQL, MySQL, MongoDB, S3, Slack, Email, Elasticsearch, Snowflake, REST, Chroma, Pinecone, and Qdrant plugins all received significant internal rewrites
    • Improved error messages, more consistent tool schemas, and better handling of edge cases across all database plugins
    • BaseDatabasePlugin interface simplified and made more consistent across implementors
  • Lazy Imports Enforced Across All Modules

    • Comprehensive audit and cleanup of top-level imports across all 50+ source files
    • All optional dependencies (database drivers, cloud SDKs, etc.) are now imported strictly inside connect() or the lazy client property, never at module level
    • Fixes ImportError at import time for users who have only a subset of the optional extras installed
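The enforced pattern looks roughly like this, with `sqlite3` standing in for an optional driver. A sketch of the convention, not any specific plugin:

```python
# Lazy-import pattern: the optional driver is imported on first use,
# inside the client property, so merely importing the plugin module
# never fails when the extra isn't installed. sqlite3 is a stand-in.

class LazyPluginSketch:
    def __init__(self, path):
        self.path = path
        self._client = None

    @property
    def client(self):
        if self._client is None:
            import sqlite3  # deferred until first use
            self._client = sqlite3.connect(self.path)
        return self._client

plugin = LazyPluginSketch(":memory:")  # no driver import happens here
value = plugin.client.execute("SELECT 1").fetchone()[0]
```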
  • Tracing and Relay Internals Refactored

    • decision_tracing.py, tracing.py, and relay.py rewritten for lower overhead
    • Removed ~450 lines of redundant tracing infrastructure while preserving all observable behaviour
    • tools.py internal handler dispatch simplified
  • BaseAgent Simplified

    • Removed redundant process pipeline scaffolding that duplicated logic in Agent
    • Retry infrastructure consolidated into _retry_with_tracing() used by all subclasses
  • Workflow Deduplicated

    • Removed ~50 lines of workflow logic that duplicated relay and agent behaviour
    • Workflow now delegates cleanly to the relay layer without re-implementing message routing
  • daita-client Decoupled

    • daita-client is no longer a hard dependency of daita-agents
    • It is a separate package for running agents in the Daita cloud environment: pip install daita-client
    • daita/execution.py re-export shim removed; import directly from daita_client when needed

# Fixed

  • Memory Plugin

    • Fixed embedding failures causing silent result drops during memory storage
    • Fixed storage migration to handle schema changes without data loss
  • CLI

    • daita run output and error handling corrected
    • daita memory commands overhauled — removed ~300 lines of dead sync utilities and fixed state management
    • daita deploy consolidated into daita managed-deploy; redundant deploy.py removed
  • ToolRegistry Duplicate Registration

    • Re-registering a tool with the same name no longer creates duplicate entries in the tools list
    • Previous behaviour caused available_tools to return duplicates and inflated tool_count
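The fixed semantics amount to "same name overwrites". A hypothetical minimal registry showing the behaviour, not the shipped ToolRegistry:

```python
# Sketch of the corrected registration semantics: re-registering a
# name replaces the existing entry instead of appending a duplicate.

class RegistrySketch:
    def __init__(self):
        self._tools = {}  # name -> callable; dict keys deduplicate

    def register(self, name, fn):
        self._tools[name] = fn  # same name overwrites, never duplicates

    @property
    def available_tools(self):
        return list(self._tools)

    @property
    def tool_count(self):
        return len(self._tools)

reg = RegistrySketch()
reg.register("query", lambda q: q)
reg.register("query", lambda q: q.upper())  # replaces; count stays 1
```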
  • FocusedTool Result Handling

    • Fixed a silent data-destruction bug where Agent(focus=...) applied Python filtering to the plugin result wrapper dict {"success": True, "rows": [...]} instead of the rows list — filters like status == 'completed' would find no matching field on the wrapper and return an empty result to the LLM
    • Focus is now applied to result["rows"] for plugin-style results, with row_count updated to reflect the filtered count
    • FocusedTool now detects SQL plugin tools (via "focus" in the tool's parameter schema) and injects the DSL as a tool argument before execution, routing through SQL pushdown instead of Python filtering
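The corrected unwrapping can be sketched as follows. The wrapper shape `{"success": True, "rows": [...]}` is taken from the bug description; the function itself is an illustration, not the FocusedTool code.

```python
# Sketch of the fix: filter result["rows"] for plugin-style wrapper
# dicts and keep row_count in sync; filter bare lists directly.

def focus_plugin_result(result, predicate):
    if isinstance(result, dict) and "rows" in result:
        rows = [r for r in result["rows"] if predicate(r)]
        filtered = dict(result, rows=rows)
        if "row_count" in result:
            filtered["row_count"] = len(rows)  # reflect the filtered count
        return filtered
    return [r for r in result if predicate(r)]

raw = {"success": True, "row_count": 2, "rows": [
    {"status": "completed"},
    {"status": "pending"},
]}
focused = focus_plugin_result(raw, lambda r: r.get("status") == "completed")
```

Filtering the wrapper itself, as the old code did, finds no `status` field and returns an empty result, which is exactly the silent data loss described above.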
  • SQL Focus NULL Comparisons

    • status == None in a focus DSL previously compiled to col = NULL in SQL, which never evaluates to TRUE — all rows were silently excluded
    • Now correctly emits col IS NULL for == and col IS NOT NULL for != comparisons against None
    • Other operators against None (>, <, etc.) fall back to Python evaluation
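The fix can be sketched as a comparison compiler that special-cases `None`. Illustrative only; `%s` placeholder style and the fall-back signal are assumptions for the sketch.

```python
# Sketch of the NULL fix: equality against None becomes IS NULL /
# IS NOT NULL; other operators against None return None to signal a
# fall-back to Python-side evaluation.

def compile_comparison(field, op, value):
    if value is None:
        if op == "==":
            return f"{field} IS NULL", []
        if op == "!=":
            return f"{field} IS NOT NULL", []
        return None, []  # e.g. ">" against None: evaluate in Python
    sql_op = "=" if op == "==" else op
    return f"{field} {sql_op} %s", [value]

sql, params = compile_comparison("status", "==", None)
```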
  • FocusConfig Field Mapping

    • Fixed FocusedTool incorrectly accessing non-existent fields (.columns, .path, .selector, .pattern) on FocusConfig
    • Corrected field references to match the actual model (include, paths)
    • Updated .dict() call to .model_dump() for Pydantic v2 compatibility

# Removed

  • register_tool() API

    • Removed the agent.register_tool() method
    • Use the tools= constructor parameter or agent.add_plugin() instead