## [0.14.0] - 2026-04-05
Major release that restructures the Catalog plugin into a modular discovery/profiler architecture with broad AWS service coverage, adds Redis and BigQuery plugins, introduces cooldown control for `watch()`, and optimizes the Memory plugin with batch storage, TTL, and eviction.
### Added

- **Redis Plugin** (`daita/plugins/redis.py`)

  Full-featured Redis data store plugin with key-value, hash, list, and set operations. Supports namespace isolation via `key_prefix`, read-only mode, and configurable connection pooling.

  ```python
  from daita.plugins import redis

  async with redis(url="redis://localhost:6379", key_prefix="myapp:") as r:
      await r.set("user:1", '{"name": "Alice"}', ttl=3600)
      value = await r.get("user:1")
  ```

  Install with `pip install 'daita-agents[redis]'`.

- **BigQuery Plugin** (`daita/plugins/bigquery.py`)

  Google BigQuery data warehouse plugin with query execution and schema inspection. Wraps the synchronous `google-cloud-bigquery` SDK with asyncio executors. Supports service account and Application Default Credentials.

  ```python
  from daita.plugins import bigquery

  async with bigquery(project="my-project", dataset="analytics") as bq:
      results = await bq.query("SELECT * FROM users LIMIT 10")
  ```

  Install with `pip install 'daita-agents[bigquery]'`.
- **Catalog Plugin — Modular Discovery & Profiling Architecture**

  The monolithic `catalog.py` has been replaced with a package under `daita/plugins/catalog/` featuring a pluggable discoverer/profiler system:

  - `BaseDiscoverer` and `BaseProfiler` — abstract base classes for building custom discovery and profiling backends.
  - **AWS Discoverer** (`catalog/aws.py`) — discovers infrastructure from AWS accounts with individual discoverers for PostgreSQL (RDS), MySQL (RDS), DynamoDB, S3, MongoDB (DocumentDB), API Gateway, Kinesis, OpenSearch, SNS, and SQS.
  - **GitHub Discoverer** (`catalog/github.py`) — discovers repositories and OpenAPI specs from GitHub organizations.
  - **Per-service Profilers** — dedicated profilers under `catalog/profiler/` for each supported AWS service, extracting normalized schemas.
  - **Schema Normalizer** (`catalog/normalizer.py`) — unified normalization pipeline producing `NormalizedSchema` objects across all source types.
  - **Schema Comparator** (`catalog/comparator.py`) — diffs two `NormalizedSchema` snapshots to detect added/removed/changed tables and columns.
  - **Diagram Export** (`catalog/diagram.py`) — generates Mermaid ER diagrams from normalized schemas.
  - **Persistence Layer** (`catalog/persistence.py`) — pluggable storage for catalog snapshots with `register_catalog_backend_factory()` for custom backends.

  New `pyproject.toml` extras: `opensearch`, `github`, `bigquery`.
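For custom snapshot storage, a backend might look something like the sketch below. Only `register_catalog_backend_factory()` comes from this release; the backend interface shown (save/load by name) and the `InMemoryCatalogBackend` class are assumptions for illustration — check `catalog/persistence.py` for the real protocol.

```python
class InMemoryCatalogBackend:
    """Hypothetical backend: stores catalog snapshots in a process-local dict."""

    def __init__(self):
        self._snapshots = {}

    def save_snapshot(self, name, snapshot):
        # Persist a snapshot under a name (here: just keep it in memory)
        self._snapshots[name] = snapshot

    def load_snapshot(self, name):
        # Return the stored snapshot, or None if it was never saved
        return self._snapshots.get(name)


# Assumed registration shape (factory name from the changelog, usage guessed):
# from daita.plugins.catalog import register_catalog_backend_factory
# register_catalog_backend_factory("memory", InMemoryCatalogBackend)
```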
- **Watch Cooldown Parameter**

  `agent.watch()` now accepts a `cooldown` parameter that controls repeat alerting while a threshold stays met:

  ```python
  @agent.watch(
      source=pg,
      condition="SELECT COUNT(*) FROM orders",
      threshold=lambda v: v > 100,
      interval="10s",
      cooldown=True,  # fire once, then only on resolve
  )
  async def on_spike(event):
      print(f"Order spike: {event.value}")
  ```

  - `False`/`None` (default): fire every poll cycle (existing behavior).
  - `True`: fire once, then again only on resolve (alarm model).
  - `str`/`timedelta`: fire once, then re-alert on the given interval (e.g. `"5m"`).
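The three modes amount to a small state machine. The sketch below only illustrates the semantics listed above — `CooldownGate` and `parse_cooldown` are hypothetical names, not the plugin's implementation:

```python
from datetime import timedelta


def parse_cooldown(value):
    """Normalize a cooldown setting: None (fire every cycle),
    'resolve' (alarm model), or a timedelta (re-alert interval)."""
    if value in (None, False):
        return None
    if value is True:
        return "resolve"
    if isinstance(value, timedelta):
        return value
    # "30s" / "5m" / "1h" style shorthand
    units = {"s": "seconds", "m": "minutes", "h": "hours"}
    return timedelta(**{units[value[-1]]: int(value[:-1])})


class CooldownGate:
    """Decides whether a met threshold should fire on this poll cycle."""

    def __init__(self, cooldown=None):
        self.cooldown = parse_cooldown(cooldown)
        self.active = False      # is the threshold currently met?
        self.last_fired = None

    def should_fire(self, met, now):
        if not met:
            self.active = False  # resolved; the next breach fires again
            return False
        if self.cooldown is None:
            return True          # default: fire every poll cycle
        if not self.active:
            self.active = True   # first breach always fires
            self.last_fired = now
            return True
        if self.cooldown == "resolve":
            return False         # alarm model: stay quiet until resolve
        if now - self.last_fired >= self.cooldown:
            self.last_fired = now
            return True          # interval model: periodic re-alert
        return False
```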
- **Memory Plugin — Batch Storage & TTL**

  `remember()` now accepts a list of dicts for batch ingestion with a single embedding API call. A new `ttl_days` parameter (per-memory) and `default_ttl_days` setting (plugin-wide) enable automatic memory expiry. A `max_chunks` setting caps stored chunks with LRU-style eviction.

  ```python
  memory = MemoryPlugin(max_chunks=2000, default_ttl_days=90)
  await remember([
      {"content": "Q1 revenue was $4.2M", "importance": 0.8},
      {"content": "New VP of Eng starts March 15", "category": "people"},
  ])
  ```
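The `max_chunks` cap can be pictured as a standard LRU store. This standalone sketch (the `ChunkStore` class is hypothetical) shows the eviction shape; the plugin's real policy may also weigh importance or TTL:

```python
from collections import OrderedDict


class ChunkStore:
    """Illustrative LRU-style chunk store with a hard size cap."""

    def __init__(self, max_chunks):
        self.max_chunks = max_chunks
        self._chunks = OrderedDict()  # insertion/access order = recency

    def add(self, chunk_id, chunk):
        self._chunks[chunk_id] = chunk
        self._chunks.move_to_end(chunk_id)        # newest = most recent
        while len(self._chunks) > self.max_chunks:
            self._chunks.popitem(last=False)       # evict least-recently used

    def get(self, chunk_id):
        chunk = self._chunks.get(chunk_id)
        if chunk is not None:
            self._chunks.move_to_end(chunk_id)     # touching on read keeps it warm
        return chunk
```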
- **Memory Plugin — Auto-Classification & Time-Aware Recall**

  New `auto_classify.py` module for automatic memory categorization at ingestion. Time parameters in recall accept relative shorthand (`"24h"`, `"7d"`, `"30d"`) in addition to ISO datetimes.
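A relative-shorthand parser of this kind is straightforward; the sketch below shows the idea (the `parse_since` helper is hypothetical — the plugin's own parsing may differ in accepted units and defaults):

```python
from datetime import datetime, timedelta, timezone

_UNITS = {"h": "hours", "d": "days"}  # shorthand suffixes from the changelog


def parse_since(value, now=None):
    """Turn '24h' / '7d' shorthand, or an ISO datetime string,
    into an absolute cutoff datetime."""
    now = now or datetime.now(timezone.utc)
    if value[-1] in _UNITS and value[:-1].isdigit():
        return now - timedelta(**{_UNITS[value[-1]]: int(value[:-1])})
    return datetime.fromisoformat(value)  # fall back to ISO parsing
```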
- **Example Deployments**

  - `db-health-monitor` — a deployable agent that watches database health metrics with remediation handlers.
  - `infrastructure-catalog` — a deployable agent that discovers and catalogs infrastructure using the new Catalog plugin.
### Changed
- **Catalog plugin restructured from single file to package**

  `daita/plugins/catalog.py` (1,531 lines) has been replaced by `daita/plugins/catalog/` with focused modules. Backward-compatible: `from daita.plugins.catalog import CatalogPlugin` continues to work.
- **Graph backend reads are now lock-protected**

  `get_node()`, `get_edges()`, and `load_graph()` in `LocalGraphBackend` now acquire `_lock` before reading, preventing races with concurrent writes. `remove_node()` and `update_node_properties()` now set `_dirty = True` instead of calling `_save()` directly, deferring persistence to the next `flush()` cycle.
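The lock-plus-dirty-flag pattern is easy to show in isolation. This sketch is not `LocalGraphBackend` itself, just the concurrency shape the change describes — reads and writes share one lock, and mutations defer persistence to `flush()`:

```python
import threading


class DirtyFlagStore:
    """Illustrative store: lock-protected reads, write-through dirty flag,
    and one deferred save per flush cycle."""

    def __init__(self):
        self._lock = threading.Lock()
        self._nodes = {}
        self._dirty = False
        self.saves = 0               # stands in for actual disk writes

    def get_node(self, node_id):
        with self._lock:             # reads take the lock too
            return self._nodes.get(node_id)

    def update_node(self, node_id, props):
        with self._lock:
            self._nodes[node_id] = props
            self._dirty = True       # defer persistence to flush()

    def flush(self):
        with self._lock:
            if self._dirty:
                self.saves += 1      # one save, however many writes occurred
                self._dirty = False
```

Batching many writes into one save per flush cycle trades a small persistence delay for far fewer disk writes under bursty updates.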
- **Memory plugin search rewritten** (`daita/plugins/memory/search.py`)

  Consolidated search logic with improved scoring. Removed legacy `keyword_search.py`, `storage.py`, and `text_utils.py` in favor of a unified `utils.py`.
### Fixed
- **Graph node/edge `updated_at` stored as `datetime` instead of ISO string**

  `update_node_properties()` was storing `updated_at` as an ISO string while the rest of the codebase expected `datetime` objects. It now consistently uses `datetime.now(timezone.utc)`.
- **Watch `condition` parameter removed from `WatchConfig`**

  The `condition` field was passed through to `WatchConfig` but never used by the watch loop (the source handles condition evaluation). Removed to avoid confusion.
- **`filter` and `threshold` mutual exclusion enforced**

  `agent.watch()` now raises `ValueError` if both `filter=` and `threshold=` are set, since they apply to different watch modes (streaming vs. polling).
- **`ImportError` fixes across catalog discoverers**

  All AWS discovery modules now correctly raise `ImportError` with install hints when optional dependencies (`boto3`, `opensearch-py`, etc.) are missing, following the project-wide lazy import pattern.
- **Stale watch tasks cleaned up before starting new watches**

  `_ensure_watches_started()` now prunes completed tasks from `_tasks` before launching new ones, preventing unbounded list growth in long-running agents.
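The pruning pattern, in isolation, looks roughly like this — a self-contained asyncio sketch, not the agent's actual `_ensure_watches_started()` code:

```python
import asyncio


async def demo():
    """Simulates one watch-startup cycle: finished tasks are pruned
    from the list before a new watch task is appended."""
    tasks = []

    async def finished_watch():
        return "done"                 # completes immediately

    async def live_watch():
        await asyncio.sleep(3600)     # stands in for a long-running watch

    tasks.append(asyncio.create_task(finished_watch()))
    await asyncio.sleep(0)            # let the finished task run to completion
    tasks[:] = [t for t in tasks if not t.done()]   # prune completed tasks
    tasks.append(asyncio.create_task(live_watch()))

    alive = len(tasks)                # only the live watch remains
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    return alive
```

Without the prune step, every startup cycle would append tasks while done ones linger, which is exactly the unbounded growth the fix removes.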