## [0.17.0] - 2026-05-01
This release adds Azure infrastructure discovery to the Catalog plugin, modernizes the LLM provider layer for newer model families and OpenAI-compatible APIs, and moves memory graph domain models into the Memory plugin while keeping shared graph persistence in core.
### Added
- **Azure Catalog Support** (`daita/plugins/catalog/azure.py`)

  The Catalog plugin can now discover and profile Azure data infrastructure. `AzureDiscoverer` uses the Azure SDK credential chain and supports explicit subscriptions, locations, tenant IDs, and service allowlists. Built-in Azure coverage includes:
  - Azure SQL
  - Azure Database for PostgreSQL Flexible Server
  - Azure Database for MySQL Flexible Server
  - Cosmos DB
  - Blob Storage containers
  - Azure Cache for Redis
  - Event Hubs
  - Service Bus queues and topics
  - API Management APIs
  Install with `pip install 'daita-agents[azure]'`. The `[cloud]` and `[all]` bundles include the Azure catalog dependencies.
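The scoping options above (explicit subscriptions, locations, and service allowlists) can be pictured with a small self-contained sketch. The `Resource` shape and `filter_resources` helper here are illustrative stand-ins, not daita's actual `AzureDiscoverer` API:

```python
from dataclasses import dataclass

# Hypothetical resource record for illustration only; the real
# discoverer's internal representation may differ.
@dataclass
class Resource:
    service: str        # e.g. "cosmosdb", "eventhub"
    subscription: str
    location: str

def filter_resources(resources, subscriptions=None, locations=None, services=None):
    """Keep only resources matching the explicit scoping options.

    A None filter means "no restriction", mirroring how optional
    allowlists commonly behave.
    """
    kept = []
    for r in resources:
        if subscriptions and r.subscription not in subscriptions:
            continue
        if locations and r.location not in locations:
            continue
        if services and r.service not in services:
            continue
        kept.append(r)
    return kept

found = [
    Resource("cosmosdb", "sub-a", "eastus"),
    Resource("eventhub", "sub-a", "westeurope"),
    Resource("azure_sql", "sub-b", "eastus"),
]
scoped = filter_resources(found, subscriptions=["sub-a"], services=["cosmosdb"])
```

With both filters applied, only the Cosmos DB resource in `sub-a` survives; leaving a filter as `None` would skip that check entirely.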
- **Azure Normalizers and Profilers**

  New per-service catalog normalizers and profilers produce unified `NormalizedSchema` outputs for Azure Blob, Cosmos DB, Event Hubs, Service Bus, and API Management. Catalog graph persistence now recognizes Azure entity types including `azure_blob`, `azure_apim`, `cosmosdb`, `eventhub`, `servicebus_queue`, and `servicebus_topic`.
- **OpenAI-Compatible Provider Helpers** (`daita/llm/openai_compatible.py`)

  Shared helpers now handle OpenAI-style message conversion, streamed tool-call accumulation, and safe tool-argument parsing. The OpenAI, Grok, and Ollama providers use the shared path for more consistent tool-calling and streaming behavior.
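Streamed tool-call accumulation and safe argument parsing can be sketched with a self-contained example. The fragment shape follows the OpenAI streaming convention (an index per call, with name and argument text arriving in pieces); the function names are illustrative, not the module's actual exports:

```python
import json

def accumulate_tool_calls(deltas):
    """Merge streamed tool-call fragments into complete calls,
    keyed by their stream index so interleaved calls stay separate."""
    calls = {}
    for d in deltas:
        slot = calls.setdefault(d["index"], {"name": "", "arguments": ""})
        if d.get("name"):
            slot["name"] = d["name"]
        slot["arguments"] += d.get("arguments", "")
    return [calls[i] for i in sorted(calls)]

def parse_arguments_safely(raw):
    """Parse tool arguments, falling back to an empty dict on bad or
    non-object JSON instead of raising mid-conversation."""
    try:
        parsed = json.loads(raw or "{}")
    except json.JSONDecodeError:
        return {}
    return parsed if isinstance(parsed, dict) else {}

# Argument JSON typically arrives split across chunks:
chunks = [
    {"index": 0, "name": "get_weather", "arguments": '{"ci'},
    {"index": 0, "arguments": 'ty": "Oslo"}'},
]
calls = accumulate_tool_calls(chunks)
args = parse_arguments_safely(calls[0]["arguments"])
```

The key point is that argument text is only parsed once the stream finishes, and malformed JSON degrades to `{}` rather than aborting the turn.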
- **New LLM Provider Options**

  OpenAI provider calls now support `reasoning_effort`, `service_tier`, `parallel_tool_calls`, `max_completion_tokens`, and `use_legacy_max_tokens`. Gemini provider calls now support `top_k`, `stop_sequences`, `response_mime_type`, `response_schema`, and `thinking_config`.
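A minimal sketch of how the `use_legacy_max_tokens` switch might work, assuming it chooses between the older `max_tokens` request field and the newer `max_completion_tokens` one (the helper name and exact mapping are assumptions, not daita's actual code):

```python
def build_openai_kwargs(max_tokens, reasoning_effort=None, service_tier=None,
                        parallel_tool_calls=None, use_legacy_max_tokens=False):
    """Assemble request keyword arguments, emitting only the options the
    caller actually set so defaults stay server-side."""
    kwargs = {}
    # Newer OpenAI models accept max_completion_tokens; the legacy flag
    # keeps the old field for models/endpoints that still require it.
    if use_legacy_max_tokens:
        kwargs["max_tokens"] = max_tokens
    else:
        kwargs["max_completion_tokens"] = max_tokens
    if reasoning_effort is not None:
        kwargs["reasoning_effort"] = reasoning_effort
    if service_tier is not None:
        kwargs["service_tier"] = service_tier
    if parallel_tool_calls is not None:
        kwargs["parallel_tool_calls"] = parallel_tool_calls
    return kwargs

new_style = build_openai_kwargs(1024, reasoning_effort="low")
old_style = build_openai_kwargs(1024, use_legacy_max_tokens=True)
```

Omitting unset options (rather than sending `None`) avoids clobbering provider-side defaults.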
### Changed
- **Default Model Refresh**

  The default agent and OpenAI provider model changed from `gpt-4` to `gpt-5.4-mini`. Provider defaults were also refreshed for Anthropic (`claude-haiku-4-5`), Gemini (`gemini-2.5-flash-lite`), and Grok (`grok-4.20`).
- **Memory Graph Split from Core Graph Vocabulary**

  Memory-specific graph semantics now live under `daita.plugins.memory`: `MemoryNodeType`, `MemoryEdgeType`, `MemoryGraphNode`, `MemoryGraphEdge`, `GraphBackend`, and `MemoryGraphStore`. The shared core graph package still owns reusable graph records, backend selection, persistence, locking, and traversal mechanics. Compatibility aliases for `MEMORY`, `ENTITY`, `MENTIONS`, `RELATED_TO`, and `SUPERSEDES` remain in core graph models for now.
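The alias arrangement can be sketched as follows. This is a hypothetical reconstruction of the pattern, not daita's actual definitions — only the names (`MemoryNodeType`, `MEMORY`, `MENTIONS`, etc.) come from the changelog:

```python
from enum import Enum

# Hypothetical: memory-specific vocabulary now owned by the Memory plugin.
class MemoryNodeType(str, Enum):
    MEMORY = "memory"
    ENTITY = "entity"

class MemoryEdgeType(str, Enum):
    MENTIONS = "mentions"
    RELATED_TO = "related_to"
    SUPERSEDES = "supersedes"

# Core graph models keep flat aliases so call sites written against the
# old core-level names keep working during the transition.
MEMORY = MemoryNodeType.MEMORY
ENTITY = MemoryNodeType.ENTITY
MENTIONS = MemoryEdgeType.MENTIONS
RELATED_TO = MemoryEdgeType.RELATED_TO
SUPERSEDES = MemoryEdgeType.SUPERSEDES
```

Keeping the aliases as re-exports (rather than copies) means comparisons like `node.type is MEMORY` stay valid whichever import path a caller uses.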
- **Catalog Auto-Persistence**

  `CatalogPlugin.discover_and_profile()` now persists profiled schemas automatically when `auto_persist=True`, matching direct discovery behavior and ensuring catalog graph nodes are created during full infrastructure scans.
- **LLM Usage Accounting**

  Token usage extraction and accumulation were centralized in `BaseLLMProvider._record_usage()`, with support for OpenAI, Anthropic, and Gemini usage object shapes.
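The three usage shapes differ only in field names, so centralizing them amounts to one normalization step. A self-contained sketch of the idea (`normalize_usage` is illustrative, not `_record_usage` itself; the field names follow the public provider APIs — OpenAI's `prompt_tokens`/`completion_tokens`, Anthropic's `input_tokens`/`output_tokens`, Gemini's `prompt_token_count`/`candidates_token_count`):

```python
def normalize_usage(usage):
    """Map a provider-specific usage dict onto one (input, output) pair."""
    for in_key, out_key in (
        ("prompt_tokens", "completion_tokens"),            # OpenAI
        ("input_tokens", "output_tokens"),                 # Anthropic
        ("prompt_token_count", "candidates_token_count"),  # Gemini
    ):
        if in_key in usage:
            return usage.get(in_key, 0), usage.get(out_key, 0)
    return 0, 0  # unknown shape: count nothing rather than guess

# Accumulation then works the same regardless of which provider answered.
totals = {"input": 0, "output": 0}
for usage in ({"prompt_tokens": 12, "completion_tokens": 30},
              {"input_tokens": 8, "output_tokens": 5}):
    i, o = normalize_usage(usage)
    totals["input"] += i
    totals["output"] += o
```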
### Fixed
- **Minimal Unit Test Compatibility**

  Catalog auto-persistence tests no longer require the optional `networkx`/`lineage` extra. The test now uses a small in-memory graph backend while preserving coverage of table, column, and flush behavior.
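A stand-in backend of the kind the test describes might look like the sketch below — just enough surface to record table and column nodes and observe a flush, with no `networkx` dependency. The class and method names are assumptions for illustration, not the actual test fixture:

```python
class InMemoryGraphBackend:
    """Tiny dict-based graph backend: enough to exercise catalog
    persistence logic without the optional networkx extra."""

    def __init__(self):
        self.nodes = {}
        self.edges = []
        self.flushed = False

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = attrs

    def add_edge(self, src, dst, **attrs):
        self.edges.append((src, dst, attrs))

    def flush(self):
        # Real backends would write to disk or a store here; the test
        # double only needs to record that persistence was triggered.
        self.flushed = True

backend = InMemoryGraphBackend()
backend.add_node("orders", kind="table")
backend.add_node("orders.id", kind="column")
backend.add_edge("orders", "orders.id", kind="has_column")
backend.flush()
```

Because the double records every call, assertions on table nodes, column nodes, and flush behavior all remain possible without graph-library machinery.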