MCP (Model Context Protocol)

The MCP plugin enables Daita agents to connect to any Model Context Protocol (MCP) server and autonomously use its tools via LLM function calling. MCP is Anthropic's open standard for connecting AI systems to external data sources and tools.

Overview

The MCP integration allows agents to discover and use tools from external MCP servers without manual configuration. When you attach an MCP server to an agent, the agent automatically:

  1. Connects to the MCP server via stdio transport
  2. Discovers all available tools from the server
  3. Converts MCP tools to the agent's unified tool format
  4. Routes tool calls to the appropriate MCP server
  5. Manages connection lifecycle and error handling
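Step 3, converting an MCP tool definition into the function-calling format an LLM expects, can be pictured with a small sketch. This is illustrative only (the field names daita uses internally are an assumption); what is standard is that MCP tools carry a name, a description, and a JSON Schema `inputSchema`:

```python
def mcp_tool_to_function_schema(tool: dict) -> dict:
    """Convert an MCP tool definition into an OpenAI-style function-calling
    schema. Illustrative sketch only, not daita's actual internal converter."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP's inputSchema is already JSON Schema, so it maps
            # directly onto the "parameters" field.
            "parameters": tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }

schema = mcp_tool_to_function_schema({
    "name": "read_file",
    "description": "Read file contents",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
})
```

Because both sides speak JSON Schema, the conversion is mostly a re-labeling exercise, which is what makes zero-configuration discovery practical.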


Key Features:

  • Zero-configuration tool discovery from MCP servers
  • Multiple simultaneous MCP server connections
  • Automatic tool registration in agent tool registry
  • Thread-safe concurrent tool execution
  • Built-in connection pooling and lifecycle management
  • Compatible with all official MCP servers

Quick Start

from daita import SubstrateAgent
from daita.plugins import mcp

# Agent with filesystem MCP server
agent = SubstrateAgent(
    name="file_analyzer",
    mcp=mcp.server(
        command="uvx",
        args=["mcp-server-filesystem", "/data"]
    )
)

await agent.start()

# Agent autonomously discovers and uses filesystem tools
result = await agent.run("Read report.csv and calculate totals")

MCP Server Configuration

Single Server

Connect to a single MCP server using the mcp.server() factory function:

from daita import SubstrateAgent
from daita.plugins import mcp

# Filesystem server
agent = SubstrateAgent(
    name="file_agent",
    mcp=mcp.server(
        command="uvx",
        args=["mcp-server-filesystem", "/data"]
    )
)

# GitHub server with authentication
agent = SubstrateAgent(
    name="github_agent",
    mcp=mcp.server(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-github"],
        env={"GITHUB_TOKEN": "ghp_your_token"}
    )
)

Multiple Servers

Connect to multiple MCP servers simultaneously:

from daita import SubstrateAgent
from daita.plugins import mcp
import os

agent = SubstrateAgent(
    name="multi_tool_agent",
    mcp=[
        # Filesystem access
        mcp.server(
            command="uvx",
            args=["mcp-server-filesystem", "/data"],
            name="filesystem"
        ),
        # GitHub integration
        mcp.server(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-github"],
            env={"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")},
            name="github"
        ),
        # Database access
        mcp.server(
            command="python",
            args=["-m", "mcp_server_postgres"],
            env={"DATABASE_URL": os.getenv("DATABASE_URL")},
            name="postgres"
        )
    ]
)

# Agent has access to tools from all three servers

Server Configuration Function

The mcp.server() function creates an MCP server configuration:

mcp.server(
    command: str,
    args: Optional[List[str]] = None,
    env: Optional[Dict[str, str]] = None,
    name: Optional[str] = None
) -> Dict[str, Any]

Parameters

Parameter | Type           | Required | Description
--------- | -------------- | -------- | -----------
command   | str            | Yes      | Command to run the MCP server (e.g., "uvx", "npx", "python")
args      | List[str]      | No       | Arguments for the command
env       | Dict[str, str] | No       | Environment variables for the server process
name      | str            | No       | Optional name for the server (for logging/debugging)

Returns

Server configuration dictionary used internally by the agent.
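Conceptually, mcp.server() is a thin factory that bundles its arguments into a plain dict. A stand-in with the same shape (the exact keys daita uses internally are an assumption) makes the behavior concrete:

```python
from typing import Any, Dict, List, Optional

def server(command: str,
           args: Optional[List[str]] = None,
           env: Optional[Dict[str, str]] = None,
           name: Optional[str] = None) -> Dict[str, Any]:
    """Stand-in for mcp.server(): package the parameters into a
    configuration dict. The key names here are an assumption, not
    daita's documented internal format."""
    return {
        "command": command,
        "args": args or [],
        "env": env or {},
        "name": name,
    }

config = server("uvx", args=["mcp-server-filesystem", "/data"], name="filesystem")
```

Because the return value is just data, configs can be built programmatically, stored in settings files, or passed around before any server process is started.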

Architecture

MCPServer Class

The MCPServer class manages connection to a single MCP server:

from daita.plugins.mcp import MCPServer

# Create server instance
server = MCPServer(
    command="uvx",
    args=["mcp-server-filesystem", "/data"],
    server_name="my_filesystem"
)

# Connect and discover tools
await server.connect()

# List available tools
tools = server.list_tools()
for tool in tools:
    print(f"{tool.name}: {tool.description}")

# Call a tool
result = await server.call_tool("read_file", {"path": "/data/report.csv"})

# Disconnect when done
await server.disconnect()

Server Properties

# Check connection status
if server.is_connected:
    print("Server is connected")

# Get tool names
tool_names = server.tool_names
print(f"Available tools: {', '.join(tool_names)}")

# Server info
print(server) # MCPServer(filesystem, connected, 5 tools)

Context Manager

Use servers as async context managers for automatic cleanup:

from daita.plugins.mcp import MCPServer

async with MCPServer(
    command="uvx",
    args=["mcp-server-filesystem", "/data"]
) as server:
    # Server automatically connected
    result = await server.call_tool("read_file", {"path": "/data/file.txt"})
# Server automatically disconnected on exit

MCPToolRegistry Class

The MCPToolRegistry manages multiple MCP servers and routes tool calls:

from daita.plugins.mcp import MCPServer, MCPToolRegistry

# Create registry
registry = MCPToolRegistry()

# Add servers
server1 = MCPServer(command="uvx", args=["mcp-server-filesystem", "/data"])
server2 = MCPServer(command="npx", args=["-y", "@modelcontextprotocol/server-github"])

await registry.add_server(server1)
await registry.add_server(server2)

# Get all tools from all servers
all_tools = registry.get_all_tools()
print(f"Total tools: {registry.tool_count} from {registry.server_count} servers")

# Call tool (registry routes to correct server)
result = await registry.call_tool("read_file", {"path": "/data/report.csv"})

# Cleanup all connections
await registry.disconnect_all()
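The routing behavior can be pictured as a mapping from tool name to owning server. The following toy registry is a simplified sketch of that idea, not the actual MCPToolRegistry implementation:

```python
import asyncio

class ToyRegistry:
    """Minimal sketch of tool-call routing across several servers."""

    def __init__(self):
        self._route = {}  # tool name -> server object

    def add_server(self, server, tool_names):
        for tool in tool_names:
            self._route[tool] = server  # later servers overwrite earlier ones

    async def call_tool(self, name, arguments):
        server = self._route[name]  # route to the server that owns this tool
        return await server.call_tool(name, arguments)

class FakeServer:
    """Stand-in for MCPServer, used only for this demonstration."""

    def __init__(self, label):
        self.label = label

    async def call_tool(self, name, arguments):
        return f"{self.label}:{name}"

async def demo():
    registry = ToyRegistry()
    registry.add_server(FakeServer("fs"), ["read_file", "write_file"])
    registry.add_server(FakeServer("gh"), ["create_issue"])
    # The caller never names a server; the registry resolves it.
    return await registry.call_tool("create_issue", {"title": "bug"})

result = asyncio.run(demo())
```

The key design point is that callers address tools by name only; which server actually executes the call is an internal routing decision.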

Agent Integration

Automatic Tool Discovery

When you attach MCP servers to an agent, tools are automatically discovered and registered:

from daita import SubstrateAgent
from daita.plugins import mcp

agent = SubstrateAgent(
    name="filesystem_agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Tools are discovered lazily on first run() call
result = await agent.run("Read the file data/report.csv")

# Check what tools are available
print(f"Available tools: {agent.tool_names}")

Manual Tool Execution

Call MCP tools directly for testing or custom tool integration:

from daita import SubstrateAgent
from daita.plugins import mcp

agent = SubstrateAgent(
    name="agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Manual tool call for testing
result = await agent.call_mcp_tool("read_file", {"path": "/data/config.json"})
print(result)

# Or use within custom tools
from daita.core.tools import tool

@tool
async def process_config_file(file_path: str) -> dict:
    """Process a configuration file."""
    # Call MCP tool from custom tool
    file_content = await agent.call_mcp_tool("read_file", {"path": file_path})
    return {"content": file_content, "processed": True}

agent.register_tool(process_config_file)
result = await agent.run("Process the config file at /data/config.json")

Tool Registry Access

Access MCP tools through the agent's unified tool registry:

agent = SubstrateAgent(
    name="agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Run to initialize tools
await agent.run("Initialize")

# Get all available tools
tools = agent.available_tools
for tool in tools:
    if tool.source == "mcp":
        print(f"MCP Tool: {tool.name} - {tool.description}")

# Execute tool through registry
result = await agent.call_tool("read_file", {"path": "/data/file.txt"})

Official MCP Servers

Daita works with all official MCP servers. Here are common examples:

Filesystem Server

Access local files and directories:

mcp.server(
    command="uvx",
    args=["mcp-server-filesystem", "/data"]
)

Available Tools:

  • read_file - Read file contents
  • write_file - Write to files
  • list_directory - List directory contents
  • search_files - Search for files

GitHub Server

Interact with GitHub repositories:

mcp.server(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-github"],
    env={"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")}
)

Available Tools:

  • create_issue - Create GitHub issues
  • get_repository - Get repository information
  • list_issues - List repository issues
  • create_pull_request - Create PRs

PostgreSQL Server

Query PostgreSQL databases:

mcp.server(
    command="python",
    args=["-m", "mcp_server_postgres"],
    env={"DATABASE_URL": "postgresql://user:pass@localhost/db"}
)

Available Tools:

  • execute_query - Run SQL queries
  • list_tables - List database tables
  • describe_table - Get table schema

Slack Server

Send messages and interact with Slack:

mcp.server(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-slack"],
    env={"SLACK_TOKEN": os.getenv("SLACK_TOKEN")}
)

Available Tools:

  • post_message - Send messages to channels
  • list_channels - List workspace channels
  • get_channel_history - Get message history

Custom MCP Servers

Creating a Custom Server

You can create custom MCP servers in Python:

# custom_mcp_server.py
import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import TextContent, Tool

server = Server("custom-tools")

@server.list_tools()
async def list_tools():
    return [
        Tool(
            name="calculate_sum",
            description="Add two numbers",
            inputSchema={
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"}
                },
                "required": ["a", "b"]
            }
        )
    ]

@server.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "calculate_sum":
        return [TextContent(type="text", text=str(arguments["a"] + arguments["b"]))]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # The low-level SDK serves over stdio streams
    async with stdio_server() as (read_stream, write_stream):
        await server.run(read_stream, write_stream,
                         server.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())

Using Custom Server

from daita import SubstrateAgent
from daita.plugins import mcp

agent = SubstrateAgent(
    name="custom_agent",
    mcp=mcp.server(
        command="python",
        args=["custom_mcp_server.py"]
    )
)

await agent.start()

# Use custom tools
result = await agent.run("Calculate the sum of 42 and 58")

Advanced Usage

Error Handling

MCP connections handle errors gracefully:

from daita import SubstrateAgent
from daita.plugins import mcp

try:
    agent = SubstrateAgent(
        name="agent",
        mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
    )

    await agent.start()
    result = await agent.run("Read /data/missing_file.txt")

except ConnectionError as e:
    print(f"MCP server connection failed: {e}")
except RuntimeError as e:
    print(f"Tool execution failed: {e}")

Connection Lifecycle

Control when MCP connections are established:

from daita import SubstrateAgent
from daita.plugins import mcp

# MCP servers connect lazily on first run() call
agent = SubstrateAgent(
    name="agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Force connection setup before processing
await agent._setup_mcp_tools()

# Check connection status
if agent.mcp_registry:
    print(f"Connected to {agent.mcp_registry.server_count} servers")
    print(f"{agent.mcp_registry.tool_count} tools available")

# Manual cleanup
await agent.stop() # Disconnects all MCP servers

Tool Name Conflicts

When multiple servers provide tools with the same name, the last registered server wins:

agent = SubstrateAgent(
    name="agent",
    mcp=[
        mcp.server(command="uvx", args=["server1"], name="server1"),
        mcp.server(command="uvx", args=["server2"], name="server2")
    ]
)

# If both servers expose a "read_file" tool, server2's version is used,
# and a warning is logged about the collision
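The last-registered-wins rule, with a logged warning on collision, can be sketched in a few lines (an illustrative model of the behavior described above, not daita's actual registry code):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("mcp.registry")

def register_tools(route: dict, server_name: str, tool_names: list) -> None:
    """Register tools for one server, warning when a name collides.

    Models the last-registered-wins behavior: a later server silently
    takes over a tool name, but the shadowing is logged.
    """
    for name in tool_names:
        if name in route:
            log.warning("tool %r from %r shadows the one from %r",
                        name, server_name, route[name])
        route[name] = server_name

route = {}
register_tools(route, "server1", ["read_file", "search"])
register_tools(route, "server2", ["read_file"])  # collision: server2 wins
```

If you depend on a specific server's version of a shared tool name, list that server last, or watch the logs for shadowing warnings.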

Concurrent Tool Execution

MCP tool calls are thread-safe and can run concurrently:

import asyncio
from daita import SubstrateAgent
from daita.plugins import mcp

agent = SubstrateAgent(
    name="agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Initialize tools (automatic on first run)
await agent.run("Initialize")

# Execute multiple tools concurrently
results = await asyncio.gather(
    agent.call_mcp_tool("read_file", {"path": "/data/file1.txt"}),
    agent.call_mcp_tool("read_file", {"path": "/data/file2.txt"}),
    agent.call_mcp_tool("read_file", {"path": "/data/file3.txt"})
)

Best Practices

Server Configuration

  1. Use environment variables for secrets: Never hardcode API keys or tokens
  2. Name your servers: Use the name parameter for easier debugging
  3. Test connections locally: Verify MCP servers work before using in production
  4. Handle connection failures: Wrap agent creation in try-except for connection errors

import os
from daita import SubstrateAgent
from daita.plugins import mcp

# Good: Environment variables and named servers
agent = SubstrateAgent(
    name="production_agent",
    mcp=[
        mcp.server(
            command="uvx",
            args=["mcp-server-filesystem", "/data"],
            name="filesystem"
        ),
        mcp.server(
            command="npx",
            args=["-y", "@modelcontextprotocol/server-github"],
            env={"GITHUB_TOKEN": os.getenv("GITHUB_TOKEN")},
            name="github"
        )
    ]
)

Performance

  1. Reuse agents: Create agents once and reuse them to avoid reconnection overhead
  2. Use connection pooling: The MCP plugin maintains persistent connections
  3. Cleanup properly: Always call agent.stop() or use context managers
  4. Monitor tool count: Check agent.mcp_registry.tool_count to verify setup

# Good: Reuse agent
agent = SubstrateAgent(
    name="agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"])
)

await agent.start()

# Process multiple requests with the same agent
for file_path in file_paths:
    result = await agent.run(f"Read {file_path}")

# Cleanup
await agent.stop()

Debugging

  1. Enable logging: Set log level to DEBUG to see MCP operations
  2. Use display_reasoning: Enable console output for agent decisions
  3. Check tool names: Verify tools are discovered correctly
  4. Test manual calls: Use call_mcp_tool() to test tool execution

import logging
from daita import SubstrateAgent
from daita.plugins import mcp

# Enable debug logging
logging.basicConfig(level=logging.DEBUG)

# Enable decision display
agent = SubstrateAgent(
    name="debug_agent",
    mcp=mcp.server(command="uvx", args=["mcp-server-filesystem", "/data"]),
    display_reasoning=True
)

await agent.start()

# Run to see debug output
await agent.run("List files in /data")

# Check discovered tools
print(f"Tools: {agent.tool_names}")

Integration with Plugins

MCP tools work alongside Daita's native plugins:

from daita import SubstrateAgent
from daita.plugins import mcp, PostgreSQLPlugin
from daita.core.tools import tool

# Combine MCP tools with native plugins
db_plugin = PostgreSQLPlugin(host="localhost", database="mydb")

agent = SubstrateAgent(
    name="hybrid_agent",
    tools=[db_plugin],  # Native plugin tools
    mcp=mcp.server(     # MCP tools
        command="uvx",
        args=["mcp-server-filesystem", "/data"]
    )
)

await agent.start()

# Agent autonomously uses both database plugin tools and MCP filesystem tools
result = await agent.run(
    "Query all users from the database and save the results to /data/users.json"
)

# Or create custom tools that combine both
@tool
async def hybrid_data_operation(user_count: int) -> dict:
    """Fetch users and save to file."""
    # Use native plugin
    db_results = await db_plugin.query(f"SELECT * FROM users LIMIT {user_count}")

    # Use MCP tool
    await agent.call_mcp_tool("write_file", {
        "path": "/data/users.json",
        "content": str(db_results)
    })

    return {"db_results": len(db_results), "file_written": True}

agent.register_tool(hybrid_data_operation)
result = await agent.run("Fetch 100 users and save them")

Requirements

Install the MCP SDK to use MCP servers:

pip install mcp

Official MCP SDK: https://github.com/modelcontextprotocol/python-sdk

Troubleshooting

Connection Timeout

If MCP server connection times out:

# Check if server process starts correctly
# Run server command manually first:
# $ uvx mcp-server-filesystem /data

# Verify server responds to stdio
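A cheap pre-flight check before starting the agent is to confirm the server executable is actually on PATH, since a mistyped or uninstalled command is a common cause of apparent "timeouts". This helper is illustrative and not part of the daita API:

```python
import shutil

def preflight_check(command: str) -> str:
    """Report whether an MCP server command is resolvable on PATH.

    Illustrative diagnostic helper (not part of the daita API): a
    missing executable fails fast here instead of surfacing later
    as a confusing connection timeout.
    """
    path = shutil.which(command)
    if path is None:
        return f"'{command}' not found on PATH -- install it or fix the spelling"
    return f"'{command}' found at {path}"

print(preflight_check("uvx"))
```

Run this against each configured `command` ("uvx", "npx", "python", ...) before debugging the stdio handshake itself.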

Tool Not Found

If tools aren't discovered:

# Check server actually provides tools
server = MCPServer(command="uvx", args=["mcp-server-filesystem", "/data"])
await server.connect()
print(f"Tools: {server.tool_names}")
await server.disconnect()

Import Errors

If you see "MCP SDK not installed":

pip install mcp
