Environment

Environment is the unified class for defining tools, connecting to services, and formatting them for any LLM provider.

from hud import Environment

env = Environment("my-env")

Constructor

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Environment name | "environment" |
| instructions | str \| None | Description/instructions | None |
| conflict_resolution | ConflictResolution | How to handle tool name conflicts | PREFIX |
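
A minimal constructor sketch using the parameters above (the instructions text is illustrative; conflict_resolution keeps its PREFIX default):
from hud import Environment

env = Environment(
    "my-env",
    instructions="Tools for a demo checkout workflow.",  # illustrative
)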

Context Manager

Environment must be used as an async context manager to connect:
async with env:
    tools = env.as_openai_chat_tools()
    result = await env.call_tool("my_tool", arg="value")

Defining Tools

@env.tool()

Register functions as callable tools:
@env.tool()
def count_letter(text: str, letter: str) -> int:
    """Count occurrences of a letter in text."""
    return text.lower().count(letter.lower())

import httpx

@env.tool()
async def fetch_data(url: str) -> dict:
    """Fetch JSON data from a URL."""
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        return response.json()
Tools are automatically documented from type hints and docstrings.
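
Registered tools can be called by name inside the context; see call_tool() below. For example, with count_letter from above:
async with env:
    result = await env.call_tool("count_letter", text="strawberry", letter="r")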

Scenarios

Scenarios define evaluation logic with two yields:
@env.scenario("checkout")
async def checkout_flow(product: str):
    # First yield: send prompt, receive answer
    answer = yield f"Add '{product}' to cart and checkout"
    
    # Second yield: return reward based on result
    order_exists = await check_order(product)
    yield 1.0 if order_exists else 0.0
Create Tasks from Scenarios:
task = env("checkout", product="laptop")

async with hud.eval(task) as ctx:
    await agent.run(ctx.prompt)
    await ctx.submit(agent.response)

Connectors

Connect to external services as tool sources.

connect_hub()

Connect to a deployed HUD environment:
env.connect_hub("browser", prefix="browser")
# Tools available as browser_navigate, browser_click, etc.
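
Connected hub tools are called like any other; a sketch assuming the browser hub's navigate tool takes a url argument:
async with env:
    # "browser_navigate" = prefix "browser" + hub tool "navigate"
    result = await env.call_tool("browser_navigate", url="https://example.com")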

connect_fastapi()

Import FastAPI routes as tools:
from fastapi import FastAPI

api = FastAPI()

@api.get("/users/{user_id}", operation_id="get_user")
def get_user(user_id: int):
    return {"id": user_id, "name": "Alice"}

env.connect_fastapi(api)
# Tool available as get_user

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| app | FastAPI | FastAPI application | Required |
| name | str \| None | Server name | app.title |
| prefix | str \| None | Tool name prefix | None |
| include_hidden | bool | Include routes with include_in_schema=False | True |
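
The prefix parameter behaves as in connect_hub; for example:
env.connect_fastapi(api, prefix="api")
# Tool available as api_get_user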

connect_openapi()

Import from OpenAPI spec:
env.connect_openapi("https://api.example.com/openapi.json")
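
A usage sketch, assuming tool names follow each operation's operationId as with connect_fastapi:
async with env:
    # Assumes the spec defines an operation with operationId "get_user"
    result = await env.call_tool("get_user", user_id=1)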

connect_server()

Mount an MCPServer or FastMCP directly:
from fastmcp import FastMCP

tools = FastMCP("tools")

@tools.tool
def greet(name: str) -> str:
    return f"Hello, {name}!"

env.connect_server(tools)

connect_mcp_config()

Connect via MCP config dict:
env.connect_mcp_config({
    "my-server": {
        "command": "uvx",
        "args": ["some-mcp-server"]
    }
})

connect_image()

Connect to a Docker image via stdio:
env.connect_image("mcp/fetch")
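
The image's tools are then available over stdio; a sketch assuming the mcp/fetch image exposes a fetch tool with a url argument:
async with env:
    result = await env.call_tool("fetch", url="https://example.com")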

Tool Formatting

Convert tools to provider-specific formats.

OpenAI

# Chat Completions API
tools = env.as_openai_chat_tools()
response = await client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools,
)

# Responses API
tools = env.as_openai_responses_tools()

# Agents SDK (requires openai-agents)
tools = env.as_openai_agent_tools()

Anthropic/Claude

tools = env.as_claude_tools()
response = await client.messages.create(
    model="claude-sonnet-4-5",
    messages=messages,
    tools=tools,
)
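
Tool use blocks from the response can be passed straight to call_tool() (see Calling Tools below), which returns a Claude-format tool result:
for block in response.content:
    if block.type == "tool_use":
        result = await env.call_tool(block)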

Gemini

tools = env.as_gemini_tools()
config = env.as_gemini_tool_config()
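
A usage sketch with the google-genai client, assuming the formatted tools and tool config slot into GenerateContentConfig:
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash",  # illustrative model name
    contents="Count the letter r in strawberry",
    config=types.GenerateContentConfig(tools=tools, tool_config=config),
)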

LangChain

# Requires langchain-core
tools = env.as_langchain_tools()
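
A usage sketch assuming the returned tools are standard LangChain tools that can be bound to a chat model:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o").bind_tools(tools)
response = await llm.ainvoke("Count the letter r in strawberry")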

LlamaIndex

# Requires llama-index-core
tools = env.as_llamaindex_tools()

Google ADK

# Requires google-adk
tools = env.as_adk_tools()

Calling Tools

call_tool()

Execute tools with auto-format detection:
# Simple call
result = await env.call_tool("my_tool", arg="value")

# From OpenAI tool call
result = await env.call_tool(response.choices[0].message.tool_calls[0])

# From Claude tool use
result = await env.call_tool(response.content[0])  # tool_use block
Returns result in matching format (OpenAI tool call → OpenAI tool message, etc.).
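
Because results come back in the caller's format, a provider-agnostic tool loop stays short. A sketch using the Chat Completions format, assuming client is an AsyncOpenAI instance:
messages = [{"role": "user", "content": "Count the letter r in strawberry"}]

async with env:
    while True:
        response = await client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=env.as_openai_chat_tools(),
        )
        message = response.choices[0].message
        if not message.tool_calls:
            break
        messages.append(message)
        for call in message.tool_calls:
            # Each result comes back as an OpenAI tool message, ready to append
            messages.append(await env.call_tool(call))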

Mock Mode

Test without real connections:
env.mock()  # Enable mock mode

# Set specific mock outputs
env.mock_tool("navigate", "Navigation successful")
env.mock_tool("screenshot", b"fake_image_data")

async with env:
    result = await env.call_tool("navigate", url="https://example.com")
    # Returns "Navigation successful" instead of actually navigating

env.unmock()  # Disable mock mode

| Method | Description |
| --- | --- |
| mock(enable=True) | Enable/disable mock mode |
| unmock() | Disable mock mode |
| mock_tool(name, output) | Set specific mock output |
| is_mock | Check if mock mode is enabled |

Serving as MCP Server

Environment can serve its tools over MCP protocols, either standalone or mounted on an existing server.

serve()

Start a standalone MCP server:
from hud import Environment

env = Environment("my-env")

@env.tool()
def greet(name: str) -> str:
    return f"Hello, {name}!"

# Run as MCP server (blocking)
env.serve()

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| transport | Literal["stdio", "sse", "streamable-http"] | Transport protocol | "streamable-http" |
| host | str | Host address to bind | "0.0.0.0" |
| port | int | Port to bind | 8000 |

# Serve over stdio (for CLI tools)
env.serve(transport="stdio")

# Serve over HTTP on custom port
env.serve(transport="streamable-http", host="0.0.0.0", port=8765)
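
Another Environment (or any MCP client) can then connect over HTTP; this sketch assumes the server exposes the default /mcp endpoint shown under http_app() below:
# In a separate process
client_env = Environment("client")
client_env.connect_url("http://localhost:8765/mcp")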

http_app()

Get a Starlette/ASGI app to mount on an existing FastAPI server:
from fastapi import FastAPI
from hud import Environment

app = FastAPI()
env = Environment("my-env")

@env.tool()
def my_tool(arg: str) -> str:
    return f"Got: {arg}"

# Mount the HUD environment's MCP endpoint at /mcp
app.mount("/mcp", env.http_app())

# Your other FastAPI routes work normally
@app.get("/health")
def health():
    return {"status": "ok"}

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| path | str \| None | Internal path for the MCP endpoint | "/" |
| transport | Literal["http", "streamable-http", "sse"] | Transport protocol | "http" |
| middleware | list[ASGIMiddleware] \| None | Starlette middleware | None |
| json_response | bool \| None | Use JSON response format | None |
| stateless_http | bool \| None | Use stateless HTTP mode | None |

MCP clients can then connect at http://your-server/mcp:
# Client connecting to mounted environment
env.connect_url("http://localhost:8000/mcp")

Properties

| Property | Type | Description |
| --- | --- | --- |
| name | str | Environment name |
| prompt | str \| None | Default prompt (set by scenarios or agent code) |
| is_connected | bool | True if in context |
| connections | dict[str, Connector] | Active connections |

Creating Tasks

Call the environment to create a Task:
# With scenario
task = env("checkout", product="laptop")

# Without scenario (just the environment)
task = env()
Then run with hud.eval():
async with hud.eval(task, variants={"model": ["gpt-4o"]}) as ctx:
    ...
