The @tool decorator converts Python functions into Ray tasks that execute in parallel across your cluster.
```python
from rayai import tool

@tool(desc="Analyze data with heavy computation", num_cpus=2, memory="4GB")
def analyze_data(dataset: str) -> dict:
    # This runs as a Ray task with 2 CPUs and 4GB memory
    return {"result": "analysis complete"}
```
When you call analyze_data(), RayAI:
  1. Submits the function as a Ray remote task
  2. Allocates the specified resources (CPUs, GPUs, memory)
  3. Waits for the result and returns it
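Conceptually, this is close to using Ray's task API directly. The sketch below illustrates those three steps in plain Ray; it is an assumption about what RayAI does under the hood, not its actual implementation. Note that `ray.remote` expresses memory in bytes rather than as a string:

```python
import ray

ray.init()

# Assumed mapping: @tool's num_cpus/memory onto ray.remote's
# resource options (step 2). Ray's memory option is in bytes.
@ray.remote(num_cpus=2, memory=4 * 1024**3)
def analyze_data(dataset: str) -> dict:
    return {"result": "analysis complete"}

ref = analyze_data.remote("dataset_1")  # step 1: submit as a Ray remote task
result = ray.get(ref)                   # step 3: block until the result is ready
```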

Parallel Execution

Multiple tool calls execute in parallel automatically:
```python
# These run concurrently on Ray
results = [
    analyze_data("dataset_1"),
    analyze_data("dataset_2"),
    analyze_data("dataset_3"),
]
```
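For comparison, a hand-written equivalent in plain Ray has to submit all tasks as futures before blocking on any result; this is the bookkeeping RayAI is said to handle for you. The sketch below assumes a directly decorated `ray.remote` function rather than RayAI's wrapper:

```python
import ray

@ray.remote(num_cpus=2)
def analyze_data(dataset: str) -> dict:
    return {"result": "analysis complete"}

# Submit all three tasks first, then block once for all results.
refs = [analyze_data.remote(f"dataset_{i}") for i in range(1, 4)]
results = ray.get(refs)  # the three tasks run concurrently on the cluster
```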

Resource Options

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| desc | str | Function docstring | Description for LLM schema generation |
| num_cpus | int | 1 | CPU cores per task |
| num_gpus | int | 0 | GPUs per task |
| memory | str | None | Memory limit (e.g., "4GB", "512MB") |
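
As an example, a GPU-backed tool might combine several of these options. The function name and resource values here are illustrative, not part of the RayAI API:

```python
from rayai import tool

@tool(
    desc="Fine-tune a model on a GPU worker",  # shown to the LLM in the tool schema
    num_cpus=4,
    num_gpus=1,
    memory="8GB",
)
def finetune(model_name: str) -> dict:
    # Runs on a worker with 4 CPUs, 1 GPU, and an 8GB memory limit
    return {"status": "done"}
```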