The `@tool` decorator converts Python functions into Ray tasks that execute in parallel across your cluster.
When the LLM calls `analyze_data()`, RayAI:

- Submits the function as a Ray remote task
- Allocates the specified resources (CPUs, GPUs, memory)
- Waits for the result and returns it
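A minimal pure-Python sketch of that lifecycle, assuming a hypothetical stand-in `tool` decorator (the real decorator dispatches to Ray remote tasks, not a local thread pool):

```python
import concurrent.futures
import functools

# Hypothetical sketch only: the real @tool decorator submits the function
# to Ray; this just illustrates the submit -> allocate -> wait flow.
_executor = concurrent.futures.ThreadPoolExecutor()

def tool(num_cpus=1, num_gpus=0, memory=None, desc=None):
    def wrap(fn):
        @functools.wraps(fn)
        def call(*args, **kwargs):
            # 1. Submit the function as a remote task
            future = _executor.submit(fn, *args, **kwargs)
            # 2. Resource allocation happens in the scheduler (elided here)
            # 3. Wait for the result and return it
            return future.result()
        # Description used for LLM schema generation (falls back to docstring)
        call.desc = desc or fn.__doc__
        return call
    return wrap

@tool(num_cpus=2, desc="Sum a list of numbers")
def analyze_data(values):
    """Sum a list of numbers."""
    return sum(values)

print(analyze_data([1, 2, 3]))  # → 6
```

The decorated function keeps its normal call syntax; the wrapper handles submission and blocking on the result.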
## Parallel Execution

Multiple tool calls execute in parallel automatically.

## Resource Options
| Option | Type | Default | Description |
|---|---|---|---|
| `desc` | `str` | Function docstring | Description for LLM schema generation |
| `num_cpus` | `int` | `1` | CPU cores per task |
| `num_gpus` | `int` | `0` | GPUs per task |
| `memory` | `str` | `None` | Memory limit (e.g., `"4GB"`, `"512MB"`) |
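Ray handles the actual scheduling of parallel tool calls; as a rough local analogy (a sketch, not the framework's implementation), independent calls overlap the same way a thread pool overlaps independent tasks:

```python
import concurrent.futures
import time

# Illustrative sketch: simulates three independent tool calls running
# concurrently, as RayAI does with Ray remote tasks.
def fetch_metric(name):
    time.sleep(0.2)  # stand-in for real work
    return f"{name}: ok"

start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(fetch_metric, ["cpu", "mem", "disk"]))
elapsed = time.perf_counter() - start

print(results)  # → ['cpu: ok', 'mem: ok', 'disk: ok']
assert elapsed < 0.5  # ~0.2s total rather than 0.6s: the calls overlapped
```

In RayAI, per-call resource needs would instead be declared through the decorator options in the table above (`num_cpus`, `num_gpus`, `memory`), and the cluster scheduler decides where each task runs.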