Tool Use
An agent's ability to invoke external functions, APIs, or systems to accomplish tasks. Tools extend agent capabilities beyond pure language processing to real-world actions.
Overview
Tool use transforms an LLM from a text generator into an agent that can take actions in the world. Without tools, an AI can only generate text. With tools, it can search the web, query databases, send emails, write files, and interact with any API.

The mechanism is simple: you describe the available tools to the LLM (name, description, parameters), and the model decides when to use them. Instead of generating a text response, it outputs a structured tool call. Your code executes the tool and feeds the result back to the model.

This pattern is powerful because it combines the reasoning capabilities of LLMs with the precision of traditional software. The AI decides what to do; your code ensures it's done correctly.
Key Concepts
Tool Definition
A schema describing what the tool does, what parameters it accepts, and what it returns. Good descriptions help the model use tools correctly.
Tool Call
The model's structured request to execute a tool with specific parameters. Usually JSON format.
Tool Result
The output from executing a tool, which gets fed back to the model for further reasoning.
Tool Selection
The model's decision about which tool to use (or whether to use any tool) based on the current task.
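Tool calls, results, and selection come together in a dispatch step on your side: the model names a tool and supplies JSON arguments, and your code looks the tool up and runs it. A minimal sketch, with hypothetical tool functions standing in for real implementations:

```python
import json

# Hypothetical local tools; names and behavior are illustrative only.
def get_weather(location: str) -> str:
    return f"Sunny in {location}"

def search_database(query: str, limit: int = 10) -> list:
    return [f"result for {query}"][:limit]

# A registry mapping tool names to implementations keeps dispatch simple
# and makes it easy to log or restrict which tools are available.
TOOL_REGISTRY = {
    "get_weather": get_weather,
    "search_database": search_database,
}

def execute_tool_call(name: str, arguments: str):
    """Look up the tool the model selected and run it with its JSON arguments."""
    if name not in TOOL_REGISTRY:
        return f"Error: unknown tool '{name}'"
    args = json.loads(arguments)  # models emit arguments as a JSON string
    return TOOL_REGISTRY[name](**args)

print(execute_tool_call("get_weather", '{"location": "Tokyo"}'))
# -> Sunny in Tokyo
```

Returning an error string for an unknown tool name (rather than raising) keeps the agent loop alive and gives the model something it can recover from.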
Code Examples
from openai import OpenAI
import json

client = OpenAI()

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., 'San Francisco'"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

# Call the model with tools available
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools
)

# Handle the tool call, if the model made one
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)
    result = get_weather(args["location"])  # Your function
    print(f"Weather result: {result}")

This shows the complete flow: define a tool schema, let the model decide to call it, execute the function, and use the result.
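The example above stops after executing the tool. To let the model reason over the result, you append the assistant's tool-call message and a `tool`-role message carrying the result, then call the API again. A sketch of the message construction, using simulated values in place of a live API response so it runs offline:

```python
import json

def build_followup_messages(messages, assistant_message, tool_call_id, result):
    """Append the assistant's tool call and its result so the model can
    reason over the output on the next API call."""
    messages = list(messages)           # don't mutate the caller's list
    messages.append(assistant_message)  # the message containing tool_calls
    messages.append({
        "role": "tool",                  # tool results use the "tool" role
        "tool_call_id": tool_call_id,    # must match the call being answered
        "content": json.dumps(result),
    })
    return messages

# Simulated values standing in for the API response above.
history = [{"role": "user", "content": "What's the weather in Tokyo?"}]
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_123",
        "type": "function",
        "function": {"name": "get_weather", "arguments": '{"location": "Tokyo"}'},
    }],
}

followup = build_followup_messages(history, assistant_msg, "call_123", {"temp_c": 21})
# A second client.chat.completions.create(model=..., messages=followup, tools=tools)
# call would then produce a natural-language answer that uses the result.
```

Skipping this step is a common failure: the model never sees what its tool call returned, so it can't answer the user's question.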
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=[
        {
            "name": "search_database",
            "description": "Search the product database",
            "input_schema": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "limit": {"type": "integer", "default": 10}
                },
                "required": ["query"]
            }
        }
    ],
    messages=[{"role": "user", "content": "Find laptops under $1000"}]
)

Different APIs have slightly different schemas, but the pattern is the same: describe tools, let the model call them.
Real-World Use Cases
1. Web search and browsing for up-to-date information
2. Database queries for customer service and analytics
3. File operations for code generation and document processing
4. API calls to external services (email, calendar, CRM)
5. System administration and DevOps automation
Practical Tips
- Write tool descriptions from the model's perspective: explain when and why to use each tool
- Use clear, distinct names that indicate the tool's purpose
- Include examples in descriptions for ambiguous parameters
- Validate all inputs before executing tools; never trust model outputs blindly
- Log all tool calls for debugging and monitoring
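The validation tip can be sketched with the standard library alone (the `jsonschema` package would also work). This hypothetical checker verifies model-emitted arguments against the tool's parameter schema before anything executes:

```python
import json

def validate_args(raw_arguments: str, schema: dict):
    """Minimal check of model-emitted arguments against a JSON-Schema-style
    parameter spec: valid JSON, required keys present, types match.
    Returns (True, parsed_args) or (False, reason)."""
    type_map = {"string": str, "integer": int, "number": (int, float),
                "boolean": bool, "array": list, "object": dict}
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as e:
        return False, f"arguments are not valid JSON: {e}"
    for key in schema.get("required", []):
        if key not in args:
            return False, f"missing required parameter '{key}'"
    for key, value in args.items():
        spec = schema.get("properties", {}).get(key)
        if spec is None:
            return False, f"unexpected parameter '{key}'"
        expected = type_map.get(spec.get("type"))
        if expected and not isinstance(value, expected):
            return False, f"parameter '{key}' should be {spec['type']}"
    return True, args

schema = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}
print(validate_args('{"location": "Tokyo"}', schema))  # accepted
print(validate_args('{"location": 42}', schema))       # rejected: wrong type
```

On failure, feed the reason string back to the model as the tool result; models will usually correct malformed arguments on the next turn.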
Common Mistakes to Avoid
- Vague tool descriptions that confuse the model about when to use them
- Not validating tool inputs before execution
- Forgetting to handle errors when tools fail
- Creating too many similar tools that confuse tool selection
- Not including tool results in the conversation for follow-up reasoning
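The error-handling mistake is worth a sketch. One common pattern, shown here with a hypothetical wrapper, is to catch tool failures and return a readable error string as the tool result instead of letting the exception kill the agent loop; the model can then retry or explain the failure:

```python
import json
import traceback

def safe_execute(tool_fn, raw_arguments: str) -> str:
    """Run a tool and always return a string the model can reason about,
    even when the tool raises. Crashing the loop hides the failure from
    the model; a concise error message lets it retry or recover."""
    try:
        args = json.loads(raw_arguments)
        result = tool_fn(**args)
        return json.dumps(result)
    except json.JSONDecodeError:
        return "Error: tool arguments were not valid JSON."
    except TypeError as e:
        return f"Error: bad arguments for tool: {e}"
    except Exception as e:
        # Log the full traceback for operators; keep the model's view short.
        traceback.print_exc()
        return f"Error: tool failed: {e}"

def flaky_search(query: str):
    raise TimeoutError("database unreachable")

print(safe_execute(flaky_search, '{"query": "laptops"}'))
# -> Error: tool failed: database unreachable
```

Keeping the model-facing message short while logging the full traceback separately serves both audiences: the model gets something actionable, and operators keep the detail needed for debugging.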