# Assignment: Tool Calling & Tavily Search Integration
## Assignment Metadata

| Field | Description |
|---|---|
| Assignment Name | Tool Calling & Tavily Search Integration |
| Course | LangGraph and Agentic AI |
| Project Name | |
| Estimated Time | 90 minutes |
| Framework | Python 3.10+, LangGraph, LangChain, Tavily API, OpenAI API |
## Learning Objectives

By completing this assignment, you will be able to:

- Understand tool/function-calling mechanics in LLMs
- Integrate the Tavily Search API for real-time web information
- Build agents with multiple tools working in parallel
- Implement error handling and retry logic for tool execution
- Apply caching and rate limiting for production readiness
## Problem Description

Building on the Multi-Expert Research Agent from Assignment 02, you will add a Web Search tool using the Tavily API. This enables the agent to:

- Access real-time information beyond the LLM's knowledge cutoff
- Combine expert analysis with current web data
- Execute multiple tools in parallel when needed
## Technical Requirements

### Environment Setup

- Python 3.10 or higher
- Required packages:
    - `langgraph >= 0.2.0`
    - `langchain >= 0.1.0`
    - `langchain-community >= 0.1.0`
    - `tavily-python >= 0.3.0`

### API Requirements

- OpenAI API key (`OPENAI_API_KEY`)
- Tavily API key (`TAVILY_API_KEY`); get a free key at tavily.com
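One way to supply both keys is through environment variables before launching your script or notebook; the values below are placeholders, not real keys:

```shell
# Replace the placeholders with your actual keys; never commit them to git
export OPENAI_API_KEY="sk-..."
export TAVILY_API_KEY="tvly-..."
```

On Windows, use `set` (cmd) or `$env:` (PowerShell) instead of `export`.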
## Tasks

### Task 1: Tavily Search Tool Setup (25 points)

Configure the TavilySearchResults tool with:

- `max_results: 5`
- `search_depth: "advanced"`
- `include_answer: True` (for an AI-generated summary)

Create a wrapper tool with a proper description:

- Clearly describe when to use it (current events, real-time data)
- Specify input-format expectations
- Handle search errors gracefully
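The graceful error handling could look like the sketch below. `safe_search` and the injected `search_fn` are illustrative names: the search callable is passed in so the wrapper can be exercised without a live Tavily key, while in the real tool `search_fn` would be your configured `TavilySearchResults` invocation.

```python
def safe_search(query: str, search_fn) -> str:
    """Run a web search, returning a readable error string instead of raising.

    search_fn: any callable mapping a query string to results (e.g. a
    configured Tavily search tool); injected here for testability.
    """
    if not query or not query.strip():
        return "Search error: empty query."
    try:
        results = search_fn(query)
    except Exception as exc:  # rate limits, timeouts, bad API keys, ...
        return f"Search error: {exc}"
    return str(results)
```

Returning an error *string* (rather than raising) lets the coordinator LLM see what went wrong and decide whether to retry or answer from its own knowledge.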
### Task 2: Multi-Tool Agent Integration (35 points)

Extend the existing Research Agent to include:

- AI Research Expert (from Assignment 02)
- Financial Analyst (from Assignment 02)
- Web Search Tool (the new Tavily integration)

Update the coordinator prompt to:

- Advise when to use web search vs. the expert LLMs
- Enable parallel tool calling for independent queries
- Guide synthesis of web results with expert analysis

Ensure parallel execution capability:

- The coordinator can call multiple tools in a single response
- The ToolNode executes all `tool_calls` and returns all ToolMessages
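The "execute all tool calls" step can be sketched in plain Python as a minimal stand-in for LangGraph's ToolNode; the dict shapes mirror the `{"name", "args", "id"}` structure of an `AIMessage`'s `tool_calls`, and `execute_tool_calls` is a hypothetical helper name:

```python
def execute_tool_calls(tool_calls, tools):
    """Run every tool_call from one AI message and return one tool
    message per call -- a minimal stand-in for LangGraph's ToolNode.

    tool_calls: list of {"name", "args", "id"} dicts.
    tools: registry mapping tool name -> callable.
    """
    messages = []
    for call in tool_calls:
        output = tools[call["name"]](**call["args"])
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],   # links the result back to its call
            "content": str(output),
        })
    return messages
```

The key property to verify in your agent is the one shown here: *every* tool call in the message gets executed and produces its own tool message, so the coordinator can request the expert and the web search in one turn.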
### Task 3: Error Handling & Optimization (25 points)

Implement retry logic for the Tavily API:

- Use `tenacity` for exponential backoff
- Handle rate limits and timeouts gracefully
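`tenacity` gives you this behavior as a decorator; the stdlib sketch below shows the same exponential-backoff idea so you can see what it does under the hood (`retry_with_backoff` is an illustrative name, not a tenacity API):

```python
import random
import time

def retry_with_backoff(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying failures with exponential backoff plus jitter.

    Roughly what tenacity's
    @retry(stop=stop_after_attempt(3), wait=wait_exponential()) provides.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            # wait base_delay * 1, 2, 4, ... seconds, plus a little jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

In your submission, prefer the `tenacity` decorator itself; this sketch is just the mental model.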
Add basic caching for search results:

- Cache identical queries for 1 hour
- Use hash-based cache keys
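A minimal in-memory version of this cache might look as follows; `SearchCache` is a hypothetical class name, the key is a SHA-256 hash of the normalized query, and the sketch deliberately omits eviction and thread safety:

```python
import hashlib
import time

class SearchCache:
    """Tiny in-memory TTL cache keyed by a hash of the query (sketch only)."""

    def __init__(self, ttl_seconds=3600):  # 1 hour, per the task spec
        self.ttl = ttl_seconds
        self._store = {}

    def _key(self, query: str) -> str:
        # normalize so "NVIDIA news" and " nvidia news " share an entry
        return hashlib.sha256(query.strip().lower().encode()).hexdigest()

    def get(self, query: str):
        entry = self._store.get(self._key(query))
        if entry is not None and time.time() - entry[1] < self.ttl:
            return entry[0]
        return None  # missing or expired

    def put(self, query: str, result) -> None:
        self._store[self._key(query)] = (result, time.time())
```

Check the cache before calling Tavily and `put` the result afterwards; identical queries within the hour then cost no API calls.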
Implement rate limiting:

- Max 10 searches per minute
- Queue overflow handling
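One common approach is a sliding-window limiter; the sketch below (hypothetical `RateLimiter` name) admits at most `max_calls` searches in any trailing `window` seconds, and a `False` return is the signal to queue or reject the request:

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: allow() is True only while fewer than
    max_calls have been admitted in the last `window` seconds."""

    def __init__(self, max_calls=10, window=60.0):
        self.max_calls = max_calls
        self.window = window
        self._times = deque()  # timestamps of admitted calls

    def allow(self) -> bool:
        now = time.time()
        # drop timestamps that have aged out of the window
        while self._times and now - self._times[0] > self.window:
            self._times.popleft()
        if len(self._times) < self.max_calls:
            self._times.append(now)
            return True
        return False  # over the limit: caller should queue or reject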
### Task 4: Testing & Validation (15 points)

Test queries that require web search:

- "What are the latest AI announcements this week?"
- "Current NVIDIA stock performance and recent news"

Test combined expert + web search:

- "Analyze the impact of recent AI regulations on tech companies"

Verify parallel tool execution:

- Show both expert and search tools called simultaneously
- Document an execution-time comparison (parallel vs. sequential)
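For the timing comparison, a small harness like this can demonstrate the parallel speedup; `slow_tool` is a stand-in for a real I/O-bound tool call, and the function names are illustrative:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_tool(delay: float) -> float:
    """Stand-in for a tool call that spends `delay` seconds waiting on I/O."""
    time.sleep(delay)
    return delay

def run_sequential(delays):
    """Call each tool one after another; total time ~= sum(delays)."""
    start = time.perf_counter()
    for d in delays:
        slow_tool(d)
    return time.perf_counter() - start

def run_parallel(delays):
    """Call the tools concurrently; total time ~= max(delays)."""
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        list(pool.map(slow_tool, delays))
    return time.perf_counter() - start
```

In your write-up, report both numbers for the same query set; for I/O-bound tool calls the parallel run should take roughly as long as the slowest single tool.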
## Submission Requirements

### Required Deliverables

- Source code as a Python script or Jupyter notebook
- `README.md` with setup instructions and API-key configuration
- Test output showing the web search integration
- Comparison of response quality with and without web search
### Submission Checklist

- Tavily tool configured with proper parameters
- All three tools work together (2 experts + web search)
- Parallel tool execution demonstrated
- Error handling implemented for API failures
- Code runs without errors
## Evaluation Criteria

| Criteria | Points |
|---|---|
| Tavily Search tool setup | 25 |
| Multi-tool integration | 35 |
| Error handling & optimization | 25 |
| Testing & validation | 10 |
| Code quality and documentation | 5 |
| **Total** | **100** |
## Hints

- Use `TavilySearchResults` from `langchain_community.tools.tavily_search`
- For parallel calls, the coordinator returns multiple `tool_calls` in one `AIMessage`
- Store the Tavily API key in an environment variable; never hardcode it
- Use `@retry(stop=stop_after_attempt(3))` from `tenacity` for retries