Reference: https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/
Multi-Expert Research Agent with ReAct Pattern#
This page covers the ReAct (Reasoning + Acting) pattern for building multi-expert research agents, including how to compose specialized LLM-based tools, use LangGraph’s prebuilt ToolNode, and apply advanced techniques such as reflection, planning, and multi-expert orchestration.
Learning Objectives#
Understand ReAct pattern (Reasoning + Acting)
Implement Research Agent with LLM-based expert tools
Use LangGraph’s prebuilt ToolNode
Apply advanced techniques: reflection, planning, multi-expert
Best practices for production-ready agents
Research Agent Patterns#
Overview#
Starting Point: Simple Research Agent#
Basic Research Agent: User → LLM → Web Search → Answer
Problems:
Single tool (web search) - limited expertise
No reasoning about quality
No planning for complex queries
No reflection/improvement
3 Core Improvements#
Multi-Expert Tools: Replace web search with specialized LLM experts
ReAct Pattern: Structured reasoning + acting loop
Advanced Techniques: Reflection, planning, iteration control
Agentic Workflows for Research#
Simple: Question → Search → Answer
ReAct: Question → [Think → Act → Observe]* → Answer
Advanced: Question → Plan → [Execute → Reflect]* → Synthesize
The 3 Key Patterns#
1. Tool Use (Multi-Expert)#
Instead of 1 web search tool → Multiple specialized LLM experts
2. ReAct (Reason + Act)#
Coordinator LLM thinks before acting, observes results
3. Reflection (Optional)#
Agent reviews own output quality, refines if needed
Why These Patterns Matter#
Higher accuracy
Expert knowledge > generic search
Structured reasoning > direct answer
Handle complex queries
Multi-step reasoning
Combine multiple perspectives
Quality control
Reflection catches errors
Iterative improvement
Pattern 1: Multi-Expert Tool Use#
Concept#
Specialized LLM Experts#
Instead of search tool → LLM with specialized knowledge:
AI Research Expert (papers, trends, ML)
Financial Analyst (stocks, markets, valuations)
Medical Expert, Legal Expert, etc.
Why LLM Tools > Web Search#
Consistent quality
Structured reasoning
Domain expertise
No errors introduced by low-quality search results (though experts can still hallucinate)
Tool as LLM Invoke#
```python
@tool
def ai_research_expert(query: str) -> str:
    """AI/ML research specialist"""
    messages = [
        SystemMessage(content=EXPERT_PROMPT),
        HumanMessage(content=query),
    ]
    return expert_llm.invoke(messages).content
```
Implementation with LangGraph#
State Definition#
```python
from typing import TypedDict, List, Annotated

from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages

class ResearchState(TypedDict):
    messages: Annotated[List[AnyMessage], add_messages]
    max_iterations: int
    current_iteration: int
```
Key points:
messages: Full conversation history
add_messages: Auto-merge reducer
Iteration tracking for loop control
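Conceptually, an `add_messages`-style reducer appends new messages and replaces existing ones that share an id. A library-free sketch of that idea (the `merge_messages` helper is a hypothetical stand-in for LangGraph's real reducer, which works on message objects rather than dicts):

```python
# Library-free sketch of an add_messages-style reducer.
# `merge_messages` is a hypothetical stand-in for LangGraph's add_messages.
def merge_messages(existing: list[dict], update: list[dict]) -> list[dict]:
    merged = list(existing)
    index = {m["id"]: i for i, m in enumerate(merged)}
    for msg in update:
        if msg["id"] in index:
            merged[index[msg["id"]]] = msg  # same id → replace in place
        else:
            index[msg["id"]] = len(merged)
            merged.append(msg)              # new id → append
    return merged

history = [{"id": "1", "content": "question"}]
history = merge_messages(history, [{"id": "2", "content": "tool call"}])
history = merge_messages(history, [{"id": "2", "content": "edited tool call"}])
```

Because each node returns only its new messages, the reducer is what keeps the full conversation history accumulating in state.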
Expert LLM Setup#
```python
from langchain_openai import ChatOpenAI

# Expert 1: AI Research
ai_research_llm = ChatOpenAI(model="gpt-4", temperature=0.3)
AI_RESEARCH_PROMPT = """You are an AI Research Expert.
Specialize in: ML papers, architectures, trends, research methods.
Provide academic-style responses with paper references."""

# Expert 2: Financial Analysis
financial_llm = ChatOpenAI(model="gpt-4", temperature=0.3)
FINANCIAL_PROMPT = """You are a Financial Analyst.
Specialize in: stocks, markets, valuations, investment strategy.
Provide data-driven insights with market context."""
```
Tool Definitions#
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool

@tool
def ai_research_expert(query: str) -> str:
    """Consult AI Research Expert for ML/AI questions"""
    messages = [
        SystemMessage(content=AI_RESEARCH_PROMPT),
        HumanMessage(content=query),
    ]
    return ai_research_llm.invoke(messages).content

@tool
def financial_analyst(query: str) -> str:
    """Consult Financial Analyst for market/investment questions"""
    messages = [
        SystemMessage(content=FINANCIAL_PROMPT),
        HumanMessage(content=query),
    ]
    return financial_llm.invoke(messages).content
```
Coordinator Setup#
```python
coordinator_llm = ChatOpenAI(model="gpt-4", temperature=0)
coordinator_with_tools = coordinator_llm.bind_tools([
    ai_research_expert,
    financial_analyst,
])
```
Graph Construction#
```python
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode

# Prebuilt ToolNode - auto handles tool execution
tools = [ai_research_expert, financial_analyst]
tool_node = ToolNode(tools)

workflow = StateGraph(ResearchState)
workflow.add_node("coordinator", coordinator_node)  # defined below
workflow.add_node("tools", tool_node)
workflow.set_entry_point("coordinator")
workflow.add_conditional_edges(
    "coordinator",
    should_continue,  # routing function, defined below
    {"continue": "tools", "end": END},
)
workflow.add_edge("tools", "coordinator")
app = workflow.compile()
```
Benefits#
Expert Knowledge Quality#
Specialized prompts > generic search
Consistent reasoning
Domain-specific insights
Flexible Tool Composition#
Mix different experts per query
Easy to add new experts
Coordinator auto-routes
Better than Web Search for:#
Analysis questions (not just facts)
Synthesis from multiple domains
Consistent quality (no bad search results)
Limitations#
Higher Cost#
Multiple LLM calls per query
Each expert = full LLM invoke
Latency#
Sequential expert consultations
Longer than single search
Knowledge Cutoff#
Experts don’t know post-training events
Still need web search for current news
Pattern 2: ReAct (Reason + Act)#
Concept#
Think (Reasoning)#
Coordinator LLM explicitly reasons:
“This question needs AI expertise”
“I should consult financial analyst”
“I have enough info now”
Act (Tool Use)#
Based on reasoning → call appropriate tools:
```python
AIMessage(
    content="I need AI research expertise",
    tool_calls=[{"name": "ai_research_expert", ...}]
)
```
Observe (Tool Results)#
Receive and analyze expert responses:
```python
ToolMessage(
    content="Expert says: transformers evolved to...",
    name="ai_research_expert"
)
```
Repeat#
Loop until sufficient information or max iterations
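The Think → Act → Observe loop above can be sketched without any framework. In this illustrative, library-free version, a scripted `decide` function stands in for the coordinator LLM and plain functions stand in for the expert tools (all names are assumptions for the sketch):

```python
# Library-free ReAct loop sketch. `decide` stands in for the coordinator
# LLM and EXPERTS for the LLM-backed tools (illustrative names only).
EXPERTS = {
    "ai_research_expert": lambda q: f"[AI expert] notes on: {q}",
    "financial_analyst": lambda q: f"[Finance expert] view on: {q}",
}

def decide(observations: list[str]) -> dict:
    # Think: scripted stand-in for the coordinator's reasoning
    if not observations:
        return {"act": "ai_research_expert", "query": "transformer trends"}
    if len(observations) == 1:
        return {"act": "financial_analyst", "query": "AI chip demand"}
    return {"answer": "Synthesis: " + " | ".join(observations)}

def react_loop(max_iterations: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_iterations):               # cap prevents infinite loops
        step = decide(observations)               # Think
        if "answer" in step:
            return step["answer"]                 # final answer → stop
        tool = EXPERTS[step["act"]]               # Act
        observations.append(tool(step["query"]))  # Observe
    return "Stopped at max iterations"

result = react_loop()
```

The real agent follows the same shape, with the coordinator's tool_calls playing the role of `step["act"]` and ToolMessages playing the role of `observations`.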
ReAct Strategies#
Coordinator Prompt#
```python
COORDINATOR_PROMPT = """You are a Research Coordinator using ReAct.

Process:
1. THINK: Analyze user question
2. ACT: Decide which expert(s) to consult
3. OBSERVE: Review expert responses
4. REPEAT if needed OR give final answer

Tools:
- ai_research_expert: ML/AI questions
- financial_analyst: Market/investment questions

Guidelines:
- Use tools when you need expert knowledge
- Can consult multiple experts
- Synthesize inputs into coherent answer
- Max {max_iterations} iterations

Current: iteration {iteration}/{max_iterations}
"""
```
Routing Logic#
```python
def should_continue(state: ResearchState) -> str:
    last_message = state["messages"][-1]
    # Max iterations reached? → Force end, even if more tool calls were requested
    if state["current_iteration"] >= state["max_iterations"]:
        return "end"
    # Has tool calls? → Execute them
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "continue"
    # Plain answer with no tool calls → End naturally
    return "end"
```

Note the branch order: the iteration cap is checked first so it can actually force an end, and the return value `"continue"` matches the key used in the graph's conditional-edge mapping.
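The routing behavior is easy to verify without LangGraph. Here the same logic is restated over plain dicts, with the max-iteration check first so the cap actually forces an end (`SimpleNamespace` objects stand in for real AIMessages; this is a sketch, not the library's API):

```python
from types import SimpleNamespace

# Dict-based restatement of the routing logic for offline testing.
def should_continue(state: dict) -> str:
    last_message = state["messages"][-1]
    if state["current_iteration"] >= state["max_iterations"]:
        return "end"                          # cap reached → force end
    if getattr(last_message, "tool_calls", None):
        return "continue"                     # pending tool calls → execute
    return "end"                              # plain answer → done

# Stub messages: one that requests tools, one that is a final answer.
wants_tools = SimpleNamespace(tool_calls=[{"name": "ai_research_expert"}])
plain_answer = SimpleNamespace(tool_calls=[])
```

With these stubs, a message carrying tool calls routes to `"continue"` below the cap and to `"end"` once the cap is hit.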
Workflow in Messages#
Iteration 1: Think → Act#
```python
# User input
HumanMessage(content="What are transformer breakthroughs?")

# Coordinator thinks & acts
AIMessage(
    content="I need AI research expertise for this ML question",
    tool_calls=[{
        "name": "ai_research_expert",
        "args": {"query": "latest transformer architecture breakthroughs 2024"}
    }]
)
```
Iteration 1: Observe#
```python
# Tool executes → ToolMessage
ToolMessage(
    content="Recent advances: Mamba (state-space), RetNet (retention)...",
    name="ai_research_expert",
    tool_call_id="call_123"
)
```
Iteration 2: Synthesize or Continue#
Option A: Sufficient info

```python
AIMessage(content="Based on expert analysis, key breakthroughs are...")
# → END
```

Option B: Need more info

```python
AIMessage(
    content="Need financial perspective on AI chip demand",
    tool_calls=[{"name": "financial_analyst", ...}]
)
# → Continue loop
```
Implementation Details#
Coordinator Node#
```python
def coordinator_node(state: ResearchState) -> dict:
    messages = state["messages"]
    # Add system prompt (with iteration placeholders filled) on first iteration
    if state["current_iteration"] == 0:
        system_msg = SystemMessage(content=COORDINATOR_PROMPT.format(
            iteration=state["current_iteration"],
            max_iterations=state["max_iterations"],
        ))
        messages = [system_msg] + messages
    # Coordinator decides: tool use or final answer
    response = coordinator_with_tools.invoke(messages)
    return {
        "messages": [response],
        "current_iteration": state["current_iteration"] + 1,
    }
```
Prebuilt ToolNode (NOT custom!)#
```python
from langgraph.prebuilt import ToolNode

# ✨ Prebuilt handles everything:
# - Parse tool_calls from AIMessage
# - Execute tools
# - Create ToolMessages
# - Error handling
tool_node = ToolNode(tools)
```
Why NOT custom tool execution:
Prebuilt is battle-tested
Handles edge cases (errors, malformed calls)
Cleaner code
Standard LangGraph pattern
Practice: ReAct Research Agent#
Use Case: Complex Multi-Step Research#
```python
question = "How are AI companies valued in current market?"

# ReAct flow:
# 1. Think: Need both AI trends + financial analysis
# 2. Act: Call ai_research_expert
# 3. Observe: Get AI landscape info
# 4. Think: Now need valuation metrics
# 5. Act: Call financial_analyst
# 6. Observe: Get market multiples
# 7. Think: Have both perspectives
# 8. Final: Synthesize comprehensive answer
```
Complete Implementation#
See full code in artifact multi_expert_research
Key files:
State definition
Expert LLM setup
Tool definitions
Coordinator + ToolNode
Graph compilation
Test cases
Testing Multi-Step Reasoning#
```python
# Question requiring iteration
run_research("""
Compare NVIDIA vs AMD for AI workloads.
Consider both technical capabilities and market position.
""")

# Expected flow:
# Iteration 1: ai_research_expert (technical specs)
# Iteration 2: financial_analyst (market analysis)
# Iteration 3: Synthesize comparison
```
Combining Multi-Expert + ReAct#
Expert Consultation Strategy#
Sequential Consultation#
```python
# Step 1: Get AI perspective
tool_calls=[{"name": "ai_research_expert", ...}]

# Step 2: Get financial perspective
tool_calls=[{"name": "financial_analyst", ...}]

# Step 3: Synthesize both
```
Parallel Consultation (Advanced)#
```python
# Call both experts simultaneously
tool_calls=[
    {"name": "ai_research_expert", ...},
    {"name": "financial_analyst", ...}
]
# ToolNode executes all, returns all ToolMessages
```
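The fan-out idea behind parallel consultation can be sketched without the framework: given one list of tool calls, dispatch each to its tool concurrently and collect one result per call. In this sketch the expert functions are stubs standing in for the LLM-backed tools (all names illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub experts standing in for the LLM-backed tools (illustrative).
def ai_research_expert(query: str) -> str:
    return f"AI view: {query}"

def financial_analyst(query: str) -> str:
    return f"Finance view: {query}"

TOOLS = {"ai_research_expert": ai_research_expert,
         "financial_analyst": financial_analyst}

# One AIMessage's worth of tool calls, fanned out concurrently.
tool_calls = [
    {"name": "ai_research_expert", "args": {"query": "NVIDIA architecture"}},
    {"name": "financial_analyst", "args": {"query": "NVIDIA valuation"}},
]

with ThreadPoolExecutor() as pool:
    # One result ("ToolMessage") per requested call, order preserved.
    results = list(pool.map(
        lambda call: TOOLS[call["name"]](**call["args"]), tool_calls))
```

Since each expert call is network-bound, running them concurrently cuts latency roughly to that of the slowest expert instead of the sum of all of them.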
Synthesis Patterns#
Simple Concatenation#
```python
final_answer = f"""
AI Perspective: {ai_expert_response}
Financial Perspective: {financial_response}

Conclusion: ...
"""
```
Structured Synthesis#
```python
coordinator_prompt = """
Synthesize expert inputs into coherent answer.

Expert 1 (AI Research): {ai_response}
Expert 2 (Financial): {fin_response}

Provide: unified analysis addressing all aspects.
"""
```
Advanced Techniques#
Dynamic Expert Selection#
LLM-based Routing#
```python
routing_prompt = """
Given question: {question}

Available experts:
- ai_research_expert: ML/AI topics
- financial_analyst: Markets/investments

Which expert(s) should answer? Return JSON list.
"""
```
Error Handling#
Tool Execution Failures#
```python
# Fragment from a custom tool wrapper; `messages`, `tool_name`,
# and `call_id` are assumed to be in scope.
try:
    result = expert_llm.invoke(messages)
except Exception as e:
    return ToolMessage(
        content=f"Expert unavailable: {str(e)}. Using fallback.",
        name=tool_name,
        tool_call_id=call_id,
    )
```
Invalid Tool Calls#
```python
# ToolNode handles this automatically,
# but you can add custom handling:
def safe_tool_node(state):
    try:
        return tool_node.invoke(state)
    except Exception as e:
        return {
            "messages": [AIMessage(content=f"Tool error: {e}. Proceeding without tool result.")]
        }
```
Recovery Strategies#
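One simple recovery strategy is retrying a flaky expert call with exponential backoff before giving up and degrading gracefully. A library-free sketch (the `flaky_expert` stub simulates transient failures such as rate limits; helper names are illustrative):

```python
import time

def call_with_retry(fn, query: str, attempts: int = 3,
                    base_delay: float = 0.01) -> str:
    """Retry fn(query), doubling the delay after each failure."""
    for attempt in range(attempts):
        try:
            return fn(query)
        except Exception:
            if attempt == attempts - 1:
                # Degrade gracefully instead of crashing the whole run.
                return "Expert unavailable after retries; proceeding without it."
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff

# Stub expert that fails twice, then succeeds (simulated transient errors).
calls = {"n": 0}
def flaky_expert(query: str) -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return f"Expert answer to: {query}"

answer = call_with_retry(flaky_expert, "transformer trends")
```

The same wrapper can sit inside a custom tool or around `tool_node.invoke`, so a transient expert failure costs a retry rather than the whole research run.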
Implementation Best Practices#
Prompt Engineering#
Clear Expert Roles#
```python
EXPERT_PROMPT = """You are [ROLE].

Specialization:
- [Domain 1]
- [Domain 2]

Response style:
- [Guideline 1]
- [Guideline 2]

Always cite sources when possible.
"""
```
Few-shot Examples#
```python
COORDINATOR_PROMPT = """
Example 1:
Question: "Explain transformers"
Thought: Need AI research expertise
Action: call ai_research_expert("transformer architecture")

Example 2:
Question: "Invest in NVDA?"
Thought: Need both AI trends and financial analysis
Action: call ai_research_expert + financial_analyst

Now handle: {question}
"""
```
Output Format Specification#
"""
Provide answer in this format:
## Summary
[Brief overview]
## AI Research Perspective
[Expert 1 insights]
## Financial Analysis
[Expert 2 insights]
## Recommendation
[Synthesized conclusion]
"""
Use Cases#
Multi-Expert Tool Pattern#
Technical + Business Analysis#
Question: “Should we adopt Kubernetes?”
Experts: DevOps expert + Cost analyst
Medical Diagnosis Support#
Question: “Patient symptoms analysis”
Experts: Specialist doctors (cardiology, neurology)
Legal + Financial Advisory#
Question: “M&A implications”
Experts: Legal counsel + Investment banker
ReAct Pattern#
Research Tasks#
Multi-step information gathering
Iterative refinement of understanding
Complex Decision Making#
Evaluate multiple options
Gather perspectives from different domains
Data Analysis#
Query data → Analyze → Refine query → Re-analyze
Trade-offs#
Benefits#
Higher Quality#
Expert knowledge > generic responses
Structured reasoning > direct answer
Multi-perspective synthesis
Handles Complexity#
Multi-step problems
Cross-domain questions
Nuanced analysis
Transparent Reasoning#
See coordinator’s thought process
Understand which experts consulted
Debug by inspecting messages
Costs#
More LLM Calls#
Each expert = separate LLM invoke
Coordinator also uses LLM
Can be 3-5x cost vs single call
Higher Latency#
Sequential expert consultations
Multiple iterations
Network round-trips
Token Usage#
Full conversation history in each call
System prompts for each expert
Can hit context limits on long conversations
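A common mitigation for growing context is trimming old turns before each call while always keeping the system prompt. A library-free sketch using message count as a crude proxy for a token budget (the helper and dict shape are illustrative; LangChain ships its own trimming utilities):

```python
# Keep the system prompt plus only the most recent turns.
# Message count stands in crudely for a real token budget.
def trim_history(messages: list[dict], keep_last: int = 4) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-keep_last:]

history = [{"role": "system", "content": "You are a coordinator"}] + [
    {"role": "user", "content": f"turn {i}"} for i in range(10)
]
trimmed = trim_history(history)
```

Trimming trades away old observations for headroom, so it suits long multi-iteration runs where early tool results are already synthesized into later messages.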
When to Use#
Use Multi-Expert ReAct when:
Question requires domain expertise
Need multiple perspectives
Quality > Speed/Cost
Analysis > Simple facts
Use Simple Agent when:
Straightforward questions
Single domain
Speed/Cost critical
Recent facts (use web search)
Summary#
Key Takeaways#
Multi-Expert > Web Search for analysis questions
ReAct Pattern provides structured reasoning
Prebuilt ToolNode simplifies implementation
Message History is source of truth
Iteration Control prevents infinite loops
Practice: Multi-Expert Research Agent#
Practice: Multi-Expert ReAct Agent with Messages#
Use Case#
Agent automatically researches by consulting expert LLMs across multiple iterations:
User asks a question (HumanMessage)
Coordinator LLM reasons (AIMessage with tool_calls)
Expert LLM responds (ToolMessage from ai_research_expert)
Coordinator reviews the expert response
Coordinator loops (more tool calls) or returns the final answer
Architecture Pattern: ReAct (Reasoning + Acting)#
```
User Question (HumanMessage)
        ↓
Coordinator LLM (thinks & decides)
        ↓
  ├─→ AI Research Expert LLM (tool)
        ↓
Coordinator observes & reviews
        ↓
Loop (call tool again) OR go to END
```
Key Components#
1. State Management#
2. LLM-based Tools (not web search!)#
ai_research_expert(query: str) → calls the specialized AI Research LLM
financial_analyst(query: str) → calls the specialized Financial LLM
Each tool is one LLM invoke with a specialized system prompt
3. Coordinator Node#
Main reasoning LLM
Bind with tools: coordinator_llm.bind_tools([tool1, tool2])
Decides: which expert to call, or enough info to answer?
4. Tool Execution#
Use the prebuilt ToolNode instead of a custom implementation
ToolNode(tools) automatically handles tool execution and creates ToolMessages
Routing Logic#
```python
def should_continue(state: ResearchState) -> str:
    last_message = state["messages"][-1]
    # Max iterations reached? → End
    if state["current_iteration"] >= state["max_iterations"]:
        return "end"
    # Has tool calls? → Execute
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return "continue"
    # Has final answer → End
    return "end"
```
Best Practices#
System Prompts: Set clear specialized roles for each expert LLM
Tool Naming: Clear descriptive names (ai_research_expert, not tool1)
Max Iterations: Set a limit (3-5) to avoid infinite loops
Use ToolNode: Don’t write custom tool execution - use prebuilt
State Simplicity: Only track essentials (messages, iterations)
Next Steps#
Add More Experts#
```python
@tool
def legal_expert(query: str) -> str: ...

@tool
def medical_expert(query: str) -> str: ...
```
Add Helper Agent (Split out task from Coordinator)#
With the current ReAct agent, the Coordinator handles three tasks:
Analyze the user's message
Call actions (tools)
Review results and produce the final answer
This can hurt quality: a single agent/node should not juggle more than two responsibilities. Two splits help:
Introduce a Planning Agent that analyzes the user's message and hands the Coordinator an action plan with reasoning to follow.
Introduce a Synthesizer Agent that generates the final answer once the Coordinator decides to end.
Production Readiness#
Add comprehensive error handling
Implement rate limiting
Add caching for expensive calls
Monitor costs and performance
A/B test vs simpler approaches
Resources#
LangGraph ReAct: https://langchain-ai.github.io/langgraph/how-tos/react-agent-from-scratch/
Prebuilt Components: https://langchain-ai.github.io/langgraph/reference/prebuilt/
Message Types: https://python.langchain.com/docs/concepts/messages/
Andrew Ng Agentic Patterns: https://www.youtube.com/watch?v=e2zIr_2JMbE