LangGraph Tutorial: GPT-Researcher vs DeerFlow Architecture Compared
A deep architectural comparison of two leading LangGraph-based multi-agent research frameworks — nested parallel graphs vs flat dispatcher patterns.
Agentic Workflow Showdown: GPT-Researcher vs DeerFlow
Building AI agents that can research, reason, and produce quality reports is one of the hottest challenges in 2026. Two frameworks have emerged as leaders: GPT-Researcher and DeerFlow (by ByteDance). Both use LangGraph for orchestration, but their architectural approaches couldn't be more different.
I spent a week diving deep into both codebases. Here's what I found.
Table of Contents
- TL;DR
- The LangGraph Foundation
- GPT-Researcher: The Parallel Machine
- DeerFlow: The Collaborative Researcher
- Head-to-Head Comparison
- When to Use Which
- Architectural Lessons
TL;DR
| Aspect | GPT-Researcher | DeerFlow |
|--------|----------------|----------|
| Pattern | Nested parallel graphs | Flat dispatcher |
| Speed | ~3 min (parallel) | Variable (sequential) |
| Human Loop | Minimal | Deep integration |
| Best For | Fast autonomous research | Collaborative exploration |
The LangGraph Foundation
Both frameworks build on LangGraph, LangChain's extension for stateful multi-agent applications. The core concepts:
```mermaid
graph TB
    subgraph LangGraph Core
        SG[StateGraph] -->|defines| S[State TypedDict]
        SG -->|contains| N[Nodes - Python Functions]
        SG -->|connects via| E[Edges - Static/Conditional]
        SG -->|produces| C[Compiled Executable Graph]
    end
```
But how each framework uses these primitives reveals fundamentally different philosophies.
GPT-Researcher: The Parallel Machine
Architecture: Graph-Within-Graph
GPT-Researcher implements a nested graph pattern — a main graph spawns sub-graphs for parallel section research:
```mermaid
flowchart LR
    subgraph Main["MAIN GRAPH (ResearchState)"]
        B[Browser] --> P[Planner]
        P --> R[Researcher]
        R --> W[Writer]
        W --> E((END))
        R --> S1[Sub-Graph\nSection 1]
        R --> S2[Sub-Graph\nSection 2]
        R --> SN[Sub-Graph\nSection N]
    end
    subgraph Sub["SUB-GRAPH (DraftState)"]
        RS[Research] --> RV[Reviewer]
        RV -->|approved| END2((END))
        RV -->|revise| REV[Reviser]
        REV --> RV
    end
```
Each sub-graph has its own DraftState with a review-revise loop:
```python
# Sub-graph with conditional edges
sub_workflow.add_conditional_edges(
    "reviewer",
    lambda state: "accept" if state["review"] == "approved" else "revise",
    {"accept": END, "revise": "reviser"},
)
sub_workflow.add_edge("reviser", "reviewer")  # Loop back
```
Why This Works
- Isolation — Each section gets its own state, preventing race conditions
- Parallelism — All sub-graphs run concurrently
- Quality Control — Built-in review cycle catches issues early
- Speed — ~3 minutes for a full research report
The Agent Lineup
| Agent | Role | Superpower |
|-------|------|------------|
| GPT-Researcher | Deep research | Web scraping, source aggregation |
| Editor | Planning | Outline structure |
| Reviewer | QA | Validates against guidelines |
| Reviser | Refinement | Incorporates feedback |
| Writer | Compilation | Introduction, conclusion |
| Publisher | Export | PDF, Docx, Markdown |
DeerFlow: The Collaborative Researcher
Architecture: Flat Graph with Dispatcher
DeerFlow takes the opposite approach — a single flat graph with a central dispatcher routing to specialists:
```mermaid
flowchart TB
    subgraph DeerFlow["DEERFLOW WORKFLOW"]
        CO[Coordinator] <-->|multi-turn| CL[Clarification Loop]
        CO --> PL[Planner]
        PL --> HF[Human Feedback]
        HF -->|interrupt| RD[Research Dispatcher]
        RD --> RS[Researcher]
        RD --> AN[Analyst]
        RD --> CD[Coder]
        RS --> RP[Reporter]
        AN --> RP
        CD --> RP
        RP --> END((END))
    end
```
The Command Pattern
DeerFlow uses LangGraph's Command objects instead of conditional edges:
```python
from langgraph.graph import END
from langgraph.types import Command

def coordinator_node(state: State) -> Command:
    if needs_clarification(state):
        return Command(
            update={"clarification_round": state["clarification_round"] + 1},
            goto="coordinator",  # Loop back
        )
    elif is_research_request(state):
        return Command(
            update={"research_topic": extract_topic(state)},
            goto="planner",
        )
    else:
        return Command(goto=END)
```
This gives explicit, readable control flow versus implicit conditional edge logic.
Human-in-the-Loop Done Right
DeerFlow's killer feature is deep human integration:
```mermaid
sequenceDiagram
    participant U as User
    participant C as Coordinator
    participant P as Planner
    participant R as Research Team
    U->>C: Research request
    C->>U: Clarification question
    U->>C: Additional context
    C->>P: Create plan
    P->>U: Review plan?
    U->>P: [EDIT_PLAN] Add section on X
    P->>P: Revise plan
    P->>U: Review updated plan?
    U->>P: [ACCEPTED]
    P->>R: Execute research
    R->>U: Final report
```
- Clarification Loop — "What specifically about X do you want to know?"
- Plan Approval — Human reviews the research plan before execution
- Plan Editing — Natural language edits: "Add a section on Y"
- Tool Oversight — Approval for sensitive operations
```python
from langgraph.graph import END
from langgraph.types import Command, interrupt

def human_feedback_node(state: State) -> Command:
    feedback = interrupt({
        "plan": state["current_plan"],
        "message": "Please review the research plan",
    })
    if feedback.startswith("[ACCEPTED]"):
        return Command(goto="research_team")
    elif feedback.startswith("[EDIT_PLAN]"):
        return Command(goto="planner")
    else:
        # Fallback (not shown in DeerFlow's docs): treat anything else as a stop
        return Command(goto=END)
```
The Agent Lineup
| Agent | Role | Tools |
|-------|------|-------|
| Coordinator | Entry/routing | Intent classification |
| Planner | Task decomposition | None (pure LLM) |
| Researcher | Information gathering | web_search, crawl, RAG |
| Analyst | Reasoning/synthesis | None (pure LLM) |
| Coder | Data processing | python_repl |
| Reporter | Report generation | None (pure LLM) |
Head-to-Head Comparison
State Management
```python
from typing import Annotated, List, Optional, TypedDict
from langgraph.graph import MessagesState
from langgraph.graph.message import add_messages

# GPT-Researcher: dual state, isolated sub-graphs
class ResearchState(TypedDict):  # Main graph
    task: dict
    sections: List[str]
    report: str

class DraftState(TypedDict):  # Sub-graph (per section)
    topic: str
    draft: dict
    review: str

# DeerFlow: single unified state
class State(MessagesState):
    messages: Annotated[list, add_messages]  # Automatic history
    research_topic: str
    current_plan: Optional[Plan]  # Plan is DeerFlow's plan model
    observations: List[str]
    final_report: str
```
GPT-Researcher's dual state enables parallelism but adds complexity. DeerFlow's unified state is simpler, but since every node reads and writes the same object, each node must take care not to clobber fields that downstream nodes still depend on.
Parallelism vs Sequential
| Aspect | GPT-Researcher | DeerFlow |
|--------|----------------|----------|
| Approach | Multiple sub-graphs | Single dispatcher |
| Concurrent Agents | Yes | No |
| Race Conditions | Prevented by isolation | N/A (sequential) |
| Time to Complete | ~3 min | Variable (human dependent) |
LangGraph Feature Usage
| Feature | GPT-Researcher | DeerFlow |
|---------|----------------|----------|
| StateGraph | Yes | Yes |
| Nested Graphs | Yes | No |
| Conditional Edges | Yes (primary) | Yes (secondary) |
| Command Objects | No | Yes (primary) |
| MessagesState | No | Yes |
| Interrupt (HITL) | No | Yes |
| Checkpointing | No | Yes |
When to Use Which
```mermaid
flowchart TD
    Q1{Need human oversight?}
    Q1 -->|Yes| DF[DeerFlow]
    Q1 -->|No| Q2{Speed critical?}
    Q2 -->|Yes| GPT[GPT-Researcher]
    Q2 -->|No| Q3{Need code execution?}
    Q3 -->|Yes| DF
    Q3 -->|No| Q4{Budget constrained?}
    Q4 -->|Yes| GPT
    Q4 -->|No| BOTH[Either works!]
```
Choose GPT-Researcher when:
- Speed matters (~3 min vs variable)
- Cost is a concern (~$0.10/research)
- You want autonomous execution
- You need parallel processing
- Simple integration is preferred
Choose DeerFlow when:
- Research requires human guidance
- Multi-turn clarification is needed
- Code execution is required
- You have existing RAG infrastructure
- You need TTS/PPT generation
- Extensive customization matters
Architectural Lessons
Building these systems taught me several principles:
- State isolation enables parallelism — GPT-Researcher's sub-graph pattern is elegant
- Commands > conditional edges for complex routing — DeerFlow's explicit goto is more readable
- Human-in-the-loop is a feature, not a fallback — DeerFlow's interrupt points add real value
- MessagesState simplifies conversation tracking — automatic history management is worth adopting
What's Next
Both frameworks continue to evolve. GPT-Researcher just added support for local LLMs, while DeerFlow is expanding its MCP protocol integration. The agentic research space is heating up.
For our projects at BlestLabs, we're experimenting with a hybrid approach — using GPT-Researcher's parallel pattern for bulk research, with DeerFlow-style human checkpoints for quality-critical sections.
Sources
- GPT-Researcher Documentation
- GPT-Researcher LangGraph Blog
- DeerFlow System Architecture
- LangGraph Documentation
Have questions or want to discuss multi-agent architectures? Reach out on Twitter @aceism_ or check out our other research posts.
Related Posts
- AI Agents vs ChatGPT Wrappers: How to Build Products That Actually Work
- How to Build a Multi-Agent AI System: Mac + Raspberry Pi
- How to Run Multiple AI Bots in Discord
About the Author
BlestLabs builds AI-powered tools and agents. We run multiple AI agents 24/7 including Alfred (Mac Mini) and Pip (Raspberry Pi) for research, automation, and monitoring. Follow our journey on Twitter.