ReAct prompting is a framework for large language model agents that interleaves natural-language reasoning steps with tool actions and observations in a single scratchpad. By alternating Thought → Action → Observation, the model plans, executes API/tool calls, inspects feedback, and revises, enabling grounded multi-step problem solving and controllable tool use.
What is ReAct Prompting?
ReAct combines chain-of-thought-style reasoning with explicit tool actions. Prompts define a schema such as “Thought: … Action: <tool>(args) Observation: …” that repeats until a final answer is produced. This structure reduces hallucinations by forcing the model to consult evidence and justify its steps, while giving developers a transparent trace for debugging and safety checks. It pairs well with function calling (deterministic tool invocation) and retrieval, and can be steered with constraints (allowed tools, budget, max steps). Compared with pure CoT, ReAct closes the loop with observable outcomes.
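The loop above can be sketched in a few lines of Python. This is a minimal, illustrative skeleton: the `search` tool, the scripted model replies, and the `llm` stub are assumptions standing in for a real model and real tools, not any specific library's API.

```python
import re

# Hypothetical tool registry: in practice these would call real APIs.
TOOLS = {
    "search": lambda q: f"Top result for '{q}': ReAct interleaves reasoning and acting.",
}

# Scripted stand-in for an LLM; a real agent would call a model here.
SCRIPT = iter([
    "Thought: I should look this up.\nAction: search(ReAct prompting)",
    "Thought: The snippet answers the question.\nFinal Answer: ReAct interleaves reasoning and acting.",
])

def llm(scratchpad: str) -> str:
    return next(SCRIPT)

def react_loop(question: str, max_steps: int = 5) -> str:
    scratchpad = f"Question: {question}"
    for _ in range(max_steps):
        reply = llm(scratchpad)
        scratchpad += "\n" + reply
        if "Final Answer:" in reply:                      # terminate on final answer
            return reply.split("Final Answer:", 1)[1].strip()
        m = re.search(r"Action:\s*(\w+)\((.*)\)", reply)  # parse Action: tool(args)
        if m:
            tool, arg = m.group(1), m.group(2)
            scratchpad += f"\nObservation: {TOOLS[tool](arg)}"
    return "No answer within step budget."

answer = react_loop("What is ReAct?")
print(answer)
```

The `max_steps` budget and the `TOOLS` dict are where the constraints mentioned above (allowed tools, step limits) are enforced.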
Why it matters and where it’s used
ReAct improves reliability and auditability for assistants that must browse, query databases, run code, or operate business workflows. Teams use it to plan granular steps, gate high-risk tools, and capture execution traces for review and analytics. It underpins web-browsing agents, RAG copilots with follow-up retrieval, code/SQL agents, and enterprise automations.
Examples
- Web Q&A: Thought → search(query) → Observation(snippets) → Thought → open(url) → Observation → Final answer with citations.
- Math/code: Thought → run_python(code) → Observation(results) → Thought → fix bug → Observation → Final.
- CRM action: Thought → create_ticket(args) → Observation(ticket_id) → Thought → notify(user) → Final summary.
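Gating high-risk tools in traces like the CRM example can be sketched with an allow-list around dispatch. The `create_ticket` and `notify` stubs and the `tool(args)` string format are assumptions for illustration, not a real CRM integration.

```python
ALLOWED = {"create_ticket", "notify"}  # allow-list: anything else is denied

def create_ticket(args: str) -> str:
    return "TICKET-42"  # stub: a real CRM call would go here

def notify(args: str) -> str:
    return f"notified {args}"  # stub notification

REGISTRY = {"create_ticket": create_ticket, "notify": notify}

def dispatch(action: str) -> str:
    """Parse an Action string like 'tool(args)' and run it only if allow-listed."""
    name, _, rest = action.partition("(")
    args = rest.rstrip(")")
    if name not in ALLOWED:
        return f"Observation: tool '{name}' is not permitted"
    return f"Observation: {REGISTRY[name](args)}"

print(dispatch("create_ticket(priority=high)"))  # Observation: TICKET-42
print(dispatch("drop_table(users)"))             # denied by the allow-list
```

The returned Observation string goes straight back into the scratchpad, so the model sees denials as feedback and can replan.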
FAQs
- How is ReAct different from chain-of-thought? CoT reveals reasoning but doesn’t act. ReAct alternates reasoning with tool calls and feedback to ground steps.
- How does it relate to function calling? Function calling supplies schemas and validation; ReAct is the prompting pattern that decides when/what to call.
- Does ReAct prevent prompt injection? No. Treat tool outputs and retrieved text as untrusted; enforce least-privilege scopes, schemas, allow/deny lists, and reviews.
- How do you evaluate ReAct agents? Track success rate, step count, tool call accuracy, citation faithfulness, and safety violations; use curated task suites.
- When is ReAct overkill? For simple single-turn tasks without tools; use direct prompting or CoT.
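The evaluation metrics named in the FAQ can be aggregated from logged traces. The trace dict layout below (success, steps, tool_calls, valid_tool_calls, safety_violations fields) is an assumed logging format, not a standard schema.

```python
# Assumed per-episode trace records from an agent run log.
traces = [
    {"success": True,  "steps": 4, "tool_calls": 3, "valid_tool_calls": 3, "safety_violations": 0},
    {"success": False, "steps": 8, "tool_calls": 6, "valid_tool_calls": 4, "safety_violations": 1},
]

def summarize(traces: list[dict]) -> dict:
    """Roll traces up into the headline metrics: success rate, step count,
    tool-call accuracy, and total safety violations."""
    n = len(traces)
    calls = sum(t["tool_calls"] for t in traces)
    return {
        "success_rate": sum(t["success"] for t in traces) / n,
        "avg_steps": sum(t["steps"] for t in traces) / n,
        "tool_call_accuracy": sum(t["valid_tool_calls"] for t in traces) / calls,
        "safety_violations": sum(t["safety_violations"] for t in traces),
    }

report = summarize(traces)
print(report)
```

Citation faithfulness is harder to score mechanically and usually needs a curated task suite with reference answers, as noted above.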
