REACT: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS https://arxiv.org/pdf/2210.03629.pdf
- While large language models (LLMs) have demonstrated impressive performance across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics. In this paper, we explore the use of LLMs to generate both reasoning traces and task-specific actions in an interleaved manner, allowing for greater synergy between the two: reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with and gather additional information from external sources such as knowledge bases or environments. We apply our approach, named ReAct, to a diverse set of language and decision making tasks and demonstrate its effectiveness over state-of-the-art baselines in addition to improved human interpretability and trustworthiness. Concretely, on question answering (HotpotQA) and fact verification (Fever), ReAct overcomes prevalent issues of hallucination and error propagation in chain-of-thought reasoning by interacting with a simple Wikipedia API, and generating human-like task-solving trajectories that are more interpretable than baselines without reasoning traces.
- Furthermore, on two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted with only one or two in-context examples.
An improvement over CoT (Chain of Thought) prompting: instead of reasoning alone, ReAct interleaves reasoning traces with actions and observations (see the sketch below).
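The core pattern is a Thought → Action → Observation loop. The following is a minimal Python sketch of that loop, not the paper's actual prompts or code: the names call_llm and wikipedia_search, the prompt wording, and the Search/Finish action format are all assumptions made for illustration.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder: return the model's next Thought/Action block (wire up your LLM client here)."""
    raise NotImplementedError

def wikipedia_search(query: str) -> str:
    """Placeholder: return a short text snippet for the query (e.g. via a Wikipedia API wrapper)."""
    raise NotImplementedError

REACT_PROMPT = """Answer the question by interleaving Thought, Action, and Observation steps.
Actions: Search[query] looks up a topic; Finish[answer] returns the final answer.

Question: {question}
"""

def react(question: str, max_steps: int = 6) -> str:
    transcript = REACT_PROMPT.format(question=question)
    for _ in range(max_steps):
        step = call_llm(transcript)                  # model emits "Thought: ... Action: ..."
        transcript += step + "\n"
        action = re.search(r"Action:\s*(\w+)\[(.*?)\]", step)
        if not action:
            continue                                  # no parsable action; let the model try again
        name, arg = action.group(1), action.group(2)
        if name == "Finish":
            return arg                                # the model decided it has the answer
        if name == "Search":
            observation = wikipedia_search(arg)       # ground the next thought in retrieved text
            transcript += f"Observation: {observation}\n"
    return "no answer within step budget"
```

The point of the interleaving is that each retrieved observation is appended to the transcript before the next thought, so reasoning stays grounded in external information rather than drifting into hallucination.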