Highlights:
- New AI paradigm ReCode bridges the gap between high-level planning and low-level action.
- Developed by a research team led by Zhaoyang Yu and collaborators across multiple institutions.
- Introduces recursive code generation to enable dynamic control across decision granularities.
- Outperforms existing baselines in inference performance and shows strong data efficiency during training.
TLDR:
ReCode introduces a unified framework for AI decision-making that merges planning and action through recursive code generation, dramatically improving adaptability, generalization, and efficiency in large language model agents.
A groundbreaking study titled **“ReCode: Unify Plan and Action for Universal Granularity Control”** has been released by a team of computer scientists including [Zhaoyang Yu](https://arxiv.org/search/cs?searchtype=author&query=Yu,+Z), [Jiayi Zhang](https://arxiv.org/search/cs?searchtype=author&query=Zhang,+J), [Huixue Su](https://arxiv.org/search/cs?searchtype=author&query=Su,+H), [Yufan Zhao](https://arxiv.org/search/cs?searchtype=author&query=Zhao,+Y), [Yifan Wu](https://arxiv.org/search/cs?searchtype=author&query=Wu,+Y), [Mingyi Deng](https://arxiv.org/search/cs?searchtype=author&query=Deng,+M), [Jinyu Xiang](https://arxiv.org/search/cs?searchtype=author&query=Xiang,+J), [Yizhang Lin](https://arxiv.org/search/cs?searchtype=author&query=Lin,+Y), [Lingxiao Tang](https://arxiv.org/search/cs?searchtype=author&query=Tang,+L), [Yingchao Li](https://arxiv.org/search/cs?searchtype=author&query=Li,+Y), [Yuyu Luo](https://arxiv.org/search/cs?searchtype=author&query=Luo,+Y), [Bang Liu](https://arxiv.org/search/cs?searchtype=author&query=Liu,+B), and [Chenglin Wu](https://arxiv.org/search/cs?searchtype=author&query=Wu,+C). The research proposes **ReCode (Recursive Code Generation)**, a novel artificial intelligence paradigm that merges planning and action within a single computational structure. The approach addresses a long-standing limitation of large language model (LLM)-based agents: their inability to dynamically switch between high-level strategy and low-level execution.
At the core of ReCode is the insight that planning is not distinct from action but rather a higher-level abstraction of it. Instead of separating planning modules from action modules, ReCode represents both within the same code framework. High-level plans are defined as abstract placeholder functions, which the model automatically expands into finer-grained subfunctions through recursion. This recursive decomposition continues until the most primitive actions are reached, effectively allowing the model to operate at any desired granularity. The result is a system capable of seamlessly reasoning and executing across varying layers of complexity—similar to how humans can both strategize and act with fluidity.
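To make the idea concrete, here is a minimal, hypothetical Python sketch of that recursion: a high-level plan is just an abstract placeholder function, and an expansion step (standing in for the model's code generation) rewrites each placeholder into finer-grained calls until only primitive actions remain. The function names and decomposition table below are illustrative assumptions, not code from the ReCode repository.

```python
# Minimal, hypothetical sketch of recursive code generation; names and the
# decomposition table are illustrative assumptions, not the ReCode codebase.

PRIMITIVES = {"move_to", "grasp", "release"}  # assumed atomic action set


def generate_subcalls(placeholder: str) -> list[str]:
    """Stand-in for the model call that rewrites one abstract placeholder
    function into finer-grained sub-calls (hard-coded here for demonstration)."""
    table = {
        "tidy_desk": ["clear_items", "wipe_surface"],
        "clear_items": ["move_to", "grasp", "move_to", "release"],
        "wipe_surface": ["grasp", "move_to", "release"],
    }
    return table.get(placeholder, [])


def execute_primitive(action: str) -> None:
    print(f"executing primitive: {action}")


def expand(call: str, depth: int = 0) -> None:
    """Recursively decompose a call until primitive actions are reached."""
    if call in PRIMITIVES:               # base case: low-level action
        execute_primitive(call)
        return
    for sub in generate_subcalls(call):  # recursive case: refine the plan
        expand(sub, depth + 1)


expand("tidy_desk")  # one entry point covers every decision granularity
```

Because the same `expand` routine handles both the top-level plan and its lowest-level steps, there is no separate planner and executor; granularity is simply the depth of the recursion.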
Technically, this unified code representation improves not only adaptive decision-making but also data efficiency. Because the recursive approach inherently generates hierarchical and multi-granularity training data, it enhances the model’s ability to learn structured decision patterns. Extensive benchmarks show that ReCode significantly outperforms traditional planning-action separation frameworks in inference success rate, robustness, and performance scalability. Its open-source implementation, available on [GitHub](https://github.com/FoundationAgents/ReCode), provides researchers and developers a flexible foundation to experiment with recursive decision mechanisms across autonomous systems, robotics, and LLM agents. The study underscores a shift toward **universal granularity control**, where AI systems can dynamically select the appropriate level of reasoning for any real-world task.
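The data-efficiency argument follows from the structure of those recursive expansions: every non-leaf step of an expansion trace pairs a coarse plan with its finer decomposition, so a single solved task naturally yields supervision at several levels of abstraction. The sketch below is a hypothetical illustration of that idea, not the released training pipeline; the class and function names are invented for this example.

```python
# Hypothetical sketch of turning a recursive expansion trace into
# multi-granularity training pairs; not the released ReCode pipeline.
from dataclasses import dataclass, field


@dataclass
class TraceNode:
    call: str
    children: list["TraceNode"] = field(default_factory=list)


def collect_pairs(node: TraceNode) -> list[tuple[str, list[str]]]:
    """Every non-leaf node contributes one (plan, decomposition) example,
    so one task supervises several levels of abstraction at once."""
    pairs = []
    if node.children:
        pairs.append((node.call, [c.call for c in node.children]))
        for child in node.children:
            pairs.extend(collect_pairs(child))
    return pairs


# Example trace: one high-level plan expanded two levels deep.
trace = TraceNode("tidy_desk", [
    TraceNode("clear_items", [TraceNode("grasp"), TraceNode("release")]),
    TraceNode("wipe_surface", [TraceNode("move_to"), TraceNode("grasp")]),
])

for plan, decomposition in collect_pairs(trace):
    print(plan, "->", decomposition)
```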
This development marks a critical step in advancing general-purpose intelligence—by treating planning as an extension of action, ReCode allows AI to mimic the natural adaptability shown in human cognition. The research team’s contribution widens the horizon for applications in adaptive task execution, dynamic reasoning, and cognitive simulation within the rapidly evolving landscape of artificial intelligence.
Source:
[arXiv:2510.23564v1 (cs.AI)](https://arxiv.org/abs/2510.23564)
