I'd like to quickly summarize what is different about our approach and why it matters.
Our work was inspired by brilliant research done at MIT CSAIL on "Recursive Language Models" (RLMs). One of the controversies has been whether these models are just a formalization of what agents like Claude Code already do, or whether they bring genuinely new capabilities to the table.
By outperforming Claude on the major long-context benchmark, we provide a strong signal that something fundamentally new is happening. (In other words, it's not "just Claude Code" because it demonstrably outperforms Claude Code in the long-context regime.)
Where our contribution, LCM, differs from RLMs is in how we handle recursion. RLMs use "symbolic recursion" -- i.e., they have an LLM write a script that calls itself recursively in order to manipulate the context, which is stored in a REPL. This provides maximum flexibility... but it often goes wrong, since the LLM may write imperfect scripts.
LCM decomposes the recursion from RLMs into deterministic primitives so that control flow can be managed by an engine rather than left to the whims of the LLM. In practice, this means we replace bespoke scripts with two mechanisms: (1) a DAG-based context management system that works like paged virtual memory, but for conversations and files instead of memory pages; and (2) operator-level recursion, like "Map" for LLMs, which lets one tool call process thousands of tasks.
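To make the second mechanism concrete, here is a rough sketch of the shape an operator like llm_map could take. This is not our actual implementation; call_llm and the store dict below are stand-ins for a real model client and the DAG-backed context store.

    from concurrent.futures import ThreadPoolExecutor

    store = {}  # node_id -> text; stand-in for the DAG-backed context store

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real model client here")

    def llm_map(instruction: str, items: list[str], max_workers: int = 8) -> list[str]:
        # One independent LLM call per item. Outputs are persisted as nodes and
        # only the node ids are returned, so the parent context never has to hold
        # the full results of thousands of sub-tasks.
        def run(pair):
            idx, item = pair
            out = call_llm(instruction + "\n\nInput:\n" + item)
            node_id = "map-result-" + str(idx)
            store[node_id] = out
            return node_id

        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(run, enumerate(items)))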
An analogy we draw in the paper is the evolution from GOTO statements (of Dijkstra's "Go To Statement Considered Harmful" fame) to structured programming. RLMs are maximally expressive, but all of that power comes with the risk of things going awry. We have built a more mechanistic system, which can provide stronger guarantees when deployed in production with today's models.
Happy to answer any questions! Thanks for taking a look at the paper!
I've echoed the sentiment here on HN (and elsewhere) that these kinds of mechanisms seem to be a pathway to extending context longer and longer, and I wish I could toy around with this technology right now (can I?). I'm so excited!!
Your work is the shoulders-built-on-shoulders upon which other giants shall keep on building. Thank you so much.
Yes, we think there is a ton of low-hanging fruit from taking lessons from OS/PL theory and applying them to LLM tooling.
This is our first contribution in that direction. There will be more!
Just bring an API key. :)
github.com/voltropy/volt
Have you considered making an openclaw plugin/PR for it? I understand you have your own coding CLI tool, but this doesn’t look so hard to implement that it couldn’t be done elsewhere.
Either way, thanks for sharing this.
We have heard from a ton of OpenClaw users that the biggest barrier to them getting everything they want out of their agents is that memory is not a solved problem.
LCM could be a great solution to that. Stay tuned -- will ship it ASAP.
1 - global namespace - for the gateway agent/coordinator - would make inspecting results of subagent tasks much safer and more efficient, and would give all the benefits of precision across compaction boundaries for the main chat thread. I could see giving the subagents access to it, or just prompting them fresh and storing results in the global memory - probably the second is better.
2 - permissioned memory spaces - stuff that a given subagent should know without giving it global memory access. Then a gateway could mark some stuff ‘available’ as part of prompting.
This would be a super useful set of primitives - from reading the paper, I think you could do this relatively cheaply, maybe with a tagging system for branches/nodes in the DAG (rough sketch below). openclaw already keeps some track of what subagents should have access to in the form of skills, but I haven’t looked into the actual permissions architecture.
Of course getting permissions to work well might be easier said than done, but I like this direction.
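To make the tagging idea concrete, here is a rough sketch of what permissioned memory nodes might look like (the node and scope names are made up, not from the paper):

    from dataclasses import dataclass, field

    @dataclass
    class MemoryNode:
        node_id: str
        content: str
        tags: set[str] = field(default_factory=set)  # e.g. {"global"} or {"subagent:deploy"}

    def visible_nodes(nodes: list[MemoryNode], agent_scopes: set[str]) -> list[MemoryNode]:
        # A node is visible only if its tags intersect the agent's allowed scopes.
        return [n for n in nodes if n.tags & agent_scopes]

    # The gateway writes with a "global" tag, then marks selected nodes available
    # to a subagent by adding that subagent's scope tag.
    notes = [
        MemoryNode("n1", "overall plan", {"global"}),
        MemoryNode("n2", "deploy checklist", {"global", "subagent:deploy"}),
    ]
    print([n.node_id for n in visible_nodes(notes, {"subagent:deploy"})])  # -> ['n2']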
We will probably ship a fairly basic version to start, but I think there are a lot of cool things that can be added.
> deterministic primitives
Are agent-map and LLM-map the only two options you've given the model for recursive invocations? No higher-level, er, reduction operators to augment the map primitives?
It's quite possible that's wrong. If so, I would write llm_reduce like this: spawn a sub-task for every pair of elements in the list; each sub-task calls an LLM with a prompt telling it how to combine the two elements into one. The output type of the reduce operation would need to be the same as the input type, just as in normal map/reduce. This lets the reduction run as a tree of operations, about log(n) rounds deep, resulting in a single value.
That value should probably be loaded into the LCM database by default, rather than put directly into the model's context, to preserve the invariant that the model can string together arbitrarily long sequences of maps and reduces without filling up its own context.
I don't think this would be hard to write. It would reuse the same database and parallelism machinery that llm_map and agentic_map use.
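Roughly, something like this -- call_llm and store are stand-ins for whatever model client and context store llm_map already uses, so none of the names below are from the paper:

    from concurrent.futures import ThreadPoolExecutor

    store = {}  # stand-in for the LCM database

    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in a real model client here")

    def llm_reduce(instruction: str, items: list[str], max_workers: int = 8) -> str:
        # Pairwise tree reduction: each round roughly halves the list, so ~log2(n) rounds.
        def combine(pair):
            a, b = pair
            return call_llm(instruction + "\n\nA:\n" + a + "\n\nB:\n" + b)

        level = list(items)
        while len(level) > 1:
            pairs = [(level[i], level[i + 1]) for i in range(0, len(level) - 1, 2)]
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                merged = list(pool.map(combine, pairs))
            if len(level) % 2 == 1:
                merged.append(level[-1])  # an odd element rides along to the next round
            level = merged

        node_id = "reduce-result"
        store[node_id] = level[0] if level else ""  # persist, rather than inject into the caller's context
        return node_id  # hand back a node id, per the invariant above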