I think everyone should read the post from ARC-AGI organisers about HRM carefully: https://arcprize.org/blog/hrm-analysis

With the same data augmentation / 'test-time training' setting, vanilla Transformers do pretty well, close to the "breakthrough" HRM reported. From a brief skim, this paper uses similar settings to compare itself on ARC-AGI.

I, too, want to believe in smaller models with excellent reasoning performance. But first understand what ARC-AGI tests for, what the general setting is -- the one that commercial LLMs use to compare against each other -- and what specialised setting HRM and this paper use for evaluation.

The naming of that benchmark lends itself to hype, as we've seen in both HRM and this paper.

tsoj · 19 hours ago
The TRM paper addresses this blog post. I don't think you need to read the HRM analysis very carefully; TRM has the advantage of being disentangled compared to HRM, which makes ablations easier. I think the real value of the arcprize HRM blog post is to highlight the importance of ablation testing.

I think ARC-AGI was supposed to be a challenge for any model, the assumption being that you'd need the reasoning abilities of large language models to solve it. It turns out that this assumption is somewhat wrong. Do you mean that HRM and TRM are specifically trained on a small dataset of ARC-AGI samples, while LLMs are not? Or which difference exactly are you hinting at?

> Do you mean that HRM and TRM are specifically trained on a small dataset of ARC-AGI samples, while LLMs are not? Or which difference exactly are you hinting at?

Yes, precisely this. The question is really: what is ARC-AGI evaluating for?

1. If the goal is to see if models can generalise to the ARC-AGI evals, then models being evaluated on it should not be trained on the tasks. Especially IF the ARC-AGI evaluations are constructed to be OOD from the ARC-AGI training data; I don't know if they are. Further, in the HRM case there seems to be usage of the few-shot examples in the evals to construct more training data. TRM may do this via other means.

2. If the goal is that even _having seen_ the training examples, and creating more training examples (after having peeked at the test set), these evaluations should still be difficult, then the ablations show that you can get pretty far without universal/recurrent Transformers.

If 1, then I think the ARC-prize organisers should have better rules laid out for the challenge. From the blog post, I do wonder how far people will push the boundary (how much can I look at the test data to 'augment' my training data?) before the organisers say "This is explicitly not allowed for this challenge."

If 2, the organisers of the challenge should have evaluated how much of a challenge it would actually have been allowing extreme 'data augmentation', and maybe realised it wasn't that much of a challenge to begin with.

I tend to agree that, given the outcome of both HRM and this paper, the ARC-AGI folks do seem to allow this setting, _and_ that the task isn't as "AGI complete" as it sets out to be.

I should probably also add: It's long been known that Universal / Recursive Transformers are able to solve _simple_ synthetic tasks that vanilla transformers cannot.

Just check out the original UT paper, or some of its follow-ups: Neural Data Router, https://arxiv.org/abs/2110.07732; Sparse Universal Transformers (SUT), https://arxiv.org/abs/2310.07096. There is even theoretical justification for why: https://arxiv.org/abs/2503.03961

The challenge is actually scaling them up to be useful as LLMs as well (I describe why it's a challenge in the SUT paper).

Given the way ARC-AGI is allowed to be evaluated, it's hard to say if this is actually what is at play. My gut tells me, given the type of data that's been allowed in the training set, that some leakage of the evaluation has happened in both HRM and TRM.

But because as a field we've given up on carefully ensuring that training and test sets don't contaminate each other, we just decide it's fine and the effect is minimal. Especially for LLMs, a test-set example leaking into the training data is merely a drop in the bucket (I don't believe we should dismiss it this way, but that's a whole 'nother conversation).

With these challenge-targeted models, that leakage becomes a much larger proportion of what influences the model's behaviour, especially when the open evaluation sets are there for everyone to look at and simply generate more from. Now we don't know if we're generalising or memorising.

Not exactly "vanilla Transformer", but rather "a Transformer-like architecture with recurrence".

Which is still a fun idea to play around with - this approach clearly has its strengths. But it doesn't appear to be an actual "better Transformer". I don't think it deserves nearly as much hype as it gets.

Right. There should really be a vanilla Transformer baseline.

With recurrence: the idea has been around for a while, e.g. Universal Transformers: https://arxiv.org/abs/1807.03819

There are reasons why it hasn't really been picked up at scale, and the method tends to do well on synthetic tasks.
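
For concreteness, here's a rough sketch of the structural difference being discussed (my own illustration in PyTorch, not code from UT or TRM; the dimensions and depths are arbitrary): a vanilla Transformer stacks many distinct blocks, while a universal / recurrent Transformer reuses one weight-tied block for several steps, so computation depth and parameter count are decoupled.

    import torch
    import torch.nn as nn

    # One shared block, reused at every step (universal / recurrent style)
    shared_block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

    # Vanilla stack: 12 distinct blocks, so 12x the parameters of one block
    vanilla = nn.Sequential(*[
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
        for _ in range(12)
    ])

    def universal_forward(x, steps=12):
        # Same computational depth as the vanilla stack, but one set of weights
        for _ in range(steps):
            x = shared_block(x)
        return x

    x = torch.randn(2, 10, 64)  # (batch, sequence, d_model)
    print(vanilla(x).shape, universal_forward(x).shape)  # both: (2, 10, 64)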

· 19 hours ago
Makes me think once again about the similarity between Finite Impulse Response[1] filters (traditional LLMs) and Infinite Impulse Response[2] filters (recursive models). Not that it's a very good or original analogy.

Anyway, with FIR you typically need many, many times the coefficients to get similar filter cutoff performance as what a few IIR coefficients can do.

You can convert an IIR to an FIR using, for example, the window design method[3], where if you use a rectangular window function you essentially unroll the recursion but stop after some finite depth.

Similarly, it seems that unrolling the TRM gives you the traditional LLM architecture of many repeated attention+FF blocks, minus the global feedback part. And unlike a true IIR, the TRM does implement a finite cut-off, so in that sense it is more like a traditional FIR/LLM than the structure suggests.

So it would perhaps be interesting to compare the TRM network to a similarly unrolled version.
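
To make the analogy concrete, here's a small sketch (mine, using scipy; the one-pole filter is an arbitrary example, nothing from the paper) of the window design method with a rectangular window: truncate the IIR filter's impulse response after N taps and use those samples directly as FIR coefficients.

    import numpy as np
    from scipy import signal

    # Simple one-pole IIR low-pass: y[n] = 0.9*y[n-1] + 0.1*x[n]
    b_iir, a_iir = [0.1], [1.0, -0.9]

    # "Unroll the recursion, stop at finite depth": take the first N samples of the
    # (infinite) impulse response; a rectangular window is plain truncation.
    N = 64
    impulse = np.zeros(N)
    impulse[0] = 1.0
    b_fir = signal.lfilter(b_iir, a_iir, impulse)

    # Compare frequency responses: 2 IIR coefficients vs 64 FIR taps
    w, h_iir = signal.freqz(b_iir, a_iir)
    _, h_fir = signal.freqz(b_fir, [1.0], worN=w)
    print(np.max(np.abs(h_iir - h_fir)))  # small residual error from truncation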

Then again, maybe this is all mad ramblings from a sleep deprived mind.

[1]: https://en.wikipedia.org/wiki/Finite_impulse_response

[2]: https://en.wikipedia.org/wiki/Infinite_impulse_response

[3]: https://en.wikipedia.org/wiki/Finite_impulse_response#Window...

Deep Equilibrium Models

>We present a new approach to modeling sequential data: the deep equilibrium model (DEQ). Motivated by an observation that the hidden layers of many existing deep sequence models converge towards some fixed point, we propose the DEQ approach that directly finds these equilibrium points via root-finding. Such a method is equivalent to running an infinite depth (weight-tied) feedforward network, but has the notable advantage that we can analytically backpropagate through the equilibrium point using implicit differentiation.

https://arxiv.org/abs/1909.01377

What's fascinating about deep equilibrium models is that you only need a single layer to be equivalent to a conventional deep neural network with multiple layers. Recursion is all you need! The model automatically uses more iterations for difficult tasks and fewer iterations for easy tasks.
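
A toy sketch of that idea (my own numpy illustration, not the DEQ paper's code): iterate a single weight-tied layer to a fixed point. A real DEQ would use a root finder such as Broyden's method and implicit differentiation for the backward pass.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.05, size=(16, 16))  # small norm keeps the map contractive
    U = rng.normal(scale=0.5, size=(16, 16))

    def layer(z, x):
        # One weight-tied "layer"; the same function is applied at every iteration
        return np.tanh(z @ W + x @ U)

    def deq_forward(x, tol=1e-6, max_iter=100):
        # Naive fixed-point iteration: find z* such that z* = layer(z*, x)
        z = np.zeros_like(x)
        for i in range(max_iter):
            z_next = layer(z, x)
            if np.max(np.abs(z_next - z)) < tol:
                return z_next, i + 1  # harder inputs take more iterations
            z = z_next
        return z, max_iter

    x = rng.normal(size=(4, 16))
    z_star, iters = deq_forward(x)
    print(iters, z_star.shape)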

Thanks, something like that was going through my mind; nice to get a good reference for it. Any insights on why this is not a more popular approach? Maybe it's too difficult to scale a single layer.

I read a paper recently on something similar for diffusion, called Fixed Point Diffusion Models. They specialize the first and last layers but recurse the middle layer some number of times until convergence.

Considering that a Transformer is a residual model, each layer must be adding increasingly precise adjustments to the selected token. It makes a lot of sense to think of this like the steps of an optimisation method.
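
As a toy illustration of that view (my own, not from the paper): a residual update x <- x + f(x) has the same shape as a gradient-descent step x <- x - lr * grad(x), with each "layer" nudging the representation a little closer to the answer.

    import numpy as np

    target = np.array([1.0, -2.0, 3.0])

    def correction(x):
        # Plays the role of one residual block: a small step against the
        # gradient of the quadratic loss 0.5 * ||x - target||^2
        return -0.3 * (x - target)

    x = np.zeros(3)
    for layer in range(20):    # "layers" of an unrolled optimiser
        x = x + correction(x)  # residual update
    print(x)                   # close to [1, -2, 3]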

I implemented HRM for educational purposes and got good results for path finding. But then I started to do ablation experiments and came to the same conclusions as the ARC-AGI team (the HRM architecture itself didn’t play a big role): https://github.com/krychu/hrm

This was a bit unfortunate. I think there is something in the idea of latent space reasoning.

Awesome work, thanks for writing it up! Replication is absolutely critical, as is writing down and sharing learnings.
Thanks, appreciated
Very cool to see the effectiveness of recurrence on ARC. For those interested in recurrence, here are other works that leverage a similar approach for other types of problems:

Language modeling:

Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach https://arxiv.org/pdf/2502.05171

Puzzle solving:

A Simple Loss Function for Convergent Algorithm Synthesis using RNNs https://openreview.net/pdf?id=WaAJ883AqiY

End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking https://arxiv.org/abs/2202.05826

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks https://proceedings.neurips.cc/paper/2021/file/3501672ebc68a...

General:

Think Again Networks and the Delta Loss https://arxiv.org/pdf/1904.11816

Universal Transformers https://arxiv.org/abs/1807.03819

Adaptive Computation Time for Recurrent Neural Networks https://arxiv.org/pdf/1603.08983

Wow, so not only are the findings from https://arxiv.org/abs/2506.21734 (posted on HN a while back) confirmed, they're generalizable? Intriguing. I wonder if this will pan out in practical use cases; it'd be transformative.

Also would possibly instantly void the value of trillions of pending AI datacenter capex, which would be funny. (Though possibly not for very long.)

Any mention of "HRM" is incomplete without this analysis:

https://arcprize.org/blog/hrm-analysis

This here looks like a stripped down version of HRM - possibly drawing on the ablation studies from this very analysis.

Worth noting that HRMs aren't generally applicable in the same way normal transformer LLMs are. Or, at least, no one has found a way to apply them to the typical generative AI tasks yet.

I'm still reading the paper, but I expect this version to be similar - it uses the same tasks as HRMs as examples. Possibly quite good at spatial reasoning tasks (ARC-AGI and ARC-AGI-2 are both spatial reasoning benchmarks), but it would have to be integrated into a larger more generally capable architecture to go past that.

That's a good read, also shared by another poster above, thanks! If I'm reading this right, it contextualizes, but doesn't negate, the findings from that paper.

I've got a major aesthetic problem with the fact that LLMs require this much training data to get where they are, namely "not there yet"; it's brute force by any other name, and just plain kind of vulgar. More importantly, it won't scale much further. Novel architectures will have to feature in at some point, and I'll gladly take any positive result in that direction.

Evolution is brute force by any other name. Nothing elegant about it. Nonetheless, here you are.

Poor sample efficiency of the current AIs is a well known issue - but you should keep in mind what kind of grisly process was required to give you the architecture that makes you as sample efficient as you are.

We don't know yet what kind of architectural quirks enable this sample efficiency in the human brain. It could be something like a non-random initialization process that confers the right inductive biases, a more efficient optimizer, recurrent background loops... or just more raw juice.

It might be that one biological neuron is worth 10000 LLM weights, and a big part of how the brain is so sample efficient is that it's hilariously overparametrized.

Yeaaaaaah, I kinda doubt there's much coming from evolutionary biases.

If it's a matter of clever initialization bias, it's gotta be pretty simple to survive the replication via DNA and procedural generative process in the meat itself, alongside all of the other stuff which /doesn't/ differentiate us from chimpanzees. Likely simple enough that we would just find something similar ourselves through experimentation. There's also plenty of examples of people learning Interesting Unnatural Stuff using their existing hardware (eg, echolocation, haptic vision, ...) which suggests generality of learning mechanisms in the brain.

The brain implements some kind of fairly general learning algorithm, clearly. There's too little data in the DNA to wire up 90 billion neurons the way we can just paste 90 billion weights into a GPU over a fiber optic strand. But there's a lot of innate scaffolding that actually makes the brain learn the way it does. Things like bouba and kiki, instincts, all the innate little quirks and biases - they add up to something very important.

For example, we know from neuroscience that humans implement something not unlike curriculum learning - and a more elaborate version of it than what we use for LLMs now. See: sensitive periods. Or don't see sensitive periods - because if you were born blind, but somehow regained vision in adulthood, it'll never work quite right. You had an opportunity to learn to use the eyes well, and you missed it.

Also, I do think that "clever initialization" is unfortunately quite plausible. Unfortunately - because yes, it has to be simple enough to be implemented by something like a cellular automaton, so the reason why we don't have it already is that the search space of all possible initializations a brain could implement is still extremely vast and we're extremely dumb. Plausible - because of papers like this one: https://arxiv.org/abs/2506.20057

If we can get an LLM to converge faster by "pre-pre-training" it on huge amounts of purely synthetic, algorithmically generated meaningless data? Then what are the limits of methods like that?

Brute force:

    for i in range(1, 100_000_000):
        if i == 66666654:
            print(i)
            break
GA:

    # pop is an initial population; crossover, tournament, and heuristic_fn
    # are the usual GA primitives (assumed defined elsewhere)
    for g in range(1, 101):
        pop, best = crossover(tournament(pop, heuristic_fn))
        print(best.value)
        if best.fitness < 0.01:
            break

GA uses a heuristic to converge. If that is brute force, so is binary search.

> If that is brute force, so is binary search.

Binary search is guaranteed to find the target if it exists, so it's not a heuristic. GA isn't, as it can get stuck in local minima. However, I agree that GA isn't brute force.

Heuristic just means there is a function telling you where to go. For A* it is the distance-to-goal estimate, for binary search it is the ≤ comparison, for gradient descent it is Adam.

> Evolution is brute force by any other name.

No, it's not.

That analysis gave a very non-abrasive account of their evaluation of HRM and its contributions. The comparison with a recursive / universal transformer under the same settings is telling.

"These results suggest that the performance on ARC-AGI is not an effect of the HRM architecture. While it does provide a small benefit, a replacement baseline transformer in the HRM training pipeline achieves comparable performance."

baq · 1 day ago
Jevons' paradox applies here IMHO. Cheaper AI/watt = more demand.

It would be fitting if the AI bubble were popped by AI getting too good and too efficient.
ivape · 1 day ago
> Also would possibly instantly void the value of trillions of pending AI datacenter capex

GPU compute is not just for text inference. The demand for video generation is something I don't think we'll saturate for quite a while, even with breakthroughs.

It doesn't matter how much compute you have; you'll always be able to saturate it one way or another with AI, and having more compute will forever be an advantage.

If a breakthrough in AI happens, you'll get multiplied benefits, not losses.

That does depend on GPUs being more efficient than CPUs for those breakthroughs.
For matrix multiplication that's probably true though.
ivape · 23 hours ago
The “AI is hype” crowd can't seem to wrap their little heads around this idea for some reason.

> Also would possibly instantly void the value of trillions of pending AI datacenter capex

I think they would just adopt this idea and use it to continue training huge but more capable models.

" With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI- 1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters"

That is very impressive.

Side note: Superficially reminds me of Hierarchical Temporal Memory from Jeff Hawkins "On Intelligence". Although this doesn't have the sparsity aspect, its hierarchical and temporal aspects are related.

https://en.wikipedia.org/wiki/Hierarchical_temporal_memory https://www.numenta.com

I suspect the lack of sparsity is an Achilles' heel of the current LLM approach.

Abstract:

Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies.

This biologically inspired method beats Large Language models (LLMs) on hard puzzle tasks such as Sudoku, Maze, and ARC-AGI while trained with small models (27M parameters) on small data (around 1000 examples). HRM holds great promise for solving hard problems with small networks, but it is not yet well understood and may be suboptimal.

We propose Tiny Recursive Model (TRM), a much simpler recursive reasoning approach that achieves significantly higher generalization than HRM, while using a single tiny network with only 2 layers.

With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters.

"With only 7M parameters, TRM obtains 45% test-accuracy on ARC-AGI-1 and 8% on ARC-AGI-2, higher than most LLMs (e.g., Deepseek R1, o3-mini, Gemini 2.5 Pro) with less than 0.01% of the parameters."

Well, that's pretty compelling when taken in isolation. I wonder what the catch is?

It won't be any good at factual questions, for a start; it will be reliant on an external memory. Everything would have to be reasoned from first principles, without knowledge.

My gut feeling is that this will limit its capability, because creativity and intelligence involve connecting disparate things, and to do that you need to know them first. Though philosophers have tried, you can't unravel the mysteries of the universe through reasoning alone. You need observations, facts.

What I could see it good for is a dedicated reasoning module.

js8 · 23 hours ago
Basic English is about 2000 words. So a small-scale LLM that would be capable of reasoning in Basic English, and of transforming a problem in normal English into Basic English by automatically including the relevant word/phrase definitions from a dictionary, could easily beat a large LLM (by being more consistent).

I think this is where all reasoning problems of LLMs will end up. We will use an LM to transform a problem in informal English (human language) into a formal logical language (possibly fuzzy and modal), from that possibly into an even simpler logic, then solve the problem in the logical domain using traditional reasoning approaches, and convert the answer back to informal English. That way, you won't need to run a large model during the reasoning. Larger models will only be useful as fuzzy K-V stores (attention mechanism) to help drive heuristics during the reasoning search.
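
A minimal sketch of that pipeline (my own illustration; translate_to_logic is a hypothetical stand-in for the language-model step, and z3 plays the role of the traditional reasoner):

    from z3 import Ints, Solver, sat

    def translate_to_logic(problem_text):
        # Hypothetical LM step, hard-coded here for:
        # "Alice is 3 years older than Bob; together they are 21."
        alice, bob = Ints("alice bob")
        return [alice == bob + 3, alice + bob == 21], {"alice": alice, "bob": bob}

    constraints, symbols = translate_to_logic("...")
    solver = Solver()
    solver.add(*constraints)
    if solver.check() == sat:
        model = solver.model()
        # The final step (logic back to informal English) is also done by hand here
        print(f"Alice is {model[symbols['alice']]}, Bob is {model[symbols['bob']]}.")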

I suspect the biggest obstacle to AGI is philosophical: we don't really have a good grasp/formalization of human/fuzzy/modal epistemology. Even if you look at the formalization of mathematics, it's mostly about proofs, but we lack an understanding of what, e.g., an interesting mathematical problem is, or how to even express in formal logic that something is a problem, or that experiments suggest something, that one model has an advantage over another in some respect, that there is a certain cost associated with testing a hypothesis, etc. Once we figure out what we actually want in epistemology, I am sure the algorithm required will be greatly reduced.

bcrl · 20 hours ago
The biggest obstacle to AGI is data.

Take the knowledge the average human has about integrating visual information with the texture of an object. Nearly every adult can take a quick glance around a room and have a good idea of what it will feel like to run your fingers along any surface in it, or your lips, or even your tongue, and be able to describe the experience. We have this knowledge because when we were infants and toddlers, everything we encountered was picked up, pulled towards our mouth, and touched by our hands. An AGI inside a computer cannot have that experience today, so it will lack the foundations of intelligence that humans have built up by interacting with the real world.

At some point it will become possible to either collect that data or simulate an experience sufficiently accurately to mimic the development a human child goes through. Until that happens, true AGI will be out of reach as it will have deficiencies the average human does not.

That said, a lot of people will try to get to that point using other means, and they'll probably get pretty close, albeit with really weird hallucinations in the corner cases.

That's been my expectation from the start.

We'll need a memory system, an executive function/reasoning system as well as some sort of sense integration - auditory, visual, text in the case of LLMs, symbolic probably.

A good avenue of research would be to see if you could glue OpenCyc to this for external "knowledge".

LLMs are fundamentally a dead end.

GitHub link: https://github.com/SamsungSAILMontreal/TinyRecursiveModels

taneq · 20 hours ago
They’re no more a dead end than your hippocampus is a dead end. They’re just not a complete AGI in and of themselves. They’re a component.

LLMs are pretty central to most next-step architectures, like neurosymbolic ones, multimodal ones, etc.
ivape · 1 day ago
Should it be a larger frontier model, with this as a tool call (tool-calling another LLM) to verify the larger one?

Why not go nuts with it and put it in the speculative decoding algorithm.

baq · 1 day ago
If we could somehow weave a reasoning tool directly into the inference process, without having to use the context for it, that’d be something. Perhaps compile it to weights and pretend this part is pretrained…? No idea if it’s feasible, but it’d definitely be a breakthrough if AI had access to z3 in hidden layers.
Zee2 · 4 hours ago
Oh boy. Is this not essentially a neuralese CoT? They’re explicitly labelling z/z_L as a reasoning embedding that persists/mutates through the recursive process, used to refine the output embedding z_H/y. Is this not literally a neuralese CoT/reasoning chain? Yikes!
Overall I really like these transformer RNNs. They are basically EBMs learning an energy landscape that falls into a solution, relaxing a discrete problem into a smooth convex one. Reminds me of other iterative methods like neural cellular automata and flow matching / diffusion. This method looks promising for control problems: just tumble your way down the state space, where each step is constrained to be a valid action.
Can someone explain to a noob how this "outer loop" differs from an LLM modified to do reasoning in latent space?
· 22 hours ago
If it is a recursive one, can it apply induction and solve the Towers of Hanoi beyond level six?
You'll first need to frame Towers of Hanoi as a supervised learning problem. I suspect the answer to your question will differ depending on what you pick as the input-output pairs to train the model on.
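
For example, one possible framing (my own sketch, not anything from the paper) is to map each board state along the optimal solution to the next move:

    def hanoi_moves(n, src, aux, dst):
        # Optimal move sequence for n disks from src to dst
        if n == 0:
            return
        yield from hanoi_moves(n - 1, src, dst, aux)
        yield (src, dst)
        yield from hanoi_moves(n - 1, aux, src, dst)

    def make_pairs(n):
        # (state, move) supervised pairs obtained by replaying the optimal solution
        pegs = {"A": list(range(n, 0, -1)), "B": [], "C": []}
        pairs = []
        for src, dst in hanoi_moves(n, "A", "B", "C"):
            state = {k: tuple(v) for k, v in pegs.items()}  # snapshot before the move
            pairs.append((state, (src, dst)))
            pegs[dst].append(pegs[src].pop())
        return pairs

    pairs = make_pairs(3)
    print(len(pairs))  # 2**3 - 1 = 7 moves
    print(pairs[0])    # ({'A': (3, 2, 1), 'B': (), 'C': ()}, ('A', 'C'))

Whether a model trained on small disk counts extrapolates to larger ones would then depend heavily on that choice of representation.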
So what happens when we figure out how to 10x both scale and throughput on existing hardware by using it more efficiently? Will gigantic models still be useful?

Of course! We still have computers the size of the mainframes that once ran on vacuum tubes; they are just built with vastly more powerful hardware and are used for specialized tasks that supercomputing facilities care about.

But it has the potential to alter the economics of AI quite dramatically

Anything that improves scaling at the bottom end will also improve scaling at the top end, give or take.
exe34 · 14 hours ago
Any idea why the tiny network takes days to run on massive GPUs? Is it the large dataset, or the recursive nature of the algorithm? I.e., would a simple question take hours to solve or require a huge amount of memory?

I don't have a huge amount of experience in the nitty gritty details and I'm wondering if I'll be able to run some interesting training on a 3090 in a few days.

This is just from my skim of the paper, take it with a pinch of salt.

It's tiny in terms of the number of weights. This is because it reuses and refines the same weights across recursion steps, instead of having separate weights for each layer, which is what the stacked transformer blocks in usual LLMs are.

However, the FLOPs are exactly the same.

In usual LLMs you have (number of transformer blocks) * (per-block cost); here you have (number of recursion steps) * (number of blocks, smaller than usual, 2 here) * (per-block cost).

Basically, this needs compute like a 16-block LLM per training step, because here recursions = 8 and blocks = 2. How many steps you need depends mostly on the dataset used.
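
Back-of-the-envelope version of that point (illustrative numbers, not measurements from the paper's code): parameter count shrinks with weight reuse, but compute per forward pass does not.

    per_block_params = 3_500_000  # ~7M params spread over 2 shared blocks
    per_block_flops = 1.0         # arbitrary unit: cost of one block pass

    # Vanilla stack: 16 distinct blocks, each applied once
    vanilla_params = 16 * per_block_params
    vanilla_flops = 16 * per_block_flops

    # TRM-style: 2 blocks reused across 8 recursion steps
    trm_params = 2 * per_block_params
    trm_flops = 8 * 2 * per_block_flops

    print(vanilla_params / trm_params)  # 8.0 -> far fewer weights to store
    print(vanilla_flops / trm_flops)    # 1.0 -> same compute per forward pass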

If this is a way to get equivalent results to a much larger network in the same FLOPs but with a fraction of the VRAM, it's transformative.

I'm particularly keen to see if you could do speech-to-text with this architecture, and replace Whisper for smaller devices.

asjir · 5 hours ago
Nvidia's Parakeet dropped recently with better performance and 0.6B params, so the rate of progress here looks good; probably next year (or maybe the year after) they'll be running no problem.

I mean, I wouldn't say it's transformative or bet on it equalling usual LLM performance in general. It's kind of similar to the weight reuse you see in RNNs, where the same `h` is maintained throughout. In usual LLMs each block has its own state.

These guys are choosing a middle ground - stacking a few transformer blocks, and then reusing the same 2 blocks 8 times over.

It'll be interesting to see what use cases are served well by this approach. Our understanding of how these architectures respond to such changes is still largely empirical, so it's hard to say ahead of time. My intuition is that it could be good for repetitive input signals - audio processing comes to mind. But complex attention and stuff like ElevenLabs-style translation is probably too much to hope for. Whisper-type transcription, though, might work.

exe34 · 5 hours ago
Thank you! I'll need to have a read soon.