General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.
Not morally, not practically. Mathematically.
The argument splits across three barriers:
1. Computability (Gödel, Turing, Rice): you can't decide what your system can't see.
2. Entropy (Shannon): beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): most real-world problems are fundamentally incompressible.
This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.
In other words: you can’t generalize from what can’t be compressed.
⸻
Here’s the abstract:
There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.
The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.
https://philpapers.org/archive/SCHAII-18.pdf
Happy to read your view.
Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."
Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, where its only shortcoming is that it requires infinite compute. But the quest of the AGI project is not "universal intelligence" but simply intelligence that approximates that of us humans. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.
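(For anyone unfamiliar: AIXI chooses each action by an expectimax over every program q on a universal machine U that is consistent with the interaction history so far, weighted by 2^-(program length). Roughly, as I remember Hutter's formulation, so check the book before quoting it:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

The sum over all programs is what makes it incomputable, which is why AIXI is a benchmark for intelligence rather than something you could ever build.)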
> Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.
So you're saying the human brain can do something infinite then?
Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(
Or... "After Johnny read the paper, humanity disappeared in a puff of logic."
Since when is mathematics based on the laws of our physical universe? Last time I checked, it's an abstract system with no material reality.
They say humans exist and are intelligent, therefore intelligence is possible in this universe. And it might well be possible in other configurations within this universe, on computers for example.
The problem I see isn't that AGI isn't possible, that's not even surprising. The problem is the term "AGI" caught on when people really meant "AHI" or artificial _human_ intelligence, which is fundamentally distinct from AGI.
AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable, but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs or any other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.
The above is nonsense. Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't. That's what medical science is for. We used our intelligence to figure out antibiotics, for example.
The broadly accepted meaning of AGI is human intelligence on a machine. Redefining it to mean something else does nothing useful.
I think in some sense you're right. There's a higher level way to address disease that humans have made progress on.
But in the critical sense more related to the point I was making, I completely disagree. The sense I'm speaking of is that we do not (maybe one day we will) directly affect disease states with our conscious minds the way our immune system does. It's a very complicated process that we know works, but we absolutely do not understand all the mechanisms involved the way we do with, say, solving a calculus equation.
My point is, if we could do this our central nervous system would also be the immune system. But it is not, because it operates in an entirely different cognitive space than our conscious brain. There are many examples of this, like regulating your body's blood sugar. We know the endocrine system is doing this, but we are not actively involved in the way we are when, say, speaking to one another. The examples are actually countless and go far beyond what the human cognitive system is currently capable of. AGI, by definition, would have to not only encompass intelligence in all these different cognitive spaces but also encompass intelligence in any arbitrary future space.
> The broadly accepted meaning of AGI is human intelligence on a machine.
Then the name is inaccurate to the point of deception. You've just described artificial human intelligence, not artificial general intelligence.
Just because we don't have direct conscious control over our white blood cells or pancreas does not mean we don't have general intelligence. We may not control them, but we have the ability to figure out how they work. Our intelligence is general in the sense that we can understand body functions, or invent calculus, or develop relativity, or any of an unlistable number of other things. If a machine can do that - understand arbitrary concepts, even if it's at or below human abilities - we'd have AGI. Instead with LLMs we have a tool that can mimic understanding, but when you go a bit deeper it's clear that understanding is not present. Not to say LLMs don't have uses, they obviously do, but they are not intelligent.
But in this world, in this universe, there are lots of problems to solve. Humans understand these problems more than any other organism or machine we know of, but we are not general. The most we can say is we are the most general. There are far too many problem domains that are beyond the capability of humans to solve to call human intelligence general intelligence. The pancreas _does_ have direct control over its problem domain. That makes it a specific form of cognition. Maybe one day we will have that too. So does that mean when that day comes we are more general? I believe it does.
I like Francois Chollet's definition of intelligence: efficiency of skill acquisition, not demonstration of skill. I don't know if I should attribute that to him, but for me he is the first person I heard put it so succinctly. Using that definition, there is currently no known learning architecture that can acquire _any_ arbitrary skill efficiently. You and I do not currently have the ability to acquire the skill to consciously regulate bodily functions. It is a form of cognition that doesn't map to anything we're aware of. Understanding how it works in principle doesn't mean you have the skill to perform it.
I'm curious about this. Is that simply not a limitation of our current knowledgebase? That is, as we figure out more about reality, we will eventually conquer those domains as well.
Or do you mean there are domains that are provably beyond the structural capability of our brains? For instance, abstract things like higher-dimensional geometry or number theory which is hard for people to visualize "natively" in their brains. Yet people regularly solve problems in those types of fields. Sure, we rely on tools like computers or pen and paper, but we do solve those problems.
Similarly, take your point about pancreas: sure, our brains cannot do the things it does, that is simply due to lack of the requisite "actuators" connecting our brains to the organ, an artifact of our evolution. But we do understand a lot of the biological mechanisms involved in its operation, enough to treat related problems, again through "tools" like medication and surgery. As we learn more about how they work, that increasingly becomes a problem domain our brains are "capable of solving."
As such, I don't see how these examples show that the brain is not "generally intelligent", unless you exclude tool use, which to me seems like incorrectly conflating cognition and action.
I think tools are fine depending on how you define a tool and its relationship to human intelligence. If we build an artificial pancreas that learns about the body it's sitting in and is able to function just as well as a natural pancreas, would we say humans solved this problem? In some sense yes, because humans built the machine. But not in another sense, because humans are not the machine. Just like we say "AlphaZero beat the world's best human chess player"; we don't usually say "the humans who designed AlphaZero beat the world's best chess player". Did the humans who designed AlphaZero master chess? Not necessarily.
My argument is that there are examples of cognition that we know the human brain currently doesn't operate in. Are these learnable by the human brain? We don't know yet so we can't say the human brain is completely general.
As I type this and as you read this our bodies are constantly rebuilding themselves. Maybe if you connect the appropriate actuators we can do the same thing with our conscious minds? I highly doubt it due to the nature of the problem and how inefficient the brain would be at solving this sort of problem. It likely wouldn't work, but it's too far for me to say "provably beyond the structural capability of our brains". I'll just say "I'm very pleased I don't have to do that right now". Developing a living organism, either starting from a single cell or from a fully grown adult, in real time in the real world is a very difficult problem to solve, and the human brain would be terribly inefficient at it.
These are just my opinions.
This poster didn't understand this response last time this was raised: https://news.ycombinator.com/item?id=44349818
Using your analogy, what this means is that we have to make humans to make human-like intelligence, not that we can make human-like intelligence outside of humans.
>>Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."
What exactly does the brain do? Part of the problem with this is that language itself might be insufficient to describe intelligence. And language might be working a level below our thought. There are occasions where even the best of us fail to come up with how we think; we can get close, and it's not enough. A picture is worth a thousand words: why? Perhaps language is enough to display signs of intelligence but can't entirely contain or describe it.
Similarly, even in the case of LLMs we have seen that showing spatial intelligence is a whole lot different from predicting text.
Heck intelligence might not even be one monolithic thing. It could be a collection of several intelligences. And this whole idea of one grand AGI monolith could be wrong.
Exactly. There's this "thing" you see in certain circles, where people (intentionally?) mis-interpret the "G" in AGI as meaning "the most general possible intelligence". But that's not the reality. AGI has pretty much always been taken to mean "AI that is approximately human level". Going beyond that is getting into the realm of Artificial Super Intelligence, or Universal Artificial Intelligence.
The LLM that will convince me that AGI is near is one that will understand language enough to find the linguistic rules of conlangs they weren't trained on (or more specifically, engineered languages made by linguists with our current knowledge of how languages work), and create grammatically correct sentences. Something someone trained on can do with great effort, but it's more due to breaking habits and limited brainpower than real complexity.
I think for your theory to hold up, you would need to show that physics cannot, even in principle, be simulated mathematically at sufficient scale (the number of interacting subatomic particles). That would be surprising.
At the moment it seems like your results contradict reality, meaning your starting assumptions cannot all be true.
AGI is clearly possible, because our brains are fundamentally machines, and there’s no reason in principle why we couldn’t build something similar. Right now we don’t - as human beings - have the ability to do that, but it clearly isn’t impossible since cellular machinery is able to build it in the first place.
Then I understood why not. Your paper proves that I am unable to understand your paper. It also proves that you are unable to understand your paper.
"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"
Your thesis of AI's lack of capacity to abstract, or at least to extract understanding from noisy data, was largely experimentally confirmed. I am uncertain, though, about the exact mechanics, because since they used LLMs it's not transparent what happened internally that led to the constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of the Oracle, which literally enabled the LLM to solve the task that was previously impossible without it; this means it is at least possible that LLMs can reconstruct known rules. They just can't find new ones.
On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still being made, since Judea Pearl already established about seven years ago in "Theoretical Impediments to Machine Learning" that all correlation-based methods are doomed because they fail to understand anything. His point about causality is well placed, but it will not solve the problem either.
The question I have, though, if we ignore all existing methods for one moment: what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing allowed us to reconstruct incomplete information; does your assertion still hold true?
https://arxiv.org/abs/2507.06952 https://arxiv.org/abs/1801.04016
Quantum entanglement?:
https://www.popularmechanics.com/science/a65368553/quantum-e...
And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”
Wasn't this essentially the conclusion of Gödel? Math, based on a set of axioms, will either have to accept that there are things that are true but can't be proven, or that it can prove things that aren't true.
If you aren't arguing for a non-materialist position, then the distinction between "artificial" and human intelligence isn't meaningful. A powerful enough computer could simulate the material processes in your brain. If as the OP claims it is mathematically impossible for a computer to generate intelligence, no matter how powerful that computer, then it is impossible for your brain to do so (via material processes).
Every generation tries to map its most complex technology onto its understanding of nature. "AGI" has a specific meaning today, but if you want it to mean atheism versus theism or whatever materialist argument, you're far outside of science and technology. Like our fathers of the Enlightenment with their watchmaker god. The idea there is some way for humans to break free of nature seems like a religious belief to me, but whether you agree or not, certainly there is room for doubting that faith, since we're outside the realm of what science can explore.
But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.
> How is your brain doing it then?
Do you have an answer? What indication do we have that any AGI we would create would have to follow the same process to achieve the result? Can humans recreate all phenomena observed in the universe? You're arguing yes to all of these, then? I'd love to read more of that argument. I don't care about this proof, though; I don't think I've indicated I think AGI is impossible. I care far more about why someone would be convinced it must be possible for humans to recreate it in the exact same manner as the brain, which this commenter and you seem to think. I know humans have not shown we can fully model our observations cohesively.
> But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.
*record scratch* What? Did I agree to all of these premises? Do you have some backing for the three or four assumptions you've made in this sentence? You still need to show humans not only could be but are capable of replicating the system. I am asking you for some argument out there that says everything we observe in nature humans can replicate in the exact same way it occurs. That's a much stronger statement than "such intelligence exists." You could just link a book. I am not sold one way or the other, but you seem very confident. Is there some argument I can read? To me, our models in physics point to a fragmented and contradictory understanding of our world to get results. But yes, results are results, though that doesn't mean we are doing anything but modeling -- and can we model everything? Is that the implication of evolution?
We seem to be wandering into capital-S Science vs. science, and I'm not really into religious discussions here. I would love to understand why you seem to think I'm so dimwitted as to dismiss with an edge, when all of this stems from a glib reply to a glib reply that I am no less convinced is in fact glib and fatuous. (And that original comment was not yours, lest you feel insulted in the same way you have insulted me.)
In practical terms, the result doesn't matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.
Maybe I'm just being pedantic, but I'd argue that there's no particular reason to say that AGI involves "approximating the human thought process". That is, what matters is the result, not the process. IF one can find another way to "get there" in a completely different manner than the human mind, then great.
That said, obviously there is some appeal to the "mimic human thought" approach since human thought is currently an existence proof that the kind of intelligence we are talking about is possible at all and mimicking that does seem like an obvious path to try.
Even that isn't needed. A "general intelligence" separable from ethics and rights is valuable in itself. It's valuable to subjugate, as long as the subjugated object is producing more than they are consuming.
As for solving a problem "by accident": there are very many people who make foolish decisions daily because they do not think. Some of those pan out too and lead to understanding. A resource-bounded agent can also maintain a notion of fuel and give a random answer when it has exhausted its fuel.
The structural incompleteness mentioned isn't really meaningful. Humans have not demonstrated the capacity to make epsilon-optimal decisions on an infinite number of tasks, since we do not do an infinite number of tasks anyway.
K-complexity, and resource-bounded K-complexity are indeed extremely useful tools to talk about generalization, I'd agree, but I think the author has misunderstood the limits that K-complexity places on generalization.
Then I started to read the paper, and it's worse.
Every one of his 'examples' would be 'solved' not just by any existing LLM; even a 'dumb' system that just spits out a random sentence to any question would pass his first two 'tests' with flying colors. I'm not kidding: he accepts "Leave the classroom and stop confusing everybody with your senseless questions" as a good solution.
In fact, the only system that would fail is this hypothetical AI he imagines that somehow gets into infinitely analyzing loops.
Then his third test, an investment decision, gives the same outcome as his own up until the point where he draws in extra information not available to the AI, after which he flips his 'answer', which he then labels 'correct' and the previous answer, based on the original info, 'false' because he made some money on the bet a few weeks later. Seriously?
I would politely suggest that until you do that and then come up with a convincing rebuttal for every point they make that is not self-evidently wrong, you shouldn't be wasting humans' time.
I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.
This is the classical foundational syllogism of computationalism. In short:
1. The brain obeys the laws of physics.
2. The laws of physics are (in principle) computable.
3. Therefore, the brain is computable.
4. Therefore, human-level general intelligence is computable, and AGI is inevitable and a question of time, power and compute.
This seems elegant, tidy, logically sound. And it is patently false — at step 3. The common mistake is not technical but categorical:
Simulating a system's physical behavior is not the same as instantiating its cognitive function. The flaw is in the logic — it's nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking whether the concept still applies. That is not inference; it's wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It's in the jump from simulation to instantiation.
Yes, we can simulate water. -> No, the simulation isn’t wet.
Yes, I can “simulate” a fridge. -> But if I put a beer in myself, and the beer doesn’t come out cold after some time, then what we’ve built is a metaphor with a user interface, not a cognitive peer.
And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.
Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.
If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.
No.
This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.
If your system is: a) finite, b) bounded by symbols, c) built on recursive closure
…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.
That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)
In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:
- LLMs try less
- Answers get shorter, shallower
- Recursive tasks—like the Tower of Hanoi—just fall apart
- etc.
That’s IOpenER in the wild: Information Opens, Entropy Rises. The theory predicts the divergence, and the models are confirming it—one hallucination at a time.
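(To make the Tower of Hanoi case concrete: the rule is three lines of recursion, but the optimal move sequence grows as 2^n - 1, so a system that only pattern-matches the surface output rather than executing the recursion runs out of room fast. A minimal textbook sketch, not taken from the paper:

    def hanoi(n, src, dst, aux, moves):
        """Append the optimal move sequence for n disks onto `moves`."""
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst, moves)   # clear the top n-1 disks out of the way
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)   # restack the n-1 disks on top of it

    moves = []
    hanoi(10, "A", "C", "B", moves)
    print(len(moves))   # 1023 == 2**10 - 1: the rule is tiny, the trace is exponential

The description stays short while the trace explodes, which is exactly where the paper reports the models giving up.)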
It’s a fair concern. Chaitin does get thrown around too easily — usually in discussions that don’t need him.
But that’s not what’s happening here.
– Kolmogorov shows that most strings are incompressible.
– Chaitin shows that even if you find the simplest representation, you can’t prove it’s minimal.
– So any system that “discovers” a concept has no way of knowing it’s found something reusable.
That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.
So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.
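For what it’s worth, the counting argument behind the first point is easy to make concrete: there are 2^n binary strings of length n but fewer than 2^(n-k) descriptions shorter than n-k bits, so at most a 2^-k fraction of strings can be compressed by more than k bits. A toy sketch of that pigeonhole count (illustrative only, using raw bit counts rather than a real universal machine):

    # Strings of length n compressible by more than k bits need a description
    # shorter than n - k bits; there are fewer than 2**(n - k) such descriptions.
    n = 30
    total = 2 ** n
    for k in range(1, 6):
        max_covered = 2 ** (n - k) - 1       # all descriptions of length < n - k
        print(k, max_covered / total)        # fraction compressible by > k bits: always < 2**-k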
Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.
But this paper is not at all about average-case optimization. It’s about specific classes of problems—social ambiguity, paradigm shifts, semantic recursion—where:
a) the tail exponent α ≤ 1, so no mean exists,
b) the problem descriptions are Kolmogorov-incompressible, and
c) the symbol space lacks the needed abstraction.
In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
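The α ≤ 1 point is the easiest to see numerically: for a Pareto tail at or below exponent 1 the population mean does not exist, so the running sample mean never settles no matter how much data you add. A quick illustrative sketch (my own toy example, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.9                              # tail exponent <= 1: no finite population mean
    x = rng.pareto(alpha, 1_000_000) + 1.0   # classical Pareto(alpha) with x_min = 1

    for n in (10**2, 10**4, 10**6):
        # the running sample mean keeps drifting instead of converging
        print(n, x[:n].mean())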
That is what “IOpenER” means: Information Opens, Entropy Rises.
It is NOT a theorem about cost… rather a claim about the structure of meaning. What exactly is so hard to understand about this?
Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.
So unless those are now fringe newsletters, the definition stands:
- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -
If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.
Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.
You don’t need empirical falsification to verify this. You need mathematical framing. Period.
Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.
These are not performance limitations. They are limits of principle.
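(For anyone who wants the halting-problem analogy spelled out, the classic argument fits in a few lines: assume a perfect decider exists, feed it a program built to do the opposite of whatever it predicts about itself, and the assumption collapses. A sketch, with `halts` being the hypothetical function that cannot actually be written:

    def halts(program, argument):
        """Hypothetical perfect halting decider (this is what cannot exist)."""
        raise NotImplementedError

    def contrarian(p):
        # Do the opposite of whatever the decider predicts about running p on itself.
        if halts(p, p):
            while True:          # predicted to halt, so loop forever
                pass
        return                   # predicted to loop, so halt immediately

    # contrarian(contrarian) halts exactly when halts(contrarian, contrarian)
    # says it doesn't, a contradiction, so no total, correct `halts` can exist.

No experiment is needed to establish that limit, which is the sense of "limits of principle" above.)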
Yes. Humans are finite. But we’re not symbol-bound, and we don’t wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand—then change what we were deciding halfway through.
NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)
If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t— then no, they’re not the same kind of thing.
Maybe this isn’t about scaling up? … Well, it isn’t. It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing—and still move.
But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.
Let me know how that goes
But I take that to mean there's no general, universal algorithm to tell us anything we want to know. But that's not what intelligence is, we're not defining some kind of absolute intelligence like an oracle for the halting problem. That definition would be a category error.
Also, what's the point of telling others you believe what they are doing is impossible, especially after the results we are seeing even from the free-tier, open-to-the-public services?
https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...
" The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.
The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. "
:3
Please explain why you believe this is relevant to the points I've made.
But you may have a different version of history.
Like eval/apply under Lisp. Or Forth.