AGI Is Mathematically Impossible (3): Kolmogorov Complexity
Hi folks. This is the third part in an ongoing theory I’ve been developing over the last few years called the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence—especially AGI—is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): you can't decide what your system can't see.
2. Entropy (Shannon): beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): most real-world problems are fundamentally incompressible.

This paper focuses on (3): Kolmogorov Complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable—because the shortest description of a problem is the problem.

In other words: you can’t generalize from what can’t be compressed.

Here’s the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.

The paper isn’t light, but it’s precise. If you’re into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your view.

AGI Is Mathematically Impossible

Unless you believe in magic, the human brain proves that human-level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe. Given that, there's no particular reason to think that "what the brain does", or a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

Also, Marcus Hutter already proved that AIXI[1] is a universal intelligence, where its only shortcoming is that it requires infinite compute. But the quest of the AGI project is not "universal intelligence" but simply intelligence that approximates that of us humans. So I'd count AIXI as another bit of suggestive evidence that AGI is possible.
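
For context, AIXI's uncomputability is visible right in its definition: the agent maximizes expected reward under a Solomonoff-style mixture over all programs q for a universal Turing machine U, and that sum over all programs is what demands unbounded compute. Roughly, in Hutter's notation (a actions, o observations, r rewards, m the horizon, ℓ(q) the length of program q):

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           [r_k + \cdots + r_m] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}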

> Using Kolmogorov complexity, we show that many real-world problems—particularly those involving social meaning, context divergence, and semantic volatility—are formally incompressible and thus unlearnable by any finite algorithm.

So you're saying the human brain can do something infinite then?

Still, happy to give the paper a read... eventually. Unfortunately the "papers to read" pile just keeps getting taller and taller. :-(

[1]: https://en.wikipedia.org/wiki/AIXI

This. First thing that came to my mind when I read the headline. Sounds like someone saying "Birds fly but we can't make planes because flying is mathematically impossible".

Or.. "After Johnny read the paper humanity disappeared in a puff of logic"

seu · 2 weeks ago
> another "system based on the laws of our physical universe."

Since when is mathematics based on the laws of our physical universe? Last time I checked, it's an abstract system with no material reality.

edanm · 2 weeks ago
Either humans are not generally intelligent, or they are, in which case they're an existence proof of general intelligence. Math really has nothing to do with it beyond the most basic statements of logic.
The OP isn't comparing the brain to mathematics.

They say humans exist and are intelligent, therefore intelligence is possible in this universe. And it might well be possible in other configurations within this universe, on computers for example.

chrsw · 2 weeks ago
Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.

The problem I see isn't that AGI isn't possible, that's not even surprising. The problem is the term "AGI" caught on when people really meant "AHI" or artificial _human_ intelligence, which is fundamentally distinct from AGI.

AGI is difficult to define and quite likely impossible to implement. AHI is obviously implementable, but I'm unaware of any serious public research that has made significant progress towards this goal. LLMs, SSMs or any other trainable artificial systems are not oriented towards AHI and, in my opinion, are highly unlikely to achieve this goal.

> Human intelligence is not general intelligence. If it were, you'd be able to use your conscious thoughts to fight off diseases and you wouldn't need your immune system, for example.

The above is nonsense. Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't. That's what medical science is for. We used our intelligence to figure out antibiotics, for example.

The broadly accepted meaning of AGI is human intelligence on a machine. Redefining it to mean something else does nothing useful.

chrsw · 2 weeks ago
> Mostly because we actually do use our conscious thoughts to fight disease when our immune system can't.

I think in some sense you're right. There's a higher level way to address disease that humans have made progress on.

But in the critical sense more related to the point I was making I completely disagree. The sense I'm speaking of is that we do not (maybe one day we will) directly affect disease states with our brains the way our immune system does. It's a very complicated process that we know works, but we absolutely do not understand all the mechanisms involved like we do with, say, solving a calculus equation.

My point is, if we could do this, our central nervous system would also be the immune system. But it is not, because it operates in an entirely different cognitive space than our conscious brain. There are many examples of this, like regulating your body's blood sugar. We know the endocrine system is doing this, but we are not actively involved in, say, the way we are when speaking to one another. The examples are actually countless and go far beyond what the human cognitive system is currently capable of. AGI, by definition, would have to not only encompass intelligence in all these different cognitive spaces but also encompass intelligence in any arbitrary future space.

> The broadly accepted meaning of AGI is human intelligence on a machine.

Then the name is inaccurate to the point of deception. You've just described artificial human intelligence, not artificial general intelligence.

Human intelligence is general intelligence.

Just because we don't have direct conscious control over our white blood cells or pancreas does not mean we don't have general intelligence. We may not control them, but we have the ability to figure out how they work. Our intelligence is general in the sense that we can understand body functions, or invent calculus, or develop relativity, or any unlistable number of other things. If a machine can do that - understand arbitrary concepts, even if it's at or below human abilities - we'd have AGI. Instead with LLMs we have a tool that can mimic understanding, but when you go a bit deeper it's clear that understanding is not present. Not to say LLMs don't have uses, they obviously do, but they are not intelligent.

chrsw · 2 weeks ago
I agree with you LLMs are not intelligent.

But in this world, in this universe, there are lots of problems to solve. Humans understand these problems more than any other organism or machine we know of, but we are not general. The most we can say is we are the most general. There are far too many problem domains that are beyond the capability of humans to solve to call human intelligence general intelligence. The pancreas _does_ have direct control over its problem domain. That makes it a specific form of cognition. Maybe one day we will have that too. So does that mean when that day comes we are more general? I believe it does.

I like Francois Chollet's definition of intelligence: efficiency of skill acquisition, not demonstration of skill. I don't know if I should attribute that to him, but for me he is the first person I heard put it so succinctly. Using that definition, there is currently no known learning architecture that can acquire _any_ arbitrary skill efficiently. You and I do not currently have the ability to acquire the skill to consciously regulate bodily functions. It is a form of cognition that doesn't map to anything we're aware of. Understanding how it works in principle doesn't mean you have the skill to perform it.

keeda · 2 weeks ago
> There are far too many problem domains that are beyond the capability of humans to solve to call human intelligence general intelligence.

I'm curious about this. Is that not simply a limitation of our current knowledge base? That is, as we figure out more about reality, we will eventually conquer those domains as well.

Or do you mean there are domains that are provably beyond the structural capability of our brains? For instance, abstract things like higher-dimensional geometry or number theory which is hard for people to visualize "natively" in their brains. Yet people regularly solve problems in those types of fields. Sure, we rely on tools like computers or pen and paper, but we do solve those problems.

Similarly, take your point about the pancreas: sure, our brains cannot do the things it does, but that is simply due to a lack of the requisite "actuators" connecting our brains to the organ, an artifact of our evolution. But we do understand a lot of the biological mechanisms involved in its operation, enough to treat related problems, again through "tools" like medication and surgery. As we learn more about how they work, that increasingly becomes a problem domain our brains are "capable of solving."

As such, I don't see how these examples show that the brain is not "generally intelligent", unless you exclude tool use, which to me seems like incorrectly conflating cognition and action.

chrsw · 2 weeks ago
I'm leaning "no" and "yes, for what we know so far" to your questions.

I think tools are fine depending on how you define a tool and its relationship to human intelligence. If we build an artificial pancreas that learns about the body it's sitting in and is able to function just as well as a natural pancreas, would we say humans solved this problem? In some sense, because humans built the machine. But not in another sense, because humans are not the machine. Just like we say "AlphaZero beat the world's best human chess player". We don't usually say "the humans who designed AlphaZero beat the world's best chess player". Did the humans who designed AlphaZero master chess? Not necessarily.

My argument is that there are examples of cognition that we know the human brain currently doesn't operate in. Are these learnable by the human brain? We don't know yet so we can't say the human brain is completely general.

As I type this and as you read this, our bodies are constantly rebuilding themselves. Maybe if you connect the appropriate actuators we can do the same thing with our conscious minds? I highly doubt it due to the nature of the problem and how inefficient the brain would be in solving this sort of problem. It likely wouldn't work, but it's too far for me to say "provably beyond the structural capability of our brains". I'll just say "I'm very pleased I don't have to do that right now". Developing a living organism, either starting from a single cell or from a fully grown adult, in real time in the real world is a very difficult problem to solve, and the human brain would be terribly inefficient at it.

These are just my opinions.

Control does not equal understanding. The pancreas may control its problem domain, but that's not cognition any more than a circuit breaker thinks about cutting off power when the current gets too high, or a motion sensor thinks about opening a door when a person comes close. These are examples of control, not intelligence. We have no direct control over what happens inside the sun, but that doesn't stop us from developing an understanding of how those fusion processes work.

ben_w · 2 weeks ago
> …the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe.

This poster didn't understand this response last time this was raised: https://news.ycombinator.com/item?id=44349818

>>Unless you believe in magic, the human brain proves that human level general intelligence is possible in our physical universe, running on a system based on the laws of said physical universe.

Using your analogy, what this means is that we have to make humans to make human-like intelligence. Not that we can make human-like intelligence outside of humans.

>>Given that, there's no particular reason to think that "what the brain does" OR a reasonably close approximation, can't be done on another "system based on the laws of our physical universe."

What exactly does the brain do? Part of the problem with this is that language itself might be insufficient to describe intelligence. And language might be working a level below our thought. There are occasions where even the best of us fail to come up with how we think. We can get close and it's not enough. A picture is better than a thousand words - why? Perhaps language is enough to display signs of intelligence but can't entirely contain or describe it.

Similarly, even in the case of LLMs we have seen that showing spatial intelligence is a whole lot different from predicting text.

Heck, intelligence might not even be one monolithic thing. It could be a collection of several intelligences. And this whole idea of one grand AGI monolith could be wrong.

pama · 2 weeks ago
I didn't read your draft paper, but your premise on HN sounds a bit off to me. AGI does not assume the ability to find or learn an optimal solution to every problem (with that assumption it would be trivial to prove it impossible in many different ways). Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI. (If there exist a couple of incompressible problems along the way, you can memorize the human solution.) If you proved AGI impossible under such a (weaker?) definition, you would prove that humans can no longer improve in any domain (as the set of all humans is a general intelligence). Or you would need to assume that there is something special inside humans, which no technology can ever build. I disagree with both premises.
> Independent of the exact definition, a system of intelligence that is better or equal to the best human in any domain would be at least termed AGI.

Exactly. There's this "thing" you see in certain circles, where people (intentionally?) misinterpret the "G" in AGI as meaning "the most general possible intelligence". But that's not the reality. AGI has pretty much always been taken to mean "AI that is approximately human level". Going beyond that is getting into the realm of Artificial Super Intelligence, or Universal Artificial Intelligence.

orwin · 2 weeks ago
What I remember is that a lot of people used the word 'AI', others (including me) said 'that's not intelligence, it's too specific', and poof, a new word came to replace the word AI, 'AGI', to mean an AI that can adapt to new, unforeseen situations.

The LLM that will convince me that AGI is near is one that will understand language enough to find the linguistic rules of conlangs they weren't trained on (or more specifically, engineered languages made by linguists with our current knowledge of how languages work), and create grammatically correct sentences. Something someone trained on can do with great effort, but it's more due to breaking habits and limited brainpower than real complexity.

I think that OP's conclusion may be true in a not very meaningful sense: once a particular non-trivial threshold of competence is defined for every task (infinitely many), then any policy must be bad at some of them.
mkl · 2 weeks ago
The physics of our brains can in principle be simulated at a subatomic quantum level mathematically, even on a classical computer. It would be absurdly expensive and slow with current technology, but it is mathematically possible. Therefore our own generally intelligent brains can be considered a counterexample.

I think for your theory to hold up, you would need to show that physics cannot, even in principle, be simulated mathematically at sufficient scale (the number of interacting subatomic particles). That would be surprising.

At the moment it seems like your results contradict reality, meaning your starting assumptions cannot all be true.

And even if OP could show that physics couldn’t be simulated, it still wouldn’t follow that AGI was impossible, or even that it couldn’t be achieved by approximating the simulation that was proved to be impossible to do accurately.

AGI is clearly possible, because our brains are fundamentally machines, and there’s no reason in principle why we couldn’t build something similar. Right now we don’t - as human beings - have the ability to do that, but it clearly isn’t impossible since cellular machinery is able to build it in the first place.

I tried to understand your paper, but could not.

Then I understood why not. Your paper proves that I am unable to understand your paper. It also proves that you are unable to understand your paper.

Okay, read the abstract and Intro. Recently, in the paper

"What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models"

your thesis of AI's lack of capacity to abstract, or at least extract understanding from noisy data, was largely experimentally confirmed. I am uncertain though about the exact mechanics because, as they used LLMs, it's not transparent what happened internally that led to the constant failure to abstract the concept despite ample predictive power. One interesting experiment was the introduction of the Oracle, which literally enabled the LLM to solve the task that was previously impossible without the oracle, which means at least that it's possible that LLMs can reconstruct known rules. They just can't find new ones.

On a more fundamental level, I am not so sure why these experiments and mathematical proofs are still being made, since Judea Pearl already established about seven years ago in "Theoretical Impediments to Machine Learning" that all correlation-based methods are doomed as they fail to understand anything. His point about causality is well placed, but will not solve the problem either.

The question I have though: if we ignore all existing methods for one moment, then what makes you so sure that AGI is really mathematically impossible? Suppose some advancement in quantum computing would allow us to reconstruct incomplete information; does your assertion still hold true?

https://arxiv.org/abs/2507.06952 https://arxiv.org/abs/1801.04016

How is your brain doing it then?
> How is your brain doing it then?

Quantum entanglement?:

https://www.popularmechanics.com/science/a65368553/quantum-e...

And we’re not mathematically impossible, unless that’s some new philosophical theory: “If human intelligence is mathematically impossible, and yet it exists, then mathematics is fallible, and by inductive reasoning logic is fallible, and I can prove things with inductive reasoning, because piss off.”

Quantum entanglement is not magic that lets you bypass mathematics. Anything that's true with it is also true without it.
> then mathematics is fallible

Wasn't this essentially the conclusion of Gödel? Math, based on a set of axioms, will either have to accept that there are things that are true but can't be proven, or that there are proofs that aren't true.

Even if the brain were using quantum voodoo (and it's not: it's too messy and too hot), a machine could use the same techniques to implement AGI.
I could believe we're not generally intelligent.
Then your definition of general intelligence is useless.
Being accurate is hardly useless. Individual people are worse at some tasks than others beyond what past training alone would produce.
My brain isn’t artificial. I hope.
Artificial is the distinction between biologically made and made by humans; does that mathematically matter?
Wouldn’t it be possible that not all brains can do it all, but some can specialize in certain problems. But when combined with everyone else’s we can approach general intelligence?
The "A" stands for "Artificial" in contrast to what our brains do.
If you want to argue for a fundamentally non-material mind (i.e., that human cognition happens in a physically impossible, spiritual plane), then cool. Though you might want to give some consideration to how much it seems like physical processes in the brain can demonstrably affect cognition.

If you aren't arguing for a non-materialist position, then the distinction between "artificial" and human intelligence isn't meaningful. A powerful enough computer could simulate the material processes in your brain. If as the OP claims it is mathematically impossible for a computer to generate intelligence, no matter how powerful that computer, then it is impossible for your brain to do so (via material processes).

There's no reason to assume our current pursuit is not a dead end for any number of reasons we do not yet understand. There is a lot of faith we are capturing the same thing based on perceptions, which have a lot to do with the individual observer. It seems very important to some folks that our natural process is a mirror of what our technology does. The same result does not mean it is the same process or anything other than a mirage -- though one that may trick a lot of us.

Every generation tries to map its most complex technology onto its understanding of nature. "AGI" has a specific meaning today, but if you want it to mean atheism versus theism or whatever materialist argument, you're far outside of science and technology. Like our fathers of the Enlightenment with their watchmaker god. The idea there is some way for humans to break free of nature seems like a religious belief to me, but whether you agree or not, certainly there is room for doubting that faith, since we're outside the realm of what science can explore.

"Current LLMs are not going to get to AGI" is a different and much weaker claim than "AGI is mathematically impossible."
I was responding to the claim that an observer bound by a system may understand and replicate all phenomena within that system. It's quite a bold claim which has already exited the bounds of science, IMHO. That you're using the language of religion and philosophy is the point.
Nope, try again. "You can't possibly learn enough about a system to simulate it" is still a different and weaker claim than "It is mathematically impossible to compute this thing that is being computed."

But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.

You're arguing with the article, not me. I replied to:

> How is your brain doing it then?

Do you have an answer? What indication do we have that any AGI we would create would have to follow the same process to achieve the result? Can humans recreate all phenomena observed in the universe? You're arguing yes to all of these then? I'd love to read more of that argument. I don't care about this proof though. I don't think I've indicated I think AGI is impossible. I care far more about why someone would be convinced it must be possible for humans to recreate it in the exact same manner as the brain, which this commenter and you seem to think. I know humans have not shown we can fully model our observations cohesively.

> But also: if general intelligence were computable, but it was not possible to learn how to make the computer that can compute it, then you've disproved evolution.

record scratch What? Did I agree to all of these premises? Do you have some backing for the three or four assumptions you've made in this sentence? You still need to show humans not only could be but are capable of replicating the system. I am asking you for some argument out there that says everything we observe in nature humans can replicate in the exact same way it occurs. That's a much stronger statement than such intelligence exists. You can just link a book. I am not sold on one way or the other, but you seem very confident. Is there some argument I can read? To me, our models in physics point to a fragmented and contradictory understanding of our world to get results. But yes, results are results, but that doesn't mean we are doing anything but modeling -- but can we model everything? Is that the implication of evolution?

We seem to be wandering into capital-S Science vs. science, and I'm not really into religious discussions here. I would love to understand why you seem to think I'm so dimwitted as to dismiss with an edge, when all of this stems from a glib reply to a glib reply that I am no less convinced is in fact glib and fatuous. (And that original comment was not yours, lest you feel insulted in the same way you have insulted me.)

That's kind of a distinction with no distinction though, in this context. Our brains are physical machines, and computers are physical machines. Sure, one is wetware and based on chemistry, biology, and some electricity, while the other is based on electricity, logic gates, and bits and bytes, but still... if one can be intelligent, there doesn't seem to be any particular reason to think that the other can't as well.
Oh certainly there is: why assume we will ever be able to fully replicate the natural process? We may very well be bound here by our own bodies.
"artificial" literally means "man-made". It does not imply anything about how the created thing works, and whether or not it is different from the corresponding natural equivalent.
I am replying to "How does your brain do it then?" So I guess we agree. You should take it up with OP.
OP's point is that given that the brain does it, and given that it is possible in principle to simulate the brain (even if we don't know how yet), the result of such a simulation would necessarily be "AGI", disproving the original claim that "AGI is mathematically impossible".
Doing artificial?
The brain still exists in the same universe, running on the same mathematical laws. The A adds no constraints that can make anything impossible. There is nothing the brain is doing that we cannot also replicate; we just cannot get to the same scale yet.
Scale has nothing to do with it. We cannot replicate it, nor do we fully understand it. This is a fact, agreed upon by neurologists, physicists, and philosophers.
baq · 2 weeks ago
It follows that it doesn’t.

In practical terms, the result doesn't matter. The race to approximate the human thought process and call it AGI (which is what matters economically) is on. If you can approximate it meaningfully faster than the real brain works in meatspace, you are winning. What it will mean for humanity or civilization is an open question.

> The race to approximate the human thought process and call it AGI (which is what matters economically) is on

Maybe I'm just being pedantic, but I'd argue that there's no particular reason to say that AGI involves "approximating the human thought process". That is, what matters is the result, not the process. If one can find another way to "get there" in a completely different manner than the human mind, then great.

That said, obviously there is some appeal to the "mimic human thought" approach since human thought is currently an existence proof that the kind of intelligence we are talking about is possible at all and mimicking that does seem like an obvious path to try.

Veen · 2 weeks ago
That's one possible inference. However, it would also be consistent to claim that there is a fundamentally uncomputable and impossible-to-artificially-replicate "mechanism" underlying human intelligence.
baq · 2 weeks ago
My inner philosopher agrees. My inner engineer doesn’t care; a good enough approximation will suffice.
I don't think it even needs to be faster: if you can make an artificial brain that's useful even if it's 100x slower, you can always run stuff in parallel.
> you can always run stuff in parallel.

Even that isn't needed. A "general intelligence" separable from ethics and rights is valuable in itself. It's valuable to subjugate, as long as the subjugated object is producing more than they are consuming.

It is already faster because you don't need to wait 12 years of K-12 and 4 years of college before it can produce meaningful answers.
> This is cognition at its weirdest: solving problems somewhat by accident, finding answers in the wrong place, connecting dots that aren’t even in the same picture.

If you solve a problem "by accident", well, there are very many people who make foolish decisions daily because they do not think, and some of those pan out too and lead to understanding. A resource-bounded agent can also maintain a notion of fuel and give a random answer when it has exhausted its fuel.
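
To make the "fuel" idea concrete, here is a toy Python sketch of such a resource-bounded agent (the function names and the problem/options structure are invented purely for illustration):

    import random

    def answer_with_fuel(problem, reasoning_steps, fuel=1000):
        """Toy resource-bounded agent: deliberate while fuel lasts,
        then fall back to a random (possibly 'accidental') answer."""
        for step in reasoning_steps:          # each step: problem -> (cost, answer or None)
            cost, answer = step(problem)
            fuel -= cost
            if answer is not None:
                return answer                 # solved deliberately
            if fuel <= 0:
                break                         # budget exhausted: stop deliberating
        return random.choice(problem["options"])  # bounded guess instead of looping forever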

The structural incompleteness mentioned isn't really meaningful. Humans have not demonstrated the capacity to make epsilon-optimal decisions on an infinite number of tasks, since we do not do an infinite number of tasks anyway.

K-complexity and resource-bounded K-complexity are indeed extremely useful tools to talk about generalization, I'd agree, but I think the author has misunderstood the limits that K-complexity places on generalization.

At first I was thinking, let's see if an argument is made that is not applicable to GI, whether artificial or not, and if not, why even mention AI at all?

Then I started to read the paper, and it's worse.

Every one of his 'examples' would not just be 'solved' by any existing LLM; even a 'dumb' system that just spits out a random sentence to any question would pass his first 2 'tests' with flying colors. I'm not kidding, he accepts "Leave the classroom and stop confusing everybody with your senseless questions" as a good solution.

In fact, the only system that would fail is this hypothetical AI he imagines that somehow gets into infinitely analyzing loops.

Then his 3rd test, an investment decision, gives the same outcome as his own up until the point he draws in extra information not available to the AI, after which he flips his 'answer', which he then labels as 'correct', and labels the previous answer based on the original info as 'false' because he made some money on the bet a few weeks later. Seriously?

Feeding your papers to any SOTA LLM is a quick way to expose all the logical holes and omissions in them.

I would politely suggest that until you do that and then come up with a convincing rebuttal for every point they make that is not self-evidently wrong, you shouldn't be wasting humans' time.

The human brain does not have perfect memory. It is not always logical. And more often than not it is motivated and influenced by "external" forces - health, hunger, sex drive, environmental conditions, luck, spiritual inspiration, or whatever. The perfect worker is purely logical and has perfect memory and no external influences - never gets hungry or sick or wants to be the boss themselves. The AI race is funded by folks interested in creating the perfect worker, not a human. I have to agree with the conclusions of this paper that they won't be able to make humans. (But they don't really want to.) The Vatican has also published interesting works on this idea. The question is - if you take out everything that makes it human, can you call it intelligent?
Probably every intelligence has its limits, as every system (e.g. mathematics, remembering Gödel) has its own. This kind of AGI seems like a deity, hard to believe it's possible, but in practice many kinds of "smaller" intelligences exist (from ants to primates), less "general" but general enough to solve enough problems to live and evolve, and maybe they can even be created by other intelligences. IMHO it's reasonable to think of real intelligence as a property of complex evolving systems interacting with a complex environment, so to live in a complex world a not-so-general intelligence can be enough, even given some limits and errors.
Hey all, apologies for the delayed response. I was on a flight, then had guests, then had to make some rapid decisions involving actual real-world complexity (the kind that is not easily tokenized).

I’ve now had time to read through the thread properly, and I appreciate the range of engagement—even the sharp-edged stuff. Below, I’ve gathered a set of structured responses to the main critique clusters that came up.

1. On “The brain obeys physics, physics is computable—so AGI must be possible”

This is the classical foundational syllogism of computationalism. In short:

   1. The brain obeys the laws of physics.
   2. The laws of physics are (in principle) computable.
   3. Therefore, the brain is computable.
   4. Therefore, human-level general intelligence is computable, and AGI is inevitable and a question of time, power and compute.

This seems elegant, tidy, logically sound. And yet it is patently false at step 3. This common mistake is not technical, but categorical: simulating a system's physical behavior is not the same as instantiating its cognitive function.

The flaw is in the logic: it's nothing less than a category error. The logic breaks exactly where category boundaries are crossed without checking if the concept still applies. That is by no means inference; it is mere wishful thinking in formalwear. It happens when you confuse simulating a system with being the system. It's in the jump from simulation to instantiation.

Yes, we can simulate water. -> No, the simulation isn’t wet.

Yes, I can "simulate" a fridge. -> But if I put a beer in it myself, and the beer doesn't come out cold after some time, then what we've built is a metaphor with a user interface, not a cognitive peer.

And yes: we can simulate Einstein discovering special relativity. -> But only after he’s already done it. We can tokenize the insight, replay the math, even predict the citation graph. But that’s not general intelligence, that’s a historical reenactment, starring a transformer with a good memory.

Einstein didn’t run inference over a well-formed symbol set. He changed the set, reframed the problem from within the ambiguity. And that is not algorithmic recursion, is it? Nope… That’s cognition at the edge of structure.

If your model can only simulate the answer after history has solved it, then congratulations: you’ve built a cognitive historian, not a general intelligence.

6. On “This is just a critique of current models—not AGI itself”

No.

This isn’t about GPT-4, or Claude, or whatever model’s in vogue this quarter. Neither is it about architecture. It’s about what no symbolic system can do—ever.

If your system is: a) finite, b) bounded by symbols, c) built on recursive closure

…it breaks down where things get fuzzy: where context drifts, where the problem keeps changing, where you have to act before you even know what the frame is.

That’s not a tuning issue, that IS the boundary. (And we’re already seeing it.)

In The Illusion of Thinking (Shojaee et al., 2025, Apple), they found that as task complexity rises:

- LLMs try less
- Answers get shorter, shallower
- Recursive tasks—like the Tower of Hanoi—just fall apart
- etc.

That’s IOpenER in the wild:Information Opens. Entropy Rises. The theory predicts the divergence, and the models are confirming it—one hallucination at a time.

5. On “Kolmogorov and Chaitin are misused”

It’s a fair concern.Chaitin does get thrown around too easily — usually in discussions that don’t need him.

But that’s not what’s happening here.

– Kolmogorov shows that most strings are incompressible (see the counting sketch below).
– Chaitin shows that even if you find the simplest representation, you can't prove it's minimal.
– So any system that "discovers" a concept has no way of knowing it's found something reusable.
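
A quick sketch of the standard counting argument behind the first point (nothing specific to the paper, just the usual bound, in LaTeX notation):

    \#\{\,p : \ell(p) < n\,\} \;\le\; \sum_{k=0}^{n-1} 2^k \;=\; 2^n - 1 \;<\; 2^n \;=\; \#\,\{0,1\}^n,
    \qquad\text{hence}\qquad
    \Pr_{x \in \{0,1\}^n}\big[\,K(x) < n - c\,\big] \;<\; 2^{-c}.

So for almost every string, the shortest description of the string is (essentially) the string itself.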

That’s the issue. Without confirmation, generalization turns into guesswork. And in high-K environments — open-ended, unstable ones — that guesswork becomes noise. No poetic metaphor about the mystery of meaning here. It’s a formal point about the limits of abstraction recognition under complexity.

So no, it’s not a misuse. It’s just the part of the theory that gets quietly ignored because it doesn’t deliver the outcome people are hoping for.

4. On “This is just the No Free Lunch Theorem again”

Well … not quite. The No Free Lunch theorem says no optimizer is universally better across all functions. That’s an averaging result.

But this paper is not at all about average-case optimization. It's about specific classes of problems—social ambiguity, paradigm shifts, semantic recursion—where:

a) the tail exponent satisfies α ≤ 1 -> no mean exists,
b) the Kolmogorov complexity is incompressible, and
c) the symbol space lacks the needed abstraction.

In these spaces, learning collapses not due to lack of training, but due to structural divergence. Entropy grows with depth. More data doesn’t help. It makes it worse.
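
For point (a), the divergence is the standard heavy-tail fact (a generic textbook bound, not something specific to the paper): for a nonnegative variable with a Pareto-type tail,

    P(X > x) \sim C\,x^{-\alpha}
    \quad\Longrightarrow\quad
    \mathbb{E}[X] \;=\; \int_0^{\infty} P(X > x)\,dx
    \;\gtrsim\; \int_{x_0}^{\infty} C\,x^{-\alpha}\,dx \;=\; \infty
    \quad \text{for } \alpha \le 1,

so no finite mean exists and sample averages never settle down.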

That is what “IOpenER” means: Information Opens, Entropy Rises.

It is NOT a theorem about COST… rather a claim about the structure of meaning. What exactly is so hard to understand about this?

3. On “He redefines AGI to make his result inevitable”

Sure. I redefined AGI. By using… …the definition from OpenAI, DeepMind, Anthropic, IBM, Goertzel, and Hutter.

So unless those are now fringe newsletters, the definition stands:

- A general-purpose system that autonomously solves a wide range of human-level problems, with competence equivalent to or greater than human performance -

If that’s the target, the contradiction is structural: No symbolic system can operate stably in the kinds of semantic drift, ambiguity, or frame collapse that general intelligence actually requires. So if you think I smuggled in a trap, check your own luggage because the industry packed it for me.

2. On “This is just philosophy with no testability”

Yes, the paper is also philosophical. But not in the hand-wavy, incense-burning sense that’s being implied. It makes a formal claim, in the tradition of Gödel, Rice, and Chaitin: Certain classes of problems are structurally undecidable by any algorithmic system.

You don’t need empirical falsification to verify this. You need mathematical framing. Period.

Just as the halting problem isn’t “testable” but still defines what computers can and can’t do, the Infinite Choice Barrier defines what intelligent systems cannot infer within finite symbolic closure.

These are not performance limitations. They are limits of principle.

And finally 7. On “But humans are finite too—so why not replicable?”

Yes. Humans are finite. But we're not symbol-bound, and we don't wait for the frame to stabilize before we act. We move while the structure is still breaking, speak while meaning is still assembling, and decide before we understand—then change what we were deciding halfway through.

NOT because we’re magic. Simply because we’re not built like your architecture (and if you think everything outside your architecture is magic, well…)

If your system needs everything cleanly defined, fully mapped, and symbolically closed before it can take a step, and mine doesn’t— then no, they’re not the same kind of thing.

Maybe this isn’t about scaling up? … Well, it isn’t It’s about the fact that you can’t emulate improvisation with a bigger spreadsheet. We don’t generalize because we have all the data. We generalize because we tolerate not knowing—and still move.

But hey, sure, keep training. Maybe frame-jumping will spontaneously emerge around parameter 900 billion.

Let me know how that goes.

calf · 2 weeks ago
I'll bite; there was a Kurt Jaimungal interview yesterday explaining that the Navier-Stokes fluid equations are not only unpredictable (chaotic), but also uncomputable (in the Turing sense), if I recall it correctly.

But I take that to mean there's no general, universal algorithm to tell us anything we want to know. But that's not what intelligence is, we're not defining some kind of absolute intelligence like an oracle for the halting problem. That definition would be a category error.

I think that any paper that argues something is impossible is fundamentally flawed, particularly when there are examples of it being possible.

Also, what's the point of telling others you believe what they are doing is impossible, especially after the results we are seeing even at the free-tier, open-to-the-public services?

You might want to check out the works of that buzzkill that Gödel is ^^

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...

" The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e. an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

The second incompleteness theorem, an extension of the first, shows that the system cannot demonstrate its own consistency. "

:3

> You might want to check out the works of that buzzkill that Gödel is ^^

Please explain why you believe this is relevant to the points I've made.

It argues for "impossibilities" and also proves them.
I think you need to read it again.
Gödel wrote his theorem to test David Hilbert's endeavor, Logic and the Foundation of Mathematics[0], to unify mathematics. Gödel proved that it is impossible to do.

But you may have a different version of history.

[0] https://www.famousscientists.org/david-hilbert/

Veen · 2 weeks ago
What examples are there of the possibility of artificial general intelligence?
anthk · 2 weeks ago
Consciousness = intrinsic information evaluating itself.

Like eval/apply under Lisp. Or Forth.

My pet theory is that AGI is not possible until we have real quantum computing.

· 2 weeks ago
There is a thing called quantum computing. So nope.