This is no longer true: a prior solution has just been found [1], so the LLM proof has been moved to Section 2 of Terence Tao's wiki [2].
[1] - https://www.erdosproblems.com/forum/thread/281#post-3325
[2] - https://github.com/teorth/erdosproblems/wiki/AI-contribution...
And it's even odder that the proof was by Erdős himself, and yet he listed it as an open problem!
A carbon copy would mean overfitting.
It really contextualizes the old wisdom of Pythagoras that everything can be represented as numbers / math is the ultimate truth
They create concepts in latent space, which is basically compression, and that compression forces this.
I know that at least some LLM products explicitly check output for similarity to training data to prevent direct reproduction.
The infeasibility is searching for the (unknown) set of translations that the LLM would put that data through. Even if you posit that the weights are just basic symbolic LUT mappings (they aren't), there's no good way to enumerate them anyway. The model might as well be a learned hash function that maintains semantic identity while utterly eradicating literal symbolic equivalence.
A lot of pure mathematics seems to consist in solving neat logic puzzles without any intrinsic importance. Recreational puzzles for very intelligent people. Or LLMs.
Just because we can't imagine applications today doesn't mean there won't be applications in the future which depend on discoveries that are made today.
My favorite example is number theory. Before cryptography came along it was pure math, an esoteric branch just for number nerds. Turns out it was super applicable later on.
Among others.
Of course you never know which math concept will turn out to be physically useful, but clearly enough do that it's worth buying conceptual lottery tickets with the rest.
Don't be so ignorant. A few years ago NO ONE could have come up with something as generic as an LLM, which helps you solve these kinds of problems and also writes text adventures and Java code.
Evidence shows otherwise: Despite the "20x" length, many people actually missed the point.
I agree brevity is always preferred. Making a good point while keeping it brief is much harder than rambling on.
But length is just a measure, quality determines if I keep reading. If a comment is too long, I won’t finish reading it. If I kept reading, it wasn’t too long.
Vs
> Interesting that in Terence Tao's words: "though the new proof is still rather different from the literature proof)"
I guess this is the end of the human internet
"Glorified Google search with worse footnotes" what on earth does that mean?
AI has a distinct feel to it
For better or worse, I think we might have to settle on “human-written until proven otherwise”, if we don’t want to throw “assume positive intent” out the window entirely on this site.
It wasn't AI generated. But if it was, there is currently no way for anyone to tell the difference.
This is false. There are many human-legible signs, and there do exist fairly reliable AI detection services (like Pangram).
You're lying: https://www.pangram.com/history/94678f26-4898-496f-9559-8c4c...
Not that I needed pangram to tell me that, it's obvious slop.
(edit: fixed link)
I'm pretty sure it's like "can it run DOOM": someone could make an LLM that passes this and runs on a pregnancy test.
All of that is going away so the best way to deal with it is to call it a stochastic parrot and deny reality.
LLMs will continue to get slightly better in the next few years, but mainly a lot more efficient, which will also mean better and better local models. And grounding might get better, but that just means fewer wrong answers, not better right answers.
So no need for doomerism. The people saying LLMs are a few years away from eating the world are either in on the con or unaware.
The only possible explanation is people say things they don't believe out of FUD. Literally the only one.
EDIT: After reading a link someone else posted to Terence Tao's wiki page, he has a paragraph that somewhat answers this question:
> Erdős problems vary widely in difficulty (by several orders of magnitude), with a core of very interesting, but extremely difficult problems at one end of the spectrum, and a "long tail" of under-explored problems at the other, many of which are "low hanging fruit" that are very suitable for being attacked by current AI tools. Unfortunately, it is hard to tell in advance which category a given problem falls into, short of an expert literature review. (However, if an Erdős problem is only stated once in the literature, and there is scant record of any followup work on the problem, this suggests that the problem may be of the second category.)
from here: https://github.com/teorth/erdosproblems/wiki/AI-contribution...
The problems are a pretty good metric for AI, because the easiest ones at least meet the bar of "a top mathematician didn't know how to solve this off the top of his head" and the hardest ones are major open problems. As AI progresses, we will see it slowly climb the difficulty ladder.
"Very nice! ... actually the thing that impresses me more than the proof method is the avoidance of errors, such as making mistakes with interchanges of limits or quantifiers (which is the main pitfall to avoid here). Previous generations of LLMs would almost certainly have fumbled these delicate issues.
...
I am going ahead and placing this result on the wiki as a Section 1 result (perhaps the most unambiguous instance of such, to date)"
The pace of change in math is going to be something to watch closely. Many minor theorems will fall. Next major milestone: Can LLMs generate useful abstractions?
"On following the references, it seems that the result in fact follows (after applying Rogers' theorem) from a 1936 paper of Davenport and Erdos (!), which proves the second result you mention. ... In the meantime, I am moving this problem to Section 2 on the wiki (though the new proof is still rather different from the literature proof)."
Case in point: I just wanted to give z.ai a try and buy some credits. I used Firefox with uBlock and the payment didn't go through. I tried again with Chrome and no adblock, but now there is an error: "Payment Failed: p.confirmCardPayment is not a function." The irony is that this is certainly vibe-coded with z.ai, which tries to sell me on how good they are but then can't close the sale.
And we will get lots more of this in the future. LLMs are a fantastic new technology, but even more fantastically over-hyped.
The answer is yes. Assume, for the sake of contradiction, that there exists an \(\epsilon > 0\) such that for every \(k\), there exists a choice of congruence classes \(a_1^{(k)}, \dots, a_k^{(k)}\) for which the set of integers not covered by the first \(k\) congruences has density at least \(\epsilon\).
For each \(k\), let \(F_k\) be the set of all infinite sequences of residues \((a_i)_{i=1}^\infty\) such that the uncovered set from the first \(k\) congruences has density at least \(\epsilon\). Each \(F_k\) is nonempty (by assumption) and closed in the product topology (since it depends only on the first \(k\) coordinates). Moreover, \(F_{k+1} \subseteq F_k\) because adding a congruence can only reduce the uncovered set. By the compactness of the product of finite sets, \(\bigcap_{k \ge 1} F_k\) is nonempty.
Choose an infinite sequence \((a_i) \in \bigcap_{k \ge 1} F_k\). For this sequence, let \(U_k\) be the set of integers not covered by the first \(k\) congruences, and let \(d_k\) be the density of \(U_k\). Then \(d_k \ge \epsilon\) for all \(k\). Since \(U_{k+1} \subseteq U_k\), the sets \(U_k\) are decreasing and periodic, and their intersection \(U = \bigcap_{k \ge 1} U_k\) has density \(d = \lim_{k \to \infty} d_k \ge \epsilon\). However, by hypothesis, for any choice of residues, the uncovered set has density \(0\), a contradiction.
Therefore, for every \(\epsilon > 0\), there exists a \(k\) such that for every choice of congruence classes \(a_i\), the density of integers not covered by the first \(k\) congruences is less than \(\epsilon\).
\boxed{\text{Yes}}
You could have just rubber-stamped it yourself, for all the mathematical rigor it holds. The devil is in the details, and the smallest problem unravels the whole proof.
Is this enough? Let $U_k$ be the set of integers whose remainder mod 6^n is greater than or equal to 2^n for all 1<n<k. The density of each $U_k$ is more than 1/2, I think, but not the density of the intersection (which is empty), right?
This would all be a fairly trivial exercise in diagonalization if such a lemma as the one implied by DeepSeek existed.
(Edit: The bounding I suggested may not be precise at each level, but it is asymptotically the limit of the sequence of densities, so up to some epsilon it demonstrates the desired counterexample.)
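For concreteness, here is a purely illustrative brute-force check of that density claim (the `density` helper is just made up for this sketch): counting residues over one full period 6^(k-1) for the first k-1 conditions, the exact densities stay above 5/6 (by a union bound, at most the sum over n >= 2 of (1/3)^n = 1/6 of residues is ever removed), supporting the "more than 1/2" claim, even though, at least over the nonnegative integers, every individual x eventually fails the condition once 2^n exceeds it, so the intersection is empty.

    # Illustrative brute-force check: U_k = {x : x mod 6^n >= 2^n for all 1 < n < k}.
    # One full period for the first k-1 conditions is 6^(k-1), so exact densities
    # can be computed by counting residues in that range.
    def density(k):
        period = 6 ** (k - 1)
        kept = sum(
            1 for x in range(period)
            if all(x % 6 ** n >= 2 ** n for n in range(2, k))
        )
        return kept / period

    for k in range(3, 8):
        print(k, round(density(k), 4))   # stays above 5/6, never approaching 0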
On the contrary: for DeepSeek you could, but not for a non-open model.
It says that the OpenAI proof is a different one from the published one in the literature.
Whereas I don't know enough of the math to judge whether the DeepSeek proof is the same as the published one.
That was what I meant.
I’m not sure what this proves. I dumped a question into ChatGPT 5.2 and it produced a correct response after almost an hour [2]?
Okay? Is it repeatable? Why did it come up with this solution? How did it come up with the connections in its reasoning? I get that it looks correct and Tao’s approval definitely lends credibility that it is a valid solution, but what exactly is it that we’ve established here? That the corpus that ChatGPT 5.2 was trained on is better tuned for pure math?
I’m just confused what one is supposed to take away from this.
[1] https://news.ycombinator.com/item?id=46560445
[2] https://chatgpt.com/share/696ac45b-70d8-8003-9ca4-320151e081...
I've "solved" many math problems with LLMs, with LLMs giving full confidence in subtly or significantly incorrect solutions.
I'm very curious here. The OpenAI memory orders and the claims about capacity limits restricting access to better models are interesting too.
Personally, I've been applying them to hard OCR problems. Many varied languages concurrently, wildly varying page structure, and poor scan quality; my dataset has all of these things. The models take 30 minutes a page, but the accuracy is basically 100% (it'll still struggle with perfectly placed bits of mold). The next best model (Google's flagship) rests closer to 80%.
I'll be VERY intrigued to see what the next 2, 5, 10 years do to the price of this level of model.
One wonders if some professional mathematicians are instead choosing to publish LLM proofs without attribution for career purposes.
"This LLM is kinda dumb in the thing I'm an expert in"
Anecdotally, I, as a math postdoc, think that GPT 5.2 is qualitatively much stronger than anything else I've used. Its rate of hallucinations is low enough that I don't feel the default assumption for any solution is that it is trying to hide a mistake somewhere. Compared with Gemini 3, whose failure mode when it can't solve something is always to pretend it has a solution by "lying"/omitting steps/making up theorems etc., GPT 5.2 usually fails gracefully, and when it makes a mistake it more often than not can admit it when pointed out.
I believe the ones that are NOT studied are unstudied precisely because they are seen as uninteresting. Even if they were to be solved in an interesting way, if nobody sees the proof because there are just too many of them and they are, again, not considered valuable, then I don't see what is gained.
I would love to know which concepts are active in the deeper layers of the model while generating the solution.
Is there a concept of “epsilon” or “delta”?
What are their projections on each other?
But thanks for the downvote in addition to your useless comment.
I'm beginning to think the Bitter Lesson applies to organic intelligence as well, because basic pattern matching can be implemented relatively simply using very basic mathematical operations like multiply and accumulate, and so it can scale with massive parallelization of relatively simple building blocks.
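To make "multiply and accumulate" concrete, here is a toy, purely illustrative sketch: naive 1-D pattern matching by cross-correlation, where every step is one multiply and one add, which is also why it parallelizes so easily.

    # Toy sketch: pattern matching as nothing but multiply-accumulate.
    # Slide the pattern over the signal, accumulate products at each offset,
    # and take the offset with the highest score.
    def match_score(signal, pattern, offset):
        return sum(s * p for s, p in zip(signal[offset:offset + len(pattern)], pattern))

    def best_match(signal, pattern):
        return max(range(len(signal) - len(pattern) + 1),
                   key=lambda off: match_score(signal, pattern, off))

    print(best_match([0, 1, 0, 2, 3, 2, 0, 1], [2, 3, 2]))   # -> 3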
The ability to think about your own thinking over and over as deeply as needed is where all the magic happens. Counterfactual reasoning occurs every time you pop a mental stack frame. By augmenting our stack with external tools (paper, computers, etc.), we can extend this process as far as it needs to go.
LLMs start to look a lot more capable when you put them into recursive loops with feedback from the environment. A trillion tokens worth of "what if..." can be expended without touching a single token in the caller's context. This can happen at every level as many times as needed if we're using proper recursive machinery. The theoretical scaling around this is extremely favorable.
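A minimal sketch of that recursive loop, purely illustrative: `Step`, `llm`, and `environment` are hypothetical placeholders for a real model call and a real tool/test runner, not any particular API.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Step:
        kind: str      # "subtask", "action", or "answer"
        payload: str

    def solve(task: str,
              llm: Callable[[List[str]], Step],
              environment: Callable[[str], str],
              depth: int = 0,
              max_depth: int = 3) -> str:
        context = [task]                      # each stack frame gets a fresh context
        while True:
            step = llm(context)
            if step.kind == "subtask" and depth < max_depth:
                # Recurse: the sub-exploration spends its own tokens; only the
                # result comes back into this frame's context.
                result = solve(step.payload, llm, environment, depth + 1, max_depth)
                context.append(f"subtask result: {result}")
            elif step.kind == "action":
                # Ground the loop with feedback from the environment (tests, REPL, ...).
                context.append(f"feedback: {environment(step.payload)}")
            else:
                return step.payload           # the only tokens the caller ever sees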
But I think the trend line unmistakably points to a future where it can be MORE intelligent than a human in exactly the colloquial way we define "more intelligent"
The fact that one of the greatest mathematicians alive has a page on this and is seriously benchmarking it shows how likely he believes this is to happen.
I.e., you want to find a subimage in a big image, possibly rotated, scaled, tilted, distorted, with noise. You cannot do that with an exact pattern matcher, but you can do it with a fuzzy matcher, such as an LLM.
You want to evaluate a go position on a go board. An LLM is perfect for that, because you don't need to come up with a special language to describe go positions (older chess programs did that); you just train the model on whether a position is good or bad, and this can be fully automated from existing game records and later by playing against itself. You train the matcher not via patterns but via a function (win or lose).
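A toy sketch of that last point (purely illustrative, in PyTorch; the layer sizes and the random stand-in "game records" are made up): train a small value network on board positions labelled only by the eventual outcome, i.e. the "function (win or lose)", rather than on hand-written patterns.

    import torch
    import torch.nn as nn

    # Tiny value network: a 19x19 board with 3 planes (black, white, empty)
    # goes in, an estimated win probability comes out.
    class ValueNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(32 * 19 * 19, 1),
            )

        def forward(self, board):             # board: (batch, 3, 19, 19)
            return torch.sigmoid(self.net(board)).squeeze(-1)

    model = ValueNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    # Stand-in batch of positions labelled by who eventually won (1 = win, 0 = loss).
    boards = torch.randint(0, 2, (16, 3, 19, 19)).float()
    outcomes = torch.randint(0, 2, (16,)).float()

    loss = loss_fn(model(boards), outcomes)   # one supervised training step
    loss.backward()
    optimizer.step()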
We are very close.
(by the way; I like writing code and I still do for fun)
The ability to make money proves you found a good market; it doesn't prove that the new tools are useful to others.
Isn't that a perfectly reasonable metric? The topic has been dominated by hype for at least the past 5 if not 10 years. So when you encounter the latest in a long line of "the future is here the sky is falling" claims, where every past claim to date has been wrong, it's natural to try for yourself, observe a poor result, and report back "nope, just more BS as usual".
If the hyped future does ever arrive then anyone trying for themselves will get a workable result. It will be trivially easy to demonstrate that naysayers are full of shit. That does not currently appear to be the case.
If I release a claim once a month that armageddon will happen next month, and then after 20 years it finally does, are all of my past claims vindicated? Or was I spewing nonsense the entire time? What if my claim was the next big pandemic? The next 9.0 earthquake?
What you are doing, however, is dismissing the outrageous progress in NLP, and by extension code generation, of the last few years just because people overhype it.
People overhyped the Internet in the early 2000s, yet here we are.
I never dismissed the actual verifiable progress that has occurred. I objected specifically to the hype. Are you sure you're arguing with what I actually said as opposed to some position that you've imagined that I hold?
> People overhyped the Internet in the early 2000s, yet here we are.
And? Did you not read the comment you are replying to? If I make wild predictions and they eventually pan out does that vindicate me? Or was I just spewing nonsense and things happened to work out?
"LLMs will replace developers any day now" is such a claim. If it happens a month from now then you can say you were correct. If it doesn't then it was just hype and everyone forgets about it. Rinse and repeat once every few months and you have the current situation.
When someone says something to the effect of "LLMs are on the verge of replacing developers any day now" it is perfectly reasonable to respond "I tried it and it came up with crap". If we were actually near that point you wouldn't have gotten crap back when you tried it for yourself.
Coding was never the hard part of software development.
> the best way to find a previous proof of a seemingly open problem on the internet is not to ask for it; it's to post a new proof
https://mehmetmars7.github.io/Erdosproblems-llm-hunter/probl...
https://chatgpt.com/share/696ac45b-70d8-8003-9ca4-320151e081...