https://www.quantamagazine.org/sensational-proof-delivers-ne...
Terence Tao also provides links to a presentation by James Maynard and Larry Guth: https://www.ias.edu/video/new-bounds-large-values-dirichlet-... and https://www.ias.edu/video/new-bounds-large-values-dirichlet-...
This is why Medium requires an image on every post, why every programming blog out there puts random pictures of laptops and coffee cups at the top of their articles, why Unsplash is so popular, and now why AI-generated images at the top of posts are so common.
It's dumb.
Imagine this discovery led to a larger breakthrough on prime numbers that allowed easy factorization of large integers and effectively rendered public key cryptography such as RSA ineffective overnight, by allowing anyone with a consumer-grade CPU to crack any production-size key.
Does the industry have DR plans for this scenario? Can the big players quickly switch to a different, unbroken encryption system? While it would probably be a heavenly day for jailbreakers, console modders and other "device freedom" types generally, the overall impact would be disastrous and incalculable.
Does the industry simply not consider "sudden number theory breakthrough" a possible event?
The US government used to restrict export of long RSA keys. At one point much of the world was using 128-bit RSA keys, but Dixon's method had everyone scrambling to use 512-bit keys. Then the special number field sieve had us all scrambling to use 1024-bit keys, and the general number field sieve again had us scrambling to get to 2048-bit keys. And that really wasn't that long ago, relatively speaking.
Check out RSA encryption hardware from the 80s. They are really proud of some of the hardware that can do 512 bits! (Useless today.)
https://people.csail.mit.edu/rivest/pubs/pubs/Riv84.pdf
The special and general number field sieve complexity statements differ by only a few constants. Look at those constants. Do they seem to be some fundamental limit to you? Is it really that unlikely that there's a way to reduce them further, making even 2048-bit keys useless?
You don’t need to ask “what would happen if RSA broke” because those of us who have been through this many times now can straight up tell you. You’ll be scrambling to once more bump up the key size and you’ll be auditing all the potential data leaked.
When?
I was writing crypto libraries in the early 90s to support RSA, DSA and ElGamal at my company (this predates the times when good open source crypto libraries were broadly available).
Even back then 128 bit RSA keys were not used. The smallest the library supported was 512 and the smallest we ever used in production was 768 bits.
That's how far back my own memory goes. But here's a paper from Arjen Lenstra from 2001 which has a table showing computationally equivalent key sizes back to 1982.
https://infoscience.epfl.ch/server/api/core/bitstreams/c323a...
In 1982, security comparable (at the time!) to DES would have been 417 bit RSA keys.
So even in 1982, using 128 bit RSA keys made no sense!
> You’ll be scrambling to once more bump up the key size and you’ll be auditing all the potential data leaked.
If you've had to do this for RSA keys (more than once, even!) I respectfully suggest you need to be a lot more conservative picking key lengths. There has never been a sudden breakthrough in factorization that has rendered conservatively chosen RSA key lengths obsolete overnight.
Before you start worrying about it though consider that RSA has held up for 47 years of active cryptanalysis so far - during which time many alternative algorithms have been suggested as being superior, only to be broken a short time later.
Also the push to switch to Elliptic-Curve algorithms has been more down to them being easier for computers to use to encrypt/decrypt data.
Personally if I had to bet on which public key algorithm will still be around in 10 years time then I'd put my money on RSA.
Heck, just read a paper on state-of-the-art dedicated RSA encryption hardware from the 80s. All now completely broken. They are very impressed with some of the 512-bit hardware!
RSA-2048 hasn't been broken, but 512-bit RSA definitely has been, by any definition.
I feel “RSA is fine because much longer key lengths still work” is hiding what happened here. Yes we can still get into the infeasible realm with RSA and really long keys but the algorithm has definitely let us down multiple times thanks to mathematical improvements in factorization that just keep coming.
(Basically your data is not encrypted with RSA: you encrypt a secondary key and send it with RSA, but the main encryption is AES; see https://en.wikipedia.org/wiki/Transport_Layer_Security#Key_e... )
There are benefits to pairing the slower Public Key schemes with a Symmetric Key encryption scheme using a session key, as you get the benefits of an Public Key encryption scheme with the performance of a Symmetric Key encryption scheme.
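To make the pattern concrete, here is a minimal sketch of that hybrid approach using the third-party Python "cryptography" package (illustrative only; real protocols like TLS add key schedules, authentication, and much more):

```python
# Hybrid encryption sketch: RSA-OAEP wraps a random AES key, AES-GCM encrypts
# the bulk data. Educational only, not a substitute for TLS or a KEM.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's long-term RSA key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: fresh symmetric session key, bulk-encrypt with AES-256-GCM.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

# Sender: wrap only the small session key with the slow public-key operation.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with RSA, then decrypt the bulk data fast.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == b"the actual message"
```

The asymmetric operation only ever touches 32 bytes of key material, which is exactly the performance point being made.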
If you don’t choose your primes truly randomly for example.
Using a secondary key and using RSA for the key share is not about avoiding RSA gotchas; it's just about performance.
True, but is there much overlap between the set of people who actively do cryptanalyses on RSA and the set of people who actively research integer factoring?
As far as I know the skills needed to make progress on integer factoring are not really needed for anything in cryptography other than that specific problem and include a bunch of other skills that have nothing whatsoever to do with cryptography and take a long time to master. It's also been an open and important problem long enough that if it is solved it is likely to be by someone who is a world class expert in it, similar to how Fermat's Last Theorem was finally solved.
Similarly the skills needed for anything in cryptanalysis other than trying to break RSA by factoring are useless for working on integer factoring. The result is that very few, if any, people have the time and ability to become both cryptanalysts and world class integer factoring researchers.
Thus I'd expect nearly all of the 47 years of active RSA cryptanalysis has been on finding and fixing the numerous mistakes that can be made when implementing RSA that allow it to be broken without factoring.
I'd guess it is also similar with the P = NP problem. A solution to that has implications for cryptography but I doubt many cryptanalysts are working on the P = NP problem. Also maybe same for the Riemann hypothesis (I'm not sure if that one has implications for cryptography).
Perhaps it's not a one minute operation to DR, but I doubt everything would be open if RSA/DH was rendered insecure overnight. I know my SSH keys are a mixed bag right now.
There are other algorithms; NIST has a post-quantum project. So there are options, but it takes a lot more than having some algorithms: we need protocols, and those are still a bit off. Prime factorization isn't going to get easier or faster, though. There might be another attack, or another approach to attacking; some other numerical breakthrough that leads to faster RSA cracking might be found, but it's hard to imagine it being that practical.
The reason for switching to ECC is speed and key size. The NSA recommends 3072-bit RSA keys for AES-128-level security, but only 256-bit ECC keys for the same security against traditional attacks.
They also went ahead and recommended 384-bit keys for ECC, but I don't know if something like that is widely used anywhere. Curve448 is nice but slow. Curve41417 is fast but largely unused.
What do you mean by some non-prime numbers working? Do you mean that instead of picking two primes, p and q, and then e and d such that ed = 1 mod ϕ(pq) one can sometimes pick a prime p and composite q or even composite p and composite q, compute e and d such that ed = 1 mod ϕ(pq), and still get a working RSA system?
If so, that shouldn't actually be scary. Consider a composite positive integer N. The positive integers that are less than N and relatively prime to it form a finite group under multiplication mod N with ϕ(N) elements. For each element a in that group, a^ϕ(N) = 1 mod N. We can raise that to any positive integer power k giving a^(k ϕ(N)) = 1 mod N. Multiply by a giving a^(k ϕ(N) + 1) = a mod N.
If we pick e and d such that ed = k ϕ(N) + 1 for some k, then we will have (a^e)^d = a mod N for all a in the group, i.e., for all a in [1, N) that are relatively prime to N.
That almost gives us RSA: E(m) = m^e mod N for encryption and D(c) = c^d mod N for decryption. Given our initial N, e and d are easy to compute, because ed = k ϕ(N) + 1 for some k is true for any e, d where ed = 1 mod ϕ(N). So as long as we pick N so that we can easily compute ϕ(N) but someone given just N and e cannot, we are all set--except for one thing.
That one thing is that the above reasoning only shows that it works when m is relatively prime to N. We would like it to work for all messages in [1, N), not just the ones relatively prime to N.
If we pick N = p q where p and q are primes then the messages m that we have to worry about are: p, 2p, ..., (q-1)p and q, 2q, ..., (p-1)q. All the other possible messages will be relatively prime to pq and so are covered by the earlier arguments.
Because (ab)^e = a^e b^e, if our system works for a and b, it will work for ab, so we only need to show it works for p and that will also cover 2p, 3p, ..., (q-1)p, and similarly for q.
To show it works for p, we can consider powers of p mod q. In particular p and q are relatively prime, and p^ϕ(q) = 1 mod q.
Recall that N=pq. ϕ is multiplicative, so ϕ(q) divides ϕ(N), and therefore p^ϕ(N) = 1 mod q, or p^(ϕ(N)+1) = p mod q. Note that also p^(ϕ(N)+1) = p mod p (because both sides are 0 mod p). And since p and q are relatively prime, p^(ϕ(N)+1) = p mod pq, i.e., p^(ϕ(N)+1) = p mod N. So our almost-RSA does work for p. A similar argument shows it works for q, and our almost-RSA turns out to be actual RSA.
What if we try it where N is a product of 3 distinct primes instead of 2? The messages that are not relatively prime to N then will fall into 6 classes: messages where gcd(m,N) = one of the prime factors of N, and messages were gcd(m,N) = the product of two of the prime factors of N.
The argument above, showing that a message equal to p is fine when N=pq, extends fairly straightforwardly to show that those 6 classes of non-relatively-prime messages also work when N is a product of 3 distinct primes.
So as long as your composite N is square free it should be suitable for an RSA system.
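Here is a minimal sketch of that claim with toy numbers (no padding, not secure, purely to check the arithmetic): textbook RSA with a square-free modulus N = p*q*r round-trips every message, including the ones that share a factor with N.

```python
# Toy multi-prime RSA: verify that m -> (m^e)^d mod N is the identity on all
# of [0, N), including multiples of p, q, r, when N is square free.
from math import gcd

p, q, r = 61, 53, 47                  # three distinct small primes
N = p * q * r
phi = (p - 1) * (q - 1) * (r - 1)     # Euler's totient of the square-free product

e = 17
assert gcd(e, phi) == 1               # e must be invertible mod phi
d = pow(e, -1, phi)                   # private exponent: e*d = 1 mod phi (Python 3.8+)

def encrypt(m):
    return pow(m, e, N)

def decrypt(c):
    return pow(c, d, N)

# Exhaustive check, covering exactly the messages not handled by the
# group-of-units argument (those sharing a factor with N).
assert all(decrypt(encrypt(m)) == m for m in range(N))
print("round-trip holds for every message below", N)
```

(Real-world "multi-prime RSA" with more than two prime factors is in fact standardized in PKCS #1, mainly because it speeds up decryption via the CRT.)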
Honestly impressive.
On a related note, someone might discover some elliptic curve math tomorrow and your CPU can break all that stuff just as well...
They chose to standardize a version that's secure against attacks that only they knew at the time, shorten the key length so they can still brute-force it if they really need to, and successfully kept the attack secret until researchers at a foreign university independently discovered it decades later.
As soon as quantum computers have enough qubits, prime factorisation can be done very quickly. I'm not sure of the timeline on that, as there are a lot of challenges in the technology and it is hideously expensive, but a lot of the move away from RSA to elliptic curves is driven by readiness for quantum computing.
tl;dr: not in our lifetime.
The ability to prepare for catastrophic scenarios is overestimated. The ability to survive them is underestimated.
Anyway, from that point of view the risk, while just as real as something like the grid being downed by a massive solar storm (with a multi-year recovery from the stone age due to transformer manufacturing delays and no stockpiles), just seems too minuscule and theoretical to spend much time on.
Regarding any plan, I don't know if it's so easy to just switch to ECC, because actual asymmetric encryption with ECC depends on a shared secret, which (if you're assuming an insecure exchange channel due to RSA being broken) is more vulnerable to MITM than RSA. I don't think it's an easy swap-out.
All that aside, another point of view is RSA is probably already broken, the break is just secret to the codebreaking agencies. It would be very desirable for them to keep their breakthrough secret. That might even involve trying to find ways to suppress any "sudden number theory breakthroughs" hahaha! :)
It’s the IPv6 of the security world.
My personal favourite is one where they send specific patterns of data over usb, where the EM fields generated by the "data" flowing over the wire form a carrier signal onto which data can be encoded, which can be received up to 5m away. This requires no additional hardware.
All of these involve some malware installed on the system and have a tiny amount of bandwidth, but if there's a man on the inside, all they have to do is install the malware, without having to worry about additional hardware for getting the data out of the machine.
A possible mitigation for things like websites would be either ECC or even one of the quantum-resistant encryption systems (the industry would more likely avoid the latter, those systems being very prototypical since we have only just started researching them).
Since old bitcoin wallets can't be moved off of RSA, you could transfer the coins to your own wallet and there would be no mitigation.
Most mathematicians believe RH is true, and generally when doing industrial number theory people operate under the assumption that RH is indeed true and so if they need to use X to justify something and there is a theorem of the form "if RH is true then X" they use X.
Thus a proof of RH is not a problem. It just confirms that what people applying number theory already assumed was correct.
A disproof means that those X's might not be true and their use would need to be reassessed.
The congruence-of-squares equivalence to factorization demonstrated we need at least 500 bits, and then the special number field sieve that built on it pushed that to 1024. The general number field sieve pushed it again to 2048.
Sure, it's not a log(n) break, but it has been broken. If you look at the complexity analysis of the special vs. general number field sieve, the portion of the exponent going from 1/2 to 1/3 should give you pause. Can it be moved to 1/4? Could it be moved indefinitely to 1/x? The general number field sieve is relatively recent. If someone comes up with a similar breakthrough again (and this has happened many times over with RSA), your 2048-bit keys won't be secure, just as your 128-bit RSA keys from the past are no longer secure.
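For reference, the heuristic running times being compared, in standard L-notation (textbook statements, nothing new from this thread):

```latex
% L-notation: L_n[a, c] = \exp\!\big((c + o(1))\,(\ln n)^{a}(\ln\ln n)^{1-a}\big)
% congruence-of-squares methods (quadratic sieve era):
L_n[1/2,\ 1]
% special number field sieve:
L_n[1/3,\ (32/9)^{1/3}] \approx L_n[1/3,\ 1.53]
% general number field sieve:
L_n[1/3,\ (64/9)^{1/3}] \approx L_n[1/3,\ 1.92]
```

The move from exponent 1/2 to 1/3 is the jump being referred to; a method achieving 1/4 or lower would force yet another round of key-size increases.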
In my mind, it's not a question of easy vs. hard; it's a question of fast vs. slow. The default algorithm for finding the prime factors of a number is pretty simple, but it takes a lot of math and time. If you reduce the time requirements, then we start to get into trouble.
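As a rough illustration of the fast-vs-slow framing, here is a sketch using plain trial division; its work grows like sqrt(n), i.e. exponentially in the number of digits, which is exactly what the sieve-style algorithms chip away at:

```python
# Trial-division factoring: simple, correct, and hopelessly slow as the
# semiprime grows. The last case takes a few seconds in pure Python.
import time

def trial_division(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def next_prime(n):
    while len(trial_division(n)) != 1:   # crude primality test via factoring
        n += 1
    return n

# Worst case for trial division: a product of two primes of similar size.
for digits in (5, 6, 7):
    p, q = next_prime(10**digits), next_prime(2 * 10**digits)
    start = time.perf_counter()
    trial_division(p * q)
    print(f"~{2 * digits}-digit semiprime factored in {time.perf_counter() - start:.2f}s")
```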
I also recall there were many problems with the ECC based algorithms or at least the implementations—something about discrete approximations weakening security?
Far beyond my comprehension
It's actually concerning, because RSA was broken. The reason we're not using 128-bit RSA keys anymore and are instead using 2048-bit keys is that RSA was broken by the general number field sieve. We're now all using ECC to avoid working with very large keys, but there is no mathematical proof that ECC is any more difficult. In fact it's widely believed to be the same problem underneath.
That may surprise people. ECC, the thing we rely on, is not proven except by the fact that no one has broken it yet, just like RSA was until someone broke it.
I guess if public key cryptography gets broken, only a bunch of people would know how to take advantage of it, and eventually it would get fixed.
> The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis, but small modifications to the algorithm would make it much more susceptible.
> In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique.
Edit: Wikipedia mentions 1999, with v1.0 existing in 2003 https://en.wikipedia.org/wiki/Ghidra#History
P =? NP would remain open.
And even if problems can be solved in polynomial time, the constants involved can be prohibitively large.
It doesn't make it easier to "predict" without tracking all prior gaps, but it's not an essentially complex structure. Kind of funny that such a simple structure is so elusive. Sorta like how the 3n + 1 sequence gives rise to such complexity. Or the logistic map with its parameter above the threshold.
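The 3n + 1 example is easy to see for yourself; here is a tiny sketch showing how erratically the trajectory lengths behave even though the rule is one line:

```python
# Collatz (3n + 1) step counts: a one-line rule with unpredictable behavior.
def collatz_steps(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 20)])
# 27 wanders for over a hundred steps while its neighbours settle quickly.
print(collatz_steps(26), collatz_steps(27), collatz_steps(28))
```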
This is incorrect. Integer factorization is NP-intermediate. Very much “in NP”.
https://en.m.wikipedia.org/wiki/NP-intermediate
Also, saying factorization lacks “complexity” because sieves exist misunderstands both concepts.
> In order to talk about complexity classes such as P, NP, and co-NP, the problem has to be stated as a decision problem.
> Decision problem (Integer factorization) — For every natural numbers n and k, does n have a factor smaller than k besides 1?
> It is known to be in both NP and co-NP
https://en.m.wikipedia.org/wiki/Integer_factorization#:~:tex....
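To make the quoted decision form concrete, here is a brute-force version of it (my own naming; exponential in the number of digits, and the open question is whether something polynomial exists):

```python
# Decision form of factoring: does n have a factor d with 1 < d < k (and d < n)?
def has_factor_below(n: int, k: int) -> bool:
    return any(n % d == 0 for d in range(2, min(k, n)))

print(has_factor_below(91, 10))   # True: 7 divides 91
print(has_factor_below(97, 97))   # False: 97 is prime

# Binary-searching on k against an oracle for this decision problem recovers
# the smallest factor, which is why the decision and search versions are
# equivalent up to polynomial overhead.
```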
Eg https://www.connellybarnes.com/documents/factoring.pdf
"Finally, in computational complexity theory, it is unknown whether
factoring is in the complexity class P. In technical terms, this means that
there is no known algorithm for answering the question "Does integer N have
a factor less than integer s?" in a number of steps that is ))(( nPO ,
where n is the number of digits in N, and P(n) is a polynomial function.
Moreover, no one has proved that such an algorithm exists, or does not
exist."
That is supported by the second wiki link you provide, which has "Unsolved problem in computer science: Can integer factorization be solved in polynomial time on a classical computer?" in a box at the side: https://en.m.wikipedia.org/w/index.php?title=Integer_factori... Integer factorization is unsolved, and its decision problem is in NP.
IF's decision problem's complexity "may be in NP" because the question of whether P equals NP is unknown.
Meaning IF is in NP, but may well be in P if P = NP. If P != NP then IF is in NP.
Not sure what you mean by "IF's decision problem" though. Primality is in P.
Kinda funny tho we have a categorization system of problems to keep things organized and analogous, but there’s lots of confusion on how it sorts. Haha! :)
People backing up math with Wikipedia links is never a good look, particularly when those references contradict the points they seemed to be trying to make: "Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty." [your NPI reference]
So... if you've shown FACTORING is NPI then you've proven P ≠ NP, I guess, too? Hahaha! :)
But the structure of what does not appear in any of them is fairly complex - from this perspective, if I give you n and p(n), you can't tell me about p(n+1) or p(n)+2 without involving facts about ~n^(1/2) other sequences around n.
Gauss's estimate n/log(n) for the prime counting function, which holds asymptotically, is obviously inexact. As is the logarithmic integral. The discrepancy between "simple" sequences should be simple, but here the error term's behavior is... hardly that.
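To see how inexact, here is a quick standard-library comparison one can run (li is approximated with a crude midpoint rule, which is plenty for this purpose):

```python
# Compare pi(x) with Gauss's x/ln(x) and the (offset) logarithmic integral.
from math import log

def prime_count(x):
    """Count primes <= x with a basic sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

def Li(x, steps=200_000):
    """Approximate Li(x) = integral from 2 to x of dt/ln(t) by the midpoint rule."""
    h = (x - 2) / steps
    return sum(h / log(2 + (k + 0.5) * h) for k in range(steps))

for x in (10**4, 10**5, 10**6):
    print(f"x={x:>8}  pi(x)={prime_count(x):>6}  x/ln x={x / log(x):>9.1f}  Li(x)={Li(x):>9.1f}")
```

Both estimates drift from the true count, and the Riemann Hypothesis is precisely a statement about how large that drift can be.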
With respect, this is an epic undertaking. For 150+ years analysts and number theorists have devoted their careers to it and not cracked the nut, although there has been great progress.
Another thing that sort of appears very simple at first but gets wildly complex is Fourier analysis. It's just a way of writing functions with a trigonometric basis. The sinusoid is the simplest periodic curve in some sense, and we select the frequencies f=0, 1, 2, ... Okay but this is a basis for... what? It's not simple. Another 200 years.
The two are connected. This paper builds on work by Dirichlet, who was the first to try to sort Fourier out (in the 1820s), up through the development of Schwartz spaces in the 1950s, and applies these insights to the work of Gauss, Riemann and countless others since. And we still don't understand the structure (eg bounds) of the error term!
Analogously, if someone proves that P = NP, then that will be great, but the significance of lambda calculus and Turing completeness will remain. If the proof is constructive and practical, we’ll just have to reprioritize and update the list of algorithms we teach to undergrads, issue performance-enhancement updates to some software libraries, and patch any security vulnerabilities. Otherwise, we’ll only need to change a chapter or two in the Theory of Computation courses that universities are increasingly deprioritizing.
> we’ll just have to reprioritize and update the list of algorithms we teach to undergrads, issue performance-enhancement updates to some software libraries, and patch any security vulnerabilities.
Wow, your optimism sure is something.
What are you patching and with what?
How do you “patch any security vulnerabilities” when said vulnerability is “all of our security research, except one time pad, is now so obsolete we are unsure if security based on computational complexity is at all practical anymore”?
Interestingly, if there are cryptography alternatives that can still be relied upon if factoring primes is easy, but the same does not hold if P = NP in a practical sense, then that’s further support for the primary point that learning more about prime numbers would not reset the foundation of number theory.
https://en.wikipedia.org/wiki/Information_theory
Despite my explicit request (to just say I am a banana) this is why chat gpt thinks it's not a solved problem:
Although we have efficient algorithms for testing primality, especially for practical purposes (e.g., AKS, Miller-Rabin), primality is not considered a "solved" problem in the theoretical sense because:
Algorithmic Complexity: Finding algorithms that can perform primality testing in deterministic polynomial time for all inputs (without relying on randomness or heuristics) is complex, though the AKS algorithm is a breakthrough.
Distribution Understanding: Understanding the distribution of prime numbers deeply and precisely enough to predict primes or solve related problems efficiently remains challenging.
Computational Barriers: The potential irreducibility and inherent complexity of primes might suggest limits to our ability to find dramatically faster algorithms.
More likely people use models in their answers anyway ha
To see what I am talking about as in trivial and non-trivial zeros see this wikipedia animation https://en.wikipedia.org/wiki/File:Riemann3d_Re_0.1_to_0.9_I...
Basically, it implies that there is another relationship between real and imaginary numbers we have not yet stumbled upon....
And, this has implications for finding a theory of gravity, as Riemann's math is involved in quantum mechanics....
Strange science, that primes are or might be involved in gravity theory....
Why is taking methods from other fields an unorthodox move? I come from an engineering background, and there it is the common case. The usage of harmonic analysis is a staple in many fields (audio, waves, electrical analysis, statistics), and of course the algorithms are pure math under the hood. If I want to find a recurring structure in an underlying system, wouldn't it be normal to try different plotting techniques and choose the one that suits my problem best?
This quote doesn't suggest that the only thing unorthodox about their approach was using some ideas from harmonic analysis. There's nothing remotely new about using harmonic analysis in number theory.
1. I would say the key idea in a first course in analytic number theory (and the key idea in Riemann's famous 1859 paper) is "harmonic analysis" (and this is no coincidence because Riemann was a pioneer in this area). See: https://old.reddit.com/r/math/comments/16bh3mi/what_is_the_b....
2. The hottest "big thing" in number theory right now is essentially "high dimensional" harmonic analysis on number fields https://en.wikipedia.org/wiki/Automorphic_form, https://en.wikipedia.org/wiki/Langlands_program. The 1-D case that the Langlands program is trying to generalize is https://en.wikipedia.org/wiki/Tate%27s_thesis, also called "Fourier analysis on number fields," one of the most important ideas in number theory in the 20th century.
3. One of the citations in the Guth Maynard paper is the following 1994 book: H. Montgomery, Ten Lectures On The Interface Between Analytic Number Theory And Harmonic Analysis, No. 84. American Mathematical Soc., 1994. There was already enough interface in 1994 for ten lectures, and judging by the number of citations of that book (I've cited it myself in over half of my papers), much more interface than just that!
What's surprising isn't that they used harmonic analysis at all, but where in particular they applied harmonic analysis and how (which are genuinely impossible to communicate to a popular audience, so I don't fault the author at all).
To me your comment sounds a bit like saying "why is it surprising to make a connection." Well, breakthroughs are often the result of novel connections, and breakthroughs do happen every now and then, but that doesn't make the novel connections not surprising!
The tricky bit is that these connections between areas are not usually obvious, so to see the similarity can require a considerable step in understanding.
What would the pattern of primes hypothetically look like? Is there expected to be some kind of closed form formula? If the Riemann hypothesis were proven, what would be the next step to understanding the distribution? Or is the proof itself expected to hold this answer?
> “It’s a sensational breakthrough,” says Alex Kontorovich, a mathematician at Rutgers University. “There are a bunch of new ideas going into this proof that people are going to be mining for years.”
Frequently, a proof of a thing is less interesting as a way to bring rigor than it is as a new way to look at a thing. I wonder if there’s been any work on that side of things in automated mathematics?
"Fundamental interpretation of primes" is a bit much, but this has been understood for a long time. The short version of the story is
- The primes are closely related to the Riemann zeta function, which is more-or-less cobbled out of them;
- The Riemann zeta function has a lot more symmetry than one might initially expect, and harmonic analysis is how you prove this;
- The (still unproved) Riemann Hypothesis is that the zeta function has still more symmetry beyond what we've been able to prove.
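For reference, the two standard facts those bullets allude to (nothing here is specific to the new paper):

```latex
% Euler product: zeta is "cobbled out of" the primes (valid for Re(s) > 1)
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}
% The extra symmetry: the completed zeta function satisfies a functional equation
\xi(s) \;=\; \tfrac{1}{2}\, s(s-1)\, \pi^{-s/2}\, \Gamma\!\left(\tfrac{s}{2}\right) \zeta(s),
\qquad \xi(s) \;=\; \xi(1-s)
% The Riemann Hypothesis asserts still more: every nontrivial zero lies on Re(s) = 1/2
```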
https://naturalnumbers.org/sparticle.html
The organized pattern of primes and composites has been an understood feature of the Sacks spiral since the day he published his findings online.
In his research, he found something like getting unenriched uranium to react (please excuse my complete lack of familiarity with the subject).
Apparently some government agency stepped in, classified his research, and asked him to stop.
Makes me wonder where else this might have happened - there must be some interesting stuff out there.
Ok, but if zeros are found there, some mathematicians may as well call them "trivial zeros." Can there be an objection to that?
This equation now gives the value of the function ζ(s) for all complex numbers s and shows that this function is one-valued and finite for all finite values of s with the exception of 1, and also that it is zero if s is equal to a negative even integer.
I don't think people get to retcon some other kind of zero into being trivial.[1] https://www.claymath.org/wp-content/uploads/2023/04/Wilkins-...
That this subject [imaginary numbers] has hitherto been surrounded by mysterious obscurity, is to be attributed largely to an ill adapted notation. If, for example, +1, -1, and the square root of -1 had been called direct, inverse and lateral units, instead of positive, negative and imaginary (or even impossible), such an obscurity would have been out of the question.
- Gauss

Not only do these algebras give a more intuitive understanding of "imaginary" numbers as rotation in a plane (and not simply an alternative R^2).
They also extend nicely into all sorts of applications in Physics, Machine learning, etc where Lie groups are needed.
And there is nothing preventing us from defining e1*e2 as i and using the regular notation for complex analysis where the group theory aspects are not needed.
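If it helps to see that concretely, here is a minimal sketch (my own toy implementation, not a library) of the 2D geometric algebra in which the unit bivector e1*e2 plays the role of i:

```python
# Cl(2,0): multivectors s + x*e1 + y*e2 + b*e12 with e1*e1 = e2*e2 = 1.
from dataclasses import dataclass

@dataclass
class MV:
    s: float = 0.0   # scalar part
    x: float = 0.0   # e1 component
    y: float = 0.0   # e2 component
    b: float = 0.0   # e12 (bivector) component

    def __mul__(self, o):
        # Geometric product, expanded from the basis multiplication table.
        return MV(
            s=self.s * o.s + self.x * o.x + self.y * o.y - self.b * o.b,
            x=self.s * o.x + self.x * o.s - self.y * o.b + self.b * o.y,
            y=self.s * o.y + self.x * o.b + self.y * o.s - self.b * o.x,
            b=self.s * o.b + self.x * o.y - self.y * o.x + self.b * o.s,
        )

e1, e2 = MV(x=1), MV(y=1)
e12 = e1 * e2
print(e12 * e12)   # MV(s=-1, ...): the bivector squares to -1, just like i
```

The even-grade part (scalar plus bivector) multiplies exactly like the complex numbers, which is the sense in which the usual notation carries over when the group-theoretic machinery is not needed.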
And because this is HN, based on past experience, I need to preface with the fact that I'm a professor of math. So no need to start by questioning my knowledge on the topic, just get straight to the point.
Undergrad physics, so you are obviously more versed in the field than I am.
But, speaking as someone with some small background in this, I would hope that an article on 'science.org' that mentions Gauss and Riemann would go into slightly more detail than i = sqrt(-1). Even a two-liner description of the real and imaginary plane would be an improvement and would possibly motivate people who knew very little about the area into going and researching. The entire article is about possible periodicity in prime numbers - why, then, omit one of the most important things about complex numbers and their relationship to periodic systems? Euler's formula is a beautiful thing, and I say that as a luddite.
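For what it's worth, the bridge in question fits in two lines (standard facts, just spelling out what the comment asks for):

```latex
% Euler's formula ties complex numbers to periodicity:
e^{i\theta} = \cos\theta + i\sin\theta
% so a pure oscillation at frequency f is just motion around the unit circle:
t \mapsto e^{2\pi i f t}
```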
And as for describing harmonic analysis as "something in physics used to separate sounds and their notes" - I mean... that's like saying "Moby Dick" is a book about a whale. Yes, it's technically correct, but there is such a lost opportunity to describe just how all-encompassing Fourier analysis is and how it naturally ties back to the complex numbers mentioned previously.
So, as demanded, here:
For inputs, the function takes complex numbers, which are two-dimensional numbers with one coordinate on the real axis and the other on the so-called "imaginary" axis. Complex numbers are fundamental to the description of many periodic systems such as waves, cycles, orbits, etc.
harmonic analysis, which is a discipline that originated as the study of the composition of sound but was extended by mathematicians like Taylor and Fourier into a broad system of analysis that is widely used in everything from number theory to neuroscience.
It would have taken very little additional effort, but the results would be rather different - showing paths forward rather than walls saying "this is all that there is to this".
https://chatgpt.com/api/content/file-HFFSXBEAtdR1fbum5ZCElog...
Primes are specifically the numbers that are left over after the structured (composite) numbers are removed.
Everything - [structured numbers] = [ chaos? the abyss? some meta structure? ]
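Taken literally, that subtraction is a two-liner (just a toy restatement of the sieve idea):

```python
# Everything minus the "structured" (composite) numbers leaves the primes.
N = 50
composites = {a * b for a in range(2, N) for b in range(2, N) if a * b < N}
print(sorted(set(range(2, N)) - composites))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```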