My memory is that 256-bit keys in non-quantum-resistant algos need something like 2500 qubits or so; and by that I mean generally useful programmable qubits. To show a bit over 100 qubits with stability (meaning the information survives a while, long enough to be read) and general enough to run some benchmarks on is something many people thought might never come.
There’s a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just like to assume it won’t happen: too much in computing and data security will change -> let’s not worry about it.
Combined with the slow pace of physical research progress (Shor's algorithm for quantum factoring dates to the mid-90s) and snake-oil sales companies, it's easy to ignore.
Anyway seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long tail strategizing I’d say.
So despite this significant progress, it's probably still a while until RSA is put out of a job. That being said, quantum computers would be able to retroactively break any public keys that were stored, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.
Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.
The authors argue (e.g. in the first comment here https://scottaaronson.blog/?p=8310#comments) that by their definition, Google still only has a fraction of one logical qubit. Their logical error rate is of order 1e-3, whereas this paper considers a logical qubit to have error of order 1e-18. Google's breakthrough here is to show that the logical error rate can be reduced exponentially as they make the system larger, but there is still a lot of scaling work to do to reach 1e-18.
So according to this paper, we are still on roughly the same track that they laid out, and therefore might expect to break RSA between 2040 and 2060. Note that there are likely a lot of interesting things one can do before breaking RSA, which is among the hardest quantum algorithms to run.
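To get a feel for the size of that gap, here's my own back-of-the-envelope (assuming the textbook exponential-suppression picture where each code-distance step of 2 buys roughly another factor of Λ ≈ 2, which is about what Google reports; the real numbers depend on details in the paper):

    import math

    # Back-of-the-envelope: how many more factor-of-Lambda suppression steps
    # are needed to go from today's ~1e-3 logical error rate to ~1e-18?
    current_error = 1e-3
    target_error = 1e-18
    Lambda = 2.0  # assumed suppression factor per distance step of 2

    steps = math.log(current_error / target_error) / math.log(Lambda)
    print(f"~{steps:.0f} more suppression steps needed")    # ~50
    print(f"i.e. code distance grows by roughly {2 * steps:.0f}")  # ~100, since each step is distance += 2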
The error correcting code used in this work also has a nice intuitive explanation. Imagine a 2D grid of bits, visualized as pixels that can be black or white. Now imagine drawing a bunch of lines of black pixels, and enforcing that lines can only end on the top or bottom boundary (loops are allowed). If there is an even number of lines connecting the top to the bottom, we call that logical 0, and if there is an odd number of lines, we call that logical 1. This again has the property that as you add more bits, the probability of changing between logical 1 and 0 gets exponentially smaller, because the lines connecting top to bottom get longer (just like in the repetition code).
This code also has the nice property that if you measure the value of a small patch of (qu)bits, there's no way to tell what the logical state is. This is important for quantum error correction, because measurement destroys quantum information. So the fact that local measurements don't reveal the logical state means that the logical state is protected. This isn't true for the repetition code, where measuring a single bit tells you the logical state.
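To make the "exponentially smaller" claim concrete, here's a toy simulation of the classical repetition-code case with majority-vote decoding (my own sketch; bit-flips only, nothing like the paper's actual decoder):

    import random

    def logical_error_rate(n_bits: int, p_flip: float, trials: int = 100_000) -> float:
        """Estimate how often majority-vote decoding of an n-bit repetition code fails."""
        failures = 0
        for _ in range(trials):
            flips = sum(random.random() < p_flip for _ in range(n_bits))
            if flips > n_bits // 2:  # majority of bits flipped -> logical error
                failures += 1
        return failures / trials

    # With the physical error rate below 50%, adding bits suppresses the
    # logical error rate roughly exponentially in the code distance.
    for n in (1, 3, 5, 7, 9):
        print(n, logical_error_rate(n, p_flip=0.1))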
This.
People seems to think that because something is end to end encrypted it is secure. They don't seem to grasp that the traffic and communication that is possibly dumped/recorded now in encrypted form could be used against them decades later.
https://crypto.stackexchange.com/a/61596
But ... AES is believed to be quantum-safe-ish, so with perfect forwards secrecy this exact threat can be quite well managed.
The currently best known quantum attack on AES requires a serial computation of "half of key length" (Grover's algorithm ... so if the key is 128 bits long then it requires 2^64 sequential steps)
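To put 2^64 sequential steps in perspective, a quick back-of-the-envelope of mine, assuming a (very generous, hypothetical) machine doing one Grover iteration per nanosecond:

    iterations = 2 ** 64                  # serial Grover iterations for a 128-bit key
    rate = 1e9                            # assumed: 10^9 iterations per second
    seconds = iterations / rate
    years = seconds / (3600 * 24 * 365)
    print(f"{years:.0f} years")           # ~585 years, and the steps can't be cheaply parallelized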
https://www.reddit.com/r/AskNetsec/comments/15i0nzp/aes256_i...
https://cloud.google.com/blog/products/identity-security/why...
Ah, okay, for iMessage there's something called PQ3 [1]; hm, it uses Kyber. And it's also a hybrid scheme, combined with ECC. And it has had a lot of peer review.
And there's also some formal verification for Signal's PQXDH [2].
Oh, wow, not bad. Thanks!
Now let's hope a good, reliable, sane implementation emerges so others can also try this scheme. (And I'm very curious about the added complexity/maintenance burden and computational costs. Though I guess this mostly runs on the end users' devices, right?)
[1] https://security.apple.com/blog/imessage-pq3/ [2] https://github.com/Inria-Prosecco/pqxdh-analysis
I'm very sorry to admit it but I'm too lazy to read the entire discussion in this thread. Could you please tell me, a mere mortal, at which point the humanity should start worrying about the security of asymmetric encryption in the brave new world of quantum computing?
Use a key exchange that offers perfect forward secrecy (e.g. Diffie-Hellman) and you don't need to worry about your RSA private key eventually being discovered.
The point is that you can build stuff on top of RSA today even if you expect it to be broken eventually if RSA is only for identity verification.
[1] https://security.stackexchange.com/questions/33069/why-is-ec...
There are several choices with scaling RSA too: you can push the prime sizes up, which slows generation considerably, or the more reasonable approach is to settle on a prime size but use multiple primes (MP-RSA). The second approach scales indefinitely. Though it would only serve a purpose if you are determined to hedge against the accepted PQC algorithms (Kyber/ML-KEM, McEliece) being broken at some point.
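For anyone curious what the multi-prime idea looks like, here's a minimal toy sketch of mine (using sympy, small key sizes, no padding, not production code): the modulus becomes a product of k primes, so the modulus grows while each prime, and hence key generation, stays manageable.

    from math import prod
    from sympy import randprime

    def multiprime_rsa_keygen(prime_bits=512, k=4, e=65537):
        # Toy multi-prime RSA: n is a product of k primes of prime_bits each.
        primes = [randprime(2**(prime_bits - 1), 2**prime_bits) for _ in range(k)]
        n = prod(primes)
        phi = prod(p - 1 for p in primes)
        d = pow(e, -1, phi)  # assumes gcd(e, phi) == 1, overwhelmingly likely at these sizes
        return n, e, d

    n, e, d = multiprime_rsa_keygen()
    m = 42
    assert pow(pow(m, e, n), d, n) == m  # encrypt then decrypt round-trips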
The Intel 4004, in 1971, had only 2,250 transistors.
A handful of qubits today might become a billion sooner than you think.
Unless they break ECDHE, it doesn’t matter if RSA gets popped.
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...
All perfect forward secrecy means is that you delete your own ephemeral private keys, the public keys stay in the record. And a quantum computer will recover the deleted private keys.
Also, none of the currently accepted post-quantum cryptographic algorithms offer a Diffie-Hellman construction. They use KEM (Key Encapsulation Mechanism).
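For those who haven't seen the KEM shape before, the flow differs from Diffie-Hellman in that one side encapsulates a fresh secret against the other's public key rather than both sides combining shares. A rough structural sketch, with hypothetical function names standing in for whatever KEM (e.g. ML-KEM) you plug in:

    # Hypothetical KEM interface -- these names are placeholders, not a real library API.
    # keygen()            -> (public_key, secret_key)
    # encapsulate(pk)     -> (ciphertext, shared_secret)   # sender's side
    # decapsulate(sk, ct) -> shared_secret                 # receiver's side

    def kem_handshake(keygen, encapsulate, decapsulate):
        pk, sk = keygen()                  # receiver publishes pk
        ct, ss_sender = encapsulate(pk)    # sender derives a secret plus a ciphertext to send
        ss_receiver = decapsulate(sk, ct)  # receiver recovers the same secret
        assert ss_sender == ss_receiver    # both now share a symmetric key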
ETA: Wikipedia says 2330 qubits, but I'm not sure it is citing the most recent work: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#ci...
Shor's algorithm requires binary encoding; hence, 2048 logical qubits are needed for it to become a nuisance for cryptography. This, in turn, means that one will always be easily able to run away from a quantum adversary by paying a polynomial price on group element computations, whereas a classical adversary is exponentially bounded in computation time, and a quantum adversary is exponentially bounded by the number of physical qubits. Fascinating...
https://crypto.stackexchange.com/questions/1978/how-big-an-r...
So don't fear them due to unfounded theories.
The problem it's presenting is more about software on the wild having different behavior... And if "some people connect on the internet and use software that behaves differently from mine" is a showstopper for you, I have some really bad news.
In my opinion the correct approach here is the most liberal one for the Q, R points. One checks each point cofactor at parsing 8P=0 and then use unbatched equation for verification. This way implementations can be made group agnostic.
Having group agnostic implementations is important as it creates a proper separation of concerns between curve implementations and the code that uses them. For instance if we were to accept strict validation as the ground truth and best practice one would have enormously hard time specifying verifiers for zero knowledge proofs and would also double time and code for the implementers without any effect on soundness.
The result can be read directly as the required error rate for logical qubits decreasing as ~n^(-1/3). This, in turn, means that factorisation of a 10000-bit number would only require a logical error rate 1/10th of that needed for a 10-bit number. This is practical given that one can make a quantum computer with around 100k qubits and correct errors on them.
On the other hand, a sibling comment already mentioned the limited connectivity that those quantum computers now have. This, in turn, requires repeated application of SWAP gates to get the interactions one needs. I guess this would add a linear overhead for the noise; hence, the scaling of the required error rate for logical qubits is around ~n^(-4/3). This, in turn, makes 10000-bit factorisation require a logical error rate 1/10000 that of 10-bit factorisation. Assuming that 10 physical qubits are used to reduce the error by an order of magnitude, that works out to around 400k physical qubits.
[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3
To see how it plays out, consider adding a single logical qubit. First you need to increase the number of physical qubits to accommodate the new logical qubit at the same error rate. Then multiply the number of physical qubits to accommodate the exponentially decreased error rate; call that constant factor N (or polynomial, but let's keep things simple), the factor by which the number of physical qubits needs to be multiplied to produce a system with one additional logical qubit at an error rate that still produces meaningful results.
To attain 1024 logical qubits for Shor's algorithm one would need N^1024 physical qubits. The case N < 1 would only be possible if the error decreased by itself without additional error correction.
Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses. Exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.
About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.
Imagine a device conceived in the 17th century, the intended functionality of which would require a physical sphere which matches a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., atomic basis of matter, background gravitational waves.) I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.
We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each others' secrets.
Google's Willow chip has t-times of about 60-100 µs. That's not an impressive figure -- in 2022, IBM announced their Eagle chip with t-times of around 400 µs [2]. Google's angle here would be the error correction (EC).
The following portion from Google's announcement seems most important:
> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.
Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.
I'm curious if EC can mitigate the sub-par decoherence times.
[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...
[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068
[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...
The main EC paper referenced in this blog post showed that the logical qubit lifetime using a distance-7 code (all 105 qubits) was double the lifetime of the physical qubits of the same machine.
I'm not sure how lifetime relates to decoherence time, but if that helps please let me know.
If the logical qubit can have double the lifetime of any physical qubit, that's massive. Recall IBM's chips, with t-times of ~400 µs. Doubling that would change the order of magnitude.
It still won't be enough to do much in the near term - like other commenters say, this seems to be a proof of concept - but the concept is very promising.
The first company to get there and make their systems easy to use could see a similar run up in value to NVIDIA after ChatGPT3. IBM seems to be the strongest in the space overall, for now.
Reminds me of the time my research director pulled me aside for defining CPU as "core processing unit" instead of "central processing unit" in a paper!
Was this actually measured and published somewhere?
> Worth spending a little time doing some long tail strategizing I’d say
any tips for starters?
https://quantum.microsoft.com/en-us/tools/quantum-katas
The first few lessons do cover complex numbers and linear algebra, so skip ahead if you want to get straight to the 'quantum' coding, but there's really no escaping the math if you really want to learn quantum.
Disclaimer: I work in the Azure Quantum team on our Quantum Development Kit (https://github.com/microsoft/qsharp) - including Q#, the Katas, and our VS Code extension. Happy to answer any other questions on it.
there's no such thing as a practical QC and there won't be for decades. this isn't a couple of years away - this is "maybe, possibly, pretty please, if we get lucky" 25-50 years away. find the above comment that alludes to "2019 estimates needing ~20 million physical qubits" and consider that this thing has 105 physical qubits. then skim the posted article and find this number
> the key quantum computational resource — are now approaching 100 µs (microseconds)
that's how long those 105 physical qubits stay coherent for. now ponder your career pivot.
source: i dabbled during my PhD - took a couple of classes from Fred Chong, wrote a paper - it's all hype.
You don't need to know quantum theory necessarily, but you will need to know some maths. Specifically linear algebra.
There are a few youtube courses on linear algebra
For a casual set of video: - https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...
For a more formal approach:
- https://youtube.com/playlist?list=PL49CF3715CB9EF31D
And the corresponding open courseware
- https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010...
Linear algebra done right comes highly recommended
1. Kaye, LaFlamme, and Mosca - An Introduction to Quantum Computing
2. Nielsen and Chuang - Quantum Computation and Quantum Information (The Standard reference source)
3. Andrew Childs's notes here [1]. Closest to the state-of-the-art, at least circa ~3 years ago.
the model of quantum mechanics, if you can afford to ignore any real-world physical system and just deal with abstract |0>, |1> qubits, is relatively easy. (this is really funny given how incredibly difficult actual quantum physics can be.)
you have to learn basic linear algebra with complex numbers (can safely ignore anything really gnarly).
then you learn how to express Boolean circuits in terms of different matrix multiplications, to capture classical computation in this model. This should be pretty easy if you have a software engineer's grasp of Boolean logic.
Then you can learn basic ideas about entanglement, and a few of the weird quantum tricks that make algorithms like Shor and Grover search work. Shor's algorithm may be a little mathematically tough.
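As a small concrete taste of the "Boolean logic as matrix multiplication" step, here's a minimal numpy example of mine (not from any particular course):

    import numpy as np

    # Computational basis states for one bit/qubit.
    ket0 = np.array([1, 0])
    ket1 = np.array([0, 1])

    # NOT is a 2x2 permutation matrix; CNOT is a 4x4 one acting on two (qu)bits.
    NOT = np.array([[0, 1],
                    [1, 0]])
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    print(NOT @ ket0)                # [0 1] -> |1>
    state_10 = np.kron(ket1, ket0)   # two-bit basis state |10>
    print(CNOT @ state_10)           # -> |11>: control is 1, so the target flips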
realistically you probably will never need to know how to program a quantum computer even if they become practical and successful. applications are powerful but very limited.
"What You Shouldn't Know About Quantum Computers" is a good non-mathematical read.
From my __very__ shallow understanding, because all of the efficiency increases are in very specific areas, it might not be useful for the average computer science interested individual?
[1] - https://kvathupo.github.io/cs/quantum/457_Final_Report.pdf
I would not worry about hardware at first. But if you are interested and like physics, the simplest to understand are linear optical quantum circuits. These use components which may be familiar from high school or undergraduate physics. The catch is that the space (and component count) is exponential in the number of qubits, hence the need for more exotic designs.
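To make the "exponential space" point concrete, here's a quick calculation of mine, assuming a dense statevector with one complex double (16 bytes) per amplitude:

    # Memory needed to hold a full n-qubit statevector classically.
    for n_qubits in (10, 30, 50):
        amplitudes = 2 ** n_qubits
        bytes_needed = amplitudes * 16   # one complex128 per amplitude
        print(f"{n_qubits} qubits: {bytes_needed / 1e9:.3g} GB")
    # 10 qubits: ~16 KB; 30 qubits: ~17 GB; 50 qubits: ~18 petabytes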
I prefer his explanation to most other explanations because he starts, right away, with an analogy to ordinary probabilities. It's easy to understand how linear algebra is related to probability (a random combination of two outcomes is described by linearly combining them), so the fact that we represent random states by vectors is not surprising at all. His explanation of the Dirac bra-ket notation is also extremely well executed. My only quibble is that he doesn't introduce density matrices (which in my mind are the correct way to understand quantum states) until halfway through the notes.
But the key thing to know about quantum computing is that it is all about the mathematical properties of quantum physics, such as the way complex probabilities work.
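One concrete place where the probability analogy breaks down is interference, which a couple of lines of numpy can show (my own example): a classical 50/50 randomizing step applied twice leaves the bit random, while a Hadamard applied twice returns the qubit to |0> with certainty, because the amplitudes cancel.

    import numpy as np

    classical_mix = np.array([[0.5, 0.5],    # stochastic matrix: randomize the bit
                              [0.5, 0.5]])
    hadamard = np.array([[1,  1],
                         [1, -1]]) / np.sqrt(2)

    p0 = np.array([1.0, 0.0])     # classical bit definitely 0
    psi0 = np.array([1.0, 0.0])   # qubit in |0>

    print(classical_mix @ classical_mix @ p0)  # [0.5 0.5] -> still random
    print(hadamard @ hadamard @ psi0)          # [1. 0.]   -> back to |0>, amplitudes cancelled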
Short enough that it's reasonable to start R&D efforts on post-quantum crypto.
One thing that might be problematic for a blockchain where everything has to go on the blockchain forever is that some post quantum schemes have really large signatures or key sizes.
I'm not that familiar with the details of Bitcoin, but I had the impression that P2PKH is more secure against quantum computers.
[I should emphasize, I'm not a cryptographer and only somewhat familiar with Bitcoin]
https://www.deloitte.com/nl/en/services/risk-advisory/perspe...
There are, however, analog quantum computers, e.g. by Pasqal, which hope to capitalize on this to optimize AI-like high dimension optimization problems.
At least as far as I'm aware, by "digital" they probably mean a generally programmable QC, whereas another approach is to encode a specific class of problems in the physical structure of an analog QC so that it solves those problems much faster than classical. This latter approach is less general (so for instance you won't use it to factor large numbers) but much more attainable. I think D-Wave or someone like that already had a commercial application for optimization problems (either traveling salesman or something to do with port organization).
E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.
E.g. an LLM chat application is searching through possible responses to find one which best correlates with expected answer to the prompt.
With Grover's algorithm, quantum computers let you find an answer in any disordered search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
But if you're so narrow minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm
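If you want to see the sqrt(N) behaviour for yourself, here's a tiny brute-force statevector simulation of Grover's algorithm I sketched (plain numpy, no quantum library, not efficient, just illustrative):

    import numpy as np

    def grover(n_items: int, marked: int) -> int:
        """Simulate Grover search over n_items states, return the most likely index."""
        state = np.full(n_items, 1 / np.sqrt(n_items))          # uniform superposition
        oracle = np.eye(n_items)
        oracle[marked, marked] = -1                              # flip the sign of the marked item
        diffusion = 2 * np.full((n_items, n_items), 1 / n_items) - np.eye(n_items)
        for _ in range(int(np.pi / 4 * np.sqrt(n_items))):       # ~ (pi/4) * sqrt(N) iterations
            state = diffusion @ (oracle @ state)
        return int(np.argmax(state ** 2))

    print(grover(n_items=256, marked=123))   # finds 123 in ~12 iterations instead of ~256 checks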
There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.
There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.
But you could build a computer to search though sqrt(2^152) = 2^76 states. I mean it'd be big--that's on the order of total global storage capacity. But not "bigger than the universe" big.
Cryptographers worry about big numbers. 2^80 is not considered secure.
We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take ~10 iterations, but split over two it'd take ~7 on each, or ~14 in total.
This is not at all a surprising property. The same things happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.
This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half, it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).
In the case of binary search, with the very first comparison, you can already tell in which half of the list your element is: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
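The arithmetic behind the parent comment's numbers, using the idealized sqrt(N) iteration count rather than the exact (pi/4)*sqrt(N):

    from math import isclose, sqrt

    N = 100
    whole = sqrt(N)            # ~10 iterations searching the full list at once
    split = 2 * sqrt(N / 2)    # ~14 iterations total across two half-lists
    print(whole, split)        # 10.0 vs ~14.14: splitting costs a factor of sqrt(2) more work
    assert isclose(split / whole, sqrt(2))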
Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.
Definitely agree with the latter, but do you have any sources on how quantum comphters make "AI" (i.e. matrix multiplication) faster?
So what are the implications if so ?
What do you mean by this?
Honestly, there's not that much to discuss on this though. The only things you can do from this strategizing is to consider even encrypted data as not safe to store, unless you're using quantum resistant encryption such as AES; and to budget time for switching to PQC as it becomes available.
So the commenter is saying that Cybersecurity needs to be planning for a near-world where traditional cryptography, including lots of existing data at rest, is suddenly as insecure as plaintext.
That level of embarrassment is frankly difficult to face. And it would be devastating to the self-image of a bunch of “practical” security gurus.
Therefore any progress must be an illusion. In the real world, the threats are predictable and mistakes don’t slowly snowball into a crisis. See also, infrastructure.
Yeah, this is pretty huge. They achieved the result with surface codes, which are general ECCs. The repetition code was used to further probe quantum ECC floor. "Just POC" likely doesn't do it justice.
(Original comment):
Also quantum dabbler (coincidentally dabbled in bitflip quantum error correction research). Skimmed the post/research blog. I believe the key point is the scaling of error correction via repetition codes, would love someone else's viewpoint.
Slightly concerning quote[2]:
"""
By running experiments with repetition codes and ignoring other error types, we achieve lower encoded error rates while employing many of the same error correction principles as the surface code. The repetition code acts as an advance scout for checking whether error correction will work all the way down to the near-perfect encoded error rates we’ll ultimately need.
"""
I'm getting the feeling that this is more about proof-of-concept, rather than near-practicality, but this is certainly one fantastic POC if true.
[1]: https://arxiv.org/abs/2408.13687
[2]: https://research.google/blog/making-quantum-error-correction...
Relevant quote from the preprint (end of section 1):
"""
In this work, we realize surface codes operating below threshold on two superconducting processors. Using a 72-qubit processor, we implement a distance-5 surface code operating with an integrated real-time decoder. In addition, using a 105-qubit processor with similar performance, we realize a distance-7 surface code. These processors demonstrate Λ > 2 up to distance-5 and distance-7, respectively. Our distance-5 quantum memories are beyond break-even, with distance-7 preserving quantum information for more than twice as long as its best constituent physical qubit. To identify possible logical error floors, we also implement high-distance repetition codes on the 72-qubit processor, with error rates that are dominated by correlated error events occurring once an hour. These errors, whose origins are not yet understood, set a current error floor of 10^-10. Finally, we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor's fast 1.1 µs cycle duration.
"""
However the main scaling of error correction is via surface codes, not repetition codes. It's an important point as surface codes correct all Pauli errors, not just either bit-flips or phase-flips.
They use repetition codes as a diagnostic method in this paper more than anything, it is not the main result.
In particular, I interpret the quote you used as: "We want to scale surface codes even more, and if we were able to do the same scaling with surface codes as we are able to do with repetition codes, then this is the behaviour we would expect."
Edit: Welp, saw your edit, you came to the same conclusion yourself in the time it took me to write my comment.
Goodbye not just to Bitcoin, but also Visa, Stripe, Amazon shopping, ...
Symmetric ciphers have similar properties (AES, ChaCha20). Asymmetric encryption atm would use ECDH (which breaks) to generate a key for use with symmetric ciphers; Kyber provides a PQC KEM for this.
So, the situation isn't as bad. We're well positioned in cryptography to handle a PQC world.
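The "hybrid" construction mentioned above (PQ3, the KEMTLS-style deployments) is conceptually simple: run both key exchanges and feed both shared secrets through one KDF, so an attacker has to break both. A rough sketch of mine with a plain hash as the KDF and placeholder byte strings for the two shared secrets (not any particular protocol's actual combiner):

    import hashlib

    def combine_shared_secrets(ss_ecdh: bytes, ss_kyber: bytes, context: bytes) -> bytes:
        # Derive one session key from both exchanges; it stays secret if EITHER input stays secret.
        return hashlib.sha256(ss_ecdh + ss_kyber + context).digest()

    # Placeholder inputs for illustration only -- in practice these come from X25519 and ML-KEM.
    session_key = combine_shared_secrets(b"ecdh-shared-secret", b"kyber-shared-secret", b"handshake-v1")
    print(session_key.hex())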
https://en.wikipedia.org/wiki/Post-quantum_cryptography
https://www.microsoft.com/en-us/research/project/post-quantu...
https://www.forbes.com/councils/forbestechcouncil/2024/10/09...
https://blog.cloudflare.com/kemtls-post-quantum-tls-without-...
Signatures still have to be upgraded, but that's more difficult. We're working on it. http://blog.cloudflare.com/pq-2024/#migrating-the-internet-t...
Yup, like Bitcoin going to zero.
As to faking signatures and, e.g., stealing Satoshi's coins or just fucking up the network with fake transactions that verify, there is some concern and there are some attack vectors that work well if you have a large, fast quantum computer and want to ninja in. Essentially you need something that can crack a 256-bit ECDSA key in the window between a public key being revealed in a transaction and the block that includes it being confirmed. That's definitely out of the reach of anyone right now, much less persistent threat actors, much less hacker hobbyists.
But it won't always be. The current state of the art plan would be to transition to a quantum-resistant UTXO format, and I would imagine, knowing how Bitcoin has managed itself so far, that will be a well-considered, very safe, multi-year process, and it will happen with plenty of time.
EDIT: it's like rainbow hashes, but every possible variation is a color, not granular like binary, but all and any are included.
More likely that other critical infrastructure failures will happen within trad-finance, which has a much larger vulnerability footprint, and being able to trivially reverse engineer every logged SSL session is likely to be a much more impactful turn of events. I'd venture that there are significant ear-on-the-wire efforts going on right now in anticipation of a reasonable bulk SSL de-cloaking solution. Right now we think it doesn't matter who can see our "secure" traffic. I think that is going to change, retroactively, in a big way.
If the encryption on Bitcoin is broken, say goodbye to the banking system.
You mean digital signatures - and yes, we use signatures everywhere in public key cryptography.
Is that a crime? Lots of forgotten keys in there.
As for the forgotten key case, I think the only way to prove you had the key at some point would need to involve the sender vouching for you and cryptographically proving they were the sender.
Legally, the situation is the same: legal ownership is not in any way tied to the mechanism of how some system or another keeps track of ownership. Your BTC is yours via a contract, not because the BTC network says so. Of course, proving to a judge that someone else stole your BTC may be extremely hard, if not impossible.
Saying "if the protocol permits anyone who can sign a valid transaction involving a given UTXO to another address, then it technically isn't a "crime"" is like saying "traditional banking is governed by a banker checking your identity, so if someone can convince the banker they are you, then it technically isn't a "crime"".
The only thing that wouldn't be considered a crime, in both cases, is the system allowing the transaction to happen. That is, it's not a crime for the bank teller to give your money to someone else if they were legitimately fooled; and it's not a crime for the Bitcoin miners to give your money to someone else if that someone else impersonated your private key. But the person who fooled the bank teller /the miners is definitely committing a crime.
A long-term tactic of our adversaries is to capture network traffic for later decryption. The secrets in the mass of packets China assumedly has in storage, waiting for quantum tech, is a treasure trove that could lead to crucial state, corporate, and financial secrets being used against us or made public.
AI being able to leverage quantum processing power is a threat we can't even fathom right now.
Our world is going to change.
A sort of quantum commenting conundrum, I guess.
The first time I was just watching, determined not to press the button, but when I received the response, I was startled into pressing it.
The second time, I just stepped back from my keyboard, and my cat came flying out of the back room and walked on the keyboard, triggering the request.
The third time, I was holding my cat, and a train rumbled by outside, rattling my desk and apparently triggering the switch to send the request.
The fourth time, I checked the tracks, was holding my cat, and stepped back from my keyboard. Next thing I heard was a POP from my ceiling, and the request was triggered. There was a small hole burned through my keyboard when I examined it. Best I can figure, what was left of a meteorite managed to hit at exactly the right time.
I'm not going to try for a fifth time.
For a brief moment I thought this was some quantum-magical side effect you were describing and not some API error.
It's a bit like Jeopardy, really.
If you auth with the bearer token "And There Are No Friends At Dusk." then the API will call you and tell you which request you wanted to send.
Ah. Newbie mistake. You need to turn OFF your computer and disconnect from the network BEFORE sending the request. Without this step you will always receive a response before the request is issued.
I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.
Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?
I have no idea who put it there, but I can assure you the actual paper contains no such nonsense.
I would have thought whoever writes the google tech blogs is more competent than bottom tier science journalists. But in this case I think it is more reasonable to assume malice, as the post is authored by the Google Quantum AI Lead, and makes more sense as hype-boosting buzzword bullshit than as an honest misunderstanding that was not caught during editing.
No sign of a Heisenberg cut has been observed so far, even as experiments involving entanglement of larger and larger molecules are performed, which makes objective-collapse theories hard to consider seriously.
Bohmian theories are nice, but require awkward adjustments to reconcile them with relativity. But more importantly, they are philosophically uneconomical, requiring many unobservable — even theoretically — entities [0].
That leaves either many-worlds or a quantum logic/quantum Bayesian interpretations as serious contenders [1]. These interpretations aren't crank fringe nonsense. They are almost inevitable outcomes of seriously considering the implications of the theory.
I will say that personally, I find many-worlds to focus excessively on the Schrödinger-picture pure state formulation of quantum mechanics. (At least to the level that I understood it — I expect there is literature on the connection with algebraic formulations, but I haven't taken the time to understand it.) So I would lean towards quantum logic–type interpretations myself.
The point of this comment was to say that many-worlds (or "multiverses", though I dislike the term) isn't nonsense. But it also isn't exactly the kind of sci-fi thing non-physicists might picture. Given how easy it is to misinterpret the term, however, I must agree with you that a self-aware science communicator would think twice about whether the term should be included, and that there may be not-so-scrupulous intentions at play here.
Quick edit: I realise the comment I've written is very technical. I'm happy to try to answer any questions. I should preface it by stating that I'm not a professional in the field, but I studied quantum information theory at a Masters level, and always found the philosophical questions of interest.
---
[0] Many people seem to believe that many-worlds also postulates the existence of unobservable parallel universes, but this isn't true. We observe the interaction of these universes every time we observe quantum interference.
While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function. They don't discretely separate from one another, either — it depends on your choice of basis. That choice is where decoherence comes in.
[1] And of course, there is the Copenhagen "interpretation" — preferred among physicists who would rather not think about philosophy. (A respectable choice.)
As a side note, there is still a huge gap between the largest system we've ever observed in a superposition and the smallest system we've ever observed to behave only classically. So there is still a lot of room for objective collapse theories, even though that space has shrunk by some orders of magnitude since it was first proposed. Of course, objective collapse has other, much bigger, problems, such as being incompatible with Bell's inequalities.
Edit: I'd also note some things about MWI. First, there are many versions of it, some historical, some current. Some versions, at least older ones, absolutely did involve explicit branching. And the ones that don't still have a big problem explaining why, out of the many ways to choose the basis vectors for a measurement, we always end up with the same classical measurables in every experiment we perform on the world at large. Especially given that we know we can measure quantum systems in any other basis if we want to. It also ultimately doesn't answer the question of why we need the Born rule at all; it still postulates that an observer only has access to one possible value of the wave function and not to all at once. And of course, the problem of defining probabilities in a world where everything happens with probability 1 is another philosophically thorny issue, especially when you need the probabilities to match the amplitude of the wave function.
So the MWI is nice, and it did spawn a very useful and measurable observation, decoherence. But it's far from a single, satisfying, complete, self-consistent account of the world.
But it is not true for MWI: MWI was designed from the ground up as an interpretation of the mathematics and experimental results of quantum mechanics. It is designed specifically to match all of the predictions of quantum mechanics, and to not make any new predictions. Other interpretations are also designed in the same way.
So, if the people creating these interpretations succeeded in their goals when making them, then they will never be experimentally verifiable.
However, I also think there is a tendency among well-educated people in physics to dismiss philosophical questions out of hand. It's fair enough when the point is "let's focus on the physics as it's hard enough", but questions of interpretation have merit in their own right.
> While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function.
I've never heard about quantum logic before. The "Bayesian" part makes sense because of how it treats the statistics, but the logic? Is that what quantum computer scientists do with their quantum circuits, or is it an actual interpretation?
If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.
Also, your comment does not have any useful information. You assumed hype as the reason why they mentioned parallel computing. It's just a bias you have in looking at the world. Hype does help explain a lot of things. So it can be tempting to use it as a placeholder for anything that you don't accept based on your current set of beliefs.
I didn't "assume" hype, I hypothesized it based on the evidence before me: There is nothing in Google's paper that deals with interpretations of quantum mechanics. This only appears in the blog post, with no evidence given. And there is nothing google is doing with it's quantum chip that would discriminate between interpretations of QM, so it is simply false that "It lends credence to ... parallel universes" over another interpretation.
I can handle it, sure, and the idea of the multiverse is attractive to me from a philosophical standpoint.
But we have no evidence that there are any other universes out there, while we do have plenty of evidence that our own exists. Just because one of something exists, it doesn't automatically follow that there are others.
We have evidence for this universe though.
I can get on board with that: that there may be other, distinct universes, but I do not understand how this would lead to the suggestion they would be necessarily linked together with quantum effects.
Let me add a recommendation for David Wallace's book The Emergent Multiverse - a highly persuasive account of 'quantum theory according to the Everett Interpretation'. Aside from the technical chapters, much of it is comprehensible to non-physicists. It seems that adherents to MW do 'not know how to refute an incredulous stare'. (From a quotation)
People call it "many worlds" because we can interact only with a tiny fraction of the wavefunction at a time, i.e. other "branches" which are practically out of reach might be considered "parallel universes".
But it would be more correct to say that it's just one universe which is much more complex than what it looks like to our eyes. Quantum computers are able to tap into this complexity. They make a more complete use of the universe we are in.
A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[85]
Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[86]
In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[87] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[88]
A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[89]
A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[90] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[90]
[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation#Pol...
Reminds me of the Aorist Rods from Hitchhikers' Guide to the Galaxy.
Science is about coming up with the best explanations irrespective of whether or not a large chunk does not believe it.
And best explanations are the ones that is hard to vary. Not the one that is most widely accepted or easy to accept based on the current world view.
No, but as non-experts in a given field, the best information we have to go on is the consensus among scientists who are experts in the field.
Certainly this isn't a perfect metric, and consensus-smashing evidence sometimes comes to light, but unless and until that happens, we should assume that the people who study this sort of thing as their life's work are probably more correct than we are.
Ideally this would be true, but funding agencies are already preloaded with implicit asssumptions what constitutes a scientific progress.
MWI has not led to any verifiably-correct predictions, has it? At least not any that other interpretations can also predict, and have other, better properties.
Or is this one of those rhetorical questions?
It could be that we are borrowing qbit processing power from Russel's quantum teapot.
or you mean specifically the parallel computation view?
Unsure about those working on quantum foundations, but I think the absence of consensus is enough to say that no single view can be claimed as the absolute view.
I think if you were to ask people to make a real metaphysical speculation, the majority might be partial to Everett, especially if they felt confident the results were anonymous.
I believe the vast majority of researchers in quantum computing* spend almost no time on metaphysical speculation,
*Well, those on the "practical side" that thinks about algorithms and engineering quantum systems like the Google Quantum AI team and others. Not the computer science theorists knee-deep in quantum computational complexity proofs nor physics theorists working on foundations of quantum mechanics. But these last two categories are outnumbered by the "practical" side.
The success on the random (quantum) circuit problem is really a validation of Feynman's idea, not Deutsch's: classical computers need 2^n bits to simulate n qubits, so we will need quantum computers to efficiently simulate quantum phenomena.
Maybe A wasn't the most efficient algorithm for this universe to begin with?
That's in line with a religious belief. One camp believes one thing, other believes something else, others refuse to participate and say "shut up and calculate". Nothing wrong with religious beliefs of course, it's just important to know that is what it is.
A simple counterexample is superdeterminism, in which the different measurement outcomes are an illusion and instead there is always a single pre-determined measurement outcome. Note that this does not violate Bell's inequality for hidden variable theories of quantum mechanics, as Bell's inequality only applies to hidden variables uncorrelated to the choice of measurement: in superdeterminism, both are predetermined so perfectly correlated.
Just to be clear, where in the Schrödinger equation (iħψ̇ = Hψ) is the "multiverse"?
The Copenhagen interpretation is just "easier" (like: oops, all our calculations about the universe don't seem to fit, let's invent "dark matter") when the correct explanation makes any real-world calculation practically impossible (thus ending most of physics' further study), as any atom depends on every other atom at any time.
Doesn't this also mean that other universes have civilizations that could potentially borrow capacity from our universe, and if so, what would that look like?
Tangentially related, but there's a great Asimov book about this called The Gods Themselves (fiction).
That being said, I think the two most commonly preferred interpretations of quantum mechanics among physicists are 'Many Worlds' and 'I try not to think about it too hard.'
I don't know much about multiverse, but we need something external to explain the magic we uncover.
Energy and quantum mechanics are really cool but dense to get into. Like Planck, I suspect there's a link between consciousness and matter. I also think our energy doesn't cease to exist when our human carcass expires.
I used to love Popular Science magazine in middle school, but by high school I had noticed how much of its claims were hyperbole and outright nonsense. I can't fathom how or why, but most people blame the scientists for it.
Puffery is not a victimless crime.
"Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?"
In this context, your opinion and Deutsch's opinion don't matter. The question is about whether the idea is common in the field or not.
Quantum mechanics is a tool to calculate observable values, and this tool works very successfully without needing to make strong assumptions about the nature of the universe.
If it's not, what would be your explanation for this significant improvement then?
In view of these and other findings my conclusion is that Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.
On the other hand, the question is what does "real QC" mean? The current QCs perform very limited and small computations; they lack things like quantum memory. The large versions are extremely impractical to use in the sense that they run for thousandths of a second and take many hours to set up for a run. But that doesn't mean that the physical effects that they use/capture aren't real.
Just a long long way from practical.
- quantum physics is real, this isn't about debating that. The theory underpinning quantum computing is real.
- quantum annealing is theoretically real, but not the same "breakthrough" that a quantum computer would be. D-Wave and Google have made these.
- All benchmark computations have been about simulating a smaller quantum computer or annealer, which these systems can do faster than a brute-force classical search. These are literally the only situations where "quantum supremacy" exists.
- There is literally no claim of "productive" computation being made by a quantum computer. Only simulations of our assumptions about quantum systems.
- The critical gap is "quantum error correction", proof that they can use many error-prone physical qubits to simulate a smaller system with lower error. There isn't proof yet that it is actually possible.
The result they are claiming, that they have this critical error correction, is the single most groundbreaking result we could have in quantum computing. Their evidence does not satisfy the burden of proof. They also only claim to have 1 qubit, which is intrinsically useless, and doesn't examine the costs of simulating multiple interacting qubits.
https://scottaaronson.blog/?p=8310#comments
and here
https://scottaaronson.blog/?p=8329
though I bet he will have more to say now that the paper is officially out.
“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”
Achieving this turns the normal problem of scaling quantum computation upside down.
The scaling problem is multifaceted. IMHO the physical qubits are the biggest barrier to scaling.
In theory, theory and practice are the same.
Google's announcement is legit, and is in line with what theory and simulations expect.
Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time it takes to do a given calculation? We need to cool the quantum chip in our universe, how are the (n-1)verses cooling on their end?
What if it's already happening to our universe? And that is what black holes are? Or other cosmology concepts we don't understand?
Maybe a great filter is your inability to protect your universe from quantum technology from elsewhere in the multiverse ripping yours up?
Maybe the future of sentience isn't fighting for resources on a finite planet, or consuming the energy of stars, but fighting against other multiverses.
Maybe The Dark Forest Defence is a decision to isolate your universe from the multiverse - destroying its ability to participate in quantum computation, but also extending its lifespan.
(I don't believe ANY of this, but I'm just noting the fascinating science fiction storylines available)
DE is some sort of entropy that is being added to our cosmos in an exponential way over historic time. It began at a point a few billion years into our history.
I found it an interesting read and hadn't heard the term before, but it's exactly the kind of nerdy serendipity I come to this site for!
I think string theory's ideas about extra curled-up dimensions are far more likely places to look. You've already got an infinite instantaneous energy problem with multiverses, let alone entropy transfer considerations.
[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation
I've followed Many worlds & Simulation theory a bit too far and I ended up back where I started.
I feel like the most likely scenario is we are in an AI (kinder)garden being grown for future purposes.
So God is real, heaven is real, and your intentions matter.
Obviously I have no proof...
How do you reach that conclusion?
Characters in The Sims games technically have us human players as gods, it doesn't mean that when we uninstall the game those characters get to come into our earthly (to them) heaven or have any consequences for actions performed during the simulation?
I'm not deep into LLMs or AI safety right now, but if you have a bad performing AI, you aren't going to use it as a base for future work.
I was about to go to bed so I was rushing through my initial comment... I was just trying to understand the motivations for trying to create a stimulated reality... Look at the resources we spend on AI?
If you imagine simulations we can build ourselves, such as video games, it's not hard to add something at the edge of the map that users are prevented from reaching and have the code send "this thing is massive and powerful" data to the players. Who's to say that the simulation isn't actually focussed on earth, and everything including the sun is actually just a fiction designed to fool us?
Where have you seen this?
If we're a simulation of a parent universe that is exactly like us, just of its past or an alternate past, then we likely should be able to achieve simulating our own universe within ourselves. Otherwise we're not actually a simulation.
There's another line of counter argument that various results in QM and computing theory would suggest that it's mathematically impossible for the universe to be simulated on a computer (i.e. the parent universe would have to look very different from ours vs ours in the future). But I don't recall the arxiv paper.
Yes, this is a MASSIVE and COMPLETELY UNTESTABLE if
Everything about simulation theory is like, science-hostile or something it seems.
- https://thomasvilhena.com/2019/11/quantum-computing-for-prog...
That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.
For a rough but well-sourced overview, see Wikipedia: https://en.wikipedia.org/wiki/Threshold_theorem
For a review paper on surface codes, see A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, “Surface codes: Towards practical large-scale quantum computation,” Phys. Rev. A, vol. 86, no. 3, p. 032324, Sep. 2012, doi: 10.1103/PhysRevA.86.032324.
The claim about this is that correlated errors will lead to an "error floor", a certain size of error correction past which exponential reduction in errors no longer applies, due to a certain frequency of correlated errors. See figure 3a of the arxiv version of the paper: https://arxiv.org/abs/2408.13687
Not sure why you would say that? This sort of exponential suppression of errors is exactly how quantum error correction works and why we think quantum computing is viable. Source: have worked on quantum error correction for a couple of decades. Disclosure: I work on the team that did this experiment. More reading: lecture notes from back in the day explaining this exponential suppression https://courses.cs.washington.edu/courses/cse599d/06wi/lectu...
Remember macroscopic objects have 10^23 ≈ 2^76 particles, so until 76 qubits are reached and exceeded, I remain skeptical that the quantum system actually exploits an exponential Hilbert space, instead of the state being classically encoded by the particles somehow. I bet Google is struggling just at this threshold and they don't announce it.
I am not sure about RCS as the benchmark, as I'm not sure how useful that is in practice. It just produced really nice numbers. If I had a few billion in pocket change around, would I buy this to run RCS really fast? Nah, probably not. I'll get more excited when they factor numbers at a rate that would break public-key crypto. For that I would spend my pocket change!
The error correction is producing a single logical qubit of quantum memory, i.e. a single qubit with no gates applied to it.
Meanwhile, the random circuit sampling uses physical qubits with no error correction, and is used as a good benchmark in part because it can prove "quantumness" even in the presence of noise.[1]
[1] https://research.google/blog/validating-random-circuit-sampl...
> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca 50 qubit chip. In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made and a few years later a group said they did it on a conventional computer in a similar time.
The RCS is a common benchmark with no practical value, as is stated several times in the blog announcement as well. It's used because if a quantum computer can't do that, it can't do any other calculation either.
The main contribution here seems to be what they indeed put first, which is the error correction scaling.
She doesn't even say that this isn't a big leap (she says it's very impressive - just not the sort of leap that means that there are now practical applications for quantum computers, and that a pinch of salt is required on the claim of comparisons to a conventional computer due to the 2019 paper with a similar benchmark).
This was a fascinating watch, and not the kind of content that is easy to find. Besides videos like that one, I enjoy her videos as fun way to absorb critical takes on interesting science news.
Maybe she is controversial for being active and opinionated on social media, but we need more science influencers and educators like her, who don't just repeat the news without offering us context and interpretation.
And I can't blame her for adopting this trend, in many cases it is the difference between surviving or not on YouTube nowadays.
So, the way youtube works is that every single creator is in an adversarial competition for your attention and time. More content is uploaded than can be consumed (profitably, from Youtube's point of view). Every video you watch is a "victory" for that video's creator, and a loss for many others.
Every single time youtube shows you a screen full of thumbnails, it's running a race. Whichever video you pick will be shown to more users, while the videos you don't pick get punished and downranked in the algorithm. If a Youtube creator's video is shown to enough people without getting clicked on, ie has a low clickthrough rate, it literally stops being shown to people.
Youtube will even do this to channels you have explicitly subscribed to, which they barely use as a signal for recommendations nowadays.
Every single creator has said that clickbait thumbnails have better performance than otherwise. If other creators are using clickbait thumbnails, you will be at a natural disadvantage if you do not. There are not enough users who hate clickbait to drive any sort of signal to the algorithm(s).
If you as a creator have enough videos in a row that do not do well, you will find your entire channel basically stops getting recommended.
It's entirely a tragedy-of-the-commons problem: if every creator stopped simultaneously, nobody would suffer, but any defectors would benefit, so they won't all stop.
Youtube itself could trivially stop this, but in reality they love it, because they have absolutely run tests, and clickbait thumbnails drive more engagement than normal thumbnails. This is why they provide ample tooling to creators to A/B test thumbnails, help make better clickbait etc, and zero tooling around providing viewers a way to avoid clickbait thumbnails, which would be trivial to provide as an "alternative thumbnail" setting for creators and viewers.
Sabine is literally driving herself into an anti-science echo chamber though. Maybe she can't see it, but it's very clear from the outside what is happening. She has literally said that "90% of the science that your tax dollars pay for is bullshit", which is absurd hyperbole, and something that a PHYSICIST cannot say about all fields full stop. It's literally https://xkcd.com/793/
They do? For many years, I made my living from YouTube. This was always a feature that people wanted, but that didn’t exist. It’s been a year-plus since I’ve actively engaged on YouTube as a creator. Is this a recent change?
> Of course, as happened after we announced the first beyond-classical computation in 2019, we expect classical computers to keep improving on this benchmark
As IBM showed, their estimate of classical computer time was pulled out of their a**es.
Problems that benefit from quantum computing, as far as I'm aware, have their own complexity class, so it's also not like you have to consider Sabine's or any other person's thoughts and feelings on the subject - it has been formally demonstrated that such problems exist.
Whether the real world applications arrive or not, you can speculate for yourself. You really don't need to borrow the equally unsubstantiated opinion of someone else.
On the other hand, it's actually not completely necessary to have a superpolynomial quantum advantage in order to have some quantum advantage. A quantum computer running in quadratic time is still (probably) more useful than a classical computer running in O(n^100) time, even though they're both technically polynomial. An example of this is classical algorithms for simulating quantum circuits with bounded error, whose runtime is like n^(1/eps) where eps is the error. If you pick eps = 0.01 you've got a technically-polynomial-runtime classical algorithm, but its runtime is going to be n^100, which is likely very large.
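To make the n^100 point concrete, a trivial sketch (numbers purely illustrative, just comparing growth rates):

    # Operation counts for a quadratic-time algorithm vs an n^100-time one
    # (eps = 0.01 in the n^(1/eps) simulation bound mentioned above).
    for n in (2, 4, 8, 16):
        print(f"n={n:2d}: n^2 = {n ** 2}, n^100 = {n ** 100:.1e}")
    # Already at n = 2 the "polynomial" n^100 is ~1.3e30 steps.

Both are polynomial on paper, but only one of them finishes.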
> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.
Does anyone know?
But I don't really have a feel for what's going on, really. How many quantum computers are there? Is anything actually capable of doing more than being an ongoing research prototype? Any educated guesses about how far along some non-public projects might be by now? Like, is it possible that some secret CIA project is further ahead than what we know, or is it even more unlikely and farther away than fusion power? Or is it perhaps more comparable to cold fusion?
I know that this kinda exists as an idea, and apparently somebody's working on it, but that's pretty much it.
Not sure if they are close in terms of specs, but it looks like they are a viable solution and are seeing an increase in utilization over the last year... Seems both are pretty interesting to keep an eye on.
Would love it if someone could weigh in.
>>Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe
There are a lot of critiques of academia. In particular that it's so grant-obsessed you have to stay focused on your next grant all the time. This environment doesn't seem to reward solving big problems, but rather paper production to prove the last grant did something. Yet ostensibly we fund fundamental public research precisely for fundamental changes. The reality seems to be that the traditional funding model creates incremental progress within existing paradigms.
Around 50% of our time was spent working in Overleaf making small improvements to old projects so that we could submit to some new journal or call-for-papers. We were always doing peer review or getting peer reviewed. We were working with a lot of 3rd-party tools (e.g. FPGAs, IBM Q, etc). And our team was constantly churning due to people getting their degrees and leaving, people getting too busy with coursework, and people deciding they just weren't interested anymore.
Compare that to the corporate labs: They have a fully proprietary ecosystem. The people who developed that ecosystem are often the ones doing research on/with it. They aren't taking time off of their ideas to handle peer-review processes. They aren't taking time off to handle unrelated coursework. Their researchers don't graduate and start looking for professor positions at other universities.
It's not surprising in the slightest that the corporate labs do better. They're more focused and better suited for long-term research.
Because in product development, there can be short-sighted industry decisions based on quarterly returns. I've also seen a constant need to justify outcomes based on KPIs etc, and constantly justifying your work, etc.
> Because in product development, there can be short-sighted industry decisions based on quarterly returns. I've also seen a constant need to justify outcomes based on KPIs etc, and constantly justifying your work, etc.
I have seen this as well. It's extremely common (especially among publicly-owned companies) and frustrating. But it's not ubiquitous. Consider LM's Skunkworks or Apple's quiet development of the iPhone, and compare it to companies that finish a product and then focus on cutting costs / nickel-and-diming their customers.
In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.
I’ve seen lots of people dismissing this as if it isn’t impressive or important. I’ve watched one video where the author said in a deprecating manner “quantum computers are good for just two things: generating random numbers and simulating quantum systems”.
It’s like saying “the thing is good for just two things: making funny noises and producing infinite energy”.
(Also, generating random numbers is pretty useful, but I digress)
A quote from the article is especially ludicrous:
> to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges
You don't need a quantum computer to do this. We can solve housing and food scarcity today, arguably our greatest challenges. Big tech has been claiming that it's going to solve all of our problems for decades now and it has yet to put up.
If you want this type of technology to be made and do actual good, we need publicly funded research institutions. Tech won't save us.
If history is any guide we'll soon see that there are problems with the fidelity (the system they use to verify that the results are "correct") or problems with the difficulty of the underlying problem, as happened with Google's previous attempt to demonstrate quantum supremacy [1].
[1] https://gilkalai.wordpress.com/2024/12/09/the-case-against-g... -- note that although coincidentally published the same day as this announcement, this is talking about Google's previous results, not Willow.
Can someone explain to me how he made the jump from "we achieved a meaningful threshold in quantum computing performance" to "The multiverse is probably real"?
What computation would that be?
Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?
To me that sounds a bit like saying my "sand computer" (hourglass) is way faster than a classical computer, because it'd take a classical computer trillions of years to exactly simulate the final position of every individual grain of sand.
Sure, it proves that your quantum computer is actually a genuine quantum computer, but it's not going to be topping the LINPACK charts or factoring large semiprimes any time soon, is it?
To date, all attempts to produce valid claims of quantum supremacy via this channel have failed on closer inspection, and there is no reason to assume otherwise in this case until researchers have had time to look at the paper. There are a number of skeptics in the quantum computing field who believe that this is simply not possible.
It won't be factoring large numbers yet, because that computation requires the ability to perform millions of operations on thousands of qubits without any errors. You need very good error correction to do that, and luckily that's the other thing they demonstrated. The catch is that when they do error correction, they are basically combining their whole system down into one effective qubit. They'll need to scale by several orders of magnitude to have the hundreds of error-corrected qubits needed for factoring.
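Rough orders of magnitude, where every number below is an assumed ballpark (in the spirit of published RSA-2048 resource estimates such as Gidney–Ekerå's ~20 million qubits, not anything from this announcement):

    # Very rough, assumed numbers for illustration only.
    code_distance = 27                              # assumed distance for low enough logical error
    physical_per_logical = 2 * code_distance ** 2   # data + measure qubits in a surface code
    logical_qubits = 6000                           # ballpark logical-qubit count quoted for RSA-2048
    total = logical_qubits * physical_per_logical
    print(f"{physical_per_logical} physical qubits per logical qubit")
    print(f"~{total / 1e6:.0f} million physical qubits, before routing/distillation overhead")

Against the roughly one hundred physical qubits on today's chips, that's the "several orders of magnitude" of scaling.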
Ongoing research.
The main idea of quantum machine learning is that qubits make an exponentially high-dimensional space with linear resources, so can store and compute a lot of data easily.
However, getting the data in and results out of the quantum computer is tricky, and if you need many iterations in your optimization, that may destroy any advantage you have from using quantum computers.
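A toy numpy sketch of that point (purely illustrative): n qubits carry a state vector of 2^n amplitudes, so a length-2^n data vector can in principle be amplitude-encoded into just n qubits, and the loading/readout problem mentioned above is where the advantage tends to leak away:

    import numpy as np

    n = 10                                   # qubits
    data = np.random.rand(2 ** n)            # 1024 classical values
    state = data / np.linalg.norm(data)      # amplitude encoding: a unit vector of 2^n amplitudes
    print(len(state), "amplitudes held by", n, "qubits (in principle)")

    # Readout is the catch: a measurement samples basis state i with probability
    # |amplitude_i|^2, so recovering all 1024 values needs many repeated runs.
    print("probabilities sum to", (state ** 2).sum())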
AI is quite good in producing the meaningless drivel needed for quantum computing related press releases.
AI is limited in part by the computation available at training and runtime. If your computer is 10^X times faster, then your model is also "better". That's why we have giant warehouses full of H100 chips pulling down a few megawatts from the grid right now. Quantum computing could theoretically allow your phone to do that.
Are there AI algorithms that would benefit from quantum?
Though the more I think about this, the more I wonder how they really would compare if you made a strictly apples-to-apples comparison.
[1] https://psychology.stackexchange.com/questions/12385/how-muc...
The article concludes by saying that the former does not have practical applications. Why are they not using benchmarks that have some?
I'm not sure how to put it quantitatively, but my impression from listening to experts give technical presentations is that the breaking-rsa-type algorithms are a decade or two away.
This is very soon from a security perspective, as all you need is to store current data and break it in the future. But it is not soon enough to use for benchmarking current systems.
It's not something that new, I like it.
A much simpler explanation is that your benchmark is severely flawed.
But to put this into context: these numbers are likely accurate, but they represent the time it would take for a very naive classical algorithm (possibly brute force, I am unsure).
For example, the previous result claimed it would take Summit 10,000 years to do the same calculation as the Sycamore quantum chip. However, other researchers were able to reproduce results classically using tensor-network-based methods in 14.5 days using a "relatively small cluster". [1]
[1] G. Kalachev, P. Panteleev, P. Zhou, and M.-H. Yung, “Classical sampling of random quantum circuits with bounded fidelity,” arXiv.org, https://arxiv.org/abs/2112.15083 (accessed Dec. 9, 2024).
From chatgpt, "with n qubits a QC can be in a superposition of 2^n different states. This means that QCs can potentially perform computations on an exponential number of inputs at once"
I don't get how the first sentence in that quote leads to the second one. Any pointers to read to understand this?
Only asymmetric cryptography is threatened. There is no realistic threat to symmetric encryption like AES.
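Back-of-envelope for why (illustrative arithmetic): the best generic quantum attack on a symmetric cipher is Grover search, which only square-roots the work, so key sizes effectively halve rather than collapse:

    # Generic key search: classical ~2^k trials vs Grover ~2^(k/2) quantum iterations.
    for k in (128, 256):
        print(f"{k}-bit key: classical ~2^{k}, Grover ~2^{k // 2} quantum iterations")
    # 2^128 serial quantum operations is still astronomically out of reach, which is
    # why "use a 256-bit key" is the usual post-quantum advice for symmetric crypto.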
If you are encrypting your cloud data with ed25519 or RSA, then yes, a quantum computer could theoretically someday crack them.
aka everything we use daily
Not sure what you mean by the "that" when you say "if that's true", but there is nothing in this thread or by google that is anywhere close to breaking encryption.
Decryption with quantum computers is still likely decades away, as others have pointed out.
To be specific, the best known quantum factoring did 15 = 3×5, and 35 could not be factored when it was attempted. Most experimental demonstrations have stopped in recent years because of how pointless it currently is.
Willow has 100
The other tradeoff is that quantum computers are much noisier than classical computers. The error rate of classical computers is exceedingly low, to the extent that most programmers can go their entire career without even considering it as a possibility. But you can see from the figures in this post that even in a state of the art chip, the error rates are of order ~0.03--0.3%. Hopefully this will go down over time, but it's going to be a non-negligible aspect of quantum computing for the foreseeable future.
Also of note: P is in BQP, but it is not proven that BQP != P. Some problems, like factoring, have a known polynomial-time quantum algorithm while the best known classical algorithm is super-polynomial, which is where you see these massive speedups. But we don't know that there isn't a polynomial-time classical factoring algorithm we just haven't discovered yet. It is a (widely believed) conjecture that there are hard problems solvable in BQP that are outside P.
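For a feel of that gap, here's a back-of-envelope comparison (constants dropped, so only the growth rates mean anything): the best known classical factoring algorithm, the general number field sieve, is sub-exponential in the bit length, while Shor's algorithm is polynomial:

    import math

    def gnfs_cost(bits):
        # GNFS heuristic cost: exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)), N ~ 2^bits
        ln_n = bits * math.log(2)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

    def shor_cost(bits):
        # Shor's algorithm: roughly O((log N)^3) operations, constants ignored
        return bits ** 3

    for bits in (512, 1024, 2048):
        print(f"{bits}-bit modulus: GNFS ~ {gnfs_cost(bits):.1e}, Shor ~ {shor_cost(bits):.1e}")

But as noted above, nothing rules out a better classical algorithm; the gap is conjectured, not proven.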
I've pushed that off for a long time since I wasn't completely convinced that quantum computers actually worked, but I think I was wrong.
Also, my alma mater made the Quantum Enigmas series, which is appropriate for high-school students (it's also interesting if you have no prior knowledge about quantum computing): https://www.usherbrooke.ca/iq/quantumenigmas/ (it also uses IBM's online learning platform)
Quantum computing will surely have amazing applications that we cannot even conceive of right now. The earliest and maybe most useful applications might be in material science and medicine.
I'm somewhat disappointed that most discussions here focus on cryptography or even cryptocurrencies. People will just switch to post-quantum algorithms and most likely still have decades left to do so. Almost all data we have isn't important enough that intercept-now-decrypt-later really matters, and if you think you have such data, switch now...
Breaking cryptography is the most boring and useless application (among actual applications) of quantum computing. It's purely adversarial, merely an inconsequential step in a pointless arms race that we'd love to stop, if only we could learn to trust each other. To focus on this really betrays a lack of imagination.
As best I understand, it’s not clear yet whether quantum computing will ever have any practical applications.
Furthermore, there has already been a great deal of work identifying potential applications for a quantum computer, so I’d say we’ve got a fair idea of what you could do with one if it ever exists.
> After reading this article, how has your perception of Google changed? Gotten better / Gotten worse / Stayed the same
Otherwise there is no knowing if the accomplishment is really significant or not.
" Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s "
Standard benchmark in what sense? Well, it was chosen as a task where a quantum computer would have better performance.
I am not saying this is nothing. Maybe use more reserved wording, e.g. "special quantum-oriented benchmark" or something.
When I think of a standard benchmark, I am thinking of more common scenarios, e.g. searching, sorting, matrix multiplication.
Idk enough about quantum computing to even understand this... but a technology that makes, say, AES or Blowfish suddenly trivial to crack would very likely change the world.
Systemic design is usually about how you affect the margins
I'm far more scared when tech-bros like Musk land on Mars and contaminate stuff we might not even be able to detect yet.
Makes sense, or doesn't it? What's your take on the multiverse theory?
A classical search over N unsorted items takes O(N) time, but on a quantum computer, Grover's Algorithm allows such a search to be performed in O(N^0.5) time.
So Quantum Computing, could bring us a future where, when you perform a Google search for a word, the web pages returned actually contain the word you searched for.
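If anyone wants to see the N^0.5 concretely, here's a toy statevector simulation of Grover search (numpy, purely illustrative; `marked` is just an arbitrary index):

    import numpy as np

    n_qubits = 10
    N = 2 ** n_qubits                      # 1024 items
    marked = 123                           # the item we're "searching" for

    state = np.full(N, 1 / np.sqrt(N))     # uniform superposition over all N items
    iterations = int(round(np.pi / 4 * np.sqrt(N)))   # ~25 for N = 1024

    for _ in range(iterations):
        state[marked] *= -1                # oracle: flip the marked amplitude's sign
        state = 2 * state.mean() - state   # diffusion: inversion about the mean

    print(f"{iterations} iterations, P(marked) = {state[marked] ** 2:.3f}")
    # ~25 iterations (instead of ~1024 classical checks) push P(marked) close to 1.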
Lol! I'm not gonna put a kagi plug here...
Wait... what? Google said this and not some fringe crackpot?
Current complexity theory suggests that BQP, the class of problems solvable by quantum computers, does not encompass NP. Quantum computers may aid in approximations or heuristics for NP-complete problems, but won't fundamentally resolve them in polynomial time unless NP ⊆ BQP, which remains unlikely.
I do think AI algorithms could be built that quantum gates could be fast at, but I don't have any ideas off the top of my head this morning. If you think of AI training as searching the space of computational complexity, and quantum algorithms as accessing a superposition of search states, I would guess there's an intersection. Google thinks so too - the lab is called Quantum AI.
Note that BQP is not "efficient" in a real-world fashion, but for the theoretical study of quantum computing it's a good first guess.
"What is their mission? Cure cancer? Eliminate poverty? Explore the universe? No, their goal: to sell another fucking Nissan." --Scott Galloway
Yet another example as to why Google is essentially not going anywhere or ‘dying’ as most have been proclaiming these days.
In this day and age, I feel an immediate sense of distrust to any technologist with the "Burning Man" aesthetic for lack of a better word. (which you can see in the author's wikipedia profile from an adjacent festival -> https://en.wikipedia.org/wiki/Hartmut_Neven, as well as in this blog itself with his wristbands and sunglasses -> https://youtu.be/l_KrC1mzd0g?si=HQdB3NSsLBPTSv-B&t=39)
In the 2000's, any embrace of alternative culture was a breath of fresh air for technologists - it showed they cared about the human element of society as much as the mathematics.
But nowadays, especially in a post-truthiness, post-COVID world, it comes off in a different way to me. Our world is now filled with quasi-scientific cults. From flat earthers to anti-vaxxers, to people focused on "healing crystals", to the resurgence of astrology.
I wouldn't be saying this about anyone in a more shall we say "classical" domain. As a technologist, your claims are pretty easily verifiable and testable, even on fuzzy areas like large language models.
But in the Quantum world? I immediately start to approach the author of this with distrust:
* He's writing about multiverses
* He's claiming a quantum performance for something that would take a classical computer septillions of years.
I'm a layman in this domain. If these were true, should they be front-page news on CNN and the BBC? Or is this just how technology breakthroughs start (after all, the Transformer paper wasn't)?
But no matter what I just can't help but feel like the author's choices harm the credibility of the work. Before you downvote me, consider replying instead. I'm not defending feeling this way. I'm just explaining what I feel and why.
I guess something to think about is that amongst a group like the "burners" there is huge variety in individual experience and skill. And even within a single human mind it's possible to have radically groundbreaking thoughts in one domain and simultaneously be a total crackpot in another. Linus Pauling and the vitamin C thing comes to mind. There's no such thing as an average person!
I guess we'll see what the quantum experts have to say about this in the weeks to come =)
> * He's writing about multiverses
> * He's claiming a quantum performance for something that would take a classical computer septillions of years.
> I'm a layman in this domain
I think your skepticism is well-founded. But as you learn more about the field, you learn what parts are marketing/hype bullshit, and what parts are not, and how to translate from the bullshit to the underlying facts.
IMO:
> He's writing about multiverses
The author's pet theory, no relevance to the actual science being done.
> He's claiming a quantum performance for something that would take a classical computer septillions of years.
The classical computer is running a very naive algorithm, basically brute-force. It is very easy to write a classical algorithm which is very slow. But still, in the field, it takes new state-of-the-art classical algorithms run on medium size clusters to get results that are on-par with recent quantum computers. Not even much better, just on-par.
> Or is this just how technology breakthroughs start (after all the Transformer paper wasn't)
You could say that. It's not truly a breakthrough, but it is one more medium-size step in a rapidly advancing field.
The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.
Yes, Hartmut has a style, yes, he enjoys his lifestyle, no, he's not a quack. You don't have to believe me, and I don't expect that you will, but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.
Your comment here says far more about you than it says about Hartmut Neven.
I don't want to put you on the spot too much, but can you speak to why he included the part about many-worlds in this blog post?
I don't know enough about Google to say if maybe someone else less technical wrote that, or if he is being pressured to put sci-fi sounding terms in his posts, or if he believes Google's quantum computer is actually testing many-worlds, or some other reason I can't think of.
I picked my words very carefully and I would appreciate if you responded to what I said, not what you think I implied.
I specifically called out that I'm having feelings of bias. That in a field full of quack science and overpromises and underdelivery, I am extraordinarily suspicious of anyone who I feel might have a, shall we say, "less than rigorous" relationship with scientific accuracy. This person's aesthetic reminds me of this.
> The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.
You couldn't be more wrong. I'm a repeat Burner throughout the 2000's (though it's been a decade), and I've been to a dozen regional Burner events. I know many Burners both in the tech industry and outside of it.
So I actually speak with some experience. I know wonderful people who are purely artists and are not scientifically/technologically inclined - and they're great. I also know deep technologists for whom Burning Man is purely an aesthetic preference - a costume, not an outfit. Something to pretend to be for a little while, but that otherwise has no bearing on their outside life.
And I unfortunately know those whose brainrot ends up intertwining: crypto evangelists who find healing crystals just as groundbreaking as the blockchain. It's this latter category that I am the most suspicious of, and it's what I worry about when I see a person presented as an authoritative leader in the Quantum Computing domain demonstrate it in their external presentation.
I led with an acknowledgement that I am judging a book by its cover, which one ought never to do. But I think it is worth pointing out, because respectability in a cutting-edge field is important, lest you end up achieving technological breakthroughs that don't actually change society at all (as already happened with Google Glass).
> You don't have to believe me, and I don't expect that you will,
Why would you expect that I wouldn't?
> but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.
That's fantastic to hear! You have direct evidence contradicting the assumptions generated by my first impression. This is all that matters, and all you had to say.
You're basing things on your feelings, not personally knowing the person. I know you've alluded to that, but seriously, just stop.
> I worry when I see a person presented as an authoritative leader in the Quantum Computing domain demonstrate it in their external presentation.
I'm not sure why how someone dresses makes you worry, especially since you aren't even involved in QC. Stop worrying about things you can't control, especially someone else's appearance. Has Burning Man taught you nothing? If you think it taught you to be biased towards someone based on their appearance, then I think you completely missed the point.
> as already happened with Google Glass
You may not know this, but he was tapped to lead the Google Glass project, and quickly got out of it. He felt that the silicon at the time was not capable of producing the results people wanted in the form-factor they were expecting. He was right. Of course tech has improved since then and better VR/AR glasses in a convenient form factor are just now starting to be a thing, but Google Glass has long since been shuttered.
He didn't just come out of nowhere, he's been involved in actual AI (not LLMs) for decades. His company was bought by Google and is the basis for their computer vision systems, which is how he ended up at Google.
As for you supposing he's into "healing crystals" or any other woo nonsense simply based on how he dresses, I have never known him to talk about such things at all, in all our conversations throughout the decades.
> This person's aesthetic reminds me of this.
You are barking up the wrong tree, and you should maybe tone down your judginess of others. I have news for you - you can't tell a book by its cover, but you sure are trying to. You just come off as being jealous that someone can have fun and also be a pioneer in QC. No doubt any person at the top of their field has plenty of haters, based on nothing more than "he doesn't dress like I expect him to".
So really what is being claimed is that classical computers can't easily simulate quantum ones. But is that really surprising?
What would be surprising would be that kind of speedup vs classical on some kind of general optimization algorithm. I don't think that is what they are claiming though, even if it does kind of seem like it's being presented that way.