I’m a quantum dabbler so I’ll throw out an armchair reaction: this is a significant announcement.

My memory is that 256-bit keys in non-quantum-resistant algos need something like 2500 qubits or so to break, and by that I mean generally useful, programmable qubits. Showing a bit over 100 qubits with stability, meaning the information survives a while, long enough to be read, and general enough to run some benchmarks on, is something many people thought might never come.

There's a sort of religious reaction people have to quantum computing: it breaks so many things that I think a lot of people just like to assume it won't happen. Too much in computing and data security would change, so let's not worry about it.

Combined with the slow pace of physical research progress (Shor's algorithm for quantum factoring dates to the mid 90s) and the snake-oil companies, it's easy to ignore.

Anyway, it seems like the clock might be ticking; AI and data security will be unalterably different if so. Worth spending a little time doing some long-tail strategizing, I'd say.

You need to distinguish between "physical qubits" and "logical qubits." This paper creates a single "first-of-a-kind" logical qubit out of about 100 physical qubits (using surface-code quantum error correction). A 2019 paper from Google estimates needing ~20 million physical qubits ("How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" - https://arxiv.org/abs/1905.09749), though recent advances have probably brought this number down a bit. That's because to run Shor's algorithm at a useful scale, you need a few thousand very high quality logical qubits.
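
If it helps to see what those logical qubits would actually be doing: Shor's algorithm reduces factoring to finding the multiplicative order (period) of a random base modulo N, and only that order-finding step needs the quantum computer. Here's a toy, purely classical sketch of that reduction (names are illustrative; the brute-force order finding is the part a quantum computer would replace):

    from math import gcd
    from random import randrange

    def multiplicative_order(a, n):
        # Brute-force the order r with a^r = 1 (mod n).
        # This is the step Shor's algorithm does with a quantum Fourier transform.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor_via_order_finding(n):
        # Toy illustration of the classical reduction Shor's algorithm exploits.
        while True:
            a = randrange(2, n)
            g = gcd(a, n)
            if g > 1:
                return g, n // g                      # lucky guess already shares a factor
            r = multiplicative_order(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                p = gcd(pow(a, r // 2, n) - 1, n)
                if 1 < p < n:
                    return p, n // p                  # a^(r/2) +/- 1 reveal the factors

    print(factor_via_order_finding(15))   # (3, 5) or (5, 3)
    print(factor_via_order_finding(21))   # (3, 7) or (7, 3)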

So despite this significant progress, it's probably still a while until RSA is put out of a job. That being said, quantum computers would be able to retroactively break any public keys that were stored, so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.

Thank you for the explanation. It's still an upward update on the qubit timelines of https://arxiv.org/pdf/2009.05045 (see Fig. 7), but not by an insane amount. We've realized their 95% expectation of qubit progress (1 logical qubit) for 2026 in late 2024 (2024.92) instead.

Which to be clear is quite a bit faster than expected in 2020, but still within the realm of plausible stuff.

Nice, thanks for linking that paper. I also did below.

The authors argue (e.g. in the first comment here https://scottaaronson.blog/?p=8310#comments) that by their definition, Google still only has a fraction of one logical qubit. Their logical error rate is of order 1e-3, whereas this paper considers a logical qubit to have error of order 1e-18. Google's breakthrough here is to show that the logical error rate can be reduced exponentially as they make the system larger, but there is still a lot of scaling work to do to reach 1e-18.

So according to this paper, we are still on roughly the same track that they laid out, and therefore might expect to break RSA between 2040 and 2060. Note that there are likely a lot of interesting things one can do before breaking RSA, which is among the hardest quantum algorithms to run.

How should I understand the claim that the logical error rate can be reduced exponentially when the system becomes larger? Does it mean each physical qubit will generate more logical qubits when the total number of physical qubits increases?
A simple classical example where this is true is a repetition code. If you represent 1 as 11...1 and 0 as 00...0, randomly flip some bits, and then recover the logical state with majority voting, the probability of a logical error occurring shrinks exponentially as you add more bits to it. This is because a logical error requires flipping at least half the bits, and the probability of that happening decreases exponentially.
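
A rough Monte Carlo sketch of that effect (the function and numbers are just illustrative, assuming independent bit flips):

    import random

    def logical_error_rate(n_bits, p_flip, trials=100_000):
        # Estimate how often majority voting over an n-bit repetition code fails.
        errors = 0
        for _ in range(trials):
            flips = sum(random.random() < p_flip for _ in range(n_bits))
            if flips > n_bits // 2:          # more than half the bits flipped
                errors += 1
        return errors / trials

    # With a 10% physical flip rate, the logical error rate drops roughly
    # exponentially as the code gets longer.
    for n in (1, 3, 5, 7, 9, 11):
        print(n, logical_error_rate(n, p_flip=0.10))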

The error correcting code used in this work also has a nice intuitive explanation. Imagine a 2D grid of bits, visualized as pixels that can be black or white. Now imagine drawing a bunch of lines of black pixels, and enforcing that lines can only end on the top or bottom boundary (loops are allowed). If there is an even number of lines connecting the top to the bottom, we call that logical 0, and if there is an odd number of lines, we call that logical 1. This again has the property that as you add more bits, the probability of changing between logical 1 and 0 gets exponentially smaller, because the lines connecting top to bottom get longer (just like in the repetition code).

This code also has the nice property that if you measure the value of a small patch of (qu)bits, there's no way to tell what the logical state is. This is important for quantum error correction, because measurement destroys quantum information. So the fact that local measurements don't reveal the logical state means that the logical state is protected. This isn't true for the repetition code, where measuring a single bit tells you the logical state.

> so there's a case to be made for switching to quantum-resistant cryptography (like lattice-based cryptography) sooner rather than later.

This.

People seem to think that because something is end-to-end encrypted it is secure. They don't seem to grasp that the traffic and communication that is possibly dumped/recorded now in encrypted form could be used against them decades later.

Well. Yes, but currently there are no well-tested (i.e. recommended by the ITsec community) post-quantum cryptosystems, as far as I understand.

https://crypto.stackexchange.com/a/61596

But ... AES is believed to be quantum-safe-ish, so with perfect forward secrecy this exact threat can be managed quite well.

The currently best known quantum attack on AES requires a serial computation of "half of key length" (Grover's algorithm ... so if the key is 128 bits long, it requires 2^64 sequential steps).

https://www.reddit.com/r/AskNetsec/comments/15i0nzp/aes256_i...

Google uses NTRU-HRSS internally, which seems reasonable.

https://cloud.google.com/blog/products/identity-security/why...

Signal and Apple both use post-quantum cryptography.
I read about Signal's double-trouble tactics, but I haven't heard about Apple's.

Ah, okay: for iMessage there's something called PQ3 [1], and it uses Kyber. It's also a hybrid scheme, combined with ECC. And a lot of peer review.

And there's also some formal verification for Signal's PQXDH [2].

Oh, wow, not bad. Thanks!

Now let's hope a good reliable sane implementation emerges so others can also try this scheme. (And I'm very curious of the added complexity/maintenance burden and computational costs. Though I guess this mostly runs on the end users' devices, right?)

[1] https://security.apple.com/blog/imessage-pq3/ [2] https://github.com/Inria-Prosecco/pqxdh-analysis

This is correct. I worked in quantum research a little.
any books for beginners that you recommend?
This is an outstanding resource, thanks!
Dear rhubarbtree,

I'm very sorry to admit it but I'm too lazy to read the entire discussion in this thread. Could you please tell me, a mere mortal, at what point humanity should start worrying about the security of asymmetric encryption in the brave new world of quantum computing?

Funny turn of phrase, but you probably need 20 million physical qubits (or at least you did a few years ago, this number may have dropped somewhat) to do anything significant. I don't think we can be confident we'll get there any time soon.
> quantum computers would be able to retroactively break any public keys that were stored

Use a key exchange that offers perfect forward secrecy (e.g. Diffie-Hellman) and you don't need to worry about your RSA private key eventually being discovered.

Diffie-Hellman isn't considered to be post-quantum safe: https://en.wikipedia.org/wiki/Shor%27s_algorithm#Feasibility...
> Forward secrecy is designed to prevent the compromise of a long-term secret key from affecting the confidentiality of past conversations. However, forward secrecy cannot defend against a successful cryptanalysis of the underlying ciphers being used, since a cryptanalysis consists of finding a way to decrypt an encrypted message without the key, and forward secrecy only protects keys, not the ciphers themselves.[8] A patient attacker can capture a conversation whose confidentiality is protected through the use of public-key cryptography and wait until the underlying cipher is broken (e.g. large quantum computers could be created which allow the discrete logarithm problem to be computed quickly). This would allow the recovery of old plaintexts even in a system employing forward secrecy.

https://en.wikipedia.org/wiki/Forward_secrecy#Attacks

I’m talking specifically about RSA being eventually broken. If just RSA is broken and you were using ECDHE for symmetric keying, then you’re fine.

The point is that you can build stuff on top of RSA today even if you expect it to be broken eventually if RSA is only for identity verification.

The relevant RSA break is sufficiently powerful quantum computers, which also break ECDH (actually, ECDH is easier than classically equivalent-strength RSA for quantum computers[1]), so no, you’re not fine.

[1] https://security.stackexchange.com/questions/33069/why-is-ec...

I would actually expect RSA to see a resurgence because of this, especially since you can technically scale RSA to very high security levels, potentially pushing the crack date decades later than any ECC construction, with the potential that such a large quantum computer may never even arrive.

There are several choices for scaling RSA, too: you can push the prime sizes, which slows key generation considerably, or, the more reasonable approach, settle on a prime size but use multiple primes (multi-prime RSA). The second approach scales indefinitely, though it would only serve a purpose if you are determined to hedge against the accepted PQC algorithms (Kyber/ML-KEM, McEliece) being broken at some point.
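
A toy sketch of the multi-prime idea (using sympy for prime generation; the parameter choices are illustrative, and this is nowhere near production crypto: no padding, no constant-time arithmetic):

    from math import prod, lcm
    from sympy import randprime

    def multiprime_rsa_keygen(num_primes=4, prime_bits=512, e=65537):
        # The modulus grows by adding more primes of a fixed size rather than
        # by making two primes ever larger; hedging against a QC would mean
        # using far more primes than this toy default.
        primes = [randprime(2**(prime_bits - 1), 2**prime_bits) for _ in range(num_primes)]
        n = prod(primes)
        carm = lcm(*(p - 1 for p in primes))   # Carmichael function of n
        d = pow(e, -1, carm)                   # assumes gcd(e, carm) == 1 (overwhelmingly likely)
        return (n, e), d

    (n, e), d = multiprime_rsa_keygen()
    m = 42
    assert pow(pow(m, e, n), d, n) == m        # encrypt then decrypt round-trips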

If you don't mind a one terabyte public key. https://eprint.iacr.org/2017/351.pdf
Also, that paper (IMO) is ridiculously conservative. Just using 1 GB keys is plenty sufficient, since it would require a quantum computer with billions of qubits to decrypt.
How long does it take to generate a key that big? What probabilities do you need to put on generating a composite number and not a prime? Does the prime need extra properties?
Based on https://eprint.iacr.org/2017/351.pdf it would be about 1900 core-hours (but I'm pretty sure optimized implementations could bring it down a bunch). No extra properties are needed, and a moderate error probability is sufficient.
Although I know it’s an apocryphal quote, I am reminded of “640K should be enough for anybody.”

The Intel 4004, in 1971, had only 2,250 transistors.

A handful of qubits today might become a billion sooner than you think.

It took until 2011 before Sandy Bridge cracked 2 billion transistors. If we get 40 years of quantum resistance from 1 GB RSA, that would be pretty great.
Perfect forward secrecy doesn't work that well when the NSA's motto is "store everything now, decrypt later." If they intercept the ephemeral key exchange now, they can decrypt the message 10 or 50 years later.
Diffie Hellman doesn’t ever send the key over the wire, that’s the point. There is nothing to decrypt in the packets that tells you the key both sides derived.

Unless they break ECDHE, it doesn’t matter if RSA gets popped.

Diffie-Hellman, to the best of my understanding, also relies on the same kind of hard problems that make public-key cryptography possible. If you trivialize factoring of big numbers (and the related discrete log problem), you break both RSA and the original DHE. Not sure how it will work for elliptic curves, but my instinct tells me that if you make the fundamental ECC problem easy, the exchange will also go down.
According to the top image on the Wikipedia page, Diffie Hellman does send the public key over the wire.

https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...

wouldn't be surprised if ecdhe isn't quantum resistant.
Something tells me that by the end of the century only the one-time pads will still be holding their secrets.
Even for that to work you need a good random number generator
That's pretty trivial. xor a video camera with AES and no one is decrypting that ever.
And, famously, the camera is pointing at a lava lamp, ha ha.
Honestly not sure how they created one-time pads 100 years ago.
In one case it was people just banging on typewriters randomly.
Forward secrecy is orthogonal to post-quantum safety.
Perfect forward secrecy requires the exchange of ephemeral keys. If you use either ECC or RSA for this and the traffic is captured a quantum computer will break it.

All perfect forward secrecy means is that you delete your own ephemeral private keys, the public keys stay in the record. And a quantum computer will recover the deleted private keys.

Also, none of the currently accepted post-quantum cryptographic algorithms offer a Diffie-Hellman construction. They use KEM (Key Encapsulation Mechanism).

Not exactly, they can just reverse the entire chain.
What chain are you talking about?
The required number of qubits to execute Shor's algorithm is way larger than 2500, as the error ceiling for logical qubits must decrease exponentially with every logical qubit added to produce meaningful results. Hence, repeated applications of error correction, or an increase in the surface code distance, would be required. That would significantly blow up the number of physical qubits needed.
He’s quoting the number of logical qubits (which is 1024 IIRC, not 2500), after error correction.

ETA: Wikipedia says 2330 qubits, but I'm not sure it is citing the most recent work: https://en.wikipedia.org/wiki/Elliptic-curve_cryptography#ci...

I actually thought the number of logical qubits needed was around 20 for factorisation, since the state space size is 2^(2^n), and hence did not recognise these as the number of logical qubits required. It is easy to mistakenly assume that error correction needs to be done only once, as with classical computers, in which case the numbers would fit together with one pass of error correction.

Shor's algorithm requires binary encoding; hence, around 2048 logical qubits are needed before it becomes a nuisance for cryptography. This, in turn, means that one will always be able to run away from a quantum adversary by paying a polynomial price on group element computations, whereas a classical adversary is exponentially bounded in computation time and a quantum adversary is exponentially bounded by the number of physical qubits. Fascinating...

1024 is for RSA-1024, which is believed to be broken by classical means at this point. Everyone doing anything with RSA is on 4k or larger.
2048. There is no plausible conventional attack on 2048; whatever breaks 2048 is probably going to break 4096, as I understand it.

https://crypto.stackexchange.com/questions/1978/how-big-an-r...

> Everyone doing anything with RSA is on 4k or larger.

The Let's Encrypt intermediate certificates R10 and R11 seem to be only 2048 bit.

A signature (not encryption) with a short validity period is still fine at 2048.
They are? The short term recommendation is 3072, and I still see lots of 2048. Actually, it's mostly 2048.
Reminder: if DKIM keys haven't been rotated in a while, they might still be 1024-bit. E.g. Google Workspace, though new keys are 2048 now.
I took this conversation to be about ECC, not RSA.
My completely unfounded tin-foil-hat theory at the moment is that ECC was pushed as a standard not because it was faster/smaller, but because the smaller key size makes it less quantum resistant and more prone to be broken first (if not already) via quantum supremacy.
ECC is easier to implement right. Even the older curves that are full of footguns have orders of magnitude fewer footguns than RSA.

So don't fear them due to unfounded theories.

Is there consensus on what signature validation is correct? https://hdevalence.ca/blog/2020-10-04-its-25519am
IMO, that article has a clear conclusion that you should aim your software at the libsodium-1.0.16 pattern (no edge cases).

The problem it's presenting is more about software on the wild having different behavior... And if "some people connect on the internet and use software that behaves differently from mine" is a showstopper for you, I have some really bad news.

An interesting read. So the general idea, as I understand it, is that one can make signatures that pass or do not pass validation depending on the implementation, while all implementations do protect against forgery.

In my opinion the correct approach here is the most liberal one for the Q, R points. One checks the cofactor condition 8P = 0 for each point at parsing time and then uses the unbatched equation for verification. This way implementations can be made group agnostic.

Having group-agnostic implementations is important as it creates a proper separation of concerns between curve implementations and the code that uses them. For instance, if we were to accept strict validation as the ground truth and best practice, one would have an enormously hard time specifying verifiers for zero-knowledge proofs, and it would also double the time and code for implementers without any effect on soundness.

You're right, that's pretty unfounded.
Breaking discrete log uses a variant of Shor's algorithm, and appears to need more qubits to break (per key bit).
Is it broken? It seems no one has solved the RSA-1024 challenge yet.
The state actors aren't going to reveal their capabilities by solving an open challenge.
The most recent result on reducing the number of logical qubits is [1]. They show how to use residue arithmetic to factor n bit numbers using n/2 + o(n) logical qubits (they give the example of 1730 qubits to factor a 2048 bit number).

[1]: https://eprint.iacr.org/2024/222

Well, this is embarrassing. I just realised I had wrongly interpreted the result in [1]. I made an error on how Shor's algorithm encodes the numbers, wrongly assuming that numbers are encoded into the quantum state space, which has size 2^(2^n), whereas instead one bit is encoded into one qubit, which is also more practical.

The result should be interpreted directly, with the error rate for logical qubits decreasing as ~n^(-1/3). This, in turn, means that factorisation of a 10000-bit number would only require a logical error rate 1/10th of that needed for a 10-bit number. This is practical, given that one can make a quantum computer with around 100k qubits and correct errors on them.

On the other hand, a sibling comment already mentioned the limited connectivity these quantum computers currently have. This, in turn, requires repeated application of SWAP gates to get the interactions one needs. I guess this would add a linear overhead to the noise; hence, the scaling of the error rate for logical qubits is around ~n^(-4/3). This, in turn, makes 10000-bit factorisation require a logical error rate 1/10000 that of 10-bit number factorisation. Assuming that 10 physical qubits are used to reduce the error by an order of magnitude, it could result in around 400k physical qubits.

[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3

Isn't that what they are claiming is true now? That the errors do decrease exponentially with each qubit added?
What they claim is that adding physical qubits reduces the error rate of the logical qubits exponentially. For Shor's algorithm, the error rate of the logical qubits must decrease exponentially with every single logical qubit added for the system to produce meaningful results.

To see how it plays out, consider adding a single logical qubit. First you need to increase the number of physical qubits to accommodate the new logical qubit at the same error rate. Then you multiply the number of physical qubits to accommodate the exponentially decreased error rate; call that multiplier a constant factor N (or polynomial, but let's keep things simple) by which the number of physical qubits needs to be multiplied to produce a system with one additional logical qubit at an error rate that still produces meaningful results.

To attain 1024 logical qubits for Shor's algorithm one would need N^1024 physical qubits. The case where N < 1 would only be possible if the error decreased by itself without additional error correction.

The error rates given are still horrendous and nowhere near low enough for the quantum Fourier transform used by Shor's algorithm. Taking qubit connectivity into account, a single CX between 2 qubits that are 10 edges away gives an error rate of 1.5%.

Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses. Exponentially so. Qubit connectivity is still ridiculously low (~3) and does not seem to be improving at all.

About AI, what algorithm(s) do you think might have an edge over classical supercomputers in the next 30 years? I'm really curious, because to me it's all (quantum) snake oil.

In addition to that, the absolutely enormous domains that the Fourier Transform sums over (essentially, one term in the sum for each possible answer), and the cancellations which would have to occur for that sum to be informative, means that a theoretically-capable Quantum Computer will be testing the predictions of Quantum Mechanics to a degree of precision hundreds of orders of magnitude greater than any physics experiment to date. (Or at least dozens of orders of magnitude, in the case of breaking Discrete Log on an Elliptic Curve.) It demands higher accuracy in the probability distributions predicted by QM than could be confirmed by naive frequency tests which used the entire lifetime of the entire universe as their laboratory!

Imagine a device conceived in the 17th century, the intended functionality of which would require a physical sphere which matches a perfect, ideal, geometric sphere in Euclidean space to thousands of digits of precision. We now know that the concept of such a perfect physical sphere is incoherent with modern physics in a variety of ways (e.g., atomic basis of matter, background gravitational waves.) I strongly suspect that the cancellations required for the Fourier Transform in Shor's algorithm to be cryptographically relevant will turn out to be the moral equivalent of that perfect sphere.

We'll probably learn some new physics in the process of trying to build a Quantum Computer, but I highly doubt that we'll learn each others' secrets.

Beautiful analogy.
Re: AI, it's a long way off still. The big limitation to anything quantum is always going to be decoherence and T-time [0]. To do anything with ML, you'll need a whole circuit (more complex than Shor's) just to initialize the data on the quantum device; the algorithms to do this are complex (exponential) [1]. So you have to run a very expensive data-initialization circuit, and only then can you start to run your ML circuit. All of this needs to be done within the machine's T-time limit. If you exceed that limit, then the measured state of a qubit will have more to do with outside-world interactions than with your quantum gates.

Google's Willow chip has T-times of about 60-100 µs. That's not an impressive figure -- in 2022, IBM announced their Eagle chip with T-times of around 400 µs [2]. Google's angle here would be the error correction (EC).

The following portion from Google's announcement seems most important:

> With 105 qubits, Willow now has best-in-class performance across the two system benchmarks discussed above: quantum error correction and random circuit sampling. Such algorithmic benchmarks are the best way to measure overall chip performance. Other more specific performance metrics are also important; for example, our T1 times, which measure how long qubits can retain an excitation — the key quantum computational resource — are now approaching 100 µs (microseconds). This is an impressive ~5x improvement over our previous generation of chips.

Again, as they lead with, their focus here is on error correction. I'm not sure how their results compare to competitors, but it sounds like they consider that to be the biggest win of the project. The RCS metric is interesting, but RCS has no (known) practical applications (though it is a common benchmark). Their T-times are an improvement over older Google chips, but not industry-leading.

I'm curious if EC can mitigate the sub-par decoherence times.

[0]: https://www.science.org/doi/abs/10.1126/science.270.5242.163...

[1]: https://dl.acm.org/doi/abs/10.5555/3511065.3511068

[2]: https://www.ibm.com/quantum/blog/eagle-quantum-processor-per...

> I'm curious if EC can mitigate the sub-par decoherence times.

The main EC paper referenced in this blog post showed that the logical qubit lifetime using a distance-7 code (all 105 qubits) was double the lifetime of the physical qubits of the same machine.

I'm not sure how lifetime relates to decoherence time, but if that helps please let me know.

That's very useful, I missed that when I read through the article.

If the logical qubit can have double the lifetime of any physical qubit, that's massive. Recall IBM's chips, with T-times of ~400 microseconds. Doubling that would change the order of magnitude.

It still won't be enough to do much in the near term - like other commenters say, this seems to be a proof of concept - but the concept is very promising.

The first company to get there and make their systems easy to use could see a similar run up in value to NVIDIA after ChatGPT3. IBM seems to be the strongest in the space overall, for now.

I'm sorry if this is nitpicky but your comment is hilarious to me - doubling something is doubling something, "changing the order of magnitude" would entail multiplication by 10.
Hahaha not at all, great catch. Sometimes my gray matter just totally craps out... like thinking of "changing order of magnitude" as "adding 1 extra digit".

Reminds me of the time my research director pulled me aside for defining CPU as "core processing unit" instead of "central processing unit" in a paper!

Wouldn't those increased decoherence times need to be viewed in relation to the time it takes to execute a basic gate? If the time to execute a gate also increases, it may overtake the practicality of having less noisy logical qubits.
> Also, the more qubits you have/the more instructions are in your program, the faster the quantum state collapses.

Was this actually measured and published somewhere?

How can I, a regular software engineer, learn about quantum computing without having to learn quantum theory?

> Worth spending a little time doing some long tail strategizing I’d say

any tips for starters?

If you're a software engineer, then the Quantum Katas might fit your learning style. The exercises use Q#, which is a quantum-specific programming language.

https://quantum.microsoft.com/en-us/tools/quantum-katas

The first few lessons do cover complex numbers and linear algebra, so skip ahead if you want to get straight to the 'quantum' coding, but there's really no escaping the math if you really want to learn quantum.

Disclaimer: I work in the Azure Quantum team on our Quantum Development Kit (https://github.com/microsoft/qsharp) - including Q#, the Katas, and our VS Code extension. Happy to answer any other questions on it.

Is there a reasonable pivot for someone well versed in the software engineering space to get in, or is it still the playground of relevant Ph.Ds and the like? I've been up and down the stack from firmware to the cloud, going on 14 years in the industry, have a Master's in CS, am the technical lead for a team, yada yada, but have been flirting with the idea of getting out of standard product development and back into the nitty gritty of the space I first pursued during undergrad.
> Is there a reasonable pivot for someone well versed in the software engineering space to get in, or is it still the playground of relevant Ph.Ds and the like?

there's no such thing as a practical QC and there won't be for decades. this isn't a couple of years away - this is "maybe, possibly, pretty please, if we get lucky" 25-50 years away. find the above comment that alludes to "2019 estimates needing ~20 million physical qubits" and consider that this thing has 105 physical qubits. then skim the posted article and find this number

> the key quantum computational resource — are now approaching 100 µs (microseconds)

that's how long those 105 physical qubits stay coherent for. now ponder your career pivot.

source: i dabbled during my PhD - took a couple of classes from Fred Chong, wrote a paper - it's all hype.

Start here: https://youtu.be/F_Riqjdh2oM

You don't need to know quantum theory necessarily, but you will need to know some maths. Specifically linear algebra.

There are a few youtube courses on linear algebra

For a casual set of videos:

- https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...

For a more formal approach:

- https://youtube.com/playlist?list=PL49CF3715CB9EF31D

And the corresponding open courseware

- https://ocw.mit.edu/courses/18-06-linear-algebra-spring-2010...

Linear algebra done right comes highly recommended

- https://linear.axler.net/

+1 for 18-06 and Axler. Another, more concrete, option (not sure how much it will help with quantum theory) is Stephen Boyd's "Introduction to Applied Linear Algebra" available online here:

https://web.stanford.edu/~boyd/vmls/

Isn't there a Python library that abstracts most of it away with a couple of gigantic classes with incompatible dependencies?
The bar for entry is surprisingly low, you just need to brush up on intro abstract algebra. I recommend the following:

1. Kaye, LaFlamme, and Mosca - An Introduction to Quantum Computing

2. Nielsen and Chuang - Quantum Computation and Quantum Information (The Standard reference source)

3. Andrew Childs's notes here [1]. Closest to the state-of-the-art, at least circa ~3 years ago.

[1] - https://www.cs.umd.edu/~amchilds/qa/

specifically avoid resources written by and for physicists.

the model of quantum mechanics, if you can afford to ignore any real-world physical system and just deal with abstract |0>, |1> qubits, is relatively easy. (this is really funny given how incredibly difficult actual quantum physics can be.)

you have to learn basic linear algebra with complex numbers (can safely ignore anything really gnarly).

then you learn how to express Boolean circuits in terms of different matrix multiplications, to capture classical computation in this model. This should be pretty easy if you have a software engineer's grasp of Boolean logic.

Then you can learn basic ideas about entanglement, and a few of the weird quantum tricks that make algorithms like Shor and Grover search work. Shor's algorithm may be a little mathematically tough.
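
A minimal numpy sketch of that "qubits are vectors, gates are matrices" picture, ending with an entangled pair (purely illustrative):

    import numpy as np

    zero = np.array([1, 0], dtype=complex)    # |0>
    one  = np.array([0, 1], dtype=complex)    # |1>

    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)     # quantum NOT
    H = np.array([[1,  1],
                  [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],            # flips the target iff the control is |1>
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    print(X @ zero)                            # -> |1>: Boolean NOT as a matrix product
    print(CNOT @ np.kron(one, zero))           # -> |11>: a reversible XOR

    bell = CNOT @ np.kron(H @ zero, zero)      # (|00> + |11>) / sqrt(2)
    print(bell)                                # an entangled pair of qubits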

realistically you probably will never need to know how to program a quantum computer even if they become practical and successful. applications are powerful but very limited.

"What You Shouldn't Know About Quantum Computers" is a good non-mathematical read.

https://arxiv.org/abs/2405.15838

I recommend this book. I studied it in undergrad and never took a quantum theory course. https://www.amazon.com/Quantum-Computing-Computer-Scientists...
Are there any insights that you can give based off the info you've learned about quantum computation that you might not have been able to reach if you hadn't learned about it?

From my __very__ shallow understanding, because all of the efficiency increases are in very specific areas, it might not be useful for the average computer science interested individual?

Nearly all of quantum computation is theoretical algorithms, and the hard engineering problems haven't been solved. Most of the math, though, has a large amount of overlap with AI/ML and all of deep learning, to the point that quantum computers could be used as "ML accelerators" via such algorithms (this is called quantum machine learning) [1]. Quantum computing can be learned with a limited understanding of quantum theory unless you are trying to engineer the hardware.

https://en.wikipedia.org/wiki/Quantum_machine_learning

Possibly of interest, but I wrote a (hopefully approachable) report on quantum perceptrons a few years back [1]. Perhaps it's found elsewhere, but I was surprised by how, at least in this quantum algo's case, the basis of training was game theoretic not gradient descent!

[1] - https://kvathupo.github.io/cs/quantum/457_Final_Report.pdf

The simplest algorithm to understand is probably Grover's algorithm. Knowing that shows you how to get a sqrt(N) speedup on many classical algorithms. Then have a look at Shor's algorithm, which is the classic factoring algorithm.

I would not worry about hardware at first. But if you are interested and like physics, the simplest to understand are linear optical quantum circuits. These use components which may be familiar from high school or undergraduate physics. The catch is that the space (and component count) is exponential in the number of qubits, hence the need for more exotic designs.

I always recommend Watrous's lecture notes: https://cs.uwaterloo.ca/~watrous/QC-notes/QC-notes.pdf

I prefer his explanation to most other explanations because he starts, right away, with an analogy to ordinary probabilities. It's easy to understand how linear algebra is related to probability (a random combination of two outcomes is described by linearly combining them), so the fact that we represent random states by vectors is not surprising at all. His explanation of the Dirac bra-ket notation is also extremely well executed. My only quibble is that he doesn't introduce density matrices (which in my mind are the correct way to understand quantum states) until halfway through the notes.

There is a course mentioned in the article, but I'm not clear on how "theory" it is.

https://coursera.org/learn/quantum-error-correction

Might be worth checking out: https://quantum.country/
If you want to learn about what theoretical quantum computers might be able to do faster than classical ones and what they might not, you can try to read about quantum complexity theory, or some of what Scott Aaronson puts out on his blog if you don't want to go that in depth.

But the key thing to know about quantum computing is that it is all about the mathematical properties of quantum physics, such as the way complex probabilities work.

These lessons might be of help: https://youtu.be/3-c4xJa7Flk?si=krrpXMKh3X5ktrzT
First learn about eigenvalues.
How long until this can derive a private key from its public key in the cryptocurrency space? Is this an existential threat to crypto?
Long enough you don't need to panic or worry.

Short enough that it's reasonable to start R&D efforts on post-quantum crypto.

Is there a way to fork and add a quantum-proof encryption layer on the existing cryptocurrency paradigm i.e. Bitcoin 2.0?
You could replace ECDSA with a post-quantum algorithm. Keep in mind that many crypto primitives are safe, so there are large parts of bitcoin where you don't have to do anything. Digital signatures are going to be where the main problem is for bitcoin. But things like the hash algorithm should be fine (at most, quantum gives a square-root speedup for hashing, which isn't enough to be really concerning).

One thing that might be problematic for a blockchain where everything has to go on the blockchain forever is that some post quantum schemes have really large signatures or key sizes.
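
For a rough sense of scale, these are approximate public-key and signature sizes from the published parameter sets (ballpark figures; treat them as illustrative rather than authoritative):

    # Approximate sizes in bytes: (public key, signature).
    schemes = {
        "ECDSA/Schnorr (secp256k1)": (33, 64),
        "Falcon-512":                (897, 666),
        "ML-DSA-44 (Dilithium2)":    (1312, 2420),
        "SPHINCS+-128s":             (32, 7856),
    }
    base_sig = schemes["ECDSA/Schnorr (secp256k1)"][1]
    for name, (pk, sig) in schemes.items():
        print(f"{name:27s} pubkey {pk:5d} B   sig {sig:5d} B   (~{sig / base_sig:.0f}x today's signature)")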

I'm not that familiar with the details of bitcoin, but I had the impression that P2PKH is more secure against quantum computers.

[I should emphasize, I'm not a cryptographer and only somewhat familiar with bitcoin]

I had the same question and this article was really helpful in explaining the threat models

https://www.deloitte.com/nl/en/services/risk-advisory/perspe...

This is the same as climate change. Something might happen sometime in the future.
Data security okay. But AI? How will that change?
Aren't quantum computers expected to be like digitally read analog computers for high-dimension optimization problems, and isn't AI basically massive high-dimension optimization problems?
Google is betting on digital quantum computers.

There are, however, analog quantum computers, e.g. by Pasqal, which hope to capitalize on this to optimize AI-like high dimension optimization problems.

But aren't qubits sort of analog? I thought the bits have figurative angles of rotation or something continuous that would resonate together and/or cancel out, like neurons in fully-connected parts of neural nets, and would be read out at the end of the pipeline.
Why do quantum computers need to be analog to be applied to such problems?
They don't, but if I interpreted the post above correctly, analog QCs are much easier to build

At least as far as I'm aware by digital they probably mean a generally programmable QC, whereas another approach is to encode a specific class of problems in the physical structure of an analog QC so that it solves those problems much faster than classical. This latter approach is less general (so for instance you won't use it to factor primes) but much more attainable. I think D-wave or someone like that already had commercial application for optimization problems (either traveling salesman or something to do with port organization)

AI is essentially search. Quantum computers are really good at search.
Unstructured search is only a √n improvement. You need to find some way for algorithmically significant interference/cancellation of terms in order for a qc to potentially (!) have any benefit.
Search of what?
Anything. Everything. In domains where the search space is small enough to physically enumerate and store or evaluate every option, search is commonly understood as a process solved by simple algorithms. In domains where the search space is too large to physically realize or index, search becomes "intelligence."

E.g. winning at Chess or Go (traditional AI domains) is searching through the space of possible game states to find a most-likely-to-win path.

E.g. an LLM chat application is searching through possible responses to find one which best correlates with expected answer to the prompt.

With Grover's algorithm, quantum computers let you find an answer in any disordered search space with O(sqrt(N)) operations instead of O(N). That's potentially applicable to many AI domains.
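
A brute-force statevector sketch of Grover's mechanics (no real quantum hardware involved; the amplitudes are just simulated with numpy, and the function is illustrative):

    import numpy as np

    def grover_search(n_items, marked):
        # Start in the uniform superposition over all n_items entries.
        psi = np.ones(n_items) / np.sqrt(n_items)
        steps = int(round(np.pi / 4 * np.sqrt(n_items)))   # ~sqrt(N) iterations
        for _ in range(steps):
            psi[marked] *= -1                 # oracle: phase-flip the marked item
            psi = 2 * psi.mean() - psi        # diffusion: inversion about the mean
        return steps, abs(psi[marked]) ** 2   # probability of measuring the marked item

    for n in (64, 1024, 16384):
        steps, p = grover_search(n, marked=7)
        print(f"N={n:6d}  iterations={steps:4d}  P(success)={p:.3f}")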

But if you're so narrow minded as to only consider connectionist / neural network algorithms as "AI", then you may be interested to know that quantum linear algebra is a thing too: https://en.wikipedia.org/wiki/HHL_algorithm

Grover's algorithm is useful for very few things in practice, because for most problems we have a better technique than checking sqrt(N) of all possible solutions, at least heuristically.

There is, at present, no quantum algorithm which looks like it would beat the state of the art on Chess, Go, or NP-complete problems in general.

O(sqrt(N)) is easily dominated by the relative ease of constructing much bigger classical computers though.
Uh, no? Not for large N.

There are about 2^152 possible legal chess states. You cannot build a classical computer large enough to compute that many states. Cryptography is generally considered secure when it involves a search space of only 2^100 states.

But you could build a computer to search though sqrt(2^152) = 2^76 states. I mean it'd be big--that's on the order of total global storage capacity. But not "bigger than the universe" big.

Doing 2^76 iterations is huge. That's a trillion operations a second for two and a half thousand years if I've not slipped up and missed a power of ten.
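
A quick sanity check of that figure, assuming a trillion operations per second:

    ops_per_second = 10**12
    seconds = 2**76 / ops_per_second
    print(seconds / (3600 * 24 * 365.25))   # ~2.4 thousand years
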
Maybe 100 years from now we can do 2^18 quantum ops/sec and solve chess in a day, whereas a classical computer could do 2^36 ops/sec and still take longer than the lifetime of the universe to complete.
Google's SHA-1 collision took 2^63.1 hash operations to find. Given that a single hash operation takes more than 1000 cycles, that's only less than three doublings away.

Cryptographers worry about big numbers. 2^80 is not considered secure.

It's early so I'm thinking out loud here but I don't think the algorithm scales like this, does it?

We're talking about something that can search a list of size N in sqrt(N) iterations. Splitting the problem in two doesn't halve the compute required for each half. If you had to search 100 items on one machine it'd take 10 iterations, but split over two it'd take ~7 on each, or ~14 in total.

If an algorithm has a complexity of O(sqrt(N)), by definition it means that it can do better if run on all 100 elements than by splitting the list into two halves of 50 and running on each.

This is not at all a surprising property. The same things happens with binary search: it has complexity O(log(N)), which means that running it on a list of size 1024 will take about 10 operations, but running it in parallel on two lists of size 512 will take 2 * 9 operations = 18.

This is actually easy to intuit when it comes to search problems: the element you're looking for is either in the first half of the list or in the second half, it can't be in both. So, if you are searching for it in parallel in both halves, you'll have to do extra work that just wasn't necessary (unless your algorithm is to look at every element in order, in which case it's the same).

In the case of binary search, with the very first comparison you can already tell in which half of the list your element is: searching the other half is pointless. In the case of Grover's algorithm, the mechanism is much more complex, but the basic point is similar: Grover's algorithm has a way to just not look at certain elements of the list, so splitting the list in half creates more work overall.
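
The arithmetic for both cases, as a tiny illustration:

    import math

    N = 100
    print(math.sqrt(N), 2 * math.sqrt(N / 2))      # Grover-style: 10.0 vs ~14.1 iterations

    N = 1024
    print(math.log2(N), 2 * math.log2(N / 2))      # binary search: 10 vs 18 comparisons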

That only helps for a relatively small range of N. Chess happens to sort of fit into this space. Go is way out; even sqrt(N) is still in the "galaxy-sized computer" range. So again, there are few problems for which Grover's algorithm really takes us from practically uncomputable to computable.

Even for chess, 2^76 operations is still waaaaay more time than anyone will ever wait for a computation to finish, even if we assumed quantum computers could reach the OPS of today's best classical computers.

No-one would solve chess by checking every possible legal chess state -- also checking 'all the states' wouldn't solve chess, you need a sequence of moves, and that pushes you up to an even bigger number. But again, you can easily massively prune that, as many moves are forced, or you can check you are in a provable end-game situation.
training an ai model is essentially searching for parameters that can make a function really accurate at making predictions. in the case of LLMs, they predict text.
They showed a logical qubit that can stay entangled for an hour, but to do that they had to combine their hundred or so physical qubits into a single one, so in some sense they have, right now, a single (logical) qubit
> AI and data security will be unalterably different if so

Definitely agree with the latter, but do you have any sources on how quantum computers make "AI" (i.e. matrix multiplication) faster?

Exploring via Search can become O(1) instead of M^N
That is definitely not true.
> AI and data security will be unalterably different if so

So what are the implications if so ?

> Worth spending a little time doing some long tail strategizing I’d say.

What do you mean by this?

"long tail" typically refers to the tail of a normal distribution - basically it's a sciencey, but common, way of saying "very unlikely event". So, the OP was saying that it's worth spending some time strategizing about the unlikely event that a practical RSA-breaking QC appears in the near future, even though it's still a "long tail" (very unlikely) event.

Honestly, there's not that much to discuss on this, though. The only things you can do from this strategizing are to treat even encrypted data as not safe to store (unless you're using quantum-resistant encryption such as AES) and to budget time for switching to PQC as it becomes available.

The only thing I know or understand about quantum computing is its ability to "crack" traditional encryption algorithms.

So the commenter is saying that cybersecurity teams need to be planning for a near-future world where traditional cryptography, including lots of existing data at rest, is suddenly as insecure as plaintext.

I think some element of it might be: Shor’s algorithm has been known of for 30 years, and hypothetically could be used to decrypt captured communications, right? So, retroactively I will have been dumb for not having switched to a quantum-resistant scheme. And, dumb in a way that a bunch of academic nerds have been pointing out for decades.

That level of embarrassment is frankly difficult to face. And it would be devastating to the self-image of a bunch of “practical” security gurus.

Therefore any progress must be an illusion. In the real world, the threats are predictable and mistakes don’t slowly snowball into a crisis. See also, infrastructure.

What would you switch to? There haven't been post-quantum systems to use until very, very recently.
All your encrypted communication from the 90s (SSL anyway) can probably be decrypted with classical means anyway. 90s SSL was pretty bad.
Edit after skimming arxiv preprint[1]:

Yeah, this is pretty huge. They achieved the result with surface codes, which are general ECCs. The repetition code was used to further probe quantum ECC floor. "Just POC" likely doesn't do it justice.

(Original comment):

Also quantum dabbler (coincidentally dabbled in bitflip quantum error correction research). Skimmed the post/research blog. I believe the key point is the scaling of error correction via repetition codes, would love someone else's viewpoint.

Slightly concerning quote[2]:

"""

By running experiments with repetition codes and ignoring other error types, we achieve lower encoded error rates while employing many of the same error correction principles as the surface code. The repetition code acts as an advance scout for checking whether error correction will work all the way down to the near-perfect encoded error rates we’ll ultimately need.

"""

I'm getting the feeling that this is more about proof-of-concept, rather than near-practicality, but this is certainly one fantastic POC if true.

[1]: https://arxiv.org/abs/2408.13687

[2]: https://research.google/blog/making-quantum-error-correction...

Relevant quote from the preprint (end of section 1):

"""

In this work, we realize surface codes operating below threshold on two superconducting processors. Using a 72-qubit processor, we implement a distance-5 surface code operating with an integrated real-time decoder. In addition, using a 105-qubit processor with similar performance, we realize a distance-7 surface code. These processors demonstrate Λ > 2 up to distance-5 and distance-7, respectively. Our distance-5 quantum memories are beyond break-even, with distance-7 preserving quantum information for more than twice as long as its best constituent physical qubit. To identify possible logical error floors, we also implement high-distance repetition codes on the 72-qubit processor, with error rates that are dominated by correlated error events occurring once an hour. These errors, whose origins are not yet understood, set a current error floor of 10^-10. Finally, we show that we can maintain below-threshold operation on the 72-qubit processor even when decoding in real time, meeting the strict timing requirements imposed by the processor's fast 1.1µs cycle duration.

"""

You got the main idea; it's a proof of concept that a class of error-correcting codes on real physical quantum chips obeys the threshold theorem, as expected based on theory and simulations.

However the main scaling of error correction is via surface codes, not repetition codes. It's an important point as surface codes correct all Pauli errors, not just either bit-flips or phase-flips.

They use repetition codes as a diagnostic method in this paper more than anything, it is not the main result.

In particular, I interpret the quote you used as: "We want to scale surface codes even more, and if we were able to do the same scaling with surface codes as we are able to do with repetition codes, then this is the behaviour we would expect."

Edit: Welp, saw your edit, you came to the same conclusion yourself in the time it took me to write my comment.

Haha, classic race condition, but I appreciate your take nonetheless!
Google could put themselves and everyone else out of business if the algorithms that underpin our ability to do e-commerce and financial transactions can be defeated.

Goodbye not just to Bitcoin, but also Visa, Stripe, Amazon shopping, ...

bitcoin proof of work is not as impacted by quantum computers - grover's algorithm provides a quadratic speedup for unstructured search - so SHA256 ends up with 128 bits of security for pre-image resistance. BTC can easily move to SHA512.

symmetric ciphers would have similar properties (AES, CHACHA20). Asymmetric encryption atm would use ECDH (which breaks) to generate a key for use with symmetric ciphers - Kyber provides a PQC KEM for this.

So, the situation isn't as bad. We're well positioned in cryptography to handle a PQC world.

It seems you can get TLS 1.3 (or at least a slightly modified 1.3) to be quantum secure, but it increases the handshake size by roughly 9x. Cloudflare unfortunately didn't mention much about the other downsides, though.

https://blog.cloudflare.com/kemtls-post-quantum-tls-without-...

About one third of traffic with Cloudflare is already using post-quantum encryption. https://x.com/bwesterb/status/1866459174697050145

Signatures still have to be upgraded, but that's more difficult. We're working on it. http://blog.cloudflare.com/pq-2024/#migrating-the-internet-t...

Yes-ish. They're not enabled yet, but post-quantum signatures & KEMs are available in some experimental versions of TLS. None are yet standardized, but I'd expect a final version well before QCs can actually break practical signatures or key exchanges.
One third of all human traffic with Cloudflare is using a post-quantum KEM. I'd say that counts as enabled. We want that to be 100% of course. Chrome (and derivates) enabled PQ by default. https://radar.cloudflare.com/adoption-and-usage
It's currently believed that quantum computers cannot break all forms of public key cryptography. Lattice-based cryptography is a proposed replacement for RSA that would let us keep buying things online no problem.
Why is no one else talking about this? I came here to see a discussion about this and encryption.
Because this result is still very far from anything related to practical decryption.
And if they were, would they tell the world?
If they had a QC that could run Shor's algorithm to factor the number 1000, I'd guarantee you they'd tell the whole world. And it would still be a long, long time from there to having a QC that can factor 2048-bit numbers.
> Worth spending a little time doing some long tail strategizing I’d say.

Yup, like Bitcoin going to zero.

I'm a little more in my wheelhouse here -- without an algo change, Grover's algorithm would privilege quantum miners significantly, but not by more than the shifts the industry has seen in the last 13 years (C code on CPU -> GPU -> large-geometry ASIC -> small-geometry ASIC were probably similarly large shifts in economics for miners).

As to faking signatures and, e.g., stealing Satoshi's coins or just fucking up the network with fake transactions that verify, there is some concern and there are some attack vectors that work well if you have a large, fast quantum computer and want to ninja in. Essentially you need something that can crack a 256-bit ECDSA key in the window before a block that includes a recently revealed public key is confirmed. That's definitely out of the reach of anyone right now, much less persistent threat actors, much less hacker hobbyists.

But it won't always be. The current state of the art plan would be to transition to a quantum-resistant UTXO format, and I would imagine, knowing how Bitcoin has managed itself so far, that will be a well-considered, very safe, multi-year process, and it will happen with plenty of time.

You fool! And I say that affectionately. Another fool says: the security of Bitcoin relies on the inability to (among other things) derive a private key from a public key. This is just basic cryptography, like Turing vs Enigma. This machine can "calculate" solutions to problems in time frames that break the whole way that cryptocurrency works. You better believe that what we hear about is old. These types of systems, and there must be non-public versions, could solve a private key from a public key in easily less than O(fu) time.

EDIT: it's like rainbow tables, but every possible variation is a color, not granular like binary, but all and any are included.

I think you're going to need about 10,000,000 qubits to divert a transaction, but that's still within foreseeable scale. I think it's extremely likely that the foundation will have finished their quantum resistance planning before we get to 10MM coherent qubits, but still, it's a potential scenario.

More likely, other critical infrastructure failures will happen within trad-finance, which has a much larger vulnerability footprint, and being able to trivially reverse engineer every logged SSL session is likely to be a much more impactful turn of events. I'd venture that there are significant ear-on-the-wire efforts going on right now in anticipation of a reasonable bulk SSL-decloaking solution. Right now we think it doesn't matter who can see our "secure" traffic. I think that is going to change, retroactively, in a big way.

I agree that the scary scenario is stored SSL frames from 20 years of banking. That's nuclear meltdown scenarios.
To do what? Replay? Just curious on an attack vector.
Hopefully replay attacks will not be useful, but confidential information will be abundant. There will be actionable information mixed in there , and it will be a lot of data. Just imagine if everything whispered suddenly became shouted.
It could be true. But pearls are rare and storage is expensive.
It is.. and I don’t see a way to avoid it.
> Yup, like Bitcoin going to zero.

If the encryption on Bitcoin is broken, say goodbye to the banking system.

[pedantic hat on] Bitcoin doesn't use encryption.

You mean digital signatures - and yes, we use signatures everywhere in public key cryptography.

not really. banking systems have firewalls and access controls. quantum computations would be useless.
Those don't really mean anything when an attacker can eavesdrop on customer and employee comms and possibly redirect transactions (MITM).
Banking communications and transactions will all be protected by quantum-resistant protocols and ciphers well before that will become a problem. Most of these already exist, and some of them can even be deployed.
Bitcoin will just fork to a quantum proof encryption scheme and there will be something called "bitcoin classic" that is the old protocol (which few would care about)
last time they did that people stuck with bitcoin classic instead of the larger block size variant of bitcoin which today is known as bitcoin-cash.
eh, they will add a quantum-resistant signature scheme (already a well-understood thing) then people can transfer their funds to the new addresses before it is viable to crack the existing addresses
So the first company that can break bitcoin addresses using quantum computers gets a prize of how many billion(?) dollars by stealing all the non-migrated addresses.

Is that a crime? Lots of forgotten keys in there.

A very interesting philosophical and moral can of worms you just opened there. Bitcoin is governed by the protocol, so if the protocol permits anyone who can sign a valid transaction to move a given UTXO to another address, then it technically isn't a "crime". Morally, I'm not sure I'd be able to sleep well at night if I unilaterally took what I didn't exchange value for.

As for the forgotten key case, I think the only way to prove you had the key at some point would need to involve the sender vouching for you and cryptographically proving they were the sender.

Morally, there is no quandary: it's obviously morally wrong to take someone else's things, and knowing their private key changes nothing.

Legally, the situation is the same: legal ownership is not in any way tied to the mechanism of how some system or another keeps track of ownership. Your BTC is yours via a contract, not because the BTC network says so. Of course, proving to a judge that someone else stole your BTC may be extremely hard, if not impossible.

Saying "if the protocol permits anyone who can sign a valid transaction involving a given UTXO to another address, then it technically isn't a "crime"" is like saying "traditional banking is governed by a banker checking your identity, so if someone can convince the banker they are you, then it technically isn't a "crime"".

The only thing that wouldn't be considered a crime, in both cases, is the system allowing the transaction to happen. That is, it's not a crime for the bank teller to give your money to someone else if they were legitimately fooled; and it's not a crime for the Bitcoin miners to give your money to someone else if that someone else impersonated your private key. But the person who fooled the bank teller / the miners is definitely committing a crime.

Traditional banking is governed by men with guns who depend on votes (for appearances). They always have recourse and motivation to intervene with private transactions. Not so much the case with bitcoin, which is extralegal for the most part and doesn't depend on them.
https://en.wikipedia.org/wiki/BGP_hijacking#Public_incidents

A long-term tactic of our adversaries is to capture network traffic for later decryption. The secrets in the mass of packets China presumably has in storage, waiting for quantum tech, are a treasure trove that could lead to crucial state, corporate, and financial secrets being used against us or made public.

AI being able to leverage quantum processing power is a threat we can't even fathom right now.

Our world is going to change.

They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request, is there a way of handling that with try{} predestined{} blocks? Or do I need to use the Bootstrap Paradox library?
Have you tried using the Schrödinger Exception Handler? It catches errors both before and after they occur simultaneously, until you observe the stack trace.
I swear I can't tell which of these comments are sarcastic/parody and which are actual answers.

A sort of quantum commenting conundrum, I guess.

They are both, just until the moment you try to read them
This subthread contains some of the best comments I've read on this website.
I read them as sarcastic. Please reply here with your output.
Since you read them as sarcastic, I also read them as sarcastic. Quantum entanglement at work.
What happens when you don't send the request after receiving the response? Please try and report back.
No matter what we've tried so far, the request ends up being sent.

The first time I was just watching, determined not to press the button, but when I received the response, I was startled into pressing it.

The second time, I just stepped back from my keyboard, and my cat came flying out of the back room and walked on the keyboard, triggering the request.

The third time, I was holding my cat, and a train rumbled by outside, rattling my desk and apparently triggering the switch to send the request.

The fourth time, I checked the tracks, was holding my cat, and stepped back from my keyboard. Next thing I heard was a POP from my ceiling, and the request was triggered. There was a small hole burned through my keyboard when I examined it. Best I can figure, what was left of a meteorite managed to hit at exactly the right time.

I'm not going to try for a fifth time.

You unlock the "You've met a terrible fate." achievement [1]

[1] https://outerwilds.fandom.com/wiki/Achievements

I love myself a good Zelda reference
Please report back and try.*
Looks like we don't have a choice.
Finally, INTERCAL’s COME FROM statement has a practical use.
>They opened the API for it and I'm sending requests but the response always comes back 300ms before I send the request

For a brief moment I thought this was some quantum-magical side effect you were describing and not some API error.

Isn't that.... the joke?
Write the catch clause before the try block
Try using inverse promises. You get back the result you wanted, but if you don't then send the request the response is useless.

It's a bit like Jeopardy, really.

Did you try staring at your IP packets while sending the requests?
You are getting that response 300ms beforehand because your request is denied.

If you auth with the bearer token "And There Are No Friends At Dusk." then the API will call you and tell you which request you wanted to send.

Pretty sure you just need to use await-async (as opposed to async-await)
The answer is yes and no, simultaneously
Help! Every time I receive the response, an equal number of bits elsewhere in memory are reported as corrupt by my ECC RAM.
Update: I tried installing the current Bootstrap Paradox library but it says I have to uninstall next year's version first.
> I'm sending requests but the response always comes back 300ms before I send the request

Ah. Newbie mistake. You need to turn OFF your computer and disconnect from the network BEFORE sending the request. Without this step you will always receive a response before the request is issued.

I'm trying to write a new version of Snake game in Microsoft Q# but it keeps eating its own tail.
What does Gemini say?
It responds with 4500 characters: https://hst.sh/olahososos.md
I think you are supposed to use a "past" object to get your results before calling the API.
Try setting up a beam-splitter router and report back with the interference pattern. If you don't see a wave pattern it might be because someone is spying on you.
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse

I see the evidence, and I see the conclusion, but there's a lot of ellipses between the evidence and the conclusion.

Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

I was also really taken aback by this quote.

I have no idea who put it there, but I can assure you the actual paper contains no such nonsense.

I would have thought whoever writes the Google tech blogs is more competent than bottom-tier science journalists. But in this case I think it is more reasonable to assume malice, as the post is authored by the Google Quantum AI Lead, and makes more sense as hype-boosting buzzword bullshit than as an honest misunderstanding that was not caught during editing.

There are compelling arguments to believe in the many-worlds interpretation.

No sign of a Heisenberg cut has been observed so far, even as experiments involving entanglement of larger and larger molecules are performed, which makes objective-collapse theories hard to consider seriously.

Bohmian theories are nice, but require awkward adjustments to reconcile them with relativity. But more importantly, they are philosophically uneconomical, requiring many unobservable — even theoretically — entities [0].

That leaves either many-worlds or a quantum logic/quantum Bayesian interpretations as serious contenders [1]. These interpretations aren't crank fringe nonsense. They are almost inevitable outcomes of seriously considering the implications of the theory.

I will say that personally, I find many-worlds to focus excessively on the Schrödinger-picture pure state formulation of quantum mechanics. (At least to the level that I understood it — I expect there is literature on the connection with algebraic formulations, but I haven't taken the time to understand it.) So I would lean towards quantum logic–type interpretations myself.

The point of this comment was to say that many-worlds (or "multiverses", though I dislike the term) isn't nonsense. But it also isn't exactly the kind of sci-fi thing non-physicists might picture. Given how easy it is to misinterpret the term, however, I must agree with you that a self-aware science communicator would think twice about whether the term should be included, and that there may be not-so-scrupulous intentions at play here.

Quick edit: I realise the comment I've written is very technical. I'm happy to try to answer any questions. I should preface it by stating that I'm not a professional in the field, but I studied quantum information theory at a Masters level, and always found the philosophical questions of interest.

---

[0] Many people seem to believe that many-worlds also postulates the existence of unobservable parallel universes, but this isn't true. We observe the interaction of these universes every time we observe quantum interference.

While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function. They don't discretely separate from one another, either — it depends on your choice of basis. That choice is where decoherence comes in.

[1] And of course, there is the Copenhagen "interpretation" — preferred among physicists who would rather not think about philosophy. (A respectable choice.)

I think the key point that makes the quoted statement sciencey gibberish is that the Many Worlds Interpretation is just that - an interpretation. There is no way to prove or disprove it (except if you proved that the world is not actually quantum mechanical, in which case MWI might not be a valid interpretation of the new theory). Saying "this is more evidence for MWI" is thus true of any quantum mechanical experiment, but anything that is evidence for MWI is also exactly as much evidence for Pilot Waves (well, assuming it is possible to reconcile with quantum field theory), the Copenhagen Interpretation, QBism, and so on.

As a side note, there is still a huge gap between the largest system we've ever observed in a superposition and the smallest system we've ever observed to behave only classically. So there is still a lot of room for objective collapse theories, even though that space has shrunk by some orders of magnitude since it was first proposed. Of course, objective collapse has other, much bigger, problems, such as being incompatible with Bell's inequalities.

Edit: I'd also note some things about MWI. First, there are many versions of it, some historical, some current. Some versions, at least older ones, absolutely did involve explicit branching. And the ones that don't still have a big problem explaining why, out of the many ways to choose the basis vectors for a measurement, we always end up with the same classical measurables in every experiment we perform on the world at large. Especially given that we know we can measure quantum systems in any other basis if we want to. It also ultimately doesn't answer the question of why we need the Born rule at all; it still postulates that an observer only has access to one possible value of the wave function and not to all at once. And of course, the problem of defining probabilities in a world where everything happens with probability 1 is another philosophically thorny issue, especially when you need the probabilities to match the amplitude of the wave function.

So the MWI is nice, and it did spawn a very useful and measurable observation, decoherence. But it's far from a single, satisfying, complete, self-consistent account of the world.

I would argue that our inability to postulate a means of testing it now does not mean it is thereby impossible to prove; merely that it is currently untestable.
This would be true if we were talking about something like String Theory, or Loop Quantum Gravity.

But it is not true for MWI: MWI was designed from the ground up as an interpretation of the mathematics and experimental results of quantum mechanics. It is designed specifically to match all of the predictions of quantum mechanics, and to not make any new predictions. Other interpretations are also designed in the same way.

So, if the people creating these interpretations succeeded in their goals when making them, then they will never be experimentally verifiable.

I think the point about it being unscientific is completely fair, as far as a press release aiming to appear scientific is concerned.

However, I also think there is a tendency among well-educated people in physics to dismiss philosophical questions out of hand. It's fair enough when the point is "let's focus on the physics as it's hard enough", but questions of interpretation have merit in their own right.

MWI or Parallel Worlds is an interpretation of QM; it is one of the 15-20 major interpretations of QM. Nothing at all wrong with MWI. Sean Carroll speaks kindly of MWI and I have tended to agree with his views over the years. I don't see any wild claims being made that would warrant a major reaction, but I would agree Willow's results are impressive enough that one should at minimum consider whether they count as evidence in favor of MWI. I don't see how this doesn't count as evidence for MWI.
Thank you for this clarification -- for me it addresses a good part of the crank/fringe/sci-fi aspect

> While we're here, we can clear up the misconception about "branching" — there is no branching in many-worlds, just the coherent evolution of the universal wave function. The many worlds are projections out of that wave function.

That's right, I agree that Multiple Worlds isn't any less correct/falsifiable than quantum mechanics as a whole.

I've never heard about quantum logic before. The "Bayesian" part makes sense because of how it treats the statistics, but the logic? Is that what quantum computer scientists do with their quantum circuits, or is it an actual interpretation?

"Many-world interpretation" is just a religion, it has nothing to do with physics. Pilot Wave is an example of a physical theory, Copenhagen is an administrative agreement.
I'm pretty sure pilot wave is the same kind of unfalsifiable interpretation of the experimental results that MWI is. Also I think people are making too big a deal out of the comment in the article. I took it as kind of tongue-in-cheek. An expert would know MWI is unfalsifiable and inconsequential.
No, Bell Inequality test falsified it.
Bell's inequality refutes the many-worlds interpretation? Where is it written?
I'm sure they meant it refutes pilot-wave theory, though it seems that's not precisely true if you consider a non-local hidden variable to explain instantaneous interaction.
Oh my mistake.
Quantum computation happening in multiple universes is the explanation given by David Deutsch, the father of quantum computing. He invented the idea of a quantum computer to test the idea of parallel universes.

If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.

Also, your comment does not contain any useful information. You assumed hype as the reason why they mentioned parallel computation. It's just a bias you have in looking at the world. Hype does help explain a lot of things, so it can be tempting to use it as a placeholder for anything you don't accept based on your current set of beliefs.

I disagree that it is "the best explanation we have". It's a nice theory, but like all theories in quantum foundations / interpretations of quantum mechanics, it is (at least currently) unfalsifiable.

I didn't "assume" hype, I hypothesized it based on the evidence before me: there is nothing in Google's paper that deals with interpretations of quantum mechanics. This only appears in the blog post, with no evidence given. And there is nothing Google is doing with its quantum chip that would discriminate between interpretations of QM, so it is simply false that it "lends credence to ... parallel universes" over another interpretation.

From what I understand, David Deutsch invented the idea of the quantum computer as a way to test parallel universes. And later people went on and built the quantum computer. Are you saying that the implementation of a quantum computer does not require any kind of assumption about computations being run in parallel universes?
It's just not how it works. All this type of quantum computer can do is test some of the more dubious objective collapse theories. Those are wrong anyway, so all theories that are still in the running agree.
That's right, it doesn't. The implementation of a quantum computer does not prove or disprove the existence of parallel universes.
In short, no.
> If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.

I can handle it, sure, and the idea of the multiverse is attractive to me from a philosophical standpoint.

But we have no evidence that there are any other universes out there, while we do have plenty of evidence that our own exists. Just because one of something exists, it doesn't automatically follow that there are others.

> If you are okay with a single universe coming to existence out of nothing you should be able to handle parallel universes as well just fine.

We have evidence for this universe though.

I believe their point was that, if you accept the reality of _this_ universe being created from nothing, why wouldn't you also accept the notion of _other_ universes similarly existing too.

I can get on board with that: there may be other, distinct universes. But I do not understand how this would lead to the suggestion that they would necessarily be linked together by quantum effects.

Disagree with that. The fact that we reasonably accept a well-proven theory (i.e. the observed universe exists) that has some unexplained parts (we don't currently have a reasonable explanation for where that universe comes from) doesn't mean that we should therefore accept any unproven theory, especially an unfalsifiable one.
Presumably the 'nonsense' is the supposed link between the chip and MW theory.

Let me add a recommendation for David Wallace's book The Emergent Multiverse - a highly persuasive account of 'quantum theory according to the Everett Interpretation'. Aside from the technical chapters, much of it is comprehensible to non-physicists. It seems that adherents to MW do 'not know how to refute an incredulous stare'. (From a quotation)

Everett interpretation simply asserts that quantum wavefunctions are real and there's no such thing as "wavefunction collapse". It's the simplest interpretation.

People call it "many worlds" because we can interact only with a tiny fraction of the wavefunction at a time, i.e. other "branches" which are practically out of reach might be considered "parallel universes".

But it would be more correct to say that it's just one universe which is much more complex than what it looks like to our eyes. Quantum computers are able to tap into this complexity. They make a more complete use of the universe we are in.

This might turn into a debate of defining "simplest", but I think the ensemble/statistical interpretation is really the most minimal in terms of fancy ideas or concepts like "wavefunction collapse" or "multiverses". It doesn't need a wavefunction collapse nor does it need multiverses.
I'm upset they put this in because this is absolutely not the view of most quantum foundations researchers.
From Wikipedia[1]:

A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[85]

Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop. According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[86]

In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[87] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[88]

A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[89]

A 2011 poll of 33 participants at an Austrian conference on quantum foundations found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[90] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[90]

[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation#Pol...

i think if these polls were anonymous, copenhagen would lose share. there's a reason why MWI is disproportionately popular among people who basically have no professional worries because they are already uber-distinguished.
Soon: "are alien universes slowing down your internet? Click here to learn more!"

Reminds me of the Aorist Rods from Hitchhikers' Guide to the Galaxy.

Well there has to be some reason I'm not getting the "gigabit" speeds I was quoted.
You were probably quoted "up to" gigabit speed. Which means anything from zero to gigabit is acceptable.
Are negative speeds acceptable too?
Science is not based on consensus seeking.

Science is about coming up with the best explanations irrespective of whether or not a large chunk does not believe it.

And the best explanations are the ones that are hard to vary, not the ones that are most widely accepted or easy to accept based on the current world view.

> Science is not based on consensus seeking.

No, but as non-experts in a given field, the best information we have to go on is the consensus among scientists who are experts in the field.

Certainly this isn't a perfect metric, and consensus-smashing evidence sometimes comes to light, but unless and until that happens, we should assume that the people who study this sort of thing as their life's work are probably more correct than we are.

David is this you?
Actually it is exactly based on hypotheses that are verified.
I think the idea here is that the choice on which hypothesis to verify is based on the risk assessment of the scientist whose goal is to optimize successful results and hence better theories are more likely to surface. In this way one does not need to form a consensus around the theory but instead make consensus on what constitutes a successful result.

Ideally this would be true, but funding agencies are already preloaded with implicit assumptions about what constitutes scientific progress.

And how do you verify a hypothesis? What is the process for doing that?
I assume this is a rhetorical question, since you are perfectly capable of doing a search for "the scientific method" on your own.

MWI has not led to any verifiably-correct predictions, has it? At least not any that other interpretations can also predict, and have other, better properties.

You use the hypotheses to make predictions and design experiments. Then you carry those out and see if they support the hypothesis.

Or is this one of those rhetorical questions?

Okay. I have a hypothesis that the rain is controlled by a god called Ringo. If you pray to Ringo and he listens to your prayer, it will rain in the next 24 hours. If he doesn't listen, it won't rain. You can also test this experimentally by praying and observing the outcomes.
The credibility of the article plummeted when I got to that sentence, especially given the name-dropping.
One of the biggest problems with such an assertion is that it's not falsifiable.

It could be that we are borrowing qubit processing power from Russell's quantum teapot.

the everettian view is absolutely not the view? i am not so sure.

or you mean specifically the parallel computation view?

In my opinion the "shut up and calculate" view is the most common among actual quantum computing researchers.

Unsure about those working on quantum foundations, but I think the absence of consensus is enough to say that no single view is absolutely "the view".

i don’t really view “shut up and calculate” or very restrained copenhagenism as a real view at all.

i think if you were to ask people to make a real metaphysical speculation, majority might be partial to everett - especially if they felt confident the results were anonymous

I agree, but that kind of goes to my point:

I believe the vast majority of researchers in quantum computing* spend almost no time on metaphysical speculation,

*Well, those on the "practical side" that thinks about algorithms and engineering quantum systems like the Google Quantum AI team and others. Not the computer science theorists knee-deep in quantum computational complexity proofs nor physics theorists working on foundations of quantum mechanics. But these last two categories are outnumbered by the "practical" side.

Seeing to believe is indeed a view, as they would have to view it in order to believe!
sorry -- the results don't add weight to one view or the other. The interpretations are equivalent.
not metaphysically equivalent. also, i’m not so certain it will always be untestable. i would have thought the same thing about hidden variables but i underestimated the cleverness of experimentalists
I think "experimentally equivalent" is what GP meant, and as of today, it holds true. Google's results are predicted by other interpretations just as well as by Everett. Maybe someday there will be a clever experiment to distinguish the models but just "we have a good QC" is not that.
i think you're arguing against a point i never made in any of my comments
You don't even have to get to the point where you're reading a post off Scott Aaronson's blog[1] at all; his headline says "If you take nothing else from this blog: quantum computers won't solve hard problems instantly by just trying all solutions in parallel."

[1]: https://scottaaronson.blog/

In the same way people believe P != NP, most quantum computing people believe BQP != NP, and NP-complete problems will still take exponential time on quantum computers. But if we had access to arbitrary parallel universes then presumably that shouldn't be an issue.

The success on the random (quantum) circuit problem is really a validation of Feynman's idea, not Deutsch's: classical computers need 2^n bits to simulate n qubits, so we will need quantum computers to efficiently simulate quantum phenomena.
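To make the 2^n point concrete, here's a rough back-of-the-envelope sketch in Python (this assumes a dense complex128 statevector, which is only one classical simulation strategy; tensor-network methods can do far better on some circuits):

    def statevector_bytes(n_qubits, bytes_per_amplitude=16):
        # A brute-force statevector simulator stores 2**n complex amplitudes
        # (complex128 = 16 bytes each), so memory grows exponentially in n.
        return (2 ** n_qubits) * bytes_per_amplitude

    for n in (30, 53, 67):
        print(f"{n} qubits: 2**{n} amplitudes, ~{statevector_bytes(n):.2e} bytes")

30 qubits fits on a beefy workstation (~17 GB), 53 qubits is already ~10^17 bytes, and 67 qubits is ~10^21 bytes, which is roughly why full statevector simulation stops being an option around the 50-qubit mark and people switch to cleverer contraction tricks.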

Does access to arbitrary parallel universes imply that they divide up the computation and the correct answer is distributed to all of the universes, or will there be sucker universes in such a collection which always receive wrong answers?
Good question! The whole magic of quantum computation versus parallel computation is that the “universe” probabilities interfere with each other so that wrong answers cancel each other out. So I suppose the wrong ”universes” still exist somewhere. But it’s a whole lot less confusing if you view QC as taking place in one universe which is fundamentally probabilistic.
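A minimal worked example of that cancellation (just the standard Hadamard identity, nothing specific to Willow): apply $H$ twice to $|0\rangle$ and the $|1\rangle$ amplitudes interfere destructively,

    H|0\rangle = \tfrac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big), \qquad
    H\big(H|0\rangle\big) = \tfrac{1}{2}\big[(|0\rangle + |1\rangle) + (|0\rangle - |1\rangle)\big] = |0\rangle .

The "wrong" outcome $|1\rangle$ isn't shipped off anywhere; its amplitude simply sums to zero.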
I don't understand the jump from: classical algorithm takes time A -> quantum algorithm takes time B -> (A - B) must be borrowed from a parallel universe.

Maybe A wasn't the most efficient algorithm for this universe to begin with?

Right, and that's part of the argument against quantum computing being a proof (or disproof) of the many-worlds interpretation. Sure, "(A-B) was borrowed from parallel universes" is a possible explanation for why quantum computing can be so fast, but it's by far not the only possible explanation.
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

That's in line with a religious belief. One camp believes one thing, other believes something else, others refuse to participate and say "shut up and calculate". Nothing wrong with religious beliefs of course, it's just important to know that is what it is.

The Schrödinger equation inherently contains a multiverse. The disagreement is about whether the wave function described by the equation collapses to a single universe upon measurement (i.e. whether the equation stops holding upon measurement), or whether the different branches continue to exist (i.e. the equation continues to hold at all times), each with a different measurement outcome. Regardless, between measurements the different branches exist in parallel. It’s what allows quantum computation to be a thing.
> The Schrödinger equation inherently contains a multiverse.

A simple counterexample is superdeterminism, in which the different measurement outcomes are an illusion and instead there is always a single pre-determined measurement outcome. Note that this does not violate Bell's inequality for hidden variable theories of quantum mechanics, as Bell's inequality only applies to hidden variables uncorrelated to the choice of measurement: in superdeterminism, both are predetermined so perfectly correlated.
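For reference, the constraint being discussed is usually written in its CHSH form: for any local hidden-variable model with detector settings $a, a'$ and $b, b'$, and hidden variables independent of those setting choices,

    S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,

while quantum mechanics allows $|S|$ up to $2\sqrt{2}$, which is what experiments find. Superdeterminism evades this by rejecting the independence assumption, exactly as described above.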

> The Schrödinger equation inherently contains a multiverse.

Just to be clear, where in the Schrödinger equation (iħψ̇ = Hψ) is the "multiverse"?

When taking the entire universe as a quantum system governed by the Schrödinger equation, then ψ is the universal wavefunction, and its state vector can be decomposed into pointer states that represent the “branches” of MW.
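Schematically (this is the standard decoherence picture, not anything specific to the Willow paper): after a system interacts with its environment, the universal state takes the form

    |\Psi\rangle = \sum_i c_i \, |\mathrm{pointer}_i\rangle_{\mathrm{sys}} \otimes |E_i\rangle_{\mathrm{env}}, \qquad \langle E_i | E_j \rangle \approx \delta_{ij},

and each term in the sum is what MWI calls a "branch"; the near-orthogonality of the environment states is what keeps the branches from interfering with one another in practice.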
None of that hanky-panky is needed if you just give up locality: a hard deterministic explanation like de Broglie–Bohm gives all the same correct measurements and conclusions as the Copenhagen interpretation, without multiverses and "wave function collapses".

The Copenhagen interpretation is just "easier" (like: oops, all our calculations about the universe don't seem to fit, let's invent "dark matter"), whereas the correct explanation makes any real-world calculation practically impossible (thus ending most further study in physics), as any atom depends on every other atom at any time.

De Broglie–Bohm doesn’t remove anything from the wave function, and thus all the pointer states therein contained are still present. The theory basically claims that one of them is special and really exists, whereas the others only mathematically exist, but philosophically it’s not clear what the difference would be. The Broglie–Bohm ontology is bigger than MW rather than smaller.
I suspect the real issue is that Big Tech investors and executives (including Sundar Pichai) are utterly hopped up on sci-fi, and this sort of stuff convinces them to dedicate resources to quantum computing.
That explains metaverse funding at least.
>Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

Doesn't this also mean that other universes have civilizations that could potentially borrow capacity from our universe, and if so, what would that look like?

It's a perfectly legit interpretation of what's happening, and many physicists share the same opinion. Of course, the big caveat is that you need to interfere those worlds so that they cancel out, which necessarily imposes a lower algorithmic bound that prevents you from doing an infinite amount of computation in an instant.
> Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?

Tangentially related, but there's a great Asimov book about this called The Gods Themselves (fiction).

I’m partial to Anathem by Stephenson on this topic as well
Thanks for the recommendation!
This is a viable interpretation of quantum mechanics, but currently there is no way to scientifically falsify or confirm any particular interpretation. The boundary between philosophy and science is fuzzy at times, but this question is solidly on the side of philosophy.

That being said, I think the two most commonly preferred interpretations of quantum mechanics among physicists are 'Many Worlds' and 'I try not to think about it too hard.'

It doesn’t make sense to me because if we can borrow capacity to perform calculations then we can “borrow” an infinite amount of energy.
Climate change solved: steal energy from adjacent universes, pipe our carbon waste into theirs.
There’s a fun short story in qntm’s “Valuable Humans In Transit” about a scenario like this.
Remind me of The Expanse, where the ring space is syphoning energy from some other universe to keep the gates open.
It’s “out of the environment”.
We’re taking negative externalities to a whole new dimension!
Well. If you study quantum physics and the folks who founded it, like Max Planck, they believed in "a conscious and intelligent non-visible living energy force... the matrix mind of all matter".

I don't know much about multiverse, but we need something external to explain the magic we uncover.

Energy and quantum mechanics are really cool but dense to get into. Like Planck, I suspect there's a link between consciousness and matter. I also think our energy doesn't cease to exist when our human carcass expires.

Yes, this is a deeply unserious tangent in a supposedly landmark technology announcement.
It's just marketing.
"Just marketing" in science journalism and publications is basically at the root of the anti-intellectualism movement right now (other than the standard hyper-fundamentalist Christians that need to convince people that science in general is all fraud), everyone sees all these wild claims and conclusions made by "science journalists" with zero scientific training and literal university PR departments that are trivially disproved in the layman's mind simply by the fact that they don't happen, and they lose faith not in the journalists who have no idea what they are writing about, but in science itself

I used to love Popular Science magazine in middle school, but by high school I had noticed how much of its claims were hyperbole and outright nonsense. I can't fathom how or why, but most people blame the scientists for it.

Puffery is not a victimless crime.

Oh, are they selling these?
The quantum computer idea was literally invented by David Deutsch to test the many-universes theory of quantum physics.
You've mentioned this in another comment. I have to point out, even if this is his opinion, and he has been influential in the field, it does not mean that this specific idea of his has been influential.
Sorry. I don't care whether an idea was influential or not. All I care is whether someone has a better explanation.
I'll remind you of the quote that started this thread:

"Do quantum computing folks really think that we are borrowing capacity from other universes for these calculations?"

In this context, your opinion and Deutsch's opinion don't matter. The question is about whether the idea is common in the field or not.

Okay. I just don't understand. Are you saying quantum computers are also implemented without assuming the computations run in parallel universes?
Correct. The laws of quantum mechanics (used for building quantum computers, among other things) make very few assumptions about the nature of the universe, and support multiple interpretations, many-worlds being only one of them.

Quantum mechanics is a tool to calculate observable values, and this tool works very successfully without needing to make strong assumptions about the nature of the universe.

I don't know what he's saying, but I'm saying that the answer to your question is "Yes," unless quantum computers behave differently than expected.
It is not useful to spam this comment repeatedly under different people who question or disagree with many worlds. Pick one place to make your case.
So are we now concerned with the environment of another universe? Like climate activists, but for multiverses?
> It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years.

If it's not, what would be your explanation for this significant improvement then?

Quantum computing can perform certain calculations much faster than classical computing in the same way classical computing can perform certain calculations much faster than an abacus
I mean, that's like saying GPUs operate in parallel universes because they can do certain things thousands of times faster than CPUs.
In the past five years I participated in a project (with Yosi Rinott and Tomer Shoham) to carefully examine Google's 2019 "supremacy" claim. A short introduction to our work is described here: https://gilkalai.wordpress.com/2024/12/09/the-case-against-g.... We found in that experiment statistically unreasonable predictions (predictions that were "too good to be true"), indicating methodological flaws. We also found evidence of undocumented global optimization in the calibration process.

In view of these and other findings my conclusion is that Google Quantum AI’s claims (including published ones) should be approached with caution, particularly those of an extraordinary nature. These claims may stem from significant methodological errors and, as such, may reflect the researchers’ expectations more than objective scientific reality.

I've heard claims before that quantum computers are not real. But I didn't understand them. Can anybody explain the reasoning behind the criticism? Are they just simulated?
I think that this is now a very fringe position - the accumulation of reported evidence is such that if folks were/are fooling themselves it would now be a case of an entire community conspiring to keep things quiet. I mean, it's not impossible that could happen, but I think it would be extraordinary.

On the other hand the question is what does "real QC" mean? The current QCs perform very limited and small computations, and they lack things like quantum memory. The large versions are extremely impractical to use, in the sense that they run for thousandths of a second and take many hours to set up for a run. But that doesn't mean that the physical effects that they use/capture aren't real.

Just a long long way from practical.

So the basics:

- quantum physics are real, this isn't about debating that. The theory underpinning quantum computing is real.

- quantum annealing is theoretically real, but not the same "breakthrough" that a quantum computer would be. D-Wave and Google have made these.

- All benchmark computations have been about simulating a smaller quantum computer or annealer, which these systems can do faster than a brute-force classical search. These are literally the only situations where "quantum supremacy" exists.

- There is literally no claim of "productive" computation being made by a quantum computer. Only simulations of our assumptions about quantum systems.

- The critical gap is "quantum error correction": proof that they can use many error-prone physical qubits to simulate a smaller system with a lower error rate. There isn't proof yet that this is actually possible.

The result they are claiming, that they have achieved "critical error correction", is the single most groundbreaking result we could have in quantum computing. Their evidence does not satisfy the burden of proof. They also only claim to have one logical qubit, which is intrinsically useless, and the work doesn't examine the costs of simulating multiple interacting qubits.

I'm skeptical only because they have AI in the name.
What experimental data would you need to see to change your mind? Either from Google or from another player?
I wonder if anyone else will be forced to wait on https://scottaaronson.blog/ to tell us if this is significant.
He did, when the preprint was published:

https://scottaaronson.blog/?p=8310

He's already blogged about it a bit here

https://scottaaronson.blog/?p=8310#comments

and here

https://scottaaronson.blog/?p=8329

though I bet he will have more to say now that the paper is officially out.

I was about to add a similar comment. Definitely interested to read his evaluation and whether there is more hype than substance here, though I'm guessing it may take some time.
The slightly mind-blowing bit is detailed here: https://research.google/blog/making-quantum-error-correction...

“the first quantum processor where error-corrected qubits get exponentially better as they get bigger”

Achieving this turns the normal problem of scaling quantum computation upside down.

Hmm why? I thought the whole idea was this would work eventually. Physical qubit vs logical qubit distinction is there already for a long time.

The scaling problem is multifaceted. IMHO the physical qubits are the biggest barrier to scaling.

> Hmm why?

In theory, theory and practice are the same.

It also breaks a fundamental law of quantum theory, that the bigger a system in a quantum state is, the faster it collapses, exponentially so. Which should at least tell you to take Google's announcement with a grain of salt.
This is not a "fundamental law of quantum theory", as evidenced by the field of quantum error correcting codes.

Google's announcement is legit, and is in line with what theory and simulations expect.

> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

Processing in the multiverse. Would that mean we are injecting entropy into those other verses? Could we calculate how many there are from the time it takes to do a given calculation? We need to cool the quantum chip in our universe; how are the (n-1) verses cooling on their end?

What if we are? And by injecting entropy into it, we are actually hurrying (in small insignificant ways) the heat death of those universes? What if we keep going and scale out and in the future it causes a meaningful impact to that universe in a way that it's residents would be extremely unhappy with, and would want to take revenge?

What if it's already happening to our universe? And that is what black holes are? Or other cosmology concepts we don't understand?

Maybe a great filter is your inability to protect your universe from quantum technology from elsewhere in the multiverse ripping yours up?

Maybe the future of sentience isn't fighting for resources on a finite planet, or consuming the energy of stars, but fighting against other multiverses.

Maybe The Dark Forest Defence is a decision to isolate your universe from the multiverse - destroying its ability to participate in quantum computation, but also extending its lifespan.

(I don't believe ANY of this, but I'm just noting the fascinating science fiction storylines available)

Getting strong vibes of Asimov’s novel "The Gods Themselves" here! For those who haven’t read it, I recommend it. It’s a nice little self-contained book, not a grandiose series and universe, but I love it.
This is exactly it, love that book.
I’d say it’s more akin to Dark Energy than anything Black Hole related.

DE is some sort of entropy that is being added to our cosmos in an exponential way over the history of the universe. It began at a point a few billion years into our history.

Thanks for throwing in references like https://en.wikipedia.org/wiki/Dark_forest_hypothesis even though this was a silly response to the science fiction implications.

I found it an interesting read and hadn't heard the term before, but it's exactly the kind of nerdy serendipity I come to this site for!

AFAIK a fundamental step in any quantum computing algorithm is bringing the qubits back to a state with a nonrandom outcome (specifically, the answer to the problem being solved). Thus a "good" quantum computer does not bifurcate the wavefunction at a macro level, ie there is no splitting of the "multiverse" after the calculation.
Isn't it really just processing with matrices of imaginary numbers? Which is why you need error correction in the first place: the higher the temperature of the system, the less coherent the phases become. Thus having absolutely nothing to do with multiverse theory?

I think string theory's ideas about extra curled-up dimensions are far more likely places to look. You've already got an infinite instantaneous-energy problem with multiverses, let alone entropy-transfer considerations.

The many-worlds interpretation of quantum theory [1] is widely considered unfalsifiable and therefore mostly pseudoscientific. This article is way in over its head in claiming such nonsense.

[1] https://en.wikipedia.org/wiki/Many-worlds_interpretation

That depends on whether you think that MWI is making claims beyond what QM claims, or if you think that MWI is just a restatement of QM from another perspective. If the latter then the falsifiability of MWI is exactly the same as the falsifiability of QM itself.
Unfalsifiable does not mean pseudoscientific. Plenty of things might be unfalsifiable for humans (e.g. humans lack the cognitive capacity to conceive of ways to falsify them), but then easily falsifiable for whatever comes after humans.
If we are in a simulation, this seems like a good path to getting our process terminated for consuming too much compute.
AI too.

I've followed Many worlds & Simulation theory a bit too far and I ended up back where I started.

I feel like the most likely scenario is we are in a AI (kinder)garden being grown for future purposes.

So God is real, heaven is real, and your intentions matter.

Obviously I have no proof...

> "and your intentions matter."

How do you reach that conclusion?

Characters in The Sims games technically have us human players as gods, it doesn't mean that when we uninstall the game those characters get to come into our earthly (to them) heaven or have any consequences for actions performed during the simulation?

Sure it would. If you had Sims that went around killing other Sims, there's no way in hell you would promote them, or use their simulated experiences as a basis for more complex/serious projects.

I'm not deep into LLMs or AI safety right now, but if you have a bad performing AI, you aren't going to use it as a base for future work.

I was about to go to bed so I was rushing through my initial comment... I was just trying to understand the motivations for trying to create a simulated reality... Look at the resources we spend on AI.

One would have to be rather optimistic and patient if one was to hold out hope for the humanity experiment to not be destined for the Trash bin in this scenario, with our track record.
I doubt we would even register as a blip. The universe is absolutely massive and there's celestial events that are unthinkably massive and complex. Black hole mergers, supernovae, galaxies merging. Hell, think of what chaos happens in the inside of our own sun, and multiply that by 100 billion stars in a galaxy, and multiply that by 100 billion galaxies. Humanity is ultimately inconsequential.
Surely it would depend on what the simulation actually was?

If you imagine simulations we can build ourselves, such as video games, it's not hard to add something at the edge of the map that users are prevented from reaching and have the code send "this thing is massive and powerful" data to the players. Who's to say that the simulation isn't actually focussed on earth, and everything including the sun is actually just a fiction designed to fool us?

The common trait that all hypothetical high-fidelity simulated universes possess is the ability to produce high-fidelity simulated universes. And since our current world does not possess this ability, it would mean that either humans are in the real universe, and therefore simulated universes have not yet been created, or that humans are the last in a very long chain of simulated universes, an observation that makes the simulation hypothesis seem less probable.
>The common trait that all hypothetical high-fidelity simulated universes possess is the ability to produce high-fidelity simulated universes.

Where have you seen this?

https://youtu.be/pmcrG7ZZKUc?t=220

If we're a simulation of a parent universe that is exactly like us, just of its past or an alternate past, then we likely should be able to achieve simulating our own universe within ourselves. Otherwise we're not actually a simulation.

There's another line of counter argument that various results in QM and computing theory would suggest that it's mathematically impossible for the universe to be simulated on a computer (i.e. the parent universe would have to look very different from ours vs ours in the future). But I don't recall the arxiv paper.

>If we're a simulation of a parent universe that is exactly like us just of it's past or an alternate past

Yes, this is a MASSIVE and COMPLETELY UNTESTABLE if

Everything about simulation theory is like, science-hostile or something it seems.

Of course it is. Scientifically the simulation “hypothesis” is actually the simulation idea and isn’t scientifically valid yet seems to be treated as such for some reason.
For me the interesting thing is, assuming many worlds AND simulation theory are both true, many worlds would seem to be a way to essentially run A/B tests on the simulation. So how would you separate out/simplify details of your simulation, like far-away planets, stars and galaxies? The speed of light and light cones don't seem to be enough to make a difference except on the largest scales.
I really wish the release videos made things a ~tad~ bit less technical. I know quantum computers are still very early so the target audience is technical for this kind of release, but I can’t help wonder how many more people would be excited and pulled in if they made the main release video more approachable.
If you have programming experience, you might find this interesting: back in 2019, when Google announced achieving quantum supremacy, I worked on a personal project to study the basics of quantum computing and share my learnings with others in my blog:

- https://thomasvilhena.com/2019/11/quantum-computing-for-prog...

Excellent thank you!
>the more qubits we use in Willow, the more we reduce errors, and the more quantum the system becomes

That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.

It's really not so extraordinary; exponential reduction in logical errors when the physical error rate is below a threshold (for certain types of error-correcting codes) is well accepted on both theoretical and computational grounds.

For a rough but well-sourced overview, see Wikipedia: https://en.wikipedia.org/wiki/Threshold_theorem

For a review paper on surface codes, see A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, “Surface codes: Towards practical large-scale quantum computation,” Phys. Rev. A, vol. 86, no. 3, p. 032324, Sep. 2012, doi: 10.1103/PhysRevA.86.032324.
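A rough sketch of the scaling those references describe, in Python (the functional form is the standard one from the surface-code literature; the constants A and p_th below are illustrative placeholders, not Willow's measured numbers):

    def logical_error_rate(p_phys, d, p_th=0.01, A=0.1):
        # Standard below-threshold scaling: eps_L ~ A * (p/p_th) ** ((d+1)//2).
        # For p_phys < p_th, each increase of the code distance d by two
        # suppresses the logical error rate by another factor of p_th/p.
        return A * (p_phys / p_th) ** ((d + 1) // 2)

    for d in (3, 5, 7):
        print(d, logical_error_rate(p_phys=1e-3, d=d))
    # d = 3, 5, 7  ->  roughly 1e-03, 1e-04, 1e-05

That's the sense in which the suppression is "exponential": exponential in the code distance, provided the physical error rate stays below the threshold and the errors are sufficiently uncorrelated.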

Does this not assume uncorrelated errors?
It does. It's up to engineering to make errors uncorrelated. The google paper being referenced actually makes an "error budget" to see what the main sources of errors are, and also run tests to find sources of correlated errors.

The claim about this is that correlated errors will lead to an "error floor", a certain size of error correction past which exponential reduction in errors no longer applies, due to a certain frequency of correlated errors. See figure 3a of the arxiv version of the paper: https://arxiv.org/abs/2408.13687

>That's an EXTRAORDINARY claim and one that contradicts the experience of pretty much all other research and development in quantum error correction over the course of the history of quantum computing.

Not sure why you would say that? This sort of exponential suppression of errors is exactly how quantum error correction works and why we think quantum computing is viable. Source: have worked on quantum error correction for a couple of decades. Disclosure: I work on the team that did this experiment. More reading: lecture notes from back in the day explaining this exponential suppression https://courses.cs.washington.edu/courses/cse599d/06wi/lectu...

I wouldn't call it extraordinary, as this has been expected since the first quantum error correcting codes were worked out theoretically. But it is a strong claim, backed up with comparably strong evidence. Figure 1d of the paper shows exactly this https://arxiv.org/html/2408.13687v1, and unlike many other comparable works, there are no hat tricks like post-selection to boost the numbers.
It's a categorically new experimental regime but it's exactly what was supposed to happen. I think it's an awesome result.
it... doesn't? threshold theorems are well known.
You learn a lot by what isn't mentioned. Willow had 101 qubits in the quantum error correction experiment, yet only a mere 67 qubits in the random circuit sampling experiment. Why did they not test random circuit sampling with the full set of qubits? Maybe when turning on the full set of 101 qubits, qubit fidelity dropped.

Remember that macroscopic objects have ~10^23 ≈ 2^76 particles, so until 76 qubits are reached and exceeded, I remain skeptical that the quantum system actually exploits an exponential Hilbert space, instead of the state being classically encoded by the particles somehow. I bet Google is struggling just at this threshold and they don't announce it.
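(For anyone checking the 76: it's just $\log_2 10^{23} = 23 \log_2 10 \approx 76.4$, i.e. $10^{23} \approx 2^{76}$.)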

I met Julian touring UCSB as prospective grad students. We sat together at dinner and he was really smart, kind, and outgoing. Great to see him presenting this work!
The main part for me is reducing errors faster as they scale. This was a major roadblock, and getting "below threshold" is a major achievement.

I am not sure about RCS as the benchmark, as I'm not sure how useful it is in practice. It just produces really nice numbers. If I had a few billion in pocket change lying around, would I buy this to run RCS really fast? Nah, probably not. I'll get more excited when they factor numbers at a rate that would break public key crypto. For that I would spend my pocket change!

The implication seems to be that they can implement other gates. As my gen z kids say: huge if true.
It's really important to note that the error correction test and the random circuit test are separate tests.

The error correction is producing a single logical qubit of quantum memory, i.e. a single qubit with no gates applied to it.

Meanwhile, the random circuit sampling uses physical qubits with no error correction, and is used as a good benchmark in part because it can prove "quantumness" even in the presence of noise.[1]

[1] https://research.google/blog/validating-random-circuit-sampl...

Take the announcement with a grain of salt. From German physicist Sabine Hossenfelder:

> The particular calculation in question is to produce a random distribution. The result of this calculation has no practical use.
>
> They use this particular problem because it has been formally proven (with some technical caveats) that the calculation is difficult to do on a conventional computer (because it uses a lot of entanglement). That also allows them to say things like "this would have taken a septillion years on a conventional computer" etc.
>
> It's exactly the same calculation that they did in 2019 on a ca 50 qubit chip. In case you didn't follow that, Google's 2019 quantum supremacy claim was questioned by IBM pretty much as soon as the claim was made, and a few years later a group said they did it on a conventional computer in a similar time.

https://x.com/skdh/status/1866352680899104960

TBH you need to take the YouTube influencer Sabine Hossenfelder with an even bigger grain of salt. She has shifted to mainly posting clickbait YouTube content over the last few years (unfortunately; she was interesting to listen to earlier).

The RCS is a common benchmark with no practical value, as is stated several times in the blog announcement as well. It's used because if a quantum computer can't do that, it can't do any other calculation either.

The main contribution here seems to be what they indeed put first, which is the error correction scaling.

I think reducing her to 'YouTube influencer' is unfair - she is a doctor of theoretical physics with a specialism in quantum gravity who produces science content for YouTube. She knows the field well enough to comment.

She doesn't even say that this isn't a big leap. She says it's very impressive, just not the sort of leap that means there are now practical applications for quantum computers, and that the comparison to a conventional computer needs a pinch of salt, given the 2019 paper with a similar benchmark.

As a counterpoint, in one of her recent videos she reviewed a paper and completely tore it to shreds, as the math was apparently full of absolute nonsense.

This was a fascinating watch, and not the kind of content that is easy to find. Besides videos like that one, I enjoy her videos as a fun way to absorb critical takes on interesting science news.

Maybe she is controversial for being active and opinionated on social media, but we need more science influencers and educators like her, who don't just repeat the news without offering us context and interpretation.

I think I know which one you mean, but TBH that paper read like an AI auto-generated troll paper that any physics undergrad should be able to dissect :) It was fun to watch though, and sure, sometimes you need to provide some fun content as well!
I have noticed the shift to aggressive (clickbaity) thumbnails, but I do think the quality of the content has not changed.

And I can't blame her for adopting this trend, in many cases it is the difference between surviving or not on YouTube nowadays.

How would she not survive on YouTube? Would they block her posts if she didn't use a misleading title and thumbnail (shout-out to dearrow for those who despise this practice)?
>Would they block her posts if she didn't use a misleading title and thumbnail

So, the way youtube works is that every single creator is in an adversarial competition for your attention and time. More content is uploaded than can be consumed (profitably, from Youtube's point of view). Every video you watch is a "victory" for that video's creator, and a loss for many others.

Every single time youtube shows you a screen full of thumbnails, it's running a race. Whichever video you pick will be shown to more users, while the videos you don't pick get punished and downranked in the algorithm. If a Youtube creator's video is shown to enough people without getting clicked on, ie has a low clickthrough rate, it literally stops being shown to people.

Youtube will even do this to channels you have explicitly subscribed to, which they barely use as a signal for recommendations nowadays.

Every single creator has said that clickbait thumbnails have better performance than otherwise. If other creators are using clickbait thumbnails, you will be at a natural disadvantage if you do not. There are not enough users who hate clickbait to drive any sort of signal to the algorithm(s).

If you as a creator have enough videos in a row that do not do well, you will find your entire channel basically stops getting recommended.

It's entirely a tragedy-of-the-commons problem: if every creator stopped simultaneously, nobody would suffer, but any defectors would benefit, so they won't stop.

Youtube itself could trivially stop this, but in reality they love it, because they have absolutely run tests, and clickbait thumbnails drive more engagement than normal thumbnails. This is why they provide ample tooling to creators to A/B test thumbnails, help make better clickbait etc, and zero tooling around providing viewers a way to avoid clickbait thumbnails, which would be trivial to provide as an "alternative thumbnail" setting for creators and viewers.

Sabine is literally driving herself down an anti-science echo chamber though. Maybe she can't see it, but it's very clear from the outside what is happening. She has literally said that "90% of the science that your tax dollars pay for is bullshit", which is absurd hyperbole, and something that a PHYSICIST cannot say about all fields, full stop. It's literally https://xkcd.com/793/

That's assuming people view YouTube through the default interface, which is obviously most people. I have feeds that I subscribe to and some of them are must-see for me, so no matter what they use as their bait I am watching it. Fortunately there exist SmartTube, DeArrow and SponsorBlock for people like me who just want to watch the stuff they've subscribed to and not whatever advertises the best.
> they provide ample tooling to creators to A/B test thumbnails

They do? For many years, I made my living from YouTube. This was always a feature that people wanted, but that didn’t exist. It’s been a year-plus since I’ve actively engaged on YouTube as a creator. Is this a recent change?

Being a YouTuber doesn't diminish her other credentials. Just as she is incentivised to do clickbait, so are actual scientific communication outlets such as Nature, and the clickier they are, the more downloads and citations they acquire. Incentives shape content but don't directly detract from someone's expertise. See: the fact that most universities now publish some lectures on YouTube; it doesn't make the content any less true.
She was right the first time, when they announced this in 2019, and this time they even admit it in their own press release:

> Of course, as happened after we announced the first beyond-classical computation in 2019, we expect classical computers to keep improving on this benchmark

As IBM showed, their estimate of classical computation time was taken out of their a**es.

They explicitly cover all of these caveats in the announcement.

Problems that benefit from quantum computing have, as far as I'm aware, their own complexity class, so it's also not like you have to consider Sabine's or any other person's thoughts and feelings on the subject - it is formally demonstrated that such problems exist.

Whether the real world applications arrive or not, you can speculate for yourself. You really don't need to borrow the equally unsubstantiated opinion of someone else.

The formal class is called BQP, in analogy with the classical complexity class BPP. BQP contains BPP, but there is no proof that it is strictly bigger (such a proof would imply P != PSPACE). There are problems in BQP we expect are not in BPP, but it's not clear whether there are any useful problems in BQP and not in BPP, other than essentially Shor's algorithm.

On the other hand, it's actually not completely necessary to have a superpolynomial quantum advantage in order to have some quantum advantage. A quantum computer running in quadratic time is still (probably) more useful than a classical computer running in O(n^100) time, even though they're both technically polynomial. An example of this is classical algorithms for simulating quantum circuits with bounded error, whose runtime is like n^(1/eps) where eps is the error. If you pick eps = 0.01 you've got a technically polynomial-runtime classical algorithm, but its runtime is going to be n^100, which is enormous.
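To make the n^100 point concrete, a bit of toy arithmetic (my own numbers, assuming an exaflop-class classical machine):

```python
# Toy arithmetic: a "polynomial" n^(1/eps) classical simulation is hopeless
# for small eps, even though it is technically not exponential.
eps = 0.01
ops_per_second = 1e18            # assume an exaflop-class machine
for n in (2, 10, 50):
    steps = n ** (1 / eps)       # n^100
    years = steps / ops_per_second / 3.15e7
    print(f"n={n:2d}: ~{steps:.1e} steps, ~{years:.1e} years")
```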

Not to defend Google, but they end up saying much the same:

> The next challenge for the field is to demonstrate a first "useful, beyond-classical" computation on today's quantum chips that is relevant to a real-world application. We’re optimistic that the Willow generation of chips can help us achieve this goal. So far, there have been two separate types of experiments. On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.

It would be interesting to see what "standard benchmark computation" was used and what its implementation would look like in a traditional computer language.

Does anyone know?

and the arXiv preprint, which isn't paywalled https://arxiv.org/html/2408.13687v1
And contains more information, figures, and lots of supplementary material.
Is anyone else even close to Google in this space? (e.g. on the "System Metrics" the blog defines)
Main players IMHO are IBM and Quantinuum, the latter employing a different platform (trapped ions). Neither could perform this same experiment I think, but they have their own advantages. QuEra also looks good but is not as mature yet, imho.
I would be very grateful if somebody could point me to an actually good blog post/video that summarizes the current state of the field. I remember "quantum annealing" providers from as far back as 10 years ago (as a service, that is; D-Wave itself has existed for 25 years now), but I never learned whether they are truly useful for anything (real numbers: what amount of computation these things can perform and whether it's truly cheaper/faster than throwing a bunch of GPUs at the problem). From time to time there is news, featuring dope photos, about some new chip from IBM that is useful for nothing but is a big breakthrough for reasons I don't understand.

But I don't really have a feel for what's going on. How many quantum computers are there? Is anything actually capable of doing more than being an ongoing research prototype? Any educated guesses about how far along some non-public projects might be by now? Is it possible that some secret CIA project is further ahead than what we know, or is this even more unlikely and farther away than fusion power? Or maybe it's more comparable to cold fusion?

I know, that this kinda exists as an idea, and apparently somebody's working on it, but that's pretty much it.

IonQ - they are powering AWS's offering here: https://aws.amazon.com/braket/quantum-computers/ionq/

Not sure if they are close in terms of specs, but they look like a viable solution and have seen an increase in utilization over the last year... Both seem pretty interesting to keep an eye on.

I'm wondering what impact this Google announcement will have on their future revenue, and how easily competitors can replicate this breakthrough.
Give it a few more years and a smaller, more focused company will come in and launch a successful product based on their research.
I would expect IBM, but I can't find any information on their system metrics based on a quick google search.

Would love it if someone could weigh in.

not publicly
What benchmark is being referred to here?

>>Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion (that is, 10^25) years — a number that vastly exceeds the age of the Universe

With this plus the weather model announcement, I’m curious what people think about the meta question of why corporate labs like Google DeepMind seem to make more progress on big problems than academia.

There are a lot of critiques of academia, in particular that it’s so grant-obsessed you have to stay focused on your next grant all the time. This environment doesn't seem to reward solving big problems, but rather paper production to prove the last grant did something. Yet ostensibly we fund fundamental public research precisely for fundamental changes. The reality seems to be that the traditional funding model creates incremental progress within existing paradigms.

I did quantum computing research in university. We did meaningful work and published meaningful research.

Around 50% of our time was spent working in Overleaf making small improvements to old projects so that we could submit to some new journal or call-for-papers. We were always doing peer review or getting peer reviewed. We were working with a lot of 3rd-party tools (e.g. FPGAs, IBM Q, etc). And our team was constantly churning due to people getting their degrees and leaving, people getting too busy with coursework, and people deciding they just weren't interested anymore.

Compare that to the corporate labs: They have a fully proprietary ecosystem. The people who developed that ecosystem are often the ones doing research on/with it. They aren't taking time off of their ideas to handle peer-review processes. They aren't taking time off to handle unrelated coursework. Their researchers don't graduate and start looking for professor positions at other universities.

It's not surprising in the slightest that the corporate labs do better. They're more focused and better suited for long-term research.

I wonder what makes research different than product development at a company?

Because in product development there can be short-sighted decisions based on quarterly returns. I've also seen a constant need to justify outcomes based on KPIs, constantly justifying your work, etc.

Research is product development. Successful companies treat it with respect.

> Because in product development, there can be short-sighted industry decisions based on quarterly returns. I've also seen a constant need to justify outcomes based on KPIs etc, and constantly justifying your work, etc.

I have seen this as well. It's extremely common (especially among publicly-owned companies) and frustrating. But it's not ubiquitous. Consider LM's Skunkworks or Apple's quiet development of the iPhone, and compare it to companies that finish a product and then focus on cutting costs / nickel-and-diming their customers.

I can only speak for the weather models (since that is my domain). The answer is that the issues are much more engineering and scaling/infra issues than theoretical ones, and Google is good at engineering (or attracts people who are).
Where exactly do you think the idea for a quantum computer came from in the first place?
Am I oversimplifying in thinking that they’ve demonstrated that their quantum computer is better at simulating a quantum system than a classical computer is?

In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.

Simulating a quantum system is a hard challenge and it's actually how Feynman proposed the quantum computing paradigm in the first place. It's basically the original motive.
Exactly.

I’ve seen lots of people dismissing this as if it isn’t impressive or important. I’ve watched one video where the author said in a deprecating manner “quantum computers are good for just two things: generating random numbers and simulating quantum systems”.

It’s like saying “the thing is good for just two things: making funny noises and producing infinite energy”.

(Also, generating random numbers is pretty useful, but I digress)

Do Americans still want to breakup the big US tech companies like Google? With proper regulation it feels like their positive externalities, like this, is good for humanity.
  • skort
  • ·
  • 2 weeks ago
  • ·
  • [ - ]
The key words here are "proper regulation". In an era where industries have captured governmental bodies, there will likely be no such regulation, and these tech companies will continue to siphon up resources and funnel them to a handful at the top.

A quote from the article is especially ludicrous:

> to benefit society by advancing scientific discovery, developing helpful applications, and tackling some of society's greatest challenges

You don't need a quantum computer to do this. We can solve housing and food scarcity today, arguably our greatest challenges. Big tech has been claiming that it's going to solve all of our problems for decades now and it has yet to put up.

If you want this type of technology to be made and do actual good, we need publicly funded research institutions. Tech won't save us.

Regulation is the handle on the chainsaw, aka the corporation. Right now it is a bit wobbly, which explains the cuts to society we see around us. The faster the blades turn thanks to technological advances, the firmer the handle needs to be.
"Proper regulation" may involve breaking companies into pieces such that they cannot dominate industries and deprive the public the option of choosing a different provider for the services they provide. Does an Alphabet subsidiary working on quantum computer research require 90% of search traffic to go through Google? Or for Android handsets to send a really phenomenal amount of telemetry back to the mothership with no real recourse for the average user?
This is yet another attempt to posit NISQ results (Noisy Intermediate Scale Quantum) as demonstrations of quantum supremacy. This does not allow us to do useful computational work; it's just making the claim that a bathtub full of water can do fluid dynamic simulations faster than a computer with a bathtub-full-of-water-number-of-cores can do the same computation.

If history is any guide we'll soon see that there are problems with the fidelity (the system they use to verify that the results are "correct") or problems with the difficulty of the underlying problem, as happened with Google's previous attempt to demonstrate quantum supremacy [1].

[1] https://gilkalai.wordpress.com/2024/12/09/the-case-against-g... -- note that although coincidentally published the same day as this announcement, this is talking about Google's previous results, not Willow.

Some of these results have been on the arxiv for a few months (https://arxiv.org/abs/2408.13687) -- are there any details on new stuff besides this blog post? I can't find anything on the random circuit sampling in the preprint (or its early access published version).
The peer-reviewed version at Nature has more technical details about the processor itself.
> It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

Can someone explain to me how he made the jump from "we achieved a meaningful threshold in quantum computing performance" to "the multiverse is probably real"?

The explanation is that Hartmut Neven has a bunch of sci-fi beliefs and somehow has managed to hold onto his job and even get to write parts of press releases.
My money is on edibles.
> Willow performed a standard benchmark computation in under five minutes that would take one of today’s fastest supercomputers 10 septillion years — a number that vastly exceeds the age of the Universe.

What computation would that be?

Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?

It's in the article. Random circuit sampling benchmark: https://research.google/blog/validating-random-circuit-sampl...
Is it really fair to call that "computation"? I am definitely not an expert, but it seems they are just doing a meaningless operation which happens to be trivial on a quantum computer but near-impossible to simulate on a classical computer.

To me that sounds a bit like saying my "sand computer" (hourglass) is way faster than a classical computer, because it'd take a classical computer trillions of years to exactly simulate the final position of every individual grain of sand.

Sure, it proves that your quantum computer is actually a genuine quantum computer, but it's not going to be topping the LINPACK charts or factoring large semiprimes any time soon, is it?

As they say explicitly in the article, this is like criticizing the first rocket to reach the edge of space for not getting anywhere useful.
It seems like they went from lighting a candle to firing off some fireworks. Impressive, but there's still a long way to go until they can land a human on the moon - let alone travel to Alpha Centauri like some people are claiming.
Yes, this is exactly what it is doing [1]. The area of research is Noisy Intermediate-Scale Quantum (NISQ), and it has arisen specifically to show that quantum supremacy is possible in practice. It is currently the focus of pretty much all quantum computing research because attempts to produce a generalized quantum computer have all failed miserably. Existing practical quantum computers (like D-Wave's) perform various annealing tasks but have basically proven to be inferior to probabilistic algorithms computing the same task.

To date, all attempts to produce valid claims of quantum supremacy via this channel have failed on closer inspection, and there is no reason to assume otherwise in this case until researchers have had time to look at the paper. There are a number of skeptics in the quantum computing field who believe that this is simply not possible.

[1] https://news.ycombinator.com/item?id=42369463

Are DWave still in the running? They used my PerfectTablePlan table seating software back in 2007 as the front end to demonstrate solving a combinatorial seating problem: https://www.perfecttableplan.com/newsletters/newsletter10_we...
It's different from your hourglass in that the computer is controllable. Each sampled random circuit requires choosing all of the operations that the computer will perform. You have no control over what operation the hourglass does.

It won't be factoring large numbers yet, because that computation requires the ability to perform millions of operations on thousands of qubits without any errors. You need very good error correction to do that, but luckily that's the other thing they demonstrated. Only, when they do error correction, they are basically condensing their whole system into one effective qubit. They'll need to scale by several orders of magnitude to have the hundreds of error-corrected qubits needed for factoring.
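As a rough back-of-envelope (all numbers are illustrative assumptions of mine, not from the paper): pick a per-cycle logical error budget, solve the exponential-suppression formula for the code distance d you would need, then multiply by the usual ~2d^2 physical qubits per surface-code logical qubit and a few thousand logical qubits for factoring.

```python
# Back-of-envelope sketch with illustrative assumptions (not the paper's data):
# how many physical qubits might a factoring-scale surface-code machine need?
Lambda = 2.0            # error suppression per distance step (assumed)
p_d3 = 3e-3             # logical error at distance 3 (assumed)
target = 1e-12          # per-cycle logical error budget for deep circuits
logical_qubits = 2000   # rough order of magnitude for Shor on RSA-2048

d = 3
while p_d3 / Lambda ** ((d - 3) / 2) > target:
    d += 2              # surface-code distances are odd

physical_per_logical = 2 * d ** 2   # data + measurement qubits, roughly
total = logical_qubits * physical_per_logical
print(f"distance ~{d}, ~{physical_per_logical} physical per logical qubit, "
      f"~{total:,} physical qubits in total")
```

With these made-up inputs the answer lands in the tens of millions of physical qubits; the exact number isn't the point, the structure of the estimate is.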

> Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?

Ongoing research.

The main idea of quantum machine learning is that qubits make an exponentially high-dimensional space with linear resources, so can store and compute a lot of data easily.

However, getting the data in and results out of the quantum computer is tricky, and if you need many iterations in your optimization, that may destroy any advantage you have from using quantum computers.

> Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?

AI is quite good at producing the meaningless drivel needed for quantum-computing press releases.

"Also, what is the relationship, if any, between quantum computing and AI? Are these technologies complementary?"

AI is limited in part by the computation available at training time and runtime. If your computer is 10^X times faster, then your model is also "better". That's why we have giant warehouses full of H100 chips pulling down a few megawatts from the grid right now. Quantum computing could theoretically allow your phone to do that.

A quantum computer is not just a 10^X faster normal computer.

Are there AI algorithms that would benefit from quantum?

Makes sense. My brain is able to do that work on milliwatts.
Actually about 20 W — if you ignore the 80 W used by the rest of the body (which seems debatable). And clearly far more than this was required to 'train' the human brain to the level of intelligence we have today.[1] But this still probably doesn't take away from your point. The human brain seems to be many orders of magnitude more efficient than our most advanced AI technology.

Though the more I think about this, the more I wonder how they really would compare if you made a strictly apples-to-apples comparison.

[1] https://psychology.stackexchange.com/questions/12385/how-muc...

Newbie's question: how far is the RCS benchmark from a more practical challenge such as breaking RSA?

The article concludes by saying that the former does not have practical applications. Why are they not using benchmarks that have some?

Very far.

I'm not sure how to put it quantitatively, but my impression from listening to experts give technical presentations is that the breaking-rsa-type algorithms are a decade or two away.

This is very soon from a security perspective, as all you need is to store current data and break it in the future. But it is not soon enough to use for benchmarking current systems.

There are already companies selling 5,000+ qubit quantum systems to the NSA. I would assume they are already using it for that in the huge data center in Utah.
Which companies? And how big an RSA key do 5,000+ qubit quantum systems break?
Source?
They renamed quantum supremacy to "beyond-classical"? That's something.
Quantum supremacy was an absolutely awful name for what it was (ability to do something, anything, better than a classical computer, which remains 'supreme' on all problems of any practical interest).
Sure, but it sounded much cooler
https://arxiv.org/abs/1705.06768

It's not something that new, I like it.

>Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

A much simpler explanation is that your benchmark is severely flawed.

"Severely flawed" is a matter of interpretation, and I don't want to argue for or against.

But to put it into context: these numbers are likely accurate, but they represent the time it would take for a very naive classical algorithm (possibly brute force, I am unsure).

For example, the previous result claimed it would take Summit 10,000 years to do the same calculation as the Sycamore quantum chip. However, other researchers were able to reproduce results classically using tensor-network-based methods in 14.5 days using a "relatively small cluster". [1]

[1] G. Kalachev, P. Panteleev, P. Zhou, and M.-H. Yung, “Classical sampling of random quantum circuits with bounded fidelity,” arXiv.org, https://arxiv.org/abs/2112.15083 (accessed Dec. 9, 2024).
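A quick way to see why the naive estimates explode (my own arithmetic): a brute-force statevector simulation has to store 2^n complex amplitudes, while tensor-network methods like the one cited above never materialize the full state.

```python
# Why brute-force simulation blows up: a full n-qubit state vector holds
# 2^n complex amplitudes (16 bytes each at double precision).
for n in (30, 50, 67):
    gib = (2 ** n) * 16 / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB of amplitudes")
# Tensor-network contraction avoids storing the full state by exploiting the
# circuit's structure, which is how the 2019 result was matched classically.
```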

Genuinely curious: does this make US regulators second-guess breaking up Google? Having a USA company be the first to develop quantum computing would be a major national security advantage.
Trying to understand how compute happens in Quantum computers. Is there a basic explanation of how superposition leads to computing?

From chatgpt, "with n qubits a QC can be in a superposition of 2^n different states. This means that QCs can potentially perform computations on an exponential number of inputs at once"

I don't get how the first sentence in that quote leads to the second one. Any pointers to read to understand this?
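One way to see the gap between those two sentences is a toy statevector simulator (my own illustrative sketch, not any particular library's API): the state of n qubits really is 2^n amplitudes, and every gate acts on all of them at once, but a measurement only returns one outcome. Quantum algorithms work by arranging interference so that amplitude concentrates on useful answers before you measure; you never get to read out all 2^n "parallel" results.

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                               # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard gate

def apply_single_qubit(gate, qubit, state, n):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == qubit else np.eye(2))
    return op @ state

# Hadamard on every qubit: a superposition over all 2^n basis states.
for q in range(n):
    state = apply_single_qubit(H, q, state, n)
print(np.round(state, 3))                    # 8 equal amplitudes

# Measurement collapses to ONE outcome, sampled with probability |amplitude|^2.
probs = np.abs(state) ** 2
print(np.random.choice(2 ** n, p=probs))
```

Note that this classical simulation itself has to track all 2^n amplitudes, which is exactly why simulating large quantum circuits classically is hard.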

In other words, get off the cloud so nobody has your encrypted data which they will be able to crack in a few minutes five or ten years from now?
It depends on the algorithm you use to encrypt your data.

Only asymmetric cryptography is threatened. There is no realistic threat to symmetric encryption like AES.

If you are encrypting your cloud data with ed25519 or RSA, then yes, a quantum computer could theoretically someday crack them.
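Rough numbers on why the split falls where it does (standard textbook estimates, my own arithmetic, ignoring the huge engineering overheads): Shor's algorithm breaks RSA and elliptic-curve keys outright, while the best known generic quantum attack on symmetric ciphers is Grover's search, which only halves the effective key length.

```python
# Rough security arithmetic (textbook asymptotics, not a real resource count).
schemes = {
    "RSA-2048":  ("Shor",   "broken outright (polynomial-time factoring)"),
    "ECC P-256": ("Shor",   "broken outright (polynomial-time discrete log)"),
    "AES-128":   ("Grover", "~2^64 quantum operations (marginal)"),
    "AES-256":   ("Grover", "~2^128 quantum operations (still out of reach)"),
}
for name, (attack, effect) in schemes.items():
    print(f"{name:10s} best known quantum attack: {attack:6s} -> {effect}")
# Grover is only a quadratic speedup, so doubling the symmetric key length
# restores the margin; that's why AES-256 isn't the thing to worry about.
```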

> Only asymmetric cryptography is threatened.

aka everything we use daily

Quantum mechanics is a computational shortcut that makes our simulation cost-effective. Mass adoption of chips like these is going to make the particular situation we live in unprofitable for our hosts, resulting in the fiery and dramatic end of the world for us. Simulating ancestors is fun, but not after your cloud bill skyrockets. Thank you, Google, for bringing about the apocalypse.
More misleading random circuit sampling benchmarks. All it proves is that Google has built a quantum computer that does quantum things.
ELI5: what could I do if I had this chip at home?
Probably just research on quantum computers? I don't think it's big enough to let you solve any practical problems, but maybe someone can correct me
IF (and that's a big if) that's true, then it means they can factorize numbers into primes with this quantum computer and break encryption.
No, that's not what that means.

Not sure what you mean by the "that" when you say "if that's true", but there is nothing in this thread or by google that is anywhere close to breaking encryption.

How are you so sure? If something that takes years is completed in minutes, how is encryption safe?
Because the "something" in question is not decryption. It's actually specifically something with no useful result, just a benchmark.

Decryption with quantum computers is still likely decades away, as others have pointed out.

To be specific, the best known quantum factoring demonstration managed 15 = 3x5, and 35 could not be factored when attempted. Most experimental demonstrations have stopped in recent years because of how pointless it currently is.

The number of qubits required for a practical application of Shor's algorithm to break modern encryption is known, and it's around 2,500 logical qubits.

Willow has 105 physical qubits.

No, this quantum computer cannot factorize the large composite numbers that we use for modern RSA. Even for the numbers that it can factor, I don't think it will be faster than a decent classical computer.
[flagged]
Please don't do this here.
Got it, sorry !
So one of the interesting comparisons between quantum and classical computing in the video: 5 minutes vs 10^25 years. Are there any tradeoffs or specific cases in which quantum computing works, or is this generic across "all" computing use cases? If the latter, then this will change everything and would change the world.
There are only certain kinds of computing tasks which are amenable to an exponential speedup from quantum computing. For many classical algorithms the best you get from a quantum computer is an improvement by a factor of sqrt(N) by using Grover's algorithm.

The other tradeoff is that quantum computers are much noisier than classical computers. The error rate of classical computers is exceedingly low, to the extent that most programmers can go their entire career without even considering it as a possibility. But you can see from the figures in this post that even in a state of the art chip, the error rates are of order ~0.03--0.3%. Hopefully this will go down over time, but it's going to be a non-negligible aspect of quantum computing for the foreseeable future.

It is specific to cases where a quantum algorithm exists that provides speedup, it is not at all generic. The complexity class of interest is BQP: https://en.wikipedia.org/wiki/BQP

Also of note: P is in BQP, but it is not proven that BQP != P. Some problems, like factoring, have a known polynomial-time quantum algorithm while the best known classical algorithm is superpolynomial, which is where you see these massive speedups. But we don't know that there isn't an undiscovered polynomial-time classical factoring algorithm. It is a (widely believed) conjecture that there are hard problems solvable in BQP that are outside P.
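To put rough numbers on that factoring gap (asymptotic formulas only, constants and real hardware overheads ignored): the best known classical algorithm, the general number field sieve, runs in sub-exponential but superpolynomial time, while Shor's algorithm is polynomial in the bit length.

```python
import math

# Asymptotics only: GNFS heuristic cost vs Shor's polynomial gate count.
def gnfs_ops(bits):
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_gates(bits):
    return bits ** 3          # roughly (log N)^3 up to log factors

for bits in (512, 1024, 2048):
    print(f"{bits}-bit modulus: GNFS ~ 2^{math.log2(gnfs_ops(bits)):.0f} ops, "
          f"Shor ~ 2^{math.log2(shor_gates(bits)):.0f} gates")
```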

It's for a very, very, very narrow set of algorithms, AFAIUI.
105 qubits
Interesting; it might be time for me to load up a quantum simulator and start learning how to program these things.

I've pushed that off for a long time since I wasn't completely convinced that quantum computers actually worked, but I think I was wrong.

IBM has well structured learning material and a quantum simulator at https://learning.quantum.ibm.com/

Also, my alma mater made the Quantum Enigmas series, which is appropriate for high-school students (it's also interesting if you have no prior knowledge of quantum computing): https://www.usherbrooke.ca/iq/quantumenigmas/ (it also uses IBM's online learning platform)

Any chance people will actually start reversing SHA1 hashes in the next few years? Is there any quantum algorithm for reversing one-way functions like that? (I mention SHA1 because collisions can already be found for it.)
Can anyone comment on how this chip is built? What does the hardware look like?
Relatives were asking for a basic explainer. Here's a good one by Hannah Fry: https://youtu.be/1_gJp2uAjO0
Notice how the most interesting part, the image with their quantum computing roadmap, has too low a resolution to read the relevant text at the bottom. Come on Google.
This is a great technical achievement. It gives me some hope to see that the various companies are able to invest in what is still very basic science, even if it is mostly as vanity projects for advertising purposes.

Quantum computing will surely have amazing applications that we cannot even conceive of right now. The earliest and maybe most useful applications might be in material science and medicine.

I'm somewhat disappointed that most discussions here focus on cryptography or even cryptocurrencies. People will just switch to post-quantum algorithms and most likely still have decades left to do so. Almost all data we have isn't important enough that intercept-now-decrypt-later really matters, and if you think you have such data, switch now...

Breaking cryptography is the most boring and useless application (among actual applications) of quantum computing. It's purely adversarial, merely an inconsequential step in a pointless arms race that we'd love to stop, if only we could learn to trust each other. To focus on this really betrays a lack of imagination.

“Quantum computing will surely have amazing applications that we cannot even conceive of right now…”

As best I understand, it’s not clear yet whether quantum computing will ever have any practical applications.

Furthermore, there has already been a great deal of work identifying potential applications for a quantum computer, so I’d say we’ve got a fair idea of what you could do with one if it ever exists.

The first (and important enough) practical application (apart from breaking RSA) seems to be simulation of quantum systems. But this has some problems, since a lot of quantum models can be simulated with deep learning methods (which keep getting better) to a good approximation. Probably some quantum systems won't be amenable to (classical) deep learning; those might be the first candidates for application of quantum computers.
This is weird. I got this pop up halfway through reading:

> After reading this article, how has your perception of Google changed? Gotten better Gotten worse Stayed the same

How much support infrastructure does this thing need? (eg. Cryogenic cooling?) How big is a whole 'computer' and how much power draw?
If errors are getting corrected, doesn't that mean lower entropy? If so, where else is entropy increasing, if that is even a valid question to ask?
Probably the same as normal chips: waste heat?
What does it mean when they say that the computations are happening in multiverse? I didn't know we are that advanced already :)
I don't know why I got downvoted. "It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch." -- this is from the article itself.
Can't wait for Google to release a breakthrough paper in 5 years just for the authors to leave and build OpenQuant
There is so much skepticism about quantum computing that, instead of inflated marketing words, one should always start with what the biggest problems are, why they are still not solved, and then introduce the new improvement.

Otherwise there is no knowing if the accomplishment is really significant or not.

Especially if you consider how they choose their words and how those can be interpreted.

" Second, Willow performed a standard benchmark computation in under five minutes that would take one of today’s "

Standard benchmark in what sense? Well, it was chosen as a task where a quantum computer would have better performance.

I am not saying this is nothing. Maybe just use more reserved wording, e.g. "special quantum-oriented benchmark" or something.

When I think of standard benchmark, I am thinking more common scenarios, e.g. searching, sorting, matrix multiplication.

Seems to coincide with Google's annual quantum announcement, which has happened each fall since 2014
annual perf cycle probably
We need to seriously think if our systems/society are even remotely ready for this.
This comment reminded me of the TV show I've been recently watching on Netflix, Pantheon. It's about a different technical breakthrough (I don't want to put any spoilers), but it's also something that completely alters society, no security is able to deal with that new technology, first thing that happens is that the technology is weaponized... etc.

Idk enough about quantum computing to even understand this... but a technology that makes, say, AES or Blowfish suddenly trivial to crack would very likely change the world.

This is how I feel about drones.
They'll be banned for public use in the next 2-3 years. I'm not advocating for the ban, just saying it'll happen.
It's easier to make a "ghost drone" than a "ghost gun." Banning will not be feasible for bad actors.
Banning anything is never about stopping theoretical "bad actors", it's about reducing the number of normal people who pick up "bad thing" and making mere ownership a suspicious enough act to compel investigation. It's about making things harder. A gun on a black market will be more expensive in a place that makes guns in general illegal. Small time Timmy will not be able to afford a gun to back up his desire to rob the local liquor store.

Systemic design is usually about how you affect the margins

In order to evolve, forbidding evolution is the wrong path. Using, studying, and learning from new things and accumulating experience is the way to go.
Biological evolution occurs on the backs of millions of deaths.
and a few extinctions!
They're not. What's there to think about?
As if "thinking about it" will ever stop people from acting first.

I'm far more scared when tech-bros like Musk land on Mars and contaminate stuff we might not even be able to detect yet.

I'm not even remotely 'far more scared' about that. I think you are insufficiently scared about crypto being broken.
Quantum computers becoming available/powerful does not mean all cryptography will get broken. People who have actual knowledge and expertise are already busy working on various aspects of PQC.
>It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.

Makes sense, or doesn't it? What's your take on the multiverse theory?

So is it time to 100x key length on browsers yet or not?
About 8x just for key agreement and 40x for signatures. It's a lot. For key agreement it's worth it, and now about 1/3 of browsers in the wild use it. http://radar.cloudflare.com/adoption-and-u…
In what ways could Google monetize quantum computing?
Searching through an unstructured data set of size N on a classical computer takes O(N) time

but on a quantum computer, Grover's Algorithm allows such a search to be performed in O(N^0.5) time.

So quantum computing could bring us a future where, when you perform a Google search for a word, the web pages returned actually contain the word you searched for.
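For scale, some query-count arithmetic (my own numbers, and it ignores the very real problem that loading an unstructured classical index into a quantum computer can eat the whole advantage):

```python
import math

# Grover vs linear scan, counting oracle queries only.
for N in (10 ** 6, 10 ** 9, 10 ** 12):
    classical = N / 2                      # expected lookups in a linear scan
    grover = math.pi / 4 * math.sqrt(N)    # optimal number of Grover iterations
    print(f"N={N:.0e}: classical ~{classical:.1e}, Grover ~{grover:.1e}")
```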

> So Quantum Computing, could bring us a future where, when you perform a Google search for a word, the web pages returned actually contain the word you searched for.

Lol! I'm not gonna put a kagi plug here...

'It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.'

Wait... what? Google said this and not some fringe crackpot?

Every time this comes up, people say they're not actually useful for ML. Is that true? And if not, what would they be useful for?
No, a true quantum computer will not necessarily solve NP-complete (NPC) problems efficiently. Quantum algorithms like Grover’s provide quadratic speedups, but this is insufficient to turn exponential-time solutions into polynomial-time ones. While quantum computers excel in specific tasks (e.g., Shor’s algorithm for factoring), there’s no evidence they can solve all NP-complete problems efficiently.

Current complexity theory suggests that BQP, the class of problems solvable efficiently by quantum computers, does not encompass the NP-complete problems. Quantum computers may aid in approximations or heuristics for NPC problems but won’t fundamentally resolve them in polynomial time unless NP turns out to be contained in BQP, which remains unlikely.

Factoring. Reversing ECC operations. Decrypting all the data thought to be safely stored at rest in any non quantum resistant storage.

I do think AI algorithms could be built that quantum gates would be fast at, but I don’t have any ideas off the top of my head this morning. If you think of AI training as searching the space of computational complexity and quantum algorithms as accessing a superposition of search states, I would guess there’s an intersection. Google thinks so too - the lab is called Quantum AI.

breaking crypto, for one
In principle, NP-complete problems is my guess.
It is unknown whether quantum computing makes NP-complete problems easier to solve. There is a complexity class for problems that can be solved "efficiently" using quantum computing, called BQP. How BQP and NP are related is unknown. In particular, if an NP-complete problem were shown to be efficiently solvable with quantum computing (and thus in BQP), this open (and hard) research question would be solved (or at least half of it).

Note that BQP is not "efficient" in a real-world fashion, but for the theoretical study of quantum computing it's a good first guess.

AFAIK, which is not much, I believe it is problems that you can turn into a cycle. Right now we pull answers out of quantum computers at random, but typically do not know what the inputs were that got that answer. But if you can get the answers from the quantum computer to be cyclical, you can use that symmetry to get all the information you need.
what is this actually useful for?
Imagine your civilization develops quantum computing technology and it's for... advertising.

"What is their mission? Cure cancer? Eliminate poverty? Explore the universe? No, their goal: to sell another fucking Nissan." --Scott Galloway

That's how you monetize attention, digital consumption. If you aren't paying for it, you are the product being sold.
In the modern world, sometimes even paying for it doesn't give you guarantees... :(
Is this using IonQ, or is this in-house from Google?
They say in-house with their own US fab in the announcement.
QTUM
Contains MSTR and not GOOG...
Point is to get in before goog gets in
Now, this will really make other countries nervous. Basically, existing cryptography technology is in danger.
I don’t say this often, unlike most here, but this is actually a huge achievement in quantum computing.

Yet another example of why Google is essentially not going anywhere or ‘dying’, as many have been proclaiming these days.

But can it run Crysis?
I don't want to judge people by their cover, but I want to confess to having those feelings right now.

In this day and age, I feel an immediate sense of distrust to any technologist with the "Burning Man" aesthetic for lack of a better word. (which you can see in the author's wikipedia profile from an adjacent festival -> https://en.wikipedia.org/wiki/Hartmut_Neven, as well as in this blog itself with his wristbands and sunglasses -> https://youtu.be/l_KrC1mzd0g?si=HQdB3NSsLBPTSv-B&t=39)

In the 2000's, any embracement of alternative culture was a breath of fresh air for technologists - it showed they cared about the human element of society as much as the mathematics.

But nowadays, especially in a post-truthiness, post-COVID world, it comes off in a different way to me. Our world is now filled with quasi-scientific cults. From flat earthers to anti-vaxxers, to people focused on "healing crystals", to the resurgence of astrology.

I wouldn't be saying this about anyone in a more shall we say "classical" domain. As a technologist, your claims are pretty easily verifiable and testable, even on fuzzy areas like large language models.

But in the Quantum world? I immediately start to approach the author of this with distrust:

* He's writing about multiverses

* He's claiming a quantum performance for something that would take a classical computer septillions of years.

I'm a layman in this domain. If these were true, should they be front page news on CNN and the BBC? Or is this just how technology breakthroughs start (after all the Transformer paper wasn't)

But no matter what I just can't help but feel like the author's choices harm the credibility of the work. Before you downvote me, consider replying instead. I'm not defending feeling this way. I'm just explaining what I feel and why.

I don't share your mistrust of the aesthetic, but I think it's pretty natural to be skeptical of the out-group, so to speak, doubly so if you have no practical way of verifying their claims. At least you're honest about it!

I guess something to think about is that amongst a group like the "burners" there is huge variety in individual experience and skill. And even within a single human mind it's possible to have radically groundbreaking thoughts in one domain and simultaneously be a total crack-pot in another. Linus Pauling and the vitamin C thing comes to mind. There's no such thing as an average person!

I guess we'll see what the quantum experts have to say about this in the weeks to come =)

> I immediately start to approach the author of this with distrust:

> * He's writing about multiverses

> * He's claiming a quantum performance for something that would take a classical computer septillions of years.

> I'm a layman in this domain

I think your skepticism is well-founded. But as you learn more about the field, you learn what parts are marketing/hype bullshit, and what parts are not, and how to translate from the bullshit to the underlying facts.

IMO:

> He's writing about multiverses

The author's pet theory, no relevance to the actual science being done.

> He's claiming a quantum performance for something that would take a classical computer septillions of years.

The classical computer is running a very naive algorithm, basically brute-force. It is very easy to write a classical algorithm which is very slow. But still, in the field, it takes new state-of-the-art classical algorithms run on medium size clusters to get results that are on-par with recent quantum computers. Not even much better, just on-par.

> Or is this just how technology breakthroughs start (after all the Transformer paper wasn't)

You could say that. It's not truly a breakthrough, but it is one more medium-size step in a rapidly advancing field.

The hall of great scientists is packed with holders of strange beliefs. Half of Newton's writings were on religious speculation, alchemy, and the occult. One of Einstein's very favorite books was Blavatsky's "Isis Unveiled". Just about every key person in early QM was deep into the Vedas. Kary Mullis was an AIDS denialist, and questioned the utility of his own test as a virus detector. If you really think about it, you will see that this phenomenon arises more from necessity than coincidence.
I know Hartmut Neven personally and professionally, and have for decades. He's not anything like you claim he is. Attacking him for wearing a wristband? That's an ad hominem attack, and not worthy of my time to counter you on.

The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.

Yes, Hartmut has a style, yes, he enjoys his lifestyle, no, he's not a quack. You don't have to believe me, and I don't expect that you will, but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.

Your comment here says far more about you than it says about Hartmut Neven.

> I know Hartmut Neven personally and professionally, and have for decades

I don't want to put you on the spot too much, but can you speak to why he included the part about many-worlds in this blog post?

I don't know enough about Google to say if maybe someone else less technical wrote that, or if he is being pressured to put sci-fi sounding terms in his posts, or if he believes Google's quantum computer is actually testing many-worlds, or some other reason I can't think of.

> Attacking him for wearing a wristband? That's an ad hominem attack, and not worthy of my time to counter you on.

I picked my words very carefully and I would appreciate if you responded to what I said, not what you think I implied.

I specifically called out - I'm having feelings of bias. That in a field full of quack science and overpromises and underdelivery, I am extraordinarily suspicious of anyone who I feel might be associated with a shall we say "less than rigorous relationship with scientific accuracy". This person's aesthetic reminds me of this.

> The fact is that "Burners" are everywhere, nothing about Burning Man means someone is automatically a quack. Your distrust seems misplaced and colored by your own personal biases. The list of prominent people in tech that are also "burners" would likely shock you. I doubt you've ever been to Burning Man, but you're going to judge people who have? Maybe you're just feeling a little bit too "square" and are threatened by people who live differently than you do.

You couldn't be more wrong. I'm a repeat Burner throughout the 2000's (though it's been a decade), and I've been to a dozen regional Burner events. I know many Burners both in the tech industry and outside of it.

So I actually speak with some experience. I know wonderful people who are purely artists and are not scientifically/technologically inclined - and they're great. I also know deep technologists for whom Burning man is purely an aesthetic preference - a costume not an outfit. Something to pretend to be for a little while but that otherwise has no bearing on their outside life.

And I unfortunately know those whose brainrot ends up intertwining: crypto evangelists who find healing crystals just as groundbreaking as the blockchain. It's this latter category that I am the most suspicious of, and what worries me when I see it reflected in the external presentation of someone positioned as an authoritative leader in the quantum computing domain.

I led with an acknowledgement that I am judging a book by its cover, which one ought never to do. But I think it is worth pointing out, because respectability in a cutting-edge field is important, lest you end up achieving technological breakthroughs that don't actually change society at all (as already happened with Google Glass).

> You don't have to believe me, and I don't expect that you will,

Why would you expect that I wouldn't?

> but I've talked at length with him about his work, and about a great many other topics, and he is not as you think he is.

That's fantastic to hear! You have direct evidence contradicting the assumptions generated by my first impression. This is all that matters, and all you had to say.

>who I feel

You're basing things on your feelings, not personally knowing the person. I know you've alluded to that, but seriously, just stop.

> I worry when I see a person presented as an authoritative leader in the Quantum Computing domain demonstrate in their external presentation.

I'm not sure why how someone dresses makes you worry, especially since you aren't even involved in QC. Stop worrying about things you can't control, especially someone else's appearance. Has Burning Man taught you nothing? If you think it taught you to be biased towards someone based on their appearance, then I think you completely missed the point.

>as already happened with Google Glass

You may not know this, but he was tapped to lead the Google Glass project, and quickly got out of it. He felt that the silicon at the time was not capable of producing the results people wanted in the form-factor they were expecting. He was right. Of course tech has improved since then and better VR/AR glasses in a convenient form factor are just now starting to be a thing, but Google Glass is long since shuttered.

He didn't just come out of nowhere, he's been involved in actual AI (not LLMs) for decades. His company was bought by Google and is the basis for their computer vision systems, which is how he ended up at Google.

As for you supposing he's into "healing crystals" or any other wooo nonsense simply based on how he dresses, I have never known him to talk about such things at all, in all our conversations throughout the decades.

> This person's aesthetic reminds me of this.

You are barking up the wrong tree, and you should maybe tone down your judginess of others. I have news for you - you can't tell a book by its cover, but you sure are trying to. You just come off as being jealous that someone can have fun and also be a pioneer in QC. No doubt any person at the top of their field has plenty of haters, based on nothing more than "he doesn't dress like I expect him to".

The quantum performance thing is real, but the random circuit sampling problem they are using as the benchmark here is itself just running a quantum circuit.

So really what is being claimed is that classical computers can't easily simulate quantum ones. But is that really surprising?

What would be surprising would be that kind of speedup vs classical on some kind of general optimization algorithm. I don't think that is what they are claiming though, even if it does kind of seem like it's being presented that way.

[dead]
Here is a free preview (lecture 6): https://youtu.be/6rf-hjyNl4U
[flagged]
Nice. I like how HN now has AI summary bots.
[flagged]
Generated comments are against HN guidelines.
[flagged]
I bet Vimeo videos will still chug on it
Is anyone fine-tuning llama to write Q#? I feel LLMs can be a helpful tool in learning how to code quantum systems.