Genuine question - why is Bitcoin not collapsing upon this news? Surely the big money that controls Bitcoin has a high enough IQ to extrapolate into the future and come to a decision on the inevitable implications.
The critics claim the noise collapses quantum states so quickly that it's not possible to make full use of the quantum effects. The burden of proof is on the chip makers. I think they have not convinced the critics yet.
Why use the subtext? In the peer review (pp. 30-31), the authors
> acknowledge [that] Hamiltonian learning of an actual physical system has not been performed [and] have therefore further de-emphasized the quantum advantage claim in the revised manuscript
https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
This excerpt misses the wider point of the paper. The paragraph immediately following the one you quote still does make claims of quantum advantage:
"Our second order OTOC circuits introduce a
minimal structure while remaining sufficiently generic that circumvents this challenge by
exploiting the exponential advantage provided by including time inversion in the
measurement protocol, see arXiv:2407.07754."
The advantage claimed by the paper isn't about Hamiltonian learning (i.e., extracting model parameters from observational data), but instead about computing the expectation value of a particular observable. They acknowledge that the advantage isn't provable (even the advantage of Shor's algorithm isn't provable), but they argue that there likely is an advantage.
Shor’s algorithm’s advantage isn’t proven, but a proof that integer factorization doesn’t admit a classical algorithm faster than O((log N)^3) could one day be found. The same applies to Google’s artificial problem.
An analogy which is closer to Google's experiment: measuring versus calculating the energy gaps in benzene to an arbitrary accuracy.
It is faster to measure those with state of the art "quantum tools" but that does not improve our understanding of other aromatic molecules.
(we may still get some insights about anthracene however)
Google's advantage can be satisfactorily summarised as "not having to write the problem in terms of a classical basis" -- the overhead of having to represent qubits as bits.
I do believe that whatever "informational advantage" we can get from these experiments can be used to improve classical calculations.
E.g., in the arXiv paper linked above they talk about provably efficient shallow classical shadows.
So, "verifiable" here means "we ran it twice and got the same result"?
> Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result.
Normally, where I come from anyway, verifiability would refer to the ability to prove to a classical skeptic that the quantum device did what it's supposed to, cf. e.g. Mahadev (https://arxiv.org/abs/1804.01082), Aaronson (https://arxiv.org/abs/2209.06930), in a strong, theoretical, sense. And that's indeed relevant in the context of proving advantage, as the earlier RCS experiments lacked that ability, so “demonstrating verifiable quantum advantage” would be quite the step forward. That doesn't appear to be what they did at all though. Indeed, the paper appears to barely touch on verifiability at all. And – unlike the press release – it doesn't claim to achieve advantage either; only to indicate “a viable path towards” it.
It is not very clear from the text, and from what I can tell there is no "verifiability" concept in the papers they link.
I think what they are trying to do is to contrast these to previous quantum advantage experiments in the following sense.
The previous experiments involve sampling from some distribution, which is believed to be classically hard. However, it is a non-trivial question whether you succeed or fail in this task. Having a perfect sampler from the same distribution won't allow you to easily verify the samples.
On the other hand, these experiments involve measuring some observable, i.e., the output is just a number and you could compare it to the value obtained in a different way (on a different or the same computer, or even some analog experimental system).
Note that these observables are expectation values of the samples, but in the previous experiments, since the circuits are random, all the expectation values are very close to zero and it is impossible to actually resolve them from the experiment.
Disclaimer: this is my speculation about what they mean because they didn't explain it anywhere from what I can see.
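To make that last point concrete, here is a toy numpy sketch (my own illustration, not anything from the paper): an expectation value can only be resolved if its magnitude beats the shot noise, which falls off as 1/sqrt(shots), and for random circuits the true values are exponentially small.

```python
import numpy as np

rng = np.random.default_rng(0)
shots = 100_000

# a "structured-circuit-like" signal vs. a "random-circuit-like" one
for true_ev in (0.05, 1e-6):
    p_plus = (1 + true_ev) / 2                 # probability of measuring +1
    samples = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
    estimate = samples.mean()
    shot_noise = 1 / np.sqrt(shots)
    print(f"true={true_ev:.0e}  estimate={estimate:+.4f}  "
          f"noise~{shot_noise:.4f}  SNR~{abs(true_ev) / shot_noise:.1f}")
```

With 10^5 shots the 0.05 signal is resolved easily, while the 10^-6 one stays buried in noise no matter how good the hardware is.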
At least they claim that: «Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result. This repeatable, beyond-classical computation is the basis for scalable verification.» (emph. mine)
But apparently they haven't demonstrated the actual portability between two different quantum computers.
A key result here is the first demonstration of verifiable quantum advantage; from TFA
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
"(1) The observable can be experimentally measured with the proper accuracy, in our case with an SNR above unity. More formally, the observable is in the bounded-error quantum polynomial-time (BQP) class.
(2) The observable lies beyond the reach of both exact classical simulation and heuristic methods that trade accuracy for efficiency.
[...]
(3) The observable should yield practically relevant information about the quantum system.
[...] we have made progress towards (1) and (2). Moreover, a proof-of-principle for (3) is demonstrated with a dynamic learning problem."
So none of the criteria they define for "practical quantum advantage" are fully met as far as I understand it.
The key word is "practical" - you can get quantum advantage from precisely probing a quantum system with enough coherent qubits that it would be intractable on a classical computer. But that's exactly because a quantum computer is a quantum system; and because of superposition and entanglement, a linear increase in the number of qubits means an exponential increase in computational complexity for a classical simulation. So if you're able to implement and probe a quantum system of sufficient complexity (in this case ~40 qubits rather than the thousands it would take for Shor's algorithm), that is ipso facto "quantum advantage".
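To put rough numbers on that exponential blow-up, here is the back-of-the-envelope for brute-force state-vector simulation (not the only classical approach; the actual comparisons in the paper use tensor-network methods, but it gives a feel for the scaling):

```python
# a full state vector over n qubits stores 2**n complex amplitudes
bytes_per_amplitude = 16  # complex128
for n in (30, 40, 50):
    mem_tib = (2 ** n) * bytes_per_amplitude / 2 ** 40
    print(f"{n} qubits -> {mem_tib:,.2f} TiB of state vector")
```

30 qubits fits on a laptop, ~40 qubits already needs terabytes, and 50 is hopeless to store exactly.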
It's still an impressive engineering feat because of the difficulty in maintaining coherence in the qubits with a precisely programmable gate structure that operates on them, but as far as I can see (and I've just scanned the paper and had a couple of drinks this evening) what it really means is that they've found a way to reliably implement in hardware a quantum system that they can accurately extract information from in a way that would be intractable to simulate on classical machines.
I might well be missing some subtleties because of aforementioned reasons and I'm no expert, but it seems like the press release is unsurprisingly in the grayzone between corporate hype and outright deceit (which as we know is a large and constantly expanding multi-dimensional grayzone of heretofore unimagined fractal shades of gray)
The quantum algorithm that would break certain kinds of public key cryptography schemes (not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers) will take days to weeks to break a single key [0]. This is another reason why we will have plenty of warning before quantum computing causes any major disruptions to daily life.
What I would start worrying about is the security of things like messages sent via end-to-end encrypted services like WhatsApp and Signal. Intercepted messages can be saved now and decrypted any time in the future, so it's better to switch to more robust cryptography sooner rather than later. Signal has taken steps in this direction recently: https://arstechnica.com/security/2025/10/why-signals-post-qu....
Usually, the crypto should have Forward Secrecy already even without being PQ-safe (e.g., via https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm), so in practice the attacker would need to break many successive session keys, which rotate every time a new message is sent.
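Roughly, the symmetric half of that ratchet is just a KDF chain. Here is a minimal sketch of the shape (not Signal's actual code): each message key comes off the chain and the chain key is immediately advanced through a one-way function, so recovering one message key doesn't hand you its neighbours.

```python
import hashlib
import hmac

def kdf(key: bytes, label: bytes) -> bytes:
    # one-way step: knowing the output doesn't reveal the input key
    return hmac.new(key, label, hashlib.sha256).digest()

chain_key = b"\x00" * 32  # in reality this comes out of a DH handshake
for i in range(3):
    message_key = kdf(chain_key, b"message")  # encrypts message i, then gets deleted
    chain_key = kdf(chain_key, b"chain")      # ratchet forward; old chain key is erased
    print(i, message_key.hex()[:16])
```

The DH ratchet layered on top additionally heals the chain if a chain key ever does leak.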
> not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers
Um, what? Shor’s algorithm can take the public key of a wallet (present on any outgoing transaction in the ledger) and produce its private key. So now you can hijack any wallet that has transferred any Bitcoin. Notably only one successful run of the algorithm is needed per wallet, so you could just pick a big one if it takes weeks.
It probably wouldn’t help you mine in practice, sure. Technically it would give you better asymptotic mining performance (via Grover’s algorithm) but almost certainly worse in practice for the foreseeable future.
Quantum is a known threat. There is enough time to fix it. Folks are working on the fixes.
Cryptocurrencies would be the last thing I worry about w.r.t Quantum crypto attacks. Everything would be broken. Think banks, brokerage accounts, email, text messages - everything.
I think that’s backwards: most of the stuff you mentioned is using TLS and can switch to post-quantum algorithms with a config change, and do so incrementally with no user-visible impact - e.g. right now I’m already using PQC for many sites and about half of the traffic Cloudflare sees is using PQC:
In contrast, cryptocurrencies have to upgrade the entire network all at once or it’s effectively a painful fork. That effort appears to just be getting talked about now, without even starting to discuss timing:
> In contrast, cryptocurrencies have to upgrade the entire network all at once or it’s effectively a painful fork
Bitcoin is much more centralized than the popular imagination would have you believe, both in terms of the small number of controlling interests behind the majority of the transaction capacity, and just as importantly the shared open source software running those nodes. Moreover, the economic incentives for the switch are strongly, perhaps even perfectly, aligned among the vast majority of node operators. Bitcoin is already dangerously close to, if not beyond, the possibility of a successful Byzantine attack; it just doesn't happen precisely because of the incentive alignment--if you're that large, you don't want to undermine trust in the network, and you're an easy target for civil punishment.
(I know that you understand this, but just highlighting it)
In fairness, the original Bitcoin white paper referenced both (1) distributed compute and (2) the self-defeating nature of a Byzantine attack as the means of protection. It's not as though (2) is just lucky happenstance.
I definitely agree that the major players will want to move forward, but it seems like there's a legacy system kind of problem where it can stall if you get some slackers who either don't update (what happens to cold wallets?) or if some group has ideological disagreements about the solution. None of that is insurmountable, of course, but it seems like it has to be slower than something where you personally can upgrade your HTTPS servers to support PQC any time you want without needing to coordinate with anyone else on the internet.
I can't remember which chain it was but I'm sure I've seen stats on in-progress rollouts of protocol changes where the network took something like weeks or months to all get upgraded to the new version. You can design for tolerating both for a time.
Clients need to be updated, too, since what's happening is that the server and client need to agree on a common algorithm they both support, but that's been in progress for years and support is now pretty widespread in the current versions of most clients.
Stragglers are a problem, of course, but that's why I thought this would be a harder problem for Bitcoin: for me to use PQC for HTTPS, only my browser and the server need to support it and past connections don't matter, whereas for a blockchain you need to upgrade the entire network to support it for new transactions _and_ have some kind of data migration for all of the existing data. I don't think that's insurmountable – Bitcoin is rather famously not as decentralized as the marketing would have you believe — but it seems like a harder level of coordination.
The world has already migrated through so many past now-insecure cryptography setups. If quantum computers start breaking things, people will transition to more secure systems.
In HTTPS for example, the server and client must agree on how to communicate, and we’ve already had to deprecate older, now-insecure cryptography standards. More options get added, and old ones will have to be deprecated. This isn’t a new thing, just maybe some cryptographic schemes will get rotated out earlier than expected.
> If quantum computers start breaking things, people will transition to more secure systems.
That's not really the issue; the really interesting part is the existing encrypted information that three-letter agencies have likely dutifully stored in a vault and that is going to become readable. A lot of that communication was made under the assumption that it's secure.
Yeah, all the encrypted messages collected when illegal markets got seized will be decrypted. Many of them use RSA-2048, so by 2030 it's going to be broken according to the timelines.
It's actually something we will notice. Arrests will be announced.
Every time I mention quantum computing as a threat to crypto (which I have been for years), I get downvoted to oblivion. I guess we have a lot of HODLers here. A bet on crypto is a bet against quantum computing.
I haven't once even thought of investing in crypto, and think that the technology is mostly useless and proof of work schemes should be banned on environmental grounds.
Even so, I don't agree that quantum is a threat to crypto. There are already well known quantum-resistant encryption schemes being deployed live in browsers, today. Crypto can just start adopting one of these schemes today, and we're still probably decades away from a QC that can break the kinds of keys that crypto security uses. The transition will be slightly more complex for proof of work schemes, since those typically have dedicated hardware - but other types of crypto coins can switch in months, most likely, if they decide to, at least by offering new wallet types or something.
>There are already well known quantum-resistant encryption schemes being deployed live in browsers, today. Crypto can just start adopting one of these schemes today, and we're still probably decades away from a QC that can factor the kinds of primes that crypto security uses.
It's very strange that some people act like switching over to a post quantum cryptography scheme is trivial. Did you watch the video I replied to, which is a talk by an actual quantum computing researcher?
I haven't seen anyone post any progress on factoring large numbers with quantum computers in a while. Annealers won't do it efficiently, but they probably still hold the record anyway, for a relatively small number you could just as well do on classical hardware. Gate model machines with enough qubits to do it are still ages off. Bitcoin should find a way to transition to a post-quantum algorithm, but that's about it. As long as they do it before anyone has a big enough QPU, they're fine, and nobody is even close, it seems.
Unless advances in QC could rewrite the blockchain, there's not much to worry about. If the crypto algorithms are compromised, your coins are pretty much frozen on the chain until new algorithms are implemented. Are you arguing QC makes signatures/verification/mining impossible?
Think twice. Everyone who hosts the blockchain has invested in crypto, at least in hardware costs, so they wouldn't decide to undermine it. The exception is the small group of people who own a quantum computer, and I don't expect that group to be >50% of the people hosting the blockchain.
You don't need >50% of bad actors to compromise the blockchain, but rather >50% of the total hashing power. This could very well be achievable by a small group of people with QC at some point.
"surpassing even the fastest classical supercomputers (13,000x faster)"
"Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result."
"The results on our quantum computer matched those of traditional NMR, and revealed information not usually available from NMR, which is a crucial validation of our approach."
It certainly seems like this time, there finally is a real advantage?
I’ve only skimmed the paper but it seems like the “information not usually available” from NMR is the Jacobian and Hessian of the Hamiltonian of the system.
So basically you’re able to go directly from running the quantum experiment to being able to simulate the dynamics of the underlying system, because the Jacobian and Hessian are the first and second partial derivatives of the system with respect to all of its parameters in matrix form.
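In other words (my own toy illustration, not anything from the paper), once you have those matrices you can build a second-order local model of an observable around the measured parameter point and explore it classically, without re-running the experiment for every perturbation:

```python
import numpy as np

def local_model(f0, jac, hess, dtheta):
    """Second-order Taylor expansion of f(theta0 + dtheta)."""
    return f0 + jac @ dtheta + 0.5 * dtheta @ hess @ dtheta

# toy stand-ins for experimentally extracted quantities
f0 = 1.0
jac = np.array([0.3, -0.1])
hess = np.array([[0.05, 0.01],
                 [0.01, 0.02]])
print(local_model(f0, jac, hess, np.array([0.1, -0.2])))
```

Whether such a local model captures the real dynamics well is a separate question, of course.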
That might take a few days or weeks; it seems like they put some decent effort into it this time. From skimming the supplement, I wouldn't be surprised if the speedup is only 100x though. That's still significant, but clearly less than they claim. For example, I am not entirely convinced that 20% flops efficiency is really the upper limit or that the slicing overhead of 5x is really needed here.
This is quite different from their previous random circuit sampling (RCS) experiments that have made headlines a few times in the past. The key difference from an applied standpoint is that the output of RCS is a random bitstring which is different every time you run the algorithm. These bitstrings are not reproducible, and also not particularly interesting, except for the fact that only a quantum computer can generate them efficiently.
The new experiment generates the same result every time you run it (after a small amount of averaging). It also involves running a much more structured circuit (as opposed to a random circuit), so all-in-all, the result is much more 'under control.'
As a cherry on top, the output has some connection to molecular spectroscopy. It still isn't that useful at this scale, but it is much more like the kind of thing you would hope to use a quantum computer for someday (and certainly more useful than generating random bitstrings).
This is not the RCS problem or indeed anything from number theory.
The announcement is about an algorithm which they are calling Quantum Echoes, where you set up the experiment, perturb one of the qbits and observe the “echoes” through the rest of the system.
They use it to replicate a classical experiment in chemistry done using nuclear magnetic resonance (NMR). They say they are able to reproduce the results of that conventional experiment and gather additional data which is unavailable via conventional means.
It’s really hard to parse both the announcement (too much hyperbole) and the article (too technical), however as I understand it, this is what quantum computing should be good at. Not at making classical algorithms faster, but at simulating quantum physics experiments. This is a good direction and I find it more plausible than “we factored numbers faster”.
This SHOULD be the main application of the tech. It’s hard to tell if it is because authors can’t do science communication, they can apparently only do sales.
My understanding is that this one is "verifiable" which means you get a reproducible result (i.e. consistent result comes out of a computation that would take much longer to do classically).
Non-verifiable computations include things like pulling from a hard-to-compute probability distribution (i.e. random number generator) where it is faster, but the result is inherently not the same each time.
This is as would be expected if it were real. Advantage isn't a black and white thing, because the comparison starts against 'any task done the best we know how to do using the most resources we happen to be willing to throw at it, even if we don't have a means to check that the output was correct', and ends at 'useful output you can formally verify where you have a strong reason to believe no classical algorithm would be effective.'
I’m no expert either, so I hope one can corroborate or correct me…
My understanding though is that these steps are really the very beginning. Using a quantum computer with quantum algorithms to prove that it’s possible.
Once proven (which maybe this article is claiming?), the next step is actually creating a computer with enough qubits and entanglable pairs and low enough error rates that it can be used to solve larger problems at scale.
Because my current understanding with claims like these is that they are likely true, but in the tiny.
It’d be like saying “I have a new algorithm for factoring large numbers that is 10000x faster than the current best, but it can only factor numbers up to 103.”
It's what happens when companies are driven by profit rather than by making the accurate scientific statements that reputations are built on and further research funding is predicated on.
Hyperbolic claims like this are for shareholders who aren't qualified to judge for themselves because they're interested in future money and not actual understanding. This is what happens when you delegate science to corporations.
Quantum computing is mostly a scheme to get more grants/funding for quantum research, it doesn’t have any real world application and most likely won’t have any in the foreseeable future.
Like, verifiable means I can run 3x3 on any quantum computer and always get 9 as the result - but quantum computers can't even do that.
Don’t get me wrong; it’s very cool theoretical research, and more power to the scientist.
But thinking it will have any impact on the real world in our lifetime is a pipe dream imho
FWIU a 6-stage RISC processor is sufficient to run Linux.
Things like CUDA-Q may be faster on classical computers than on quantum computers for forever; though what CUDA-Q solves for is also an optimization problem?
the quantum chip iirc only runs a subset of algorithms due to limited gates implementable on the quantum chip; is the quantum chip a universal computer?
This is more the case for D-Wave's machines which are specialised for quantum annealing, allowing for greater numbers of qubits. Google and most other major hardware players make chips which can implement a universal quantum gate set allowing for arbitrary quantum operations to be performed (in principle). The issue with these chips is that quantum error correction is not fully implemented yet so computations are effectively time-limited due to the build up of noise from imperfect implementation and finite-temperature effects. A big part of current punts at quantum advantage is figuring out how to squeeze every last drop out of these currently faulty devices.
The article states: “...13,000 times faster on Willow than the best classical algorithm on one of the world’s fastest supercomputers...”
I agree it's not very precise without knowing which of the world's fastest supercomputers they're talking about, but there was no need to leave out this tidbit.
I bought a hundred shares of D-Wave about six months ago, betting on this fact.
A part of me thinks quantum computing is mostly bullshit, but I would be very happy to be wrong. I should probably learn Q# or something just to be sure.
The rule of thumb is that a working quantum computer that can run Grover's algorithm reduces the security of a symmetric cipher to half of its key size. That is, AES-128 should be considered to have a 64 bit key size, which is why it's not considered "quantum-safe."
Edit: An effective key space of 2^64 is not secure according to modern-day standards. It was secure in the days of DES.
AES-128 is quantum safe (more or less). 64-bit security in the classical domain isn't safe because you can parallelize across 2^20 computers trivially. Grover gives you 2^64 AES operations on a quantum computer (probably ~2^70 gates or so before error correction, or ~2^90 after error correction) that can't be parallelized efficiently. AES-128 is secure for the next century (but you might as well switch to AES-256 because why not).
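To put rough numbers on that (my own back-of-the-envelope, assuming one Grover iteration costs roughly one AES evaluation and ignoring error-correction overhead):

```python
import math

k = 128
serial_iterations = (math.pi / 4) * math.sqrt(2 ** k)  # ~1.4e19 sequential evaluations
print(f"serial Grover iterations: {serial_iterations:.2e}")

for machines in (1, 10 ** 6):
    per_machine = serial_iterations / math.sqrt(machines)  # only a sqrt(M) parallel speedup
    print(f"{machines:>8} quantum machines -> {per_machine:.2e} iterations each")
```

Classical brute force parallelizes linearly across machines; Grover only parallelizes as the square root, which is why 2^64 quantum operations is far less alarming than 2^64 classical ones.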
> Quantum computing-enhanced NMR could become a powerful tool in drug discovery, helping determine how potential medicines bind to their targets, or in materials science for characterizing the molecular structure of new materials like polymers, battery components or even the materials that comprise our quantum bits (qubits)
There is a section in the article about future real world application, but I feel like these articles about quantum "breakthroughs" are almost always deliberately packed with abstruse language. As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
> As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
not really. but that doesn't mean it's not worth striving for. Breakthrough to commercial application are notoriously hard to predict. The only way to find out is to keep pushing at the frontier.
The last time I heard a similar news from Google, it turned out they were solving a quantum phenomenon using a quantum phenomenon. It seems to be the same pattern here. Not to say it's not progress, but kind of feels like overhyped.
Idk. I get this is the median take across many comments, I don’t mean to be disagreeable with a crowd. But I don’t know why using quantum phenomena is a sign something’s off. It’s a quantum computer! But I know something is off with this take if it didn’t strike you that way.
To me, it matters because it's a sign that it might not be particularly transferable as a method of computation.
A wind tunnel is a great tool for solving aerodynamics and fluid flow problems, more efficiently than a typical computer. But we don't call it a wind-computer, because it's not a useful tool outside of that narrow domain.
The promise of quantum computing is that it can solve useful problems outside the quantum realm - like breaking traditional encryption.
Good point, I guess that's why I find this comments section boring and not representative of the HN I've known for 16 years: there's a sort of half-remembering it wasn't powerful enough to do something plainly and obviously useful yesterday.
Then, we ignore today, and launder that into a gish-gallop of free-association, torturing the meaning of words to shoehorn in the idea that all the science has it wrong and, inter alia, the quantum computer uses quantum phenomena to compute so it might be a fake useless computer, like a wind tunnel. shrugs
It's a really unpleasant thing to read, reminds me of the local art school dropout hanging on my ear about crypto at the bar at 3 am in 2013.
I get that's all people have to reach for, but personally, I'd rather not inflict my free-association on the world when I'm aware I'm half-understanding, fixated on the past when discussing something current, and I can't explain the idea I have as something concrete and understandable even when I'm using technical terms.
I know what you're talking about, but I think you happened to pick a bad example to pick on here. This wind tunnel analogy resembles a common criticism of the prior experiments that were done by Google and others over the last few years. Those experiments ran highly unstructured, arbitrary circuits that don't compute anything useful. They hardly resembled the kind of results that you would expect from a general purpose, programmable computer. It's a valid criticism, and it seems like the above commenter came to this conclusion on their own.
To that comment, the present result is a step up from these older experiments in that they
a) Run a more structured circuit
b) Use the device to compute something reproducible (as opposed to sampling randomly from a certain probability distribution)
c) The circuits go toward simulating a physical system of real-world relevance to chemistry.
Now you might say that even c) is just a quantum computer simulating another quantum thing. All I'll say is that if you would only be convinced by a quantum computer factoring a large number, don't hold your breath: https://algassert.com/post/2500
If you assume everyone else is wrong from the start, then you won't like the comments, sure.
And what the hell are you calling a gish gallop? They wrote four sentences explaining a single simple argument. If you design a way to make qubits emulate particle interactions, that's a useful tool, but it's not what people normally think of as a "computer".
And whatever you're saying about anyone claiming "all the science has it wrong" is an argument that only exists inside your own head.
The fact an earlier demo was an RNG also, and this demo uses quantum phenomena (qubits) to look at quantum phenomena (molecules) does not mean quantum computing can't be a useful computer, a la a wind tunnel.
It's not that I don't "agree with it", there's nothing to agree with. "Not even wrong", in the Pauli sense.
I'd advise that when you're conjuring thoughts in other people's heads to give them meaning, so you can go full gloves-off and tell them off for the thoughts you imagine were in their head and motivated their contributions to this forum, you pause and consider a bit more. Especially in the context of where you're encountering the behavior: an online discussion forum, say, versus a dinner party where you're observing a heated discussion among your children.
Of course it doesn't mean a quantum computer is restricted to that.
But if that's the only realm where anything close to supremacy has been demonstrated, being skeptical and setting your standards higher is reasonable. Not at all "not even wrong".
> I'd advise that when you're conjuring thoughts in other people's heads
Are you accusing me of strawmanning? If you think people are being "not even wrong" then I didn't strawman you at all, I accurately described your position. Your strawman about science was the only one in this comment thread. And again there was no gish gallop, and I hope if nothing else you double check the definition of that term or something.
In the last sentence of the abstract you will find:
"These results ... indicate a viable path to practical quantum advantage."
And in the conclusions:
"Although the random circuits used in the dynamic learning demonstration remain a toy model for Hamiltonians that are of practical relevance, the scheme is readily applicable to real physical systems."
So the press release is a little over-hyped. But this is real progress nonetheless (assuming the results actually hold up).
[UPDATE] It should be noted that this is still a very long way away from cracking RSA. That requires quantum error correction, which this work doesn't address at all. This work is in a completely different regime of quantum computing, looking for practical applications that use a quantum computer to simulate a physical quantum system faster than a classical computer can. The hardware improvements that produced progress in this area might be applicable to QEC some day, but this is not direct progress towards implementing Shor's algorithm at all. So your crypto is still safe for the time being.
An MBA, an engineer and a quantum computing physicist check into a hotel. Middle of the night, a small fire starts up on their floor.
The MBA wakes up, sees the fire, sees a fire extinguisher in the corner of the room, empties the fire extinguisher to put out the fire, then goes back to sleep.
The engineer wakes up, sees the fire, sees the fire extinguisher, estimates the extent of the fire, determines the exact amount of foam required to put it out including a reasonable tolerance, dispenses exactly that amount to put out the fire, and then, satisfied that there is enough left in case of another fire, goes back to sleep.
The quantum computing physicist wakes up, sees the fire, observes the fire extinguisher, determines that there is a viable path to practical fire extinguishment, and goes back to sleep.
Not quite sure why all the responses here are so cynical. I mean, it's a genuinely difficult set of problems, so of course the first steps will be small. Today's computers are the result of 80 astonishing years of sustained innovation by millions of brilliant people.
Even as a Googler I can find plenty of reasons to be cynical about Google (many involving AI), but the quantum computing research lab is not one of them. It's actual scientific research, funded (I assume) mostly out of advertising dollars, and it's not building something socially problematic. So why all the grief?
I completed my degree in computer science at age 22 - at that time Shor had just published his famous algorithm and the industry press was filled with articles on how quantum computing was just a few years away with just a few technical hurdles yet to be solved.
I turned 50 years old this year, forgive an old man a few chuckles.
Quantum computing hardware is still at its infancy.
The problem is not with these papers (or at least not with ones like this one) but with how they are reported. If quantum computing is going to succeed, it needs to do the baby steps before it can do the big steps, and at the current rate the big leaps are probably decades away. There is nothing wrong with that; it's a hard problem and it's going to take time. But then the press comes in and reports that quantum computing is going to run a marathon tomorrow, which is obviously not true and confuses everyone.
I don’t see why bitcoin wouldn’t update its software in such a case. The majority of miners just need to agree. But why wouldn’t they, if the alternative is going to zero?
How could updating the software possibly make a difference here? If the encryption is cracked, then who is to say who owns which Bitcoin? As soon as I try to transfer any coin that I own, I expose my public key, your "Quantum Computer" cracks it, and you offer a competing transaction with a higher fee to send the Bitcoin to your slush fund.
No amount of software fixes can update this. In theory once an attack becomes feasible on the horizon they could update to post-quantum encryption and offer the ability to transfer from old-style addresses to new-style addresses, but this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Fortunately this will never actually happen. It's way more likely that ECDSA is broken by mundane means (better stochastic approaches most likely) than quantum computing being a factor.
> this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Any rational economic actor would participate in a post-quantum hard fork because the alternative is losing all their money.
If this was a company with a $2 trillion market cap there'd be no question they'd move heaven-and-earth to prevent the stock from going to zero.
Y2K only cost $500 billion[1] adjusted for inflation and that required updating essentially every computer on Earth.
> would require all holders (not miners) to actively update their wallets. Basically infeasible.
It doesn't require all holders to update their wallets. Some people would fail to do so and lose their money. That doesn't mean the rest of the network can't do anything to save themselves. Most people use hosted wallets like Coinbase these days anyway, and Coinbase would certainly be on top of things.
Also, you don't need to break ECDSA to break BTC. You could also do it by breaking mining. The block header has a 32-bit nonce at the very end. My brain is too smooth to know how realistic this actually is, but perhaps someone could use a QC to perform the final step of SHA-256 on all 2^32 possible values of the nonce at once, giving them an insurmountable advantage in mining. If only a single party has that advantage, it breaks the Nash equilibrium.
But if multiple parties have that advantage, I suppose BTC could survive until someone breaks ECDSA. All those mining ASICs would become worthless, though.
Firstly I'd want to see them hash the whole blockchain (not just the last block) with the post-quantum algo to make sure history is intact.
But as far as moving balances - it's up to the owners. It would start with anybody holding a balance high enough to make it worth the amount of money it would take to crack a single key. That cracking price will go down, and the value of BTC may go up. People can move over time as they see fit.
As you alluded to, network can have two parallel chains where wallets can be upgraded by users asynchronously before PQC is “needed” (a long way away still) which will leave some wallets vulnerable and others safe. It’s not that herculean as most wallets (not most BTC) are in exchanges. The whales will be sufficiently motivated to switch and everyone else it will happen in the background.
A nice benefit is it solves the problem with Satoshi’s (of course not a real person or owner) wallet. Satoshi’s wallet becomes the defacto quantum advantage prize. That’s a lot of scratch for a research lab.
The problem is that the owner needs to claim their wallet and migrate it to the new encryption. Just freezing the state at a specific moment doesn't help; to claim the wallet in the new system I just need the private key for the old wallet (as that's the sole way to prove ownership). In our hypothetical post-quantum scenario, anyone with a quantum computer can get the private key and migrate the wallet, becoming the de-facto new owner.
I think this is all overhyped though. It seems likely we will have plenty of warning to migrate prior to achieving big enough quantum computers to steal wallets. Per wikipedia:
> The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates.
IIRC this is speculated to be the reason ECDSA was selected for Bitcoin in the first place.
Note, the 126 billion Toffoli gates are operations, so that's more about how many operations you need to be able to reliably apply without error.
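For a sense of what that operation count means in wall-clock time, here is a back-of-the-envelope with assumed logical gate rates (the rates are made up; they are the big unknown):

```python
toffoli_count = 126e9  # the estimate quoted above
for rate_hz in (1e3, 1e4, 1e6):  # assumed error-corrected logical Toffoli rates
    days = toffoli_count / rate_hz / 86_400
    print(f"{rate_hz:.0e} Toffoli/s -> {days:,.1f} days")
```

At a megahertz logical clock that is about a day and a half per key, which lines up with the "days to weeks" figure mentioned upthread; at slower clocks it stretches to months or years.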
It should be noted that according to IonQ's roadmap, they're targeting 2030 for computers capable of that. That's only about 5 years sooner than when the government has said everyone has to move to post quantum.
The problem is all the lost BTC wallets, which are speculated to be numerous and are also one of the biggest reasons for the current BTC price, and which obviously cannot be upgraded to PQ. There is currently a radical proposal of essentially making all those lost wallets worthless unless they migrate [1]
No, I don't think so. By the time quantum supremacy is really achieved for a "Q-Day" that could affect them or things like them, the existing blockchains which have already been getting hardened will have gotten even harder. Quantum computing could be used to further harden them, as well, rather than compromise them.
Supposing that Q-Day brought any temporary hurdles to Bitcoin or Ethereum or related blockchains, well...due to their underlying nature resulting in justified Permanence, we would be able to simply reconstitute and redeploy them for their functionalities because they've already been sufficiently imbued with value and institutional interest as well. These are quantum-resistant hardenings.
So I do not think these tools or economic substrate layers are going anywhere. They are very valuable for the particular kinds of applications that can be built with them and also as additional productive layers to the credit and liquidity markets nationally, internationally, and also globally/universally.
So there is a lot of institutional interest, including governance interest, in using them to build better systems. Bitcoin on its own would be reduced in such justification but because of Ethereum's function as an engine which can drive utility, the two together are a formidable and quantum-resistant platform that can scale into the hundreds of trillions of dollars and in Ethereum's case...certainly beyond $1Q in time.
I'm very bullish on the underlying technology, even beyond tokenomics for any particular project. The underlying technologies are powerful protocols that facilitate the development and deployment of Non Zero Sum systems at scale. With Q-Day not expected until end of 2020s or beginning of 2030s, that is a considerable amount of time (in the tech world) to lay the ground work for further hardening and discussions around this.
No, not really. PQC has already been discussed in pretty much every relevant crypto context for a couple of years, and there are multiple PQC algos ready to protect important data in banking etc. as well.
I don’t really understand the threat to banking. Let’s say you crack the encryption key used in my bank between a java payment processing system and a database server. You can’t just inject transactions or something. Is the threat that internal network traffic could be read? Transactions all go to clearing houses anyway. Is it to protect browser->webapp style banking? those all use ec by now anyway, and even if they don’t how do you mitm this traffic?
As far as I am aware, elliptic curve is also vulnerable to quantum attacks.
The threat is generally both passive eavesdropping to decrypt later and also active MITM attacks. Both of course require the attacker to be in a position to eavesdrop.
> Let’s say you crack the encryption key used in my bank between a java payment processing system and a database server.
Well if you are sitting in the right place on the network then you can.
> how do you mitm this traffic?
Depends on the scenario. If you are government or ISP then its easy. Otherwise it might be difficult. Typical real life scenarios are when the victim is using wifi and the attacker is in the physical vicinity.
Like all things crypto, it always depends on context. What information are you trying to protect and who are you trying to protect.
All that said, people are already experimenting with PQC so it might mostly be moot by the time a quantum computer comes around. On the other hand people are still using md5 so legacy will bite.
> Well if you are sitting in the right place on the network then you can.
Not really. This would be caught, if not instantly then when a batch goes for clearing or reconciliation -- and an investigation would be immediately started.
There are safeguards against this kind of thing that can't really be defeated by breaking some crypto. We have to protect against malicious employees etc. also.
One cannot simply insert bank transactions like this. These are really extremely complicated flows.
I meant on a technical level you could insert the data into the network. Obviously if the system as a whole does not depend on TLS for security, then no amount of breaking TLS will impact it
Sure, if a bank gets compromised you could in theory DoS a clearing house, but I'd be completely amazed if it succeeded. Those kinds of anomalous spikes would be detected quickly. Not even imagining that each bank probably has dedicated instances inside each clearing house.
These are fairly robust systems. You'd likely have a much better impact DoSing the banks.
Okay, but breaking that TLS (device->bank) would allow you to intercept the session keys and then decrypt the conversation. Alright, so now you can read I logged in and booked a transaction to my landlord or whatever. What else can you do? OTP/2FA code prevents you from re-using my credentials. Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
So we are all in a collective flap that someone can see my bank transactions? These are pretty much public knowledge to governments/central banks/clearing houses anyway -- doesn't seem like all that big a deal to me.
(I work on payment processing systems for a large bank)
> Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
If you can read the TLS session in general, you can capture the TLS session ticket and then use that to make a subsequent connection. This is easier as you don't have to be injecting packets live or making inconvenient packets disappear.
It seems like detecting a re-use like this should be reasonably easy, it would not look like normal traffic and we could flag this to our surveillance systems for additional checks on these transactions. In a post quantum world, this seems like something that would be everywhere anyway (and presumably, we would be using some other algo by then too).
Somehow, I'm not all that scared. Perhaps I'm naive.. :}
> It seems like detecting a re-use like this should be reasonably easy, it would not look like normal traffic
I don't see why it wouldn't look like normal traffic.
> Somehow, I'm not all that scared. Perhaps I'm naive.. :}
We're talking about an attack that probably won't be practical for another 20 years, which already has countermeasures that are in testing right now. Almost nobody should be worried about it.
If quantum computers crack digital cryptography, traditional bank accounts go to zero too, because regular ol' databases also use cryptography techniques for communication.
If all else fails, banks can generate terabytes of random one-time pad bytes, and then physically transport those on tape to other banks to set up provably secure communication channels that still go over the internet.
It would be a pain to manage but it would be safe from quantum computing.
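A toy sketch of the idea (just the XOR step, no key management): as long as the pad bytes are truly random, at least as long as the message, and never reused, the scheme is information-theoretically secure regardless of what computers the attacker has.

```python
import secrets

message = b"wire transfer: 42 EUR"
pad = secrets.token_bytes(len(message))  # pre-shared via physical transport, never reused
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
assert recovered == message
```

The management pain is exactly the part the code skips: both sides have to store, synchronize, and destroy pad material, which is why this is a last resort rather than standard practice.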
Aaronson isn't a cartoonist, it was an AI cartoon from ChatGPT that an antisemite sent Aaronson in the mail which he then seemingly maliciously misattributed to Woit making people assume Aaronson went batshit.
Aaronson did work at OpenAI but not on image generation, maybe you could argue the OpenAI safety team he worked on should be involved here but I'm pretty sure image generation was after his time, and even if he did work directly on image generation under NDA or something, attributing that cartoon to Aaronson would be like attributing a cartoon made in Photoshop by an antisemite to a random Photoshop programmer, unless he maliciously added antisemitic images to the training data or something.
The most charitable interpretation that I think Aaronson also has offered is that Aaronson believed Woit was an antisemite because of a genocidal chain of events that in Aaronson's belief would necessarily happen with a democratic solution and that even if Woit didn't believe that that would be the consequence, or believed in democracy deontologically and thought the UN could step in under the genocide convention if any genocide began to be at risk of unfolding, the intent of Woit could be dismissed, and Woit could therefore be somehow be lumped in with the antisemite who sent Aaronson the image.
Aaronson's stated belief also is that any claim that Israel was committing a genocide in the last few years is a blood libel, because he believes the population of Gaza is increasing and it can't be a genocide unless there is a population decrease during the course of it. This view of Aaronson would imply things like: if every male in Gaza was sterilized, and the UN stepped in and stopped it as a genocide, it would be a blood libel to call that genocide so long as the population didn't decrease during the course of it, even if it did decrease afterwards. But maybe he would clarify that it could include decreases that happen as a delayed effect of the actions. But these kinds of strong beliefs about blood libel, I think, are part of why he felt OK labeling the comic with Woit's name.
I also don't think if the population does go down or has been going down he will say it was from a genocide, but rather that populations can go down from war. He's only proposing that a population must go down as a necessary criteria of genocide, not a sufficient one. I definitely don't agree with him, to me if Hamas carried out half of an Oct 7 every day it would clearly be a genocide even if that brought the replacement rate to 1.001 and it wouldn't change anything if it brought it to 0.999.
I meant Scott Adams, the creator of Dilbert. The joke is just that they have similar names, and Adams does a lot of political/topical commentary in both his comics and podcast but isn't a good source since a lot of his work is comedy focused.
I am unaware of any comics Aaronson made, and I don't blame him for anything he did make or was loosely associated with. It is incredible the lengths people are willing to go to in order to claim people are crazy, though, both in regard to Adams and Aaronson.
Oh, the similar names threw me off just the same. It wasn't a comic he made but a recent event on his blog involving an antisemitic ChatGPT illustration.
Classic HN, always downvoting every comment concerning decentralized and p2p currencies. How do you like your centralized, no privacy, mass surveilled worthless banking system
I don't consider the current system to be worthless. In fact, it functions remarkably well. There is certainly room for additional substrate layers though, and Bitcoin being digital or electronic gold and Ethereum being an e-steam engine or e-computer make for a powerful combination for applications together. I agree that the crowd here has historically not understood, or wanted to understand, the underlying protocols and what is possible. A bizarre kind of hubris perhaps, or maybe just a response to how the first iterations of a web2.5 or web3.0 were...admittedly more mired in a kind of marketing hype that was not as reflective of what is possible and sustainable in the space due to there not being realistic web and engineering muscle at the forefront of the hype.
I think this current cycle is going to change that though. The kinds of projects spinning up are truly massive, innovative, and interesting. Stay tuned!
What is it about then? I'm not spreading propaganda - I'm maximally truth seeking and approaching things from a technical/economic/governance point of view and not an ideological one per se. Though ideology shapes everything, what I mean is that I'm not ideologically predisposed towards a conclusion on things. For me what matters is the core, truthy aspects of a given subject.
Before the mega monopolies took over, corps used to partner with universities to conduct this kind of research. Now we have bloated salaries, rich corporations, and expensive research while having under experienced graduates. These costs will get forwarded to the consumer. The future won’t have a lot of things that we have come to expect.
> in partnership with The University of California, Berkeley, we ran the Quantum Echoes algorithm on our Willow chip...
And the author affiliations in the Nature paper include:
Princeton University; UC Berkeley; University of Massachusetts, Amherst; Caltech; Harvard; UC Santa Barbara; University of Connecticut; MIT; UC Riverside; Dartmouth College; Max Planck Institute.
This is very much in partnership with universities and they clearly state that too.
Did you bother looking at the author list? This is a grand statement that's easily refuted by just looking.
Maybe you're thinking specifically of LLM labs. I agree this is happening there, but I wouldn't be as dramatic. Everywhere else, university-corporation/government lab partnerships are still going very strong.
Another response is to come to terms with a possibly meaningless and Sisyphean reality and to keep pushing the boulder (that you care about) up the hill anyway.
I’m glad the poster is concerned and/or disillusioned about the hype, hyperbole and deception associated with this type of research.
I don't think it's accurate to attribute some kind of altruism to these research universities. Have a look at some of those pay packages or the literal hedge funds that they operate. And they're mostly exempt from taxation.
I don't disagree, but these days I'm happy to see any advanced research at all.
Granted, too often I see the world through HN-colored glasses, but it seems like so many technological achievements are variations on getting people addicted to something in order to show them ads.
Did Bellcore or Xerox PARC do a lot of university partnerships? I was into other things in those days.
The big problem with quantum advantage is that quantum computing is inherently error-prone and stochastic, but then they compare it to classical methods that are exact.
Let a classical computer use an error-prone stochastic method and it still blows the doors off of QC.
Stochasticity (randomness) is pervasively used in classical algorithms that one compares to. That is nothing new and has always been part of comparisons.
"Error prone" hardware is not "a stochastic resource". Error prone hardware does not provide any value to computation.
AFAIK we are a decade or two away from quantum supremacy. All the AI monks forget that if AI is the future, quantum supremacy is the present. And whoever controls the present decides the future.
Remember, it is not about general quantum computing, it's about implementing the quantum computation of Shor's algorithm.
But much like AI hype, quantum hype is also way overplayed. Yeah, modern asymmetric encryption will be less secure, but even after you have quantum computers that can do Shor's algorithm, it might be a while before there are quantum computers affordable enough for it to be an actual threat (i.e. it's not cheaper to just buy a zero day for the target's phone or something).
But since we already have post quantum algorithms, the end state of cheap quantum computers is just a new equilibrium where people use the new algorithms and they can't be directly cracked and it's basically the same except maybe you can decrypt historical stuff but who knows if it's worth it.
I was mostly talking about state actors buying quantum computers that are just built to run that particular algorithm and using them to snoop on poorer countries who cannot afford such tech. Plus, all countries will have plenty of systems that have not been quantum-proofed. The window from when it becomes affordable to one state actor until most state actors have access to such tech is likely to be long, especially since no one will admit to possessing such tech. And unlike nuclear weapons, it is much harder to prove whether or not someone has one.
Notice how they say "quantum advantage" not "supremacy" and "a (big) step toward real-world applications". So actually just another step as always. And I'm left to doubt if the classic algorithm used for comparison was properly optimised.
I skimmed the paper so might have missed something, but iiuc there is no algorithm used for comparison. They did not run some well defined algorithm on some benchmark instances, they estimated the cost of simulating the circuits "through tensor network contraction" - I quote here not to scare but because this is where my expertise runs out.
So the Scott Aaronson Full Employment Conjecture remains unrefuted?
The previous experiments involve sampling from some distribution, which is believed to be classically hard. However, it is a non-trivial question whether you succeed or fail in this task. Having perfect sampler from the same distribution won't allow you to easily verify the samples.
On the other hand, these experiments involve measuring some observable, i.e., the output is just a number and you could compare it to the value obtained in a different way (on a different or the same computer, or even some analog experimental system).
Note that these observables are expectation values of the samples, but in the previous experiments since the circuits are random, all the expectation values are very close to zero and it is impossible to actually resolve them from the experiment.
Disclaimer: this is my speculation about what they mean because they didn't explain it anywhere from what I can see.
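To make that contrast concrete, here is a toy numerical sketch (my own illustration with made-up numbers, not anything from the paper): a scalar observable estimated from shot averages can be cross-checked between two runs to within shot noise, whereas the raw bitstrings from two runs of a sampling experiment never match shot for shot.

    # Toy illustration: cross-checking a scalar observable vs. comparing raw samples.
    import numpy as np

    rng = np.random.default_rng(0)
    n_shots = 100_000
    p_plus = 0.53  # hypothetical probability of a +1 outcome for the observable

    def estimate_observable():
        # Average of +/-1 outcomes gives an estimate of <O> = 2*p_plus - 1
        outcomes = rng.choice([+1, -1], size=n_shots, p=[p_plus, 1 - p_plus])
        return outcomes.mean()

    est_a = estimate_observable()           # "our quantum computer"
    est_b = estimate_observable()           # "another one of the same caliber"
    shot_noise = 1 / np.sqrt(n_shots)       # statistical error scale, ~0.003 here
    print(f"A: {est_a:.4f}  B: {est_b:.4f}  (agree to within a few x {shot_noise:.4f})")

    # By contrast, bitstring samples from two runs differ shot for shot, so simply
    # rerunning a sampling experiment does not directly verify anything.
    run1 = rng.integers(0, 2, size=(5, 8))
    run2 = rng.integers(0, 2, size=(5, 8))
    print("identical bitstrings across runs:", np.array_equal(run1, run2))  # almost surely False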
It means that they transcended the "works on my machine" stage, and can reliably run a quantum algorithm on more than one different quantum computer.
They haven’t, though?
At least they claim that: «Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result. This repeatable, beyond-classical computation is the basis for scalable verification.» (emph. mine)
But apparently they haven't demonstrated the actual portability between two different quantum computers.
What’s the difference between the claim that they’re making and what you say they haven’t done?
A key result here is the first demonstration of quantum supremacy; from TFA
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
I think I have read that before?
Yes, from this very same FA,
"Back in 2019, we demonstrated that a quantum computer could solve a problem that would take the fastest classical supercomputer thousands of years."
The actual article has much more measured language, and in the conclusion section gives three criteria for "practical quantum advantage":
https://www.nature.com/articles/s41586-025-09526-6
"(1) The observable can be experimentally measured with the proper accuracy, in our case with an SNR above unity. More formally, the observable is in the bounded-error quantum polynomial-time (BQP) class.
(2) The observable lies beyond the reach of both exact classical simulation and heuristic methods that trade accuracy for efficiency.
[...]
(3) The observable should yield practically relevant information about the quantum system.
[...] we have made progress towards (1) and (2). Moreover, a proof-of-principle for (3) is demonstrated with a dynamic learning problem."
So none of the criteria they define for "practical quantum advantage" are fully met as far as I understand it.
The key word is "practical" - you can get quantum advantage from precisely probing a quantum system with enough coherent qubits that it would be intractable on a classical computer. But that's exactly because a quantum computer is a quantum system; and because of superposition and entanglement, a linear increase in the number of qubits means an exponential increase in computational complexity for a classical simulation. So if you're able to implement and probe a quantum system of sufficient complexity (in this case ~40 qubits rather than the thousands it would take for Shor's algorithm), that is ipso facto "quantum advantage".
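To put a number on that exponential wall, here is a back-of-the-envelope sketch (illustrative only, not taken from the paper) of the memory a brute-force statevector simulation of roughly 40 qubits would need; tensor-network methods can do far better for structured circuits, which is why advantage claims hinge on how hard the specific circuits are to contract.

    # Rough memory estimate for a full statevector simulation of n qubits.
    n_qubits = 40
    bytes_per_amplitude = 16                  # complex128: two 8-byte floats
    amplitudes = 2 ** n_qubits
    total_bytes = amplitudes * bytes_per_amplitude
    print(f"{n_qubits} qubits -> {amplitudes:.3e} amplitudes, "
          f"{total_bytes / 2**40:.0f} TiB just to store the state")
    # Every additional qubit doubles this figure.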
It's still an impressive engineering feat because of the difficulty in maintaining coherence in the qubits with a precisely programmable gate structure that operates on them, but as far as I can see (and I've just scanned the paper and had a couple of drinks this evening) what it really means is that they've found a way to reliably implement in hardware a quantum system that they can accurately extract information from in a way that would be intractable to simulate on classical machines.
I might well be missing some subtleties because of aforementioned reasons and I'm no expert, but it seems like the press release is unsurprisingly in the grayzone between corporate hype and outright deceit (which as we know is a large and constantly expanding multi-dimensional grayzone of heretofore unimagined fractal shades of gray)
Sounds like N, not 2
I would be quite worried about advances in quantum computers if I had any Bitcoin after watching this DEFCON talk: https://www.youtube.com/watch?v=OkVYJx1iLNs
The quantum algorithm that would break certain kinds of public key cryptography schemes (not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers) will take days to weeks to break a single key [0]. This is another reason why we will have plenty of warning before quantum computing causes any major disruptions to daily life.
What I would start worrying about is the security of things like messages sent via end-to-end encrypted services like WhatsApp and Signal. Intercepted messages can be saved now and decrypted any time in the future, so it's better to switch to more robust cryptography sooner rather than later. Signal has taken steps in this direction recently: https://arstechnica.com/security/2025/10/why-signals-post-qu....
[0] https://arxiv.org/pdf/2505.15917
Meta has been rolling out PQC: https://engineering.fb.com/2024/05/22/security/post-quantum-...
And Apple: https://security.apple.com/blog/imessage-pq3/
Cloudflare started rolling it out three years ago! https://developers.cloudflare.com/ssl/post-quantum-cryptogra...
Usually, the crypto should have Forward Secrecy already even without being PQ-safe (e.g., via https://en.wikipedia.org/wiki/Double_Ratchet_Algorithm), so in practice the attacker would need to break many successive session keys, which rotate every time a new message is sent.
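For anyone unfamiliar with the ratchet idea, a minimal toy sketch of its symmetric half is below (HMAC-SHA256 as a stand-in KDF; the labels and key sizes are not Signal's actual parameters): each message key comes from a chain key that is immediately advanced and discarded, which is what forces an attacker to break keys message by message. The caveat is that the root secret comes from an (EC)DH handshake, which is exactly the part Shor threatens, hence the push to make the handshake PQ-safe.

    # Toy symmetric-key ratchet: one half of the Double Ratchet idea.
    import hmac, hashlib

    def kdf(key: bytes, label: bytes) -> bytes:
        return hmac.new(key, label, hashlib.sha256).digest()

    chain_key = b"\x00" * 32                      # in reality derived from a DH handshake
    for i in range(3):
        message_key = kdf(chain_key, b"message")  # used once to encrypt message i
        chain_key = kdf(chain_key, b"chain")      # ratchet forward; old value is discarded
        print(f"msg {i}: key = {message_key.hex()[:16]}...")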
> not even the core part of Bitcoin blockchains, which are not vulnerable to quantum computers
Um, what? Shor’s algorithm can take the public key of a wallet (present on any outgoing transaction in the ledger) and produce its private key. So now you can hijack any wallet that has transferred any Bitcoin. Notably only one successful run of the algorithm is needed per wallet, so you could just pick a big one if it takes weeks.
It probably wouldn’t help you mine in practice, sure. Technically it would give you better asymptotic mining performance (via Grover’s algorithm) but almost certainly worse in practice for the foreseeable future.
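A back-of-the-envelope sketch of that asymptotics-versus-practice point, with made-up difficulty and gate-cost numbers:

    # Grover-style mining vs. classical ASICs, purely illustrative.
    import math

    difficulty_bits = 76                          # hypothetical: need a hash below 2^(256-76)
    classical_hashes = 2 ** difficulty_bits       # expected classical work per block
    grover_iters = (math.pi / 4) * 2 ** (difficulty_bits / 2)   # ~2^38 *serial* Grover iterations

    print(f"classical: ~2^{difficulty_bits} hashes, spread over warehouses of ASICs")
    print(f"Grover:    ~2^{math.log2(grover_iters):.0f} sequential iterations, each a full "
          "error-corrected SHA-256d circuit")
    # Quadratically fewer operations, but each fault-tolerant iteration is many orders of
    # magnitude slower than an ASIC hash, hence "almost certainly worse in practice".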
Quantum is a known threat. There is enough time to fix it. Folks are working on the fixes.
Cryptocurrencies would be the last thing I worry about w.r.t Quantum crypto attacks. Everything would be broken. Think banks, brokerage accounts, email, text messages - everything.
I think that’s backwards: most of the stuff you mentioned is using TLS and can switch to post-quantum algorithms with a config change, and do so incrementally with no user-visible impact - e.g. right now I’m already using PQC for many sites and about half of the traffic Cloudflare sees is using PQC:
https://radar.cloudflare.com/adoption-and-usage
In contrast, cryptocurrencies have to upgrade the entire network all at once or it’s effectively a painful fork. That effort appears to just be getting talked about now, without even starting to discuss timing:
https://github.com/bitcoin/bips/pull/1895
> In contrast, cryptocurrencies have to upgrade the entire network all at once or it’s effectively a painful fork
Bitcoin is much more centralized than the popular imagination would have you believe, both in terms of the small number of controlling interests behind the majority of the transaction capacity, and just as importantly the shared open source software running those nodes. Moreover, the economic incentives for the switch are strongly, perhaps even perfectly, aligned among the vast majority of node operators. Bitcoin is already dangerously close to, if not beyond, the possibility of a successful Byzantine attack; it just doesn't happen precisely because of the incentive alignment--if you're that large, you don't want to undermine trust in the network, and you're an easy target for civil punishment.
(I know that you understand this, but just highlighting it)
In fairness, the original Bitcoin white paper referenced both (1) distributed compute and (2) the self-defeating nature of a Byzantine attack as the means of protection. It's not as though (2) is just lucky happenstance.
Hence, why proof of stake can exist.
I definitely agree that the major players will want to move forward, but it seems like there's a legacy system kind of problem where it can stall if you get some slackers who either don't update (what happens to cold wallets?) or if some group has ideological disagreements about the solution. None of that is insurmountable, of course, but it seems like it has to be slower than something where you personally can upgrade your HTTPS servers to support PQC any time you want without needing to coordinate with anyone else on the internet.
I can't remember which chain it was but I'm sure I've seen stats on in-progress rollouts of protocol changes where the network took something like weeks or months to all get upgraded to the new version. You can design for tolerating both versions for a time.
Is this a purely server side migration? Do browsers/OSs need updating too?
Clients need to be updated, too, since what's happening is that the server and client need to agree on a common algorithm they both support, but that's been in progress for years and support is now pretty widespread in the current versions of most clients.
Stragglers are a problem, of course, but that's why I thought this would be a harder problem for Bitcoin: for me to use PQC for HTTPS, only my browser and the server need to support it and past connections don't matter, whereas for a blockchain you need to upgrade the entire network to support it for new transactions _and_ have some kind of data migration for all of the existing data. I don't think that's insurmountable – Bitcoin is rather famously not as decentralized as the marketing would have you believe — but it seems like a harder level of coordination.
The world has already migrated through so many past now-insecure cryptography setups. If quantum computers start breaking things, people will transition to more secure systems.
In HTTPS for example, the server and client must agree on how to communicate, and we’ve already had to deprecate older, now-insecure cryptography standards. More options get added, and old ones will have to be deprecated. This isn’t a new thing, just maybe some cryptographic schemes will get rotated out earlier than expected.
> If quantum computers start breaking things, people will transition to more secure systems.
That's not really the issue; the really interesting part is existing encrypted information that three-letter agencies likely have dutifully stored in a vault, and that's going to become readable. A lot of that communication was made under the assumption that it's secure.
Yeah, all the encrypted messages collected when illegal markets got seized will be decrypted. Many of them use RSA-2048, so by 2030 it's going to be broken, according to the timelines.
It's actually something we will notice. Arrests will be announced.
> Everything would be broken. Think banks, brokerage accounts, email, text messages - everything.
Wonder if this would become the next "nuclear proliferation".
Since it's so hard to manufacture it gets controlled at state level and then becomes a technology that the general public are never allowed to have.
No, it is a known problem. It will get fixed in time.
Like everything else that is a new invention, it can be a threat.
Anyways, I am against stopping evolution on those grounds. What we need to do is learn and fix, as you say, not regulate and forbid. :)
Amazing talk. Thanks for sharing
Every time I mention quantum computing as a threat to crypto (which I have been for years), I get downvoted to oblivion. I guess we have a lot of HODLers here. A bet on crypto is a bet against quantum computing.
I haven't once even thought of investing in crypto, and think that the technology is mostly useless and proof of work schemes should be banned on environmental grounds.
Even so, I don't agree that quantum is a threat to crypto. There are already well known quantum-resistant encryption schemes being deployed live in browsers, today. Crypto can just start adopting one of these schemes today, and we're still probably decades away from a QC that can break the kinds of keys that crypto security relies on. The transition will be slightly more complex for proof of work schemes, since those typically have dedicated hardware - but other types of crypto coins can switch in months, most likely, if they decide to, at least by offering new wallet types or something.
>There are already well known quantum-resistant encryption schemes being deployed live in browsers, today. Crypto can just start adopting one of these schemes today, and we're still probably decades away from a QC that can factor the kinds of primes that crypto security uses.
It's very strange that some people act like switching over to a post quantum cryptography scheme is trivial. Did you watch the video I replied to, which is a talk by an actual quantum computing researcher?
https://ethresear.ch/t/how-to-hard-fork-to-save-most-users-f...
I haven't seen anyone post any progress on factoring large numbers with quantum computers in a while. Annealers won't do it efficiently, but probably still hold the record anyway; for a relatively small number you could do it on classical hardware. Gate model machines with enough qubits to do it are still ages off. Bitcoin should find a way to transition to a post-quantum algorithm, but that's about it. As long as they do it before anyone has a big enough QPU, they're fine, and nobody is even close, it seems.
Unless advances in QC could rewrite the blockchain, there's not much to worry about. If the crypto algorithms are compromised, your coins are pretty much frozen on the chain until new algorithms are implemented. Or are you arguing QC makes signatures/verification/mining impossible?
Why though? Who is to decide which point of the blockchain is the good one and which blocks to “reject”?
Think twice. Everyone who hosts the blockchain would get to decide, because they have invested in crypto, at least in hardware costs; everyone, that is, except the small group of people who own a quantum computer, and I don't expect that group to be >50% of the people hosting the blockchain.
You don't need >50% of bad actors to compromise the blockchain, but rather >50% of the total hashing power. This could very well be achievable by a small group of people with QC at some point.
How does QC aid SHA256 hash throughput?
[dead]
"surpassing even the fastest classical supercomputers (13,000x faster)"
"Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result."
"The results on our quantum computer matched those of traditional NMR, and revealed information not usually available from NMR, which is a crucial validation of our approach."
It certainly seems like this time, there finally is a real advantage?
I’ve only skimmed the paper but it seems like the “information not usually available” from NMR is the Jacobian and Hessian of the Hamiltonian of the system.
So basically you’re able to go directly from running the quantum experiment to being able to simulate the dynamics of the underlying system, because the Jacobian and Hessian are the first and second partial derivatives of the system with respect to all of its parameters in matrix form.
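As a toy illustration of what that means (a hypothetical two-coupling model, nothing to do with the paper's actual Hamiltonian), the Jacobian collects the first derivatives of a quantity with respect to the model parameters and the Hessian the second derivatives:

    # Jacobian/Hessian of a made-up scalar function of model parameters, via sympy.
    import sympy as sp

    J1, J2, h = sp.symbols("J1 J2 h", real=True)   # hypothetical model parameters
    params = sp.Matrix([J1, J2, h])

    E = J1**2 * h + sp.sin(J2) * h**2              # some observable of the toy model

    grad = sp.Matrix([E]).jacobian(params)         # 1x3 matrix of first derivatives
    hess = sp.hessian(E, params)                   # 3x3 matrix of second derivatives
    print(grad)
    print(hess)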
Related papers
The idea: Quantum Computation of Molecular Structure Using Data from Challenging-To-Classically-Simulate Nuclear Magnetic Resonance Experiments https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
Verifying the result by another quantum computer (it hasn't been yet): Observation of constructive interference at the edge of quantum ergodicity https://www.nature.com/articles/s41586-025-09526-6
[dead]
I would like to hear from the classical computation people whether they validate these results and claims:
Many times in the past, quantum supremacy was claimed, and then other groups have shown they can do better with optimized classical methods.
That might take a few days or weeks; it seems like they put in some decent effort this time. From skimming the supplement I wouldn't be surprised if the speedup is only 100x though. That's still significant, but clearly less than they claim. For example, I am not entirely convinced that 20% flops efficiency is really the upper limit or that the slicing overhead of 5x is really needed here.
Can someone explain if this is still the RCS problem or a similar one?
My impression was that every problem a quantum computer solves in practice right now is basically reducible from 'simulate a quantum computer'
This is quite different from their previous random circuit sampling (RCS) experiments that have made headlines a few times in the past. The key difference from an applied standpoint is that the output of RCS is a random bitstring which is different every time you run the algorithm. These bitstrings are not reproducible, and also not particularly interesting, except for the fact that only a quantum computer can generate them efficiently.
The new experiment generates the same result every time you run it (after a small amount of averaging). It also involves running a much more structured circuit (as opposed to a random circuit), so all-in-all, the result is much more 'under control.'
As a cherry on top, the output has some connection to molecular spectroscopy. It still isn't that useful at this scale, but it is much more like the kind of thing you would hope to use a quantum computer for someday (and certainly more useful than generating random bitstrings).
This is not the RCS problem or indeed anything from number theory.
The announcement is about an algorithm which they are calling Quantum Echoes, where you set up the experiment, perturb one of the qubits and observe the “echoes” through the rest of the system.
They use it to replicate a classical experiment in chemistry done using nuclear magnetic resonance (NMR). They say they are able to reproduce the results of that conventional experiment and gather additional data which is unavailable via conventional means.
It’s really hard to parse both the announcement (too much hyperbole) and the article (too technical); however, as I understand it, this is what quantum computing should be good at: not making classical algorithms faster, but simulating quantum physics experiments. This is a good direction and I find it more plausible than “we factored numbers faster”.
This SHOULD be the main application of the tech. It’s hard to tell if it is, because the authors can’t do science communication; they can apparently only do sales.
This seems like an actually useful computation to do, unlike earlier results. Is that a reasonable reading of this article?
No it’s still completely useless for the real world. Also not actually verifiable
Classic quantum
Now verifiably useless in real life
> demonstrates the first-ever algorithm to achieve verifiable quantum advantage on hardware.
Am I crazy or have I heard this same announcement from Google and others like 5 times at this point?
My understanding is that this one is "verifiable" which means you get a reproducible result (i.e. consistent result comes out of a computation that would take much longer to do classically).
Non-verifiable computations include things like pulling from a hard-to-compute probability distribution (i.e. random number generator) where it is faster, but the result is inherently not the same each time.
This is as would be expected if it were real. Advantage isn't a black and white thing, because the comparison starts against 'any task done the best we know how to do using the most resources we happen to be willing to throw at it, even if we don't have a means to check that the output was correct', and ends at 'useful output you can formally verify where you have a strong reason to believe no classical algorithm would be effective.'
It's the third one I am seeing from Google specifically.
I'd classify this one as different as it accompanies a publication in Nature - https://www.nature.com/articles/s41586-025-09526-6
I think it was in Nature last time as well…
How is an accompanying publication in nature worth anything in this context?
It's unlikely that it would be accepted for publication in a top peer-reviewed journal if there wasn't something novel.
Definitely not a quantum expert, but I have a feeling that news like this have been happening for more than a decade, without anything usable.
It's great funding for physics research. I don't care if it's useless, it beats spending on politics and surveillance
There's also the fact that the road to useful is littered with useless, because how else would you make progress?
You could do what evolution does, and require that every version be useful. But evolution seems to require an awful lot of trial and error.
Evolution requiring that every version is useful seems a (significant) oversimplification.
If some version is not reasonably useful, it doesn't have kids.
[dead]
I’m no expert either, so I hope one can corroborate or correct me…
My understanding though is that these steps are really the very beginning. Using a quantum computer with quantum algorithms to prove that it’s possible.
Once proven (which maybe this article is claiming?), the next step is actually creating a computer with enough entangleable qubits and low enough error rates that it can be used to solve larger problems at scale.
Because my current understanding with claims like these is that they are likely true, but in the tiny.
It’d be like saying “I have a new integer-factoring algorithm that is 10000x faster than the current best, but it can only factor numbers up to 103.”
It's what happens when companies are driven by profit rather than by making the accurate scientific statements that reputation is built on and further research funding is predicated on.
Hyperbolic claims like this are for shareholders who aren't qualified to judge for themselves because they're interested in future money and not actual understanding. This is what happens when you delegate science to corporations.
Yeah, but let’s be honest, they aren't going to be profiting off this. So maybe it's to help them feel they are contributing to a good cause.
"demonstrates": that word is a tell.
> The signal's overlap reveals how a disturbance spreads across the Willow chip
On its face this suggests that the result is not deterministic except perhaps on the same Willow chip at the same temperature.
Main caveat is that it’s verifiable (by them) and repeatable by others only in principle.
So actually it’s neither verifiable nor repeatable in any real-world definition of the words.
This is your second comment in this thread (as far as I’ve scrolled) saying this without an explanation. Could you elaborate on why?
Quantum computing is mostly a scheme to get more grants/funding for quantum research; it doesn’t have any real-world application and most likely won’t have any in the foreseeable future. Like, verifiable should mean I can run 3x3 on any quantum computer and always get 9 as the result - but quantum computers can't even do that. Don’t get me wrong; it’s very cool theoretical research, and more power to the scientists. But thinking it will have any impact on the real world in our lifetime is a pipe dream imho.
but can it run Doom?
The chip would need to be scaled up a couple of orders of magnitude, but in principle it is possible :) see https://arxiv.org/abs/2412.12162
Linux would be a start :-)
I wouldn't hold my breath for Linux on a quantum computer.
FWIU a 6-stage RISC processor is sufficient to run Linux.
Things like CUDA-Q may be faster on classical computers than on quantum computers for forever; though what CUDA-Q solves for is also an optimization problem?
the quantum chip iirc only runs a subset of algorithms due to limited gates implementable on the quantum chip; is the quantum chip a universal computer?
This is more the case for D-Wave's machines which are specialised for quantum annealing, allowing for greater numbers of qubits. Google and most other major hardware players make chips which can implement a universal quantum gate set allowing for arbitrary quantum operations to be performed (in principle). The issue with these chips is that quantum error correction is not fully implemented yet so computations are effectively time-limited due to the build up of noise from imperfect implementation and finite-temperature effects. A big part of current punts at quantum advantage is figuring out how to squeeze every last drop out of these currently faulty devices.
“13,000× faster” sounds huge, but I wonder what it’s being compared to. Quantum speedups are always tricky to measure
The article states: “...13,000 times faster on Willow than the best classical algorithm on one of the world’s fastest supercomputers...”
I agree it's not very precise without knowing which of the world's fastest supercomputers they're talking about, but there was no need to leave out this tidbit.
The paper talks only about the Frontier supercomputer which is #2 on Top500. But I think it was an analysis rather than them actually running it.
I was being sarcastic because 13,000 times faster is 4 orders of magnitude faster so it doesn't matter to which supercomputer it is compared.
I don’t understand how these papers get accepted
The field is nascent. The bar is not static.
I think that quantum will be the next bubble
I bought a hundred shares of D-Wave about six months ago, betting on this fact.
A part of me thinks quantum computing is mostly bullshit, but I would be very happy to be wrong. I should probably learn Q# or something just to be sure.
100%. I'm buying all the domain names and have already profited heavily over 9 months.
This type of announcement is definitely pumping air into it
We're all asking it: any impact on AES?
This is a chemistry experiment, so no.
The rule of thumb is that a working quantum computer that can run Grover's algorithm reduces the security of a symmetric cipher to half of its key size. That is, AES-128 should be considered to have a 64 bit key size, which is why it's not considered "quantum-safe."
Edit: An effective key space of 2^64 is not secure according to modern-day standards. It was secure back in the days of DES.
AES-128 is quantum safe (more or less). 64-bit security in the classical domain isn't safe because you can parallelize across 2^20 computers trivially. Grover gives you 2^64 AES operations on a quantum computer (probably ~2^70 gates or so before error correction, or ~2^90 after error correction) that can't be parallelized efficiently. AES-128 is secure for the next century (but you might as well switch to AES-256 because why not).
Is AES-256 more quantum resistant? It still has a 16-byte block size, so intuitively it should be equally vulnerable to Grover.
Grover's algorithm is sqrt(N) with respect to the size of the search space, and here the search space is the key space (2^256 instead of 2^128), not the block size.
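Rough arithmetic behind the parallelization point above (illustrative only; it ignores the large constant overheads of error-corrected quantum gates):

    import math

    machines = 2 ** 20

    # Classical brute force on a 64-bit-secure cipher splits perfectly across machines.
    weak_key_bits = 64
    classical_per_machine = 2 ** weak_key_bits / machines          # 2^44 each: very doable

    # Grover on AES-128 needs ~2^64 iterations, and parallelizing over M machines
    # only buys a factor of sqrt(M).
    grover_serial = 2 ** (128 / 2)
    grover_per_machine = grover_serial / math.sqrt(machines)       # 2^54 each, and each op is slow

    print(f"classical, 64-bit key:  2^{math.log2(classical_per_machine):.0f} ops per machine")
    print(f"Grover, AES-128:        2^{math.log2(grover_per_machine):.0f} iterations per machine")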
> Quantum computing-enhanced NMR could become a powerful tool in drug discovery, helping determine how potential medicines bind to their targets, or in materials science for characterizing the molecular structure of new materials like polymers, battery components or even the materials that comprise our quantum bits (qubits)
There is a section in the article about future real world application, but I feel like these articles about quantum "breakthroughs" are almost always deliberately packed with abstruse language. As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
> As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
Not really. But that doesn't mean it's not worth striving for. Breakthrough-to-commercial-application timelines are notoriously hard to predict. The only way to find out is to keep pushing at the frontier.
Can it factor 21?
Unfairly high bar. They're not magicians.
Just gonna leave this here: Why haven't quantum computers factored 21 yet? https://algassert.com/post/2500
All that's needed to verify is: is it faster than classical chips? Why do they need some other method?
And the quantum snake oil train keeps on going
It’s actually two trains going in opposite directions. But the quantum snake oil is always in the other train.
The last time I heard a similar news from Google, it turned out they were solving a quantum phenomenon using a quantum phenomenon. It seems to be the same pattern here. Not to say it's not progress, but kind of feels like overhyped.
Idk. I get this is the median take across many comments, and I don’t mean to be disagreeable with the crowd. But I don’t know why using quantum phenomena is a sign something’s off. It’s a quantum computer! But I know something is off with this take if it didn’t strike you that way.
To me, it matters because it's a sign that it might not be particularly transferable as a method of computation.
A wind tunnel is a great tool for solving aerodynamics and fluid flow problems, more efficiently than a typical computer. But we don't call it a wind-computer, because it's not a useful tool outside of that narrow domain.
The promise of quantum computing is that it can solve useful problems outside the quantum realm - like breaking traditional encryption.
Good point, I guess that's why I find this comments section boring and not representative of the HN I've known for 16 years: there's a sort of half-remembering that it wasn't powerful enough to do something plainly and obviously useful yesterday.
Then we ignore today, and launder that into a gish-gallop of free-association, torturing the meaning of words to shoehorn in the idea that all the science has it wrong and, inter alia, the quantum computer uses quantum phenomena to compute, so it might be a fake useless computer, like a wind tunnel. shrugs
It's a really unpleasant thing to read, reminds me of the local art school dropout hanging on my ear about crypto at the bar at 3 am in 2013.
I get that's all people have to reach for, but personally, I'd rather not inflict my free-association on the world when I'm aware I'm half-understanding, fixated on the past when discussing something current, and I can't explain the idea I have as something concrete and understandable even when I'm using technical terms.
I know what you're talking about, but I think you happened to pick a bad example to pick on here. This wind tunnel analogy resembles a common criticism of the prior experiments that were done by Google and others over the last few years. Those experiments ran highly unstructured, arbitrary circuits that don't compute anything useful. They hardly resembled the kind of results that you would expect from a general purpose, programmable computer. It's a valid criticism, and it seems like the above commenter came to this conclusion on their own.
To that comment, the present result is a step up from these older experiments in that they a) Run a more structured circuit b) Use the device to compute something reproducible (as opposed to sampling randomly from a certain probability distribution) c) The circuits go toward simulating a physical system of real-world relevance to chemistry.
Now you might say that even c) is just a quantum computer simulating another quantum thing. All I'll say is that if you would only be convinced by a quantum computer factoring a large number, don't hold your breath: https://algassert.com/post/2500
If you assume everyone else is wrong from the start, then you won't like the comments, sure.
And what the hell are you calling a gish gallop? They wrote four sentences explaining a single simple argument. If you design a way to make qubits emulate particle interactions, that's a useful tool, but it's not what people normally think of as a "computer".
And whatever you're saying about anyone claiming "all the science has it wrong" is an argument that only exists inside your own head.
The fact an earlier demo was an RNG also, and this demo uses quantum phenomena (qubits) to look at quantum phenomena (molecules) does not mean quantum computing can't be a useful computer, a la a wind tunnel.
It's not that I don't "agree with it", there's nothing to agree with. "Not even wrong", in the Pauli sense.
I'd advise that when you're conjuring thoughts in other people's heads to give them a meaning, so you can go full gloves-off and tell people off for the thoughts that were supposedly in their heads and motivated their contributions to this forum, you pause and consider a bit more. Especially in the context of where you're encountering the behavior, say, an online discussion forum vs. a dinner party where you're observing a heated discussion among your children.
Of course it doesn't mean a quantum computer is restricted to that.
But if that's the only realm where anything close to supremacy has been demonstrated, being skeptical and setting your standards higher is reasonable. Not at all "not even wrong".
> I'd advise that when you're conjuring thoughts in other people's heads
Are you accusing me of strawmanning? If you think people are being "not even wrong" then I didn't strawman you at all, I accurately described your position. Your strawman about science was the only one in this comment thread. And again there was no gish gallop, and I hope if nothing else you double check the definition of that term or something.
Now factor 21.
As with any quantum computing news, I will wait for Scott Aaronson to tell me what to think about this.
Why wait? Just go read the paper:
https://www.nature.com/articles/s41586-025-09526-6
In the last sentence of the abstract you will find:
"These results ... indicate a viable path to practical quantum advantage."
And in the conclusions:
"Although the random circuits used in the dynamic learning demonstration remain a toy model for Hamiltonians that are of practical relevance, the scheme is readily applicable to real physical systems."
So the press release is a little over-hyped. But this is real progress nonetheless (assuming the results actually hold up).
[UPDATE] It should be noted that this is still a very long way away from cracking RSA. That requires quantum error correction, which this work doesn't address at all. This work is in a completely different regime of quantum computing, looking for practical applications that use a quantum computer to simulate a physical quantum system faster than a classical computer can. The hardware improvements that produced progress in this area might be applicable to QEC some day, but this is not direct progress towards implementing Shor's algorithm at all. So your crypto is still safe for the time being.
> "These results ... indicate a viable path to practical quantum advantage"
I'll add this to my list of useful phrases.
Q: Hey AndrewStephens, you promised that task would be completed two days ago. Can you finish it today?
A: Results indicate a viable path to success.
An MBA, an engineer and a quantum computing physicist check into a hotel. Middle of the night, a small fire starts up on their floor.
The MBA wakes up, sees the fire, sees a fire extinguisher in the corner of the room, empties the fire extinguisher to put out the fire, then goes back to sleep.
The engineer wakes up, sees the fire, sees the fire extinguisher, estimates the extent of the fire, determines the exact amount of foam required to put it out including a reasonable tolerance, and dispenses exactly that amount to put out the fire, and then, satisfied that there is enough left in case of another fire, goes back to sleep.
The quantum computing physicist wakes up, sees the fire, observes the fire extinguisher, determines that there is a viable path to practical fire extinguishment, and goes back to sleep.
Meanwhile Schrodinger's cat sleeps peacefully in their carrier.
Not quite sure why all the responses here are so cynical. I mean, it's a genuinely difficult set of problems, so of course the first steps will be small. Today's computers are the result of 80 astonishing years of sustained innovation by millions of brilliant people.
Even as a Googler I can find plenty of reasons to be cynical about Google (many involving AI), but the quantum computing research lab is not one of them. It's actual scientific research, funded (I assume) mostly out of advertising dollars, and it's not building something socially problematic. So why all the grief?
I completed my degree in computer science at age 22 - at that time Shor had just published his famous algorithm and the industry press was filled with articles on how quantum computing was just a few years away with just a few technical hurdles yet to be solved.
I turned 50 years old this year, forgive an old man a few chuckles.
Charlie Brown, Lucy, football
Quantum advantage papers have a history of overpromising, this one looks interesting, but it would still seem wise to wait for a second opinion.
A consistent theme of quantum computing is setting up the problem so that the hardware performs nicely, to get a good news article, to get more funding.
I'm pretty reluctant to make any negative comments about these kinds of posts because it will prevent actually achieving the desired outcome.
Quantum computing hardware is still in its infancy.
The problem is not with these papers (or at least not with ones like this one) but with how they are reported. If quantum computing is going to succeed it needs to do the baby steps before it can do the big steps, and at the current rate the big leaps are probably decades away. There is nothing wrong with that; it's a hard problem and it's going to take time. But then the press comes in and reports that quantum computing is going to run a marathon tomorrow, which is obviously not true and confuses everyone.
Therein lies the problem. "Hey, can I have a few billion dollars for my baby?" doesn't really work out too well for investors or industry.
The current situation with "AI" took off because people learned their lessons from the last round of funding cuts (the "AI winter").
That being said, any pushback against funding quantum research would be like chopping your own hands off.
SO... BTC goes to zero?
I don’t see why bitcoin wouldn’t update its software in such a case. The majority of minors just need to agree. But why wouldn’t they if the alternative is going to zero?
How could updating the software possibly make a difference here? If the encryption is cracked, then who is to say who owns which Bitcoin? As soon as I try to transfer any coin that I own, I expose my public key, your "Quantum Computer" cracks it, and you offer a competing transaction with a higher fee to send the Bitcoin to your slush fund.
No software fix can change that. In theory, once an attack becomes feasible on the horizon, they could update to post-quantum encryption and offer the ability to transfer from old-style addresses to new-style addresses, but this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Fortunately this will never actually happen. It's way more likely that ECDSA is broken by mundane means (better stochastic approaches most likely) than quantum computing being a factor.
> this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Any rational economic actor would participate in a post-quantum hard fork because the alternative is losing all their money.
If this was a company with a $2 trillion market cap there'd be no question they'd move heaven-and-earth to prevent the stock from going to zero.
Y2K only cost $500 billion[1] adjusted for inflation and that required updating essentially every computer on Earth.
[1]https://en.wikipedia.org/wiki/Year_2000_problem#Cost
> would require all holders (not miners) to actively update their wallets. Basically infeasible.
It doesn't require all holders to update their wallets. Some people would fail to do so and lose their money. That doesn't mean the rest of the network can't do anything to save themselves. Most people use hosted wallets like Coinbase these days anyway, and Coinbase would certainly be on top of things.
Also, you don't need to break ECDSA to break BTC. You could also do it by breaking mining. The block header has a 32-bit nonce at the very end. My brain is too smooth to know how realistic this actually is, but perhaps someone could use a QC to perform the final step of SHA-256 on all 2^32 possible values of the nonce at once, giving them an insurmountable advantage in mining. If only a single party has that advantage, it breaks the Nash equilibrium.
But if multiple parties have that advantage, I suppose BTC could survive until someone breaks ECDSA. All those mining ASICs would become worthless, though.
Firstly I'd want to see them hash the whole blockchain (not just the last block) with the post-quantum algo to make sure history is intact.
But as far as moving balances - it's up to the owners. It would start with anybody holding a balance high enough to make it worth the amount of money it would take to crack a single key. That cracking price will go down, and the value of BTC may go up. People can move over time as they see fit.
As you alluded to, network can have two parallel chains where wallets can be upgraded by users asynchronously before PQC is “needed” (a long way away still) which will leave some wallets vulnerable and others safe. It’s not that herculean as most wallets (not most BTC) are in exchanges. The whales will be sufficiently motivated to switch and everyone else it will happen in the background.
A nice benefit is it solves the problem with Satoshi’s (of course not a real person or owner) wallet. Satoshi’s wallet becomes the defacto quantum advantage prize. That’s a lot of scratch for a research lab.
Not even needed; you can just copy the network state at a specific moment in time and encrypt it with a new algorithm that will be used from then on.
The problem is that the owner needs to claim their wallet and migrate it to the new encryption. Just freezing the state at a specific moment doesn't help; to claim the wallet in the new system I just need the private key for the old wallet (as that's the sole way to prove ownership). In our hypothetical post-quantum scenario, anyone with a quantum computer can get the private key and migrate the wallet, becoming the de-facto new owner.
I think this is all overhyped though. It seems likely we will have plenty of warning to migrate prior to achieving big enough quantum computers to steal wallets. Per wikipedia:
> The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates.
IIRC this is speculated to be the reason ECDSA was selected for Bitcoin in the first place.
Note, the 126 billion Toffoli gates are operations, so that's more about how many operations you need to be able to reliably apply without error.
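A rough timing sketch for that Toffoli count under assumed logical gate rates (the rates are made up; real numbers depend heavily on the error-correction scheme and hardware); at the faster rates it lines up with the "days to weeks per key" figure mentioned upthread:

    # How long 126 billion sequential Toffoli gates take at various assumed logical rates.
    toffoli_count = 126e9
    for rate_hz in (1e4, 1e5, 1e6):               # assumed Toffoli executions per second
        days = toffoli_count / rate_hz / 86400
        print(f"{rate_hz:.0e} Toffoli/s -> about {days:.0f} day(s)")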
It should be noted that according to IonQ's roadmap, they're targeting 2030 for computers capable of that. That's only about 5 years sooner than when the government has said everyone has to move to post quantum.
Sir Alexander Dane: MINERS, not MINORS.
"Ahhhh... now you tell me" (Formerly Prince Andrew, at some point).
That actually confused me. I thought he meant "the majority of the minority", while I was pretty sure it's just a simple majority.
> The majority of minors just need to agree.
That's an uncomfortably apt typo.
Hey, why are you bringing the kids into this! ;) "The majority of minors"
I'll tell you right now, no way my kids would agree until they're at least adults. They don't even know what asymmetric cryptography is.
I’m confused, are your kids major Bitcoin miners?
GGP used the term "minors," GP is running with the typo.
Not major miners, but minor miners (if you count Minecraft).
The problem is all the lost BTC wallets, which are speculated to be a lot, and are also one of the biggest reasons for the current BTC price; they obviously cannot be upgraded to PQ. There is currently a radical proposal of essentially making all those lost wallets worthless unless they migrate [1]
[1] - https://github.com/jlopp/bips/blob/quantum_migration/bip-pos...
I’m not sure there’s a better alternative.
No, I don't think so. By the time quantum supremacy is really achieved for a "Q-Day" that could affect them or things like them, the existing blockchains which have already been getting hardened will have gotten even harder. Quantum computing could be used to further harden them, as well, rather than compromise them. Supposing that Q-Day brought any temporary hurdles to Bitcoin or Ethereum or related blockchains, well...due to their underlying nature resulting in justified Permanence, we would be able to simply reconstitute and redeploy them for their functionalities because they've already been sufficiently imbued with value and institutional interest as well. These are quantum-resistant hardenings.
So I do not think these tools or economic substrate layers are going anywhere. They are very valuable for the particular kinds of applications that can be built with them and also as additional productive layers to the credit and liquidity markets nationally, internationally, and also globally/universally.
So there is a lot of institutional interest, including governance interest, in using them to build better systems. Bitcoin on its own would be reduced in such justification but because of Ethereum's function as an engine which can drive utility, the two together are a formidable and quantum-resistant platform that can scale into the hundreds of trillions of dollars and in Ethereum's case...certainly beyond $1Q in time.
I'm very bullish on the underlying technology, even beyond tokenomics for any particular project. The underlying technologies are powerful protocols that facilitate the development and deployment of Non Zero Sum systems at scale. With Q-Day not expected until end of 2020s or beginning of 2030s, that is a considerable amount of time (in the tech world) to lay the ground work for further hardening and discussions around this.
No, not really. PQC has already been discussed in pretty much every relevant crypto context for a couple of years already, and there are multiple PQC algorithms ready to protect important data in banking etc. as well.
I don’t really understand the threat to banking. Let’s say you crack the encryption key used in my bank between a Java payment processing system and a database server. You can’t just inject transactions or something. Is the threat that internal network traffic could be read? Transactions all go to clearing houses anyway. Is it to protect browser->webapp style banking? Those all use EC by now anyway, and even if they don’t, how do you MITM this traffic?
Where is the exact threat?
> those all use ec by now anyway
As far as I am aware, elliptic curve is also vulnerable to quantum attacks.
The threat is generally both passive eavesdropping to decrypt later and also active MITM attacks. Both of course require the attacker to be in a position to eavesdrop.
> Let’s say you crack the encryption key used in my bank between a java payment processing system and a database server.
Well if you are sitting in the right place on the network then you can.
> how do you mitm this traffic?
Depends on the scenario. If you are a government or an ISP then it's easy. Otherwise it might be difficult. Typical real-life scenarios are when the victim is using wifi and the attacker is in the physical vicinity.
Like all things crypto, it always depends on context. What information are you trying to protect, and who are you trying to protect it from?
All that said, people are already experimenting with PQC, so it might mostly be moot by the time a quantum computer comes around. On the other hand, people are still using MD5, so legacy will bite.
> Well if you are sitting in the right place on the network then you can.
Not really. This would be caught, if not instantly then when a batch goes for clearing or reconciliation, and an investigation would be started immediately.
There are safeguards against this kind of thing that can't really be defeated by breaking some crypto. We have to protect against malicious employees etc. as well.
One cannot simply insert bank transactions like this. These are really extremely complicated flows.
I meant that on a technical level you could insert the data into the network. Obviously, if the system as a whole does not depend on TLS for security, then no amount of breaking TLS will impact it.
Flooding the system with forged messages that overwhelm the clearinghouse having to verify them sounds like a good way to bring down a banking system.
Sure, if a bank gets compromised you could in theory DoS a clearing house, but I'd be completely amazed if it succeeded. That kind of anomalous spike would be detected quickly. And that's not even counting that each bank probably has dedicated instances inside each clearing house.
These are fairly robust systems. You'd likely have a much better impact DoSing the banks.
Yah, I suspect the banks pay a handsome sum to smarter people than you and me, and they've gamed this out already.
I build such systems ;)
The big threat is passively breaking TLS, so it’s browser traffic. Or, any internet traffic?
Okay, but breaking that TLS (device->bank) would allow you to intercept the session keys and then decrypt the conversation. Alright, so now you can read that I logged in and booked a transaction to my landlord or whatever. What else can you do? The OTP/2FA code prevents you from re-using my credentials. Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
So we are all in a collective flap that someone can see my bank transactions? These are pretty much public knowledge to governments/central banks/clearing houses anyway -- doesn't seem like all that big a deal to me.
(I work on payment processing systems for a large bank)
> Has it been demonstrated at all that someone who intercepts a session key is able to somehow inject into a conversation? It seems highly unlikely to me with TCP over the internet.
If you can read the TLS session in general, you can capture the TLS session ticket and then use that to make a subsequent connection. This is easier, as you don't have to be injecting packets live or making inconvenient packets disappear.
It seems like detecting a re-use like this should be reasonably easy, it would not look like normal traffic and we could flag this to our surveillance systems for additional checks on these transactions. In a post quantum world, this seems like something that would be everywhere anyway (and presumably, we would be using some other algo by then too).
Somehow, I'm not all that scared. Perhaps I'm naive.. :}
> It seems like detecting a re-use like this should be reasonably easy, it would not look like normal traffic
I don't see why it wouldn't look like normal traffic.
> Somehow, I'm not all that scared. Perhaps I'm naive.. :}
We're talking about an attack that probably won't be practical for another 20 years, which already has countermeasures that are in testing right now. Almost nobody should be worried about it.
No, we're still not much closer to that event.
If quantum computers crack digital cryptography, traditional bank accounts go to zero too, because regular ol' databases also use cryptographic techniques for communication.
If all else fails, banks can generate terabytes of random one-time pad bytes, and then physically transport those on tape to other banks to set up provably secure communication channels that still go over the internet.
It would be a pain to manage but it would be safe from quantum computing.
They could also use pre-shared keys with symmetric cryptography. AES-256-GCM is secure against quantum attack, no need to bother with one-time pads.
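A minimal sketch of that pre-shared-key idea (using the Python 'cryptography' package; the key handling and messages are purely illustrative): the key is exchanged out of band, so there is no public-key handshake for Shor to attack, and Grover only halves the effective symmetric key length.

    # Pre-shared-key AES-256-GCM: no public-key handshake involved.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    psk = AESGCM.generate_key(bit_length=256)    # in practice: generated and shared offline
    aead = AESGCM(psk)

    nonce = os.urandom(12)                       # must never repeat for the same key
    ct = aead.encrypt(nonce, b"wire transfer batch", b"bank-to-bank")
    assert aead.decrypt(nonce, ct, b"bank-to-bank") == b"wire transfer batch"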
Let's say I give you a function you can call to crack any RSA key. How are you hacking banks?
https://arxiv.org/abs/2509.07255
This paper on verifiable advantage is a lot more compelling, with Scott Aaronson and Quantinuum among other great researchers.
I will wait for a HN commenter to tell me what Scott Aaronson thinks about this.
this is my approach, as well, lol
I'm waiting for Peter Gutmann[1] to tell me what to think about this.
[1] https://eprint.iacr.org/2025/1237
[flagged]
[flagged]
[flagged]
[flagged]
[flagged]
teamwork.
As with most news, I'll be waiting for Scott Adams to tell me what to think about this
The text adventure guy, or the cartoonist who went batshit?
Aaronson isn't a cartoonist. It was an AI cartoon from ChatGPT that an antisemite sent Aaronson in the mail, which he then seemingly maliciously misattributed to Woit, making people assume Aaronson went batshit.
https://scottaaronson.blog/?p=9098
Aaronson did work at OpenAI, but not on image generation; maybe you could argue the OpenAI safety team he worked on should be involved here, but I'm pretty sure image generation came after his time. And even if he did work directly on image generation under NDA or something, attributing that cartoon to Aaronson would be like attributing a cartoon made in Photoshop by an antisemite to a random Photoshop programmer, unless he maliciously added antisemitic images to the training data or something.
The most charitable interpretation that I think Aaronson also has offered is that Aaronson believed Woit was an antisemite because of a genocidal chain of events that in Aaronson's belief would necessarily happen with a democratic solution and that even if Woit didn't believe that that would be the consequence, or believed in democracy deontologically and thought the UN could step in under the genocide convention if any genocide began to be at risk of unfolding, the intent of Woit could be dismissed, and Woit could therefore be somehow be lumped in with the antisemite who sent Aaronson the image.
Aaronson's stated belief is also that any claim that Israel was committing a genocide in the last few years is a blood libel, because he believes the population of Gaza is increasing and it can't be a genocide unless there is a population decrease during the course of it. This view of Aaronson's would imply things like: if every male in Gaza were sterilized, and the UN stepped in and stopped it as a genocide, it would be a blood libel to call that a genocide so long as the population didn't decrease during the course of it, even if it did decrease afterwards. But maybe he would clarify that it could include decreases that happen as a delayed effect of the actions. These kinds of strong beliefs about blood libel are, I think, part of why he felt OK labeling the comic with Woit's name.
I also don't think that if the population does go down, or has been going down, he will say it was from a genocide; rather, populations can go down from war. He's only proposing that a population decrease is a necessary criterion of genocide, not a sufficient one. I definitely don't agree with him: to me, if Hamas carried out half of an Oct 7 every day, it would clearly be a genocide even if that brought the replacement rate to 1.001, and it wouldn't change anything if it brought it to 0.999.
I think you're very, very confused.
> guywithahat [...] I'll be waiting for Scott Adams to tell me what to think about this
Scott Adams
Text adventure guy: https://en.wikipedia.org/wiki/Scott_Adams_(game_designer)
Batshit cartoonist: https://en.wikipedia.org/wiki/Scott_Adams
(also, for fun, a cartoon by Scott Aaronson and Zack Weinersmith: https://www.smbc-comics.com/comic/the-talk-3)
I meant Scott Adams, the creator of Dilbert. The joke is just that they have similar names, and Adams does a lot of political/topical commentary in both his comics and podcast but isn't a good source since a lot of his work is comedy focused.
I am unaware of any comics Aaronson made, and I don't blame him for anything he did make or was loosely associated with. It is incredible the lengths people are willing to go to claim someone is crazy, though, both in regards to Adams and Aaronson.
Oh, the similar names threw me off just the same. It wasn't a comic he made but a recent event on his blog involving an antisemitic ChatGPT illustration.
We can only hope.
Finally, some good news amid the crypto madness!
Classic HN, always downvoting every comment concerning decentralized and p2p currencies. How do you like your centralized, no-privacy, mass-surveilled, worthless banking system?
I don't consider the current system to be worthless. In fact, it functions remarkably well. There is certainly room for additional substrate layers though, and Bitcoin as digital or electronic gold and Ethereum as an e-steam engine or e-computer make for a powerful combination. I agree that the crowd here has historically not understood, or wanted to understand, the underlying protocols and what is possible. A bizarre kind of hubris perhaps, or maybe just a response to how the first iterations of a web2.5 or web3.0 were...admittedly mired in a kind of marketing hype that did not reflect what is possible and sustainable in the space, because there wasn't realistic web and engineering muscle at the forefront of the hype.
I think this current cycle is going to change that though. The kinds of projects spinning up are truly massive, innovative, and interesting. Stay tuned!
It's not about how well it functions, bot. Go spread your propaganda somewhere else
What is it about then? I'm not spreading propaganda - I'm maximally truth seeking and approaching things from a technical/economic/governance point of view and not an ideological one per se. Though ideology shapes everything, what I mean is that I'm not ideologically predisposed towards a conclusion on things. For me what matters is the core, truthy aspects of a given subject.
Lol
Before the mega-monopolies took over, corps used to partner with universities to conduct this kind of research. Now we have bloated salaries, rich corporations, and expensive research, while graduates are under-experienced. These costs will get passed on to the consumer. The future won't have a lot of the things we have come to expect.
Funnily enough I remember reading a comment a week or two ago decrying the death of corporate research labs like Bell Labs and Xerox PARC
From the article:
> in partnership with The University of California, Berkeley, we ran the Quantum Echoes algorithm on our Willow chip...
And the author affiliations in the Nature paper include:
Princeton University; UC Berkeley; University of Massachusetts, Amherst; Caltech; Harvard; UC Santa Barbara; University of Connecticut; MIT; UC Riverside; Dartmouth College; Max Planck Institute.
This is very much in partnership with universities and they clearly state that too.
Thanks. I could not find any mention of it. This is good.
Did you bother looking at the author list? This is a grand statement that's easily refuted by just looking.
Maybe you're thinking specifically of LLM labs. I agree this is happening there, but I wouldn't be as dramatic. Everywhere else, university-corporation/government lab partnerships are still going very strong.
Nihilism is too trendy right now.
Nihilism is one response to disillusionment.
Another response is to come to terms with a possibly meaningless and Sisyphean reality and to keep pushing the boulder (that you care about) up the hill anyway.
I’m glad the poster is concerned and/or disillusioned about the hype, hyperbole and deception associated with this type of research.
It suggests he still cares.
I don't think it's accurate to attribute some kind of altruism to these research universities. Have a look at some of those pay packages or the literal hedge funds that they operate. And they're mostly exempt from taxation.
> Before the mega-monopolies took over, corps used to partner with universities to conduct this kind of research. Now we have bloated salaries, rich corporations, and expensive research, while graduates are under-experienced. These costs will get passed on to the consumer. The future won't have a lot of the things we have come to expect.
I don't disagree, but these days I'm happy to see any advanced research at all.
Granted, too often I see the world through HN-colored glasses, but it seems like so many technological achievements are variations on getting people addicted to something in order to show them ads.
Did Bellcore or Xerox PARC do a lot of university partnerships? I was into other things in those days.
the big problem with quantum advantage is that quantum computing is inherently error-prone and stochastic, but then they compare to classical methods that are exact
let a classical computer use an error-prone stochastic method and it still blows the doors off of QC
this is a false comparison
Stochasticity (randomness) is pervasively used in classical algorithms that one compares to. That is nothing new and has always been part of comparisons.
"Error prone" hardware is not "a stochastic resource". Error prone hardware does not provide any value to computation.
Yes the claims here allow the classical computer to use a random number generator.
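For illustration (my own toy example, not from the paper), this is the sort of randomized classical method those comparisons already permit:

    # Toy Monte Carlo estimate of pi: a classical algorithm that is stochastic
    # by design and still counts as a fair classical baseline.
    import random

    def estimate_pi(samples: int = 1_000_000) -> float:
        hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
                   for _ in range(samples))
        return 4 * hits / samples

    print(estimate_pi())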
They get the same result when they run it a second time and it matches the classical result; this is their key achievement (in addition to the speed).
Afaik we are a decade or two away from quantum supremacy. All the AI monks forget that if AI is the future, quantum supremacy is the present. And whoever controls the present decides the future.
Remember, it is not about general quantum computing, it's about implementing the quantum computation of Shor's algorithm.
But much like AI hype, quantum hype is also way overplayed. Yeah, modern asymmetric encryption will be less secure, but even after you have quantum computers that can run Shor's algorithm, it might be a while before there are quantum computers affordable enough for it to be an actual threat (i.e. the point where it's not cheaper to just buy a zero-day for the target's phone or something).
But since we already have post-quantum algorithms, the end state of cheap quantum computers is just a new equilibrium where people use the new algorithms and they can't be directly cracked, and it's basically the same, except maybe you can decrypt historical stuff, but who knows if that's worth it.
I was mostly talking about state actors buying quantum computers built just to run that particular algorithm and using them to snoop on poorer countries who cannot afford such tech. Plus, all countries will have plenty of systems that have not been quantum-proofed. The window from when it becomes affordable to one state actor until most state actors have access to such tech is likely to be a long one, especially since no one will admit to possessing such tech. And unlike nuclear weapons, it is much harder to prove whether or not someone has one.
Notice how they say "quantum advantage", not "supremacy", and "a (big) step toward real-world applications". So actually just another step, as always. And I'm left to doubt whether the classical algorithm used for comparison was properly optimised.
FWIW "quantum advantage" and "quantum supremacy" are synonyms, some people just prefer the former because the latter reminds them of "white supremacy" https://en.wikipedia.org/wiki/Quantum_supremacy#Criticism_of...
Good point, I had no idea. I actually thought they just restrained themselves this time :D
I skimmed the paper so I might have missed something, but iiuc there is no algorithm used for comparison. They did not run some well-defined algorithm on benchmark instances; they estimated the cost of simulating the circuits "through tensor network contraction" - I quote here not to scare but because this is where my expertise runs out.
Quantum is the new AI. It's just the new hype cycle of doom.
The barrier to entry seems a little higher than AI so it's at least a little limited in scope
Negative points? Is no one else tired of hype outweighing the science?
And Bitcoin is still 108k… why exactly?
> In physics, a quantum (pl.: quanta) is the minimum amount of any physical entity (physical property) involved in an interaction.
Not a big leap then.
I see what you did there. Shame a lot of other people didn't.