Strilanc's comments

> By late 2024 the biggest numbers that had been factored by an actual digital quantum computer had 35 bits (citing https://arxiv.org/pdf/2410.14397v1 )

This is incorrect. The cited reference says "N <= 35". That N is the number being factored, not the number of bits in the number. Also, footnote a of that paper points out (correctly) that the circuits that were used likely needed knowledge of the factors to create (e.g. as explained in https://arxiv.org/abs/1301.7007 ). As far as I know, only N=15 has been factored on a quantum computer in a no-shenanigans way.

It's conceivable that current ion trap machines could do a no-shenanigans N=21.... but anyone judging progress in quantum computing by largest-number-factored is looking at the wrong metric (for now). You won't see that metric move meaningfully until quantum error correction is done spinning up.


Its fame comes from the simplicity of its construction rather than its utility elsewhere in mathematics.

For example, Graham's number is pretty famous but it's more of a historical artifact rather than a foundational building block. Other examples of non-foundational fame would be the famous integers 42, 69, and 420.


Yes, speed matters. No, quantum computers can't do everything instantly even with unbounded qubits.

A well studied example is that it's impossible to parallelize the steps in Grover's algorithm. To find a preimage amongst N possibilities, with only black box access, you need Ω(sqrt(N)) sequential steps on the quantum computer [1].
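For a feel of the numbers (a toy calculation of mine, not from the paper): per that bound, splitting the search across k machines still leaves each machine needing ~sqrt(N/k) sequential Grover iterations, so k machines only buy a ~sqrt(k) wall-clock speedup.

  import math

  # Toy illustration of the parallelization limit from [1]: k machines each
  # search N/k possibilities, needing ~sqrt(N/k) sequential Grover iterations.
  N = 2**128  # e.g. searching for a 128-bit preimage
  for k in (1, 10**6, 10**12):
      steps = math.isqrt(N // k)
      print(f"{k:>13} machines -> ~2^{steps.bit_length() - 1} sequential steps each")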

Another well known case is that there's no known way to execute a fault tolerant quantum circuit faster than its reaction depth (other than finding a rewrite that reduces the depth, such as replacing a ripple carry adder with a carry lookahead adder) [2]. There's no known way to make the reaction depth small in general.

Another example is GCD (greatest common divisor). It's conjectured to be an inherently sequential problem (no polylog depth classical circuit) and there's no known quantum circuit for GCD with lower depth than the classical circuits.
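To make the "inherently sequential" intuition concrete, here's plain Euclid in Python (just an illustration; the conjecture is about circuit depth, not this particular code): each iteration needs the previous remainder before it can start, so the steps form a long dependency chain.

  def gcd(a: int, b: int) -> int:
      # Each loop iteration consumes the remainder produced by the previous one,
      # which is the kind of dependency chain that resists low-depth circuits.
      while b:
          a, b = b, a % b
      return a

  print(gcd(252, 105))  # 21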

[1]: https://arxiv.org/abs/quant-ph/9711070

[2]: https://arxiv.org/abs/1210.4626


Author here: yes that's all correct.

This is perhaps not clear enough, but the title refers to a pattern. For classical bits on a quantum computer this pattern is already playing out (as shown in the cited experiments), and for quantum bits I think it's about to play out.

Classical storage of classical bits is still far more reliable, of course. Hell, a rock chucked into one bucket or another is still more reliable. We'll never beat the classical computer at storing classical bits... but the rock in a bucket has some harsh competition coming.

I should maybe also mention that arbitrarily good qubits are a step on the road, not the end. I've seen a few twitter takes making that incorrect extrapolation. We'll still need hundreds of these logical qubits. It's conceivable that quantity also jumps suddenly... but that'd require even more complex block codes to start working (not just surface codes). I'm way less sure if that will happen in the next five years.


I don’t really expect fancier codes to cause a huge jump in the number of logical qubits. At the end of the day, there’s some code rate (logical bits / physical bits) that makes a quantum computer work. The “FOOM” is the transition of that code rate from zero (the lifetime of a logical bit is too short) to something distinctly different from zero (the state lasts long enough to be useful when protected by some credible code). Say the code rate is 0.001 when this happens. (I haven’t been in the field for a little while, but I’d expect higher, because those huuuuge codes have huuuuge syndromes, which isn’t so fun. But if true topological QC ever works, it will be a different story.) The code rate is unlikely to ever be higher than 1/7 or so, and it will definitely not exceed 1. So there’s at most a factor of 1000, and probably less, to be gained by improving the code rate. This isn’t an exponential or super-exponential FOOM.

A factor of 1000 may well be the difference between destroying Shor’s-algorithm-prone cryptography sooner and destroying it later, though.


I'll add some nuance here. In a classical computer, computing with coded bits is not particularly difficult. We've known how to do it with some degree of mathematical rigor for decades -- IIRC John von Neumann was interested in this topic. And we barely even need to do it explicitly: computers have accurate enough gates without explicit coding that merely error-correcting RAM (ECC-style) and data links (Ethernet, PCIe, etc) is good enough for almost all applications. Even in aerospace, usually we just have one extra majority vote over a few different computers.
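As a toy illustration of that last classical trick (not any particular avionics system), a 2-of-3 bitwise majority vote is enough to outvote a single faulty replica:

  def majority_vote(a: int, b: int, c: int) -> int:
      # Each output bit is whatever at least two of the three inputs agree on.
      return (a & b) | (a & c) | (b & c)

  # One flipped bit in a single replica gets outvoted by the other two copies.
  print(hex(majority_vote(0xDEADBEEF, 0xDEADBEEF, 0xDEADBEEF ^ 0x10)))  # 0xdeadbeef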

Quantum computers are different. It seems quite unlikely that anyone will build the equivalent of, say, an XOR gate that takes two single physical qubits in and spits out two physical qubits (this is quantum land -- the standard gates neither create nor destroy qubits, so the number of inputs and outputs is the same) that works well enough to actually represent that particular operation in whatever software is being run. Instead, each logical operation will turn into multiple physical operations that work like an error-correcting code. The easy classical trick where your transistor is janky at 0.9V so you run it at 1.0V amounts to moving more electrons around per operation; this approach is analogous to correcting bit flips but not phase errors, and it makes your quantum computer stop being quantum.

And here's where it gets messy. The physical qubit technologies that are best for longish-term data storage may not be the same as the technologies that are good for computation, and those may not be the same technologies that are good for communication at a distance. (For example, photons are pretty good for transmitting quantum states, but transferring a qubit from a different technology to a photon state and back is not so easy, and demonstrations of computation with photons have been pretty limited.) As an extreme example, one can, in principle, store quantum states quite robustly and even compute with them if one can find the correct kind of unobtanium (materials with the appropriate type of non-Abelian anyon surface states), but, last I heard, no one had much of an idea how to get the qubit states off the chip even if such a chip existed.

So it is possible that we'll end up with a quantum computer that doesn't scale, at least for a while. There might be 20k physical qubits, and some code rate, and some number of logical quantum operations you can do on the logical qubits before they decay or you get bored, and very little ability to scale to more than one computer that can split up a computation between them. In that case, the code rate is a big deal.


But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical? I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits). Happy to be proven wrong of course.


> But you still believe that quantum computers have a likelihood of being possible to build AND that they can accomplish a task faster than classical?

Not GP but yes. I'm reasonably confident that we will have quantum computers that are large and stable enough to have a real quantum advantage, but that's mostly because I believe Moore's law is truly dead and we will see a plateau in 'classical' CPU advancement and memory densities.

> I feel like it’s going to get exponentially harder and more expensive to get very small incremental gains, and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and the difficulty of manufacturing a computer with a large number of qubits)

I don't think people appreciate or realize that a good chunk of the innovations necessary to "get there" with quantum are traditional (albeit specialized) engineering problems, not new research (though breakthroughs can speed it up). I'm a much bigger fan of the "poking lasers at atoms" style of quantum computer than the superconducting ones for this reason: the engineering is more like building cleaner lasers and better AOMs [0] than trying to figure out how to supercool vats of silicon and copper. It's outside my area of expertise, but I would expect innovations that support better lithography to also benefit these types of systems, though less directly than the superconducting ones.

Source: I worked on hard-realtime control systems for quantum computers in the past. Left because the academic culture can be quite toxic.

[0]: https://en.wikipedia.org/wiki/Acousto-optic_modulator


I don’t know how people claim the science is solved and “it’s just engineering” when scaling up to non-trivial quantum circuits is literally the problem no one has solved, and hand-waving it away as an “engineering problem” seems really disingenuous. Foundational science needs to be done to solve these problems.

Classical CPUs have slowed but not stopped. More importantly, quantum machines haven’t even been built yet, let alone been proven possible to scale up arbitrarily. They haven’t even demonstrated that they can factor 17 faster than a classical computer.


Factoring will be okay for tracking progress later; it's just a bad benchmark now. Factoring benchmarks have little visibility into fault tolerance spinning up, which is the important progress right now. Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.


> Factoring becoming a reasonable benchmark is strongly related to quantum computing becoming useful.

Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?

I think you are the author of the blogpost I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?


Under a Moore's law-like scenario, factoring 21 happens only about ~5 years before factoring a 1024-bit number. With all the optimizations, factoring an n-bit number only requires ~n logical qubits, but most of those optimizations only work for large n, so 21 is only ~5 doublings away from a 1024-bit number.

The other problem is that factoring 21 is so easy that it actually makes it harder to prove you've factored it with a functional quantum computer. For big numbers, your program can fail 99% of the time, because getting the right result even once proves the algorithm worked (the factors can be checked classically). 21 is small enough that it's hard not to factor, so demonstrating that you've factored it with a quantum computer is fairly hard. I wouldn't be surprised, as a result, if the first number publicly factored by a quantum computer (using error correction) was in the thousands instead of 21. By using a number that is not absolutely tiny, it becomes a lot easier to show that the system works.


This is a type that I would use a lot.

For example, I often write classes that do cacheable analysis that results in a dict (e.g. the class stores a list of tiles defined by points and users want a point-to-tiles mapping for convenience). It's worth caching those transformations, e.g. using @functools.cached_property, but this introduces a risk that any caller could ruin the returned cached value by editing it. Currently, I take the safety hit (cache a mutable dict) instead of the performance hit (make a new copy for each caller). Caching a frozendict would be a better tradeoff.
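Roughly the pattern I mean (a made-up sketch, with types.MappingProxyType standing in because there's no stdlib frozendict today):

  import functools
  from types import MappingProxyType

  class TileSet:
      def __init__(self, tiles):
          # tiles: iterable of (tile_id, points) pairs
          self.tiles = list(tiles)

      @functools.cached_property
      def point_to_tiles(self):
          mapping = {}
          for tile_id, points in self.tiles:
              for p in points:
                  mapping.setdefault(p, []).append(tile_id)
          # A read-only view keeps callers from corrupting the cached value;
          # a real frozendict (immutable *and* hashable) would fit even better.
          return MappingProxyType({p: tuple(ids) for p, ids in mapping.items()})

  ts = TileSet([("A", [(0, 0), (0, 1)]), ("B", [(0, 1)])])
  print(ts.point_to_tiles[(0, 1)])  # ('A', 'B')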


Maybe you should take a look at pyrsistent. That allows you to make frozen “maps”. You can “evolve” them into new versions of the objects and it keeps the references to the unchanged keys and values under the hood so it’s fast and memory efficient.
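If memory serves, basic usage looks something like this (treat the exact API as approximate):

  from pyrsistent import pmap

  m1 = pmap({"a": 1, "b": 2})
  m2 = m1.set("c", 3)      # returns a new map; m1 is untouched
  ev = m2.evolver()        # batch several updates without copying each time
  ev["d"] = 4
  m3 = ev.persistent()
  print(len(m1), len(m3))  # 2 4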


Well, for example, consider this recent study that claimed developers using AI tools take 19% longer to finish tasks [1].

This was their methodology:

> we recruited 16 experienced developers from large open-source repositories (averaging 22k+ stars and 1M+ lines of code) that they’ve contributed to for multiple years. Developers provide lists of real issues (246 total) that would be valuable to the repository—bug fixes, features, and refactors that would normally be part of their regular work. Then, we randomly assign each issue to either allow or disallow use of AI while working on the issue.

Now consider the question of whether you expect this research to generalize. Do you expect that if you / your friends / your coworkers started using AI tools (or stopped using AI tools) that the difference in productivity would also be 19%? Of course not! They didn't look at enough people or contexts to get two sig figs of precision on that average, nor enough to expect the conclusion to generalize. Plus the AI tools are constantly changing, so even if the study was nailing the average productivity change it would be wrong a few months later. Plus the time period wasn't long enough for the people to build expertise, and "if I spend time getting good at this will it be worth it" is probably the real question we want answered. The study is so weak that I don't even feel compelled to trust the sign of their result to be predictive. And I would be saying the same thing if it reported 19% higher instead of 19% lower.

I don't want to be too harsh on the study authors; I have a hard time imagining any way to do better given resource constraints and real world practicalities... but that's kind of the whole problem with such studies. They're too small and too specific and that's really hard to fix. Honestly I think I'd trust five anecdotes at lunch more than most software studies (mainly because the anecdotes have the huge advantage of being from the same context I work in). Contrast with medical studies where I'd trust the studies over the anecdotes, because for all their flaws at least they actually put in the necessary resources.

To be pithy: maybe we upvote Carmack quotes more than software studies because Carmack quotes are informed by more written code than most software studies.

[1]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...


Taking issues like that into account when reading critically is great and essential. Dismissing ideas on that basis (often done on HN generally, even for large medical studies) is intellectually lazy, imho:

Life is full of flaws and uncertainty; that is the medium in which we swim and breathe and work. The solution is not to lie at the bottom until the ocean becomes pure H2O; the trick is to find value.


> I don't want to be too harsh on the study authors

Well, I'll do it for you. There is a lot of attention-grabbing bull*it out there. For example, I've seen a study on LinkedIn claiming that 60% of Indians use AI daily in their jobs, and only 10% of Japanese. You can guess who did it; very patriotic, but far from reality.


> The precision in phase needed to perform an accurate QFT scales EXPONENTIALLY with the number of qubits you're trying to transform.

This is false.

The gates that appear in the textbook QFT circuit (such as the one shown on wikipedia [1]) do mention angles that are exponentially small in N (the number of qubits being operated upon). That may be what's confusing you. But it's well known that the tolerance on those rotations is high, meaning that simply skipping all the exponentially tiny rotations introduces negligible error [2][3].

Here's a simple model. Each time you get a rotation off by an angle of X, add X to the "total algorithm rotation error" R. The chance of an algorithm failing is at most R^2. For example, if R is less than 1 degree then the chance of algorithm failure will be less than 0.03%. That's an acceptable retry chance for Shor's algorithm. The QFT circuit on N qubits performs less than N^2 rotations. So, for R to be less than 1 degree, it's sufficient for each rotation's error X to be less than (1°)/N^2. Therefore the required precision only increases polynomially (like N^2) instead of exponentially (like 2^N). Note the required precision can be improved from O(1/N^2) to O(1/N) using techniques like the qubit recycling QFT [4].
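Plugging numbers into that model (a back-of-the-envelope sketch of mine, not taken from the references):

  import math

  def per_rotation_tolerance(n_qubits, target_failure=3e-4):
      # Failure chance is at most R^2, with R the total rotation error in radians,
      # so budget R = sqrt(target_failure) (~1 degree for a 0.03% failure chance)
      # and split it across the < n^2 controlled rotations in the textbook QFT.
      budget = math.sqrt(target_failure)
      rotations = n_qubits * (n_qubits - 1) // 2
      return budget / rotations

  # ~8e-9 radians per rotation for a 2048-qubit QFT: polynomial in n, not 1/2^n.
  print(per_rotation_tolerance(2048))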

Actually, even if the required precision scaled exponentially, that still wouldn't be an insurmountable problem. Quantum error correction achieves exponentially tighter tolerances from polynomially increasing resources. For example, Ross and Selinger proved that continuous rotations can be approximated to a target error of epsilon using O(log(1/epsilon)) gates from the discrete gate set Clifford+T [4]. And Clifford gates with error below epsilon can be achieved in the surface code using O(log(1/epsilon)^2) noisy qubits for O(log(1/epsilon)) time [5]. And T gates can be achieved by using those reliable Clifford gates to perform magic state distillation of log(1/epsilon)^O(1) T states [6]. Since everything scales polynomially in log(1/epsilon), making epsilon exponentially smaller adds polynomial resources.

There is no part of Shor's algorithm that requires resources growing exponentially in n (the number of digits of the number being factored). The practical scaling is more like n^3: factoring a number that's twice as large can be done with ~two times as many qubits running for ~four times as long. Even if the qubits are noisy [7].

[1]: https://en.wikipedia.org/wiki/Quantum_Fourier_transform#/med...

[2]: https://arxiv.org/abs/quant-ph/0306018

[3]: https://arxiv.org/abs/quant-ph/9601018

[4]: https://arxiv.org/pdf/quant-ph/9903071#page=12

[5]: https://arxiv.org/abs/1208.0928

[6]: https://arxiv.org/abs/1209.2426

[7]: https://arxiv.org/abs/1905.09749


I saw a commercial once where the joke was a guy asking a girl for her IP address instead of her phone number. They went with 127.0.0.1, the loopback address. So (at least in my eyes) there was the extra unspoken joke of her essentially telling the guy to go f*#$ himself.


> the people writing the standard are not exactly known for adding features “just because”

Ah yes, C++, the discerning language.

Iterating over optional does seem syntactically convenient. My main question would be whether it guarantees no overhead. For example, is there an additional conditional branch due to the iterator hiding the statically known fact that there's at most one iteration? I don't use C++ for its beauty, I use it for speed.


No, the generated code seems to be mostly the same as the manual version: https://gcc.godbolt.org/z/aK8orbKE8

The main difference there seems to be that GCC treats the if() as unlikely to be taken, while treating the for() as likely.


> Ah yes, C++, the discerning language.

C++, the language that refused to add `contains()` to maps until, you know, C++20!


They didn't stop there: C++23 brought contains() to strings too! You can do some crazy stuff nowadays, like if (username.contains("hello"))

Absolutely incredible, definitely worth the wait


Note that, despite this work, in C++ you can't flip the script to if ("hello".contains(username)), unlike Rust.

Rust will also let you: "FOO".contains(char::is_uppercase)

That's showing off three clever tricks Rust has that C++ does not. But one day that call will just be compile-time evaluated as true, and that's one place C++ is richer today. Many other things in Rust can be, but Rust's traits aren't currently allowed to be compile-time evaluated, so the fact that we could determine at compile time whether there's an uppercase letter in FOO isn't enough yet. One day.


I mean you could make that work in C++ too, with ref qualifier method overloads:

  namespace std {

  class string {
   public:

     // Called on lvalues: checks whether 'other' is inside 'this'.
     bool contains(const string& other) & {
         // ...
     }

     // Called on rvalue temporaries like "bar"s: checks whether 'this' is inside 'other'.
     bool contains(const string& other) && {
         // ...
     }
  };

  }

  using namespace std::string_literals;

  int main() {

     std::string foo("foo bar foo");

     foo.contains("bar"); // returns true

     "bar"s.contains("foo bar foo"); // returns true
  }
But I hope not, because this 'flipping the script' behavior just makes the code flow more difficult to reason about.


No, I guess you missed the point. Rust's contains isn't somehow magically upside down like your weird C++ example, but it does work on the actual built-in string literal, whereas C++ can't do that because its built-in string literal is the weird C string literal that's a zero-terminated char array for some reason.

Notice how you needed to write "bar"s to make your code compile? That trailing 's' says you specifically want to construct a std::string, not use the language's built-in string literal. So you'll need a full-blown hosted C++ environment (otherwise there's no allocator for the string class).

The Rust string literal is inherently a &'static str, which only needs the core language, so it Just Works™.

There's a weird asymmetry: Bjarne wants user-defined types to have all the features the built-in types have, but not vice versa. So you can overload the short-circuiting boolean operators on your own types (don't, this is basically always a bad idea) but you can't call methods on built-in literals like "foo" or true or 'X'...


Pedantically, C++ just needs std::string_view for contains(), not a full std::string. No need for an allocator.

https://en.cppreference.com/w/cpp/string/basic_string_view/c...

With the string literal suffixes, I don't think it's all that different ("hello"sv.contains(username)). Yes, yes, for legacy reasons bare "" is the C-style string, but you can get a string_view string easily enough.


I agree that it's not "all that different". It's just that years later what C++ offers is slightly worse while having more awkward syntax.

Exactly like the article topic. Iterating over your maybe type was IMO an obvious thing you'd want, but it took C++ a decade to provide this behaviour in a slightly awkward way.

And likewise for their maybe reference, which C++26 also finally lands. The original discussion papers for std::optional explain the core ideas for std::optional<T&>, but apparently the possibility that it's an assign-through type (a thing nobody has ever wanted, and no other language has ever provided) was so important that it blocked all work on this idea for about a decade.


This gave me a good chuckle. For those wondering, `count` was the go-to and I can tolerate an argument that `contains` is a negligible improvement.

