Cool project, but... this is an egregious misrepresentation of the actual results, both in terms of significance and accuracy.
A. No validation is done on the server side to confirm the workers are reporting correct results.
B. Increasing the limit by less than a thousandth of a percent does not make this a "world record"! If we go by that logic, I only have to validate one more example than you to claim a world record. And then you'd do the same. And then I'd do the same, and we'd be playing "world record" ping-pong all day!
But "B" isn't the big problem here, because we have a worse problem: "A"! Nobody (not even the OP) can tell if the results are accurate!
No, I'm not simply dissing a Show HN post. There are many comments here that explain these problems much better than I could.
That makes sense in sports. But in math? It's trivially easy to generate thousands of so-called "world records" every second.
Here's one:
4*10^18 + 7*10^13 + 2.
Boom! New world record. Now add 2 and you've got another. Try it. Keep going. World records like this will be surpassed by someone else in milliseconds.
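To make "try it" concrete, here's a rough Python sketch of the entire amount of work behind one of these "records" (sympy's isprime and nextprime are real functions; the helper name and the bound are just my illustration, nothing from the OP's project):

    # Sketch only: check a single even number for a Goldbach partition.
    from sympy import isprime, nextprime

    def has_goldbach_partition(n: int) -> bool:
        """Return True if the even number n is a sum of two primes."""
        p = 2
        while p <= n // 2:
            if isprime(n - p):       # n = p + (n - p), both prime
                return True
            p = nextprime(p)         # in practice the smallest such p is tiny
        return False                 # reaching here would be a counterexample

    # One more even number past the previous bound -> "new world record".
    print(has_goldbach_partition(4 * 10**18 + 7 * 10**13 + 2))

For a number this size that typically finishes in milliseconds, because the smallest usable prime p is almost always small.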
Honestly, this is the first time I've heard "world record" used for NOT finding a counterexample; before today I'd never heard anyone call these world records. The whole thing feels absurd: you can keep checking numbers forever, calling each one a record? It's silly, to be honest.
OP has a nice project. But the wording is so deceptive and so silly that it harms the credibility of the project more than it helps.
That's not comparable to finding Goldbach NON-counterexamples.
With Goldbach, claiming a "world record" just means checking one more number and seeing if it is still NOT a counterexample. It's easy. Contrast that with computing a new digit of pi - something you can't achieve by simply incrementing a value and running a check.
Finding each new digit of pi (the ones very far out) is not a trivial task. The computational effort grows substantially as you go deeper: computing the first n digits costs something like O(n (log n)^k) for some k (typically k = 3).
Since this is math, I'll be pedantic. It may not be a notable world record, but it’s still a world record. There are infinitely many non-notable world record categories. I currently hold the one for saying the word “fbejsixbenebxhsh” the most times in a row. Nobody cares, but it’s still a world record.
Since it's not just math but also using English on a social website, we can be even more pedantic and observe that posting it implies notability. It is literally noting it.
This is more like if someone pulled a truck down 2,800 miles of road between NYC and LA in 2012, left it there, and then you grabbed the rope in 2025, pulled it less than another tenth of a mile, and put "shatters world record" in your blog title.
I.e. not only is this an extremely small increment, but the original work did not have to be repeated. Nothing about the state of computing in 2012 would have prevented going the extra amount here; they just decided to stop. The original record even states (on https://sweet.ua.pt/tos/goldbach.html):
> On a single core of a 3.3GHz core i3 processor, testing an interval of 10^12 integers near 10^18 takes close to 48 minutes
So the additional work here in 2025 was the equivalent of running a single core of a 2012 i3 for ~70 more hours.
All this is a shame, as the project itself actually seems much more interesting than its headline claim.
> That is the literal definition of a world record here, my guy.
I don't dispute that. If you read my comment carefully, you'll find that I'm calling them "world records" too. My point is that nobody in the math community uses "world record" for finding trivial non-counterexamples like this. There are infinitely many such "world records" and each one is trivial to surpass in under a second.
Compare that to something like finding a new Mersenne prime or calculating more digits of pi. Those records hold weight because they're difficult to achieve and stand for years.
This post could've been one of those infinite, uninteresting "world records" if the OP had applied more rigor in the implementation. But due to gaps in verification, this post is not a world record of any kind: the OP has no way to confirm the correctness of their data. You'd get better context by reading the full thread; this has already been discussed at length.
I think they're saying that, because it builds on the previous result, having any one effort claim a record doesn't really make sense.
Like imagine there was a record for longest novel published, and what you did was take the previous longest novel and add the word "hello" to the end of it. Does the person who added "hello" get the record?
He slightly pushed the computation past the previous world record, and he’s continuing to push it forward with a clear goal. It’s well within the spirit of a world record.
Besides, a world record is still a world record — it’s up to you to decide how interesting it is. You are indeed just dissing on a Show HN post.
Server side validation is trivial. What makes you believe that is not happening? That code is not available.
If you'd read the article carefully, you'd see he hasn't. For all we know, one client (or worse, several) found counterexamples but didn't report them back to the server. Without verification on the server side, there's no way to claim the entire range has been reliably checked.
What he's shown is that many volunteers have checked a large portion of the numbers up to a certain bound and found no counterexamples. But he hasn't shown that all numbers within that range have actually been verified. It's entirely possible that some block results were falsely reported by bad clients, meaning counterexamples could still be hiding in those falsely reported gaps, however improbable! This kind of lapse in rigor matters in math, and it invalidates the OP's entire claim!
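For what server-side validation could even look like here, a purely illustrative sketch (none of this is the OP's code, which isn't public; block bounds, names, and the 1% sample rate are assumptions of mine): the server could recompute a random sample of the blocks clients claim to have finished, or hand each block to two unrelated clients and compare.

    # Illustrative sketch only: far too slow in pure Python for real
    # 10^12-sized blocks, but it shows the idea of server-side spot checks.
    import random
    from sympy import isprime, nextprime

    def verify_block(start: int, end: int) -> bool:
        """Re-check every even number in [start, end) for a Goldbach partition."""
        n = start if start % 2 == 0 else start + 1
        while n < end:
            p = 2
            while not isprime(n - p):
                p = nextprime(p)
                if p > n // 2:        # no partition at all: a counterexample
                    return False
            n += 2
        return True

    def spot_check(reported_blocks, sample_rate=0.01):
        """Recompute a random fraction of client-reported (start, end) blocks."""
        k = max(1, int(len(reported_blocks) * sample_rate))
        return all(verify_block(s, e) for s, e in random.sample(list(reported_blocks), k))

Spot checks only catch lazy or buggy clients statistically; to guard against deliberately false reports you'd also want redundant assignment, i.e. the same block computed by two independent clients.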
> Server side validation is trivial. What makes you believe that is not happening? That code is not available.
Please read the full thread. This has all already been discussed at length.
I think that is rather uncharitable; there is a pretty good mix of people here.
Yes, some are trying to sell themselves, which may be distasteful to you, but people need to eat.
Others are activists, pushing their views.
Others here are just reading and occasionally commenting, like myself.
But there are also a lot of very passionate and smart people here who are pushing the limits of what they can do in terms of hardware and software.
And here we have the problem: the defense. Stop defending people “trying to eat”. We all are, and many of us are programmers. Many of us are not here to have others buy our meal.
I'm here to read articles about hardware, software, and social hacking in the sense of "exploiting a system for fun, education, and only occasionally profit".
"However, the Jargon File reports that considerable overlap existed for the early phreaking at the beginning of the 1970s. An article from MIT's student paper The Tech used the term hacker in this context already in 1963 in its pejorative meaning for someone messing with the phone system."
No license is going to be suitable for everyone. You might think BSD, MIT, and WTFPL are the most liberal licenses, but they are not suitable for me. If I am invested enough in an open source project to contribute code to it, then I want a guarantee that when a greedy company takes that software, improves it, and redistributes it, they are also forced to release the source code, benefiting everyone. BSD, MIT, and WTFPL can't do that, but the GPL can. No license is a silver bullet.
Can you not use desktop software to correct your spelling, fix your grammar, and reformulate sentences? I think even something like Microsoft Word is going to be a good bargain for this. If you want free tooling, Emacs with some plugins can be great. A quick search gave me https://simpleit.rocks/lisp/emacs/writing-in-emacs-checking-... Exploring this space may give more results for free tooling that runs on your desktop and sends your data to no one.
Bygone? It's a fairly regular occurrence that I search for some Win32 API and end up on Chen's blog. I get what you are saying about articles like this one, though. I can't imagine much new work is happening with the 8086 (or 8088 or '186 or '286 or ...)
I mean even w.r.t. Windows stuff, quite a large part of what he talks about is variations on "here's an interesting old anecdote from 1995" or "You know that thing that's totally useless now and doesn't matter? Here's why we absolutely needed it in 1989."
Programming languages! It is ironic, because I am a programmer and I write programs and develop software for a living. But the proliferation of programming languages has made my life worse. There are too many programming languages, and every company has its own favorites!
It may be an unpopular opinion on HN, but I don't enjoy learning a new syntax every year when it brings relatively little in the way of new concepts or paradigms.
In my ideal world everyone would be using Lisp (my username checks out!): simple syntax (some say it has no syntax, but I think that is a little hyperbolic) but so much power. In my ideal world, new concepts and paradigms are implemented in Lisp, using Lisp. I'd much rather spend time solving real problems that real human beings care about, and learning new ways of solving problems with new paradigms. I don't want to waste hours learning new syntaxes and their gotchas and edge cases!
If you only had one programming language, this problem would turn into an issue of too many frameworks. The problem isn't that there are too many programming languages; the issue is that there are many ways to structure code around a problem. Different problems incentivize different structures and so you end up with different languages.
Do you get JavaScript error messages in Clojure? That is one of the drawbacks of transpiled languages I have used before.
The best one I’ve used so far is Kotlin, which is a pleasure to use in comparison to Java, but this might be because I used it in an IDE written by the language designers themselves.
I have the same feelings about Lisp, honestly. From day zero of my programming experience I dreamt about a “meta” language (an overdose of the C64’s BASIC V2 caused these symptoms, maybe). I didn’t even know the word “meta”, but years later I found Lisp and said “yeah, that’s it”.
If anything, I think websites have more uptime these days than they did 10 years ago. Remember the "this website is under maintenance" banners of the bygone era, anyone?
I know that being a tourist in the Canary Islands is fun, but is it a good place for a German to move to permanently? Does it have the kind of infrastructure and healthcare Germany has?
There is quite a big German community there, including doctors, lawyers, etc.
The big hospitals are also okay, although the local youth is fleeing the island.
The comments by singaporecode where they say "I should sponsor the guide’s author" (when the author is himself/herself/themselves) and "I found online" (when they published it themselves) are clearly deceptive self-promotion of https://ashok-khanna.medium.com/ and https://github.com/ashok-khanna, but do notice that they are unrelated to Lisp-Stat.
This is egregious clickbait!