Hacker News | jerf's comments

For identity theft, I think at this point it depends on where you set the bar. I've never had someone clean out my checking account or anything truly large, but my wife and I have had fraudulent charges on our credit cards several times as they've been leaked one way or another. I would not "identify" as an "identity theft victim" per se if you asked me out of the blue, because compared to some of what I've heard about, I've had nothing more than minor annoyances come out of this. But yeah, I'd guess it's fair to say that by now most people have had at least some sort of identity-related issue at some point.

While that speed increase is real, of course, you're really just looking at the general speed delta between Python and C there. To be honest I'm a bit surprised you didn't get another factor of 2 or 3.

"Cimba even processed more simulated events per second on a single CPU core than SimPy could do on all 64 cores"

One of the reasons I don't care in the slightest about Python "fixing" the GIL. When your language is already running at a speed where a single core of a compiled language can quite reasonably be expected to outdo your performance on 32 or 64 cores, who really cares if removing the GIL lets me get twice the speed of an unthreaded program in Python by running on 8 cores? If speed were important you shouldn't have been using pure Python.

(And let me underline that pure in "pure Python". There are many ways to be in the Python ecosystem but not be running Python. Those all have their own complicated cost/benefit tradeoffs on speed ranging all over the board. I'm talking about pure Python here.)


Good point. The profiler tells me that the context switch between coroutines is the most time-consuming part, even though I tried to keep it as light as possible, so I guess the explanation for "only" getting a 45x speed improvement rather than 100x is that it spends a significant part of its time moving register contents to and from memory.

Any ideas for how to speed up the context switches would be welcome, of course.
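
For anyone curious, the kind of micro-benchmark that isolates raw switch cost looks like this. It's sketched in Go rather than in Cimba's own implementation, so treat it as a reference shape rather than a like-for-like number:

    // switchcost_test.go: run with "go test -bench=PingPong". Go goroutines
    // aren't the same thing as Cimba's coroutines, so the absolute number is
    // only a reference point; the shape of the measurement is what transfers.
    package switchcost

    import "testing"

    // BenchmarkPingPong bounces a token between two goroutines over
    // unbuffered channels, so each iteration is dominated by two scheduler
    // handoffs rather than by any real work.
    func BenchmarkPingPong(b *testing.B) {
        ping := make(chan struct{})
        pong := make(chan struct{})

        go func() {
            for range ping {
                pong <- struct{}{}
            }
        }()

        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            ping <- struct{}{}
            <-pong
        }
        b.StopTimer()
        close(ping)
    }

If the raw handoff really is the floor, the usual wins tend to come from switching less often (batching several simulated events per wake-up) rather than from making each individual switch cheaper.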


It is not weird in the slightest. These things are coordinated at the state level all the time.

This is probably one of those good tests of "is your 'conspiracy theory' meter properly calibrated", because if it's going off right now and you are in disbelief, you've got it calibrated incorrectly. This is so completely routine that there's an entire branch of law codified in this way called the "Uniform Commercial Code": https://en.wikipedia.org/wiki/Uniform_Commercial_Code and see the home page of the organization running it at https://www.uniformlaws.org/acts/ucc .

And that's just one particular set of laws with an organization dedicated to harmonizing the various states' laws for their particular use cases. It's not the one and only gateway to such laws, it's just an example of cross-state law coordination so established that it has an entire organization dedicated to it. Plenty of other stuff is coordinated at the state level across multiple states all the time.


That is not a number, that is infinity.

The (implicit) rules of the game require the number to be finite. The reason for this is not that infinity isn't obviously "the largest", but that the game of "write infinity in the smallest amount of {resource}" is trivial and uninteresting. (At least for any even remotely sensible encoding scheme. Malbolge[1] experts may chime in as to how easy it is to write infinity in that language.) So if you like, pretend we played that game already and we've moved on to this one. "Write infinity" is at best a warmup for this game.

(I'm not going to put up another reply for this, but the several people posting "ah, I will cleverly just declare 'the biggest number someone else encodes + 1'" are just posting infinity too. The argument is somewhat longer, but not that difficult.)

[1]: https://esolangs.org/wiki/Malbolge


It isn’t actually infinite since it can only do a finite number of iterations per second (though it would be large!), and there are only a finite number of seconds in the universe (near as we can tell).

This game assumes the computations run to completion on systems that will never run out of resources. No one in this universe will ever compute Ackermann's number, BB(6), or the final answer given in the post. Computations that never complete are infinite.
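
For concreteness, the definition that generates those unimaginable-but-finite values looks completely harmless written down. A purely illustrative sketch in Go:

    // Purely illustrative; follows the usual two-argument Ackermann recursion
    // (https://en.wikipedia.org/wiki/Ackermann_function).
    package bignumbers

    import "math/big"

    // ack terminates for every pair of non-negative inputs, so its values are
    // perfectly finite, but ack(4, 2) already has roughly twenty thousand
    // decimal digits and ack(4, 4) will never be evaluated on real hardware.
    func ack(m, n *big.Int) *big.Int {
        one := big.NewInt(1)
        switch {
        case m.Sign() == 0:
            return new(big.Int).Add(n, one)
        case n.Sign() == 0:
            return ack(new(big.Int).Sub(m, one), big.NewInt(1))
        default:
            return ack(new(big.Int).Sub(m, one), ack(m, new(big.Int).Sub(n, one)))
        }
    }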

If you are playing this game and can't produce a number that doesn't fit in this universe, you are probably better off playing something else. That's just table stakes. If it even qualifies as that. "Inscribe every subatomic particle in the universe with a 9 every Planck instant until the heat death of the universe" doesn't even get off the starting line in games like this.

Another general comment: It feels like a lot of people are really flailing around here, and need to understand this is a game. It has rules. If you change the rules, you are playing a different game. There is nothing wrong with playing a different game. It is just a different game. The game is not immutably written in the structure of the universe, or a mathematical truth, it is a human choice. And there isn't necessarily a "why" to the rules any more than there's a "why" to why the bishop moves as it does in chess. You can, in fact, change that rule. There are thousands of such variants. It's just that you're playing a different game than chess at that point. If you don't want to play the author's game, then that's fine, but it doesn't change the game itself. And proposing different solutions is equivalent to saying that you can win a chess game by just flipping over the board and yelling "I win". You can do that. Perhaps you've even won some game. But whatever game you just won, it isn't chess.


At the moment, good code structure for humans is good code structure for AIs and bad code structure for humans is still bad code structure for AIs too. At least to a first approximation.

I qualify that because, hey, if someone comes back and reads this 5 years later, I have no idea what you will be facing then. But at the moment this is still true.

The problem is, people see the AIs coding, I dunno, what, 100 times faster minimum in terms of churning out lines? And it just blows out their mental estimation models and they substitute an "infinity" for the capability of the models, either today or in the future. But they are not infinitely capable. They are finitely capable. As such they will still face many of the same challenges humans do... no matter how good they get in the future. Getting better will move the threshold but it can never remove it.

There is no model coming that will be able to consume an arbitrarily large amount of code goop and integrate with it instantly. That's not a limitation of Artificial Intelligences, that's a limitation of finite intelligences. A model that makes what we humans would call subjectively better code is going to produce a code base that can do more and go farther than a model that just hyper-focuses on the short-term and slops something out that works today. That's a continuum, not a binary, so there will always be room for a better model that makes better code. We will never overwhelm bad code with infinite intelligence because we can't have the latter.

Today, in 2026, providing the guidance for better code is a human role. I'm not promising it will be forever, but it is today. If you're not doing that, you will pay the price of a bad code base. I say that without emotion, just as "tech debt" is not always necessarily bad. It's just a tradeoff you need to decide about, but I guarantee a lot of people are making poor ones today without realizing it, and will be paying for it for years to come no matter how good the future AIs may be. (If the rumors and guesses are true that Windows is nearly in collapse from AI code... how much larger an object lesson do you need? If that is their problem they're probably in even bigger trouble than they realize.)

I also don't guarantee that "good code for humans" and "good code for AIs" will remain as aligned as they are now, though it is my opinion we ought to strive for that to be the case. It hasn't been talked about as much lately, but it's still good for us to be able to figure out why a system did what it did, and even if it costs us some percentage of efficiency, having the AIs write human-legible code into the indefinite future is probably still a valuable thing to do so we can examine things if necessary. (Personally I suspect that while there will be some efficiency gain from letting the AIs make their own programming languages, I doubt it'll ever be more than some more-or-less fixed percentage gain rather than some step-change in capability that we're missing out on... and if it is, maybe we should miss out on that step-change. As the moltbots prove, whatever fiction we may have told ourselves about keeping AIs in boxes is total garbage in a world where people will proactively let AIs out of the box for entertainment purposes.)


Grovel over your linter's command-line options and/or configuration file. It's not an uncommon feature, but in my personal and limited experience it is also not always advertised as well as you'd like. For instance, golangci-lint has not just a feature to check only changed code but several variants of it available, yet I think possibly the only places these are mentioned on its site are in the YAML documentation for the issues configuration: https://golangci-lint.run/docs/configuration/file/#issues-co... written in a My Eyes Glaze Over coloration scheme [1], and in the last FAQ, which means reading to the bottom of that page to find out about it.

Most mature systems that can issue warnings about source code (linters, static analyzers, doc style enforcers, anything like that) have this feature somewhere, because they all immediately encounter the problem that any new assertion about source code, applied to a code base even just two or three person-months large, will flag vast swathes of code, and would then destroy their own market by being too scary to ever turn on. So it's a common problem with a fairly common solution. Just not always documented well.
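
To save you some of the groveling, the relevant golangci-lint knob looks roughly like this. I'm writing the key names from memory and they have shuffled around between major versions, so treat the linked docs as authoritative:

    # .golangci.yml (sketch; confirm key names against the page linked above
    # for the version you actually run)
    issues:
      # Only report issues introduced after this revision instead of
      # flagging the entire existing code base at once.
      new-from-rev: origin/main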

[1]: Let me just grumble that in general coloration schemes should not try to "deprioritize" comments visually, but it is a particularly poor choice when the comments are the documentation in the most literal sense. I like my comment colors distinct, certainly, but not hidden.


Nobody has to have instructions on how to "hack" the Steam Deck because it's a computer and you just run whatever you want on it.

The instructions on how to crack open the immutable OS image are readily available from Valve but you probably won't need them since it's already got a lot of power even without that.


If it's just the SSID it's pretty useless for making sure people are at work. I can totally connect to "Office_CA-SJC-03" from home, or any other SSID you care to name.

"Roguelites have proliferated"

I know it's easy to feel that this is people chasing trends, but I've really come to appreciate roguelites over many of the PS2-era games because they give me real progression in a single play session, and that single play session is also discardable.

As an adult this is a very compelling proposition.

In the PS2 era, while you can find some early roguelite-like things, you tended to have either games with no interesting progression (arcade-like), where you would just play the game, or very long-scale games like JRPGs that slowly trickle out the progression but are also multi-dozen-hour games. Compressing the progression into something that happens in a small number of hours, while eliminating the "I'm 50 hours into this game that I stopped 2 years ago, do I want to pick it back up if I've forgotten everything?" problem, has been very useful to me.

This has been a fairly significant change in gaming for me. I still have some investment into the higher end JRPGs but the "roguelite" pattern across all sorts of genres has been wonderful overall. I don't even think of it as a genre anymore; it's a design tool, like 'turn based versus real time'.


Roguelites are the worst thing to happen to video games since microtransactions. It’s an extremely attractive option to the cash-strapped indie dev, as it promises infinite ‘content’ for little development effort, but what it’s really done is turned every game into a combination of cookie clicker and a slot machine.

The fact that you think arcade games have “no interesting progression” shows just how toxic the roguelike design pattern is. The progression in arcade games is you getting better at the game. If a game needs a “progress system” to communicate a sense of accomplishment to the player, that’s because the gameplay is shallow.


If AIs were to plateau where they are for an extended period of time, I definitely worry about their net effect on software quality.

One of the things I worry about is people not even learning what they can ask the computer to do properly because they don't understand the underlying system well enough.

One of my little pet peeves, especially since I do a lot of work in the networking space, is code that works with strings instead of streams. For example, it is not that difficult (with proper languages and libraries) to write an HTTP POST handler that will accept a multi-gigabyte file and upload it to an S3 bucket, perhaps gzip'ing it along the way, such that a file of any size can be uploaded without reference to the RAM on the machine, by streaming it rather than loading the entire file into a string on upload and then uploading that string to S3, which requires massive amounts of RAM in the middle. There's still a lot of people and code out in the world that works that way. AIs are learning from all that code. The mass of not-very-well-written code can overwhelm the good stuff.
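
To make that concrete, here's a minimal sketch in Go of the shape I mean. The Uploader interface is just a hypothetical stand-in for whatever real S3 client you use (the AWS SDKs provide streaming uploaders that accept an io.Reader); the point is that nothing below ever holds the whole file in memory:

    // A hypothetical sketch, not production code.
    package uploadsketch

    import (
        "compress/gzip"
        "context"
        "io"
        "net/http"
    )

    // Uploader is a stand-in for an S3-style client that consumes a stream.
    type Uploader interface {
        Upload(ctx context.Context, key string, body io.Reader) error
    }

    // UploadHandler streams the request body through gzip and into the
    // uploader. Memory use is bounded by small fixed buffers, not file size.
    func UploadHandler(up Uploader) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            pr, pw := io.Pipe()
            defer pr.Close() // unblocks the goroutine if Upload bails early

            go func() {
                gz := gzip.NewWriter(pw)
                _, err := io.Copy(gz, r.Body) // request -> gzip -> pipe, chunk by chunk
                if err == nil {
                    err = gz.Close()
                }
                pw.CloseWithError(err)
            }()

            if err := up.Upload(r.Context(), "uploads/object.gz", pr); err != nil {
                http.Error(w, "upload failed", http.StatusBadGateway)
                return
            }
            w.WriteHeader(http.StatusOK)
        }
    }

The same shape works in the other direction too, streaming S3 down through gunzip into the response, again without regard to file size.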

And that's just one example. A whole bunch of stuff that proliferates across a code base like that and you get yet another layer of sloppiness that chews through hardware and negates yet another few generations of hardware advances.

Another thing is that, at the moment, code that is good for an AI is also good for a human. They may not quite be 100% the same but right now they're still largely in sync. (And if we are wise, we will work to keep it that way, which is another conversation, and we probably won't because we aren't going to be this wise at scale, which is yet another conversation.) I do a lot of little things like use little types to maintain invariants in my code [1]. This is good for humans, and good for AIs. The advantages of strong typing still work for AIs as well. Yet none of the AIs I've used seem to use this technique, even with a code base in context that uses this technique extensively, nor are they very good at it, at least in my experience. They almost never spontaneously realize they need a new type, and whenever they go to refactor one of these things they utterly annihilate all the utility of the type in the process, completely blind to the concept of invariants. Not only do they tend to code in typeless goo, they'll even turn well-typed code back into goo if you let them. And the AIs are not so amazing that they overcome the problems even so.
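
To be concrete about what I mean by a "little type", here's a deliberately tiny sketch; the names are invented for the example, not taken from any particular code base:

    // A hypothetical example of types-as-invariants.
    package invariants

    import (
        "errors"
        "strings"
    )

    // NonEmptyName can only be constructed through NewNonEmptyName, so any
    // NonEmptyName you are handed is already trimmed and non-empty. The
    // invariant lives in one place instead of being re-checked (or forgotten)
    // at every use site.
    type NonEmptyName struct {
        value string
    }

    func NewNonEmptyName(s string) (NonEmptyName, error) {
        s = strings.TrimSpace(s)
        if s == "" {
            return NonEmptyName{}, errors.New("name must not be empty")
        }
        return NonEmptyName{value: s}, nil
    }

    func (n NonEmptyName) String() string { return n.value }

The refactoring failure mode I'm complaining about is exactly the one that replaces the struct with a bare string and scatters the validation (or doesn't) across every call site.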

(The way these vibe coded code bases tend to become typeless, formless goo as you scale your vibe coding up is one of the reasons why vibe coding doesn't scale up as well as it initially seems to. It's good goo, it's neat goo, it is, no sarcasm, really amazing that it can spew this goo at several lines per second, but it's still goo, and if you need something stronger than goo you have problems. There are times when this is perfect; I'm just about to go spray some goo myself for doing some benchmarking where I just need some data generated. But not everything can be solved that way.)

And who is going to learn to shepherd them through writing better code, if nobody understands these principles anymore?

I started this post with an "if" statement, which wraps the whole rest of the body. Maybe AIs will advance to the point where they're really good at this, maybe better than humans, and it'll be OK that humans lose understanding of this. However, we remain a ways away from this. And even if we get there, it may yet be more years away than we'd like; after 10 or 15 years of accreting this sort of goo in our code bases, when the AIs that can actually clean it up get here, they may have quite a hard time with what their predecessors left behind.

[1]: https://jerf.org/iri/post/2025/fp_lessons_types_as_assertion...

