From what I've read, you can enter a "flow state" with something like Claude Code, but the activity is mostly reduced to pressing 1 or 2 and typing a few prompts. The reward loop is much tighter now, though, so it's closer to reaching flow state while playing Tetris.
Most people running it are normies who saw it on LinkedIn and ran the funny "brew install" command they saw linked, because the AI influencer said "it automates their life."
Practically nobody is running this sandboxed, not in any meaningful numbers. I would be genuinely, truly surprised if even 10% were. I think the people on HN who say this are wildly disconnected from the security posture of the average non-HN user.
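For context, sandboxing doesn't have to mean anything elaborate. A minimal sketch of what "running it sandboxed" could look like, assuming Docker and accepting "only the project directory is mounted" as the boundary (the image tag and install command are illustrative, not a recommendation):

    # Run the agent in a throwaway container so it can only touch the
    # mounted project directory, not the rest of the host filesystem.
    docker run --rm -it \
      -v "$PWD":/workspace \
      -w /workspace \
      node:22 bash
    # Inside the container (it still needs network access for the model API):
    #   npm install -g @anthropic-ai/claude-code && claude

Even that much is more than the typical install gets, which is the point.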
I really don't want games to turn into soulless, AI-generated mush. This could also flood the game market with a huge amount of slop content.
On the other hand, I think this could be very cool for some specific use cases, like outdoor scenarios in simulators. I've always wanted a game like Euro Truck Simulator where I could drive a car around a full 1:1 representation of a country, and this might just allow that. Obviously I don't care about an accurate representation of every building or tree, or about hallucinations; it just has to be believable enough.
I wonder whether it can be integrated into existing engines, though, because writing actual game logic as an LLM prompt seems like a big stretch.
Being optimistic (or pessimistic, heh): if the trend holds, the models will keep evolving and will probably be considerably better in a year than they are now.
This might be biased toward Reddit/Twitter users, but from what I've seen, game developers seem much more averse to using AI (even for coding) than people in other fields.
Which is curious, since prototyping helps a lot in gamedev.
> I worry about new languages though. I guess maybe model training with synthetic data will become a requirement?
I read a (rather pessimistic) comment here yesterday claiming that the current generation of languages is most likely going to be the last, since the already-existing corpus of training code will trump any feature a new language might introduce, and most code will be LLM-generated anyway.
I had the idea of an AI-specific bytecode a while ago, but since then it's come to seem a little silly -- the only languages that work well with agentic coding are the major ones with big open-source corpora and SO/Reddit discussions to train on.
I also saw something about a bytecode for prompts, which again seems to miss the point -- natural language is the win here.
What's kind of mysterious about the whole thing is that LLMs aren't compilers, yet they grok code really well. It always puzzled me that our tooling wasn't smarter, and then with LLMs the tooling became smarter than the compiler. And yet ... if it actually were a compiler, we could instruct it with code and get deterministic results. Something about the chaos is the very value they provide.
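To make that concrete: even when you explicitly ask an LLM API for repeatable output, you get best-effort determinism at most, nothing like a compiler's guarantees. A hedged sketch against a generic OpenAI-style chat endpoint (the URL, model name, and seed field are placeholders; seed support varies by provider):

    # Pin temperature to 0 and fix a seed -- the closest these APIs come
    # to "deterministic compilation". Output is still only best-effort
    # reproducible; endpoint and model below are placeholders.
    curl -s https://api.example.com/v1/chat/completions \
      -H "Authorization: Bearer $API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "example-model",
            "temperature": 0,
            "seed": 42,
            "messages": [{"role": "user", "content": "Port this function to Rust."}]
          }'

Run it twice and you may still get different tokens back, which is exactly the chaos in question.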