Hacker News | dysoco's comments

Why this and not Garnix?

10k is closer to a yearly software developer salary in my country than to a monthly one.

That being said, at least the $20/mo Claude Code subscription is really worth it, and many companies are paying for AI tools anyway.


I believe you can enter "flow state" with something like Claude Code, from what I've read, but it's mostly reduced to pressing 1 or 2 and typing a few prompts. The reward loop is much tighter now, though, so it's a bit more akin to reaching flow state playing Tetris.

Claude Code seems to be back up

I just checked; no version 5 models listed in Claude Code yet.

is back online

Most people are running Moltbot (or whatever it's called today) in an isolated environment, so it's not that big of a deal really.
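
By "isolated" I mean something like a locked-down container. A minimal sketch, assuming a Docker setup (the image name and mount path are hypothetical):

    # drop all capabilities, keep the root filesystem read-only,
    # and mount only a throwaway scratch directory from the host
    docker run --rm -it \
      --read-only --tmpfs /tmp \
      --cap-drop ALL \
      --memory 2g --pids-limit 256 \
      -v "$PWD/scratch:/work" \
      moltbot:latest

Worst case the agent trashes the scratch directory; it never sees your real home directory.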

edit: okay, fair enough, my sense of who 'most' people are might be biased by who I follow and read.


Most people running it are normies who saw it on LinkedIn and ran the funny "brew install" command they saw linked, because "it automates their life", said the AI influencer.

Absolutely nobody in any meaningful numbers is running this sandboxed.


Press X to doubt.

If even half are running it sufficiently sandboxed I'll eat my hat.


I would be genuinely, truly surprised if even 10% were. I think the people on HN who say this are wildly disconnected from the security posture of the average not-HN user.

I'm not so sure most people are doing this.

#1 "molty" is running on its "owner"'s MacBook: https://x.com/calco_io/status/2017237651615523033

What’s this have to do with containers?

They can live on any machine.


But to be useful it isn't in a contained environment: it's connected to your systems and data, with real potential for loss or damage to others.

Best case it hurts your wallet; worst case you'll be facing legal repercussions if it damages anyone else's systems or data.


I think it is; doing something so pointless is a bad sign. Or what value did I miss?

I'm both scared and excited for this.

I really don't want games to turn into soulless, AI-generated mush. This could also flood the game market with a huge amount of slop-generated content.

On the other hand, I think this could be very cool for some specific use cases, like outdoor scenarios in simulators. I've always wanted a game like Euro Truck Simulator where I can drive a car around a whole 1:1 representation of a country, and this might just allow that. Obviously I don't care about an accurate representation of every building or tree, or about hallucinations; I just need it to be believable enough.

I wonder if it can be integrated into existing engines, though, because it seems like a big stretch to write actual game logic as an LLM prompt.


Being optimistic (or pessimistic, heh): if things keep trending this way, the models will evolve as well and will probably be considerably better in a year than they are now.

This might be biased toward Reddit/Twitter users, but from what I've seen, game developers seem to be much more averse to using AI (even for coding) than people in other fields.

Which is curious since prototyping helps a lot in gamedev.


> What skills are atrophying that would be useful in the future?

Well, for one, tech companies are still largely hiring via leetcode/livecoding interviews. I feel much less prepared now than I was a year ago.


Were you really using anything in your day to day work that had any relevance to preparing for tech interviews?

> I worry about new languages though. I guess maybe model training with synthetic data will become a requirement?

I read a (rather pessimistic) comment here yesterday claiming that the current generation of languages is most likely going to be the last, since the existing corpus of code for training is going to trump any other feature a new language might introduce, and most code will be LLM-generated anyway.


I've wondered to myself here and there if new languages wouldn't be specifically written for LLM agentic coding, and what that might look like.


I had the thought of an AI-specific bytecode a while ago, but since then it's seemed a little silly -- the only languages that work well with agentic coding are the major ones with big open-source corpora and SO/Reddit discussions to train on.

I also saw something about a bytecode for prompts, which again seems to miss the point -- natural language is the win here.

What's kind of mysterious about the whole thing is that LLMs aren't compilers, yet they grok code really well. It's always been a mystery to me why the tooling wasn't smarter, and then with LLMs the tooling became smarter than the compiler -- and yet, if it actually were a compiler, we could choose to instruct it with code and get deterministic results. Something about the chaos is the very value they provide.


