This is what worries me the most. Marketing is ultimately a business of manipulation, and services like ChatGPT seem like excellent tools for manipulation. I wish OpenAI could find a less adversarial business model.
No, not a joke. The author also co-vibe-coded a book, called Vibe Coding, describing and recommending exactly the sort of system he's trying to build as Gas Town.
Bubblewrap is a very minimal setuid binary. It's 4000 lines of C, but essentially all it does is parse your flags and ask the kernel to do the sandboxing (drop capabilities, change namespaces) for you. You do have to set up cgroups yourself, though. It's very small and auditable compared to docker, and I'd say it's safer.
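For a taste of what using it looks like, here's a minimal invocation along the lines of the man page example (paths are illustrative and vary by distro):

    bwrap --ro-bind /usr /usr \
          --symlink usr/lib /lib \
          --symlink usr/bin /bin \
          --proc /proc \
          --dev /dev \
          --tmpfs /tmp \
          --unshare-all \
          --die-with-parent \
          /usr/bin/bash

That gives you a shell that can read /usr and nothing else, in fresh namespaces (including no network), and that can't outlive its parent.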
If you want something with a bit more features but not as complex as docker, I think the usual choices are podman or firejail.
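For comparison, a rough podman version of the same kind of sandbox (the alpine image is just an example) would be:

    podman run --rm -it --network none \
      -v "$PWD":/work:ro docker.io/library/alpine sh

i.e. a throwaway container with no network and a read-only view of the current directory.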
You should at least read the tests, to make sure they express your intent. Personally, I'm not going to take responsibility for a piece of code unless I've read every line of it and thought hard about whether it does what I think it does.
AI coding agents are still a huge force-multiplier if you take this approach, though.
I don't think you have to use this if it's not working in your case. I think the idea is to anticipate the next few turns of the conversation so you can quickly pick which branch you want to go down. If the predictions are accurate, I could see that being effective.
What's the advantage of having multiple channels with separate residual connections? Why not just concatenate those channels, and do residual connections on the concatenated channel?
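To make the question concrete, here's a toy sketch of the two options as I understand them (PyTorch, with hypothetical names, and a per-channel linear layer standing in for whatever the real blocks are):

    import torch
    import torch.nn as nn

    # Option A: one residual stream per channel.
    class PerChannelResidual(nn.Module):
        def __init__(self, dim, n_channels):
            super().__init__()
            self.blocks = nn.ModuleList(
                [nn.Linear(dim, dim) for _ in range(n_channels)])

        def forward(self, xs):  # xs: list of n_channels tensors [batch, dim]
            return [x + f(x) for x, f in zip(xs, self.blocks)]

    # Option B: concatenate, then one residual over the combined channel.
    class ConcatResidual(nn.Module):
        def __init__(self, dim, n_channels):
            super().__init__()
            self.block = nn.Linear(dim * n_channels, dim * n_channels)

        def forward(self, xs):
            x = torch.cat(xs, dim=-1)  # [batch, dim * n_channels]
            return x + self.block(x)

In A each channel keeps its own residual stream; in B the block can mix channels freely but everything shares one stream. Is there something A buys you that B doesn't?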
I'm looking for a language optimized for use with coding agents. Something which helps me to make a precise specification, and helps the agent meet all the specified requirements.
I'm looking for a language optimized for human use given the fundamental architectural changes in computing in the last 50 years. That way we could skip both the boilerplate and the LLMs generating boilerplate.
I'm working on something similar. Dependently typed, theorem proving, regular syntax, long-form English words instead of symbols or abbreviations. It's not very well baked yet, but claude/codex are already doing really well at generating it. I expect it'll improve once the repo has been around long enough to be included in training data. Probably next year or the year after.