Gas Town and the like all basically sound to me like, "AI is amazing! OK, actually it isn't very good, but maybe we can just use more AI with our AI and then it'll be good!"
And I'm not surprised at all to learn that this path took us to a "Maintenance Manager Checker Agent." I wonder what he'll call the inevitable Maintenance Manager Checker Agent Checker Agent?
Maybe I've been in this game too long, but I've encountered managers that think like this before. "We don't need expensive, brilliant developers, we just need good processes for the cheap, inexperienced developers to follow." I think what keeps this idea alive is that it sort of works for simple CRUD apps and other essentially "solved" problems. At least until the app needs to become more than just a simple CRUD app.
For years we had people trying to make voice agents, like Iron Man's Jarvis, a thing. You had people super bought into the idea that if you could talk to your computer and say "Jarvis, book me a flight from New York to Hawaii" and it would just do it, just like in the movies, that was the future, that was sci-fi, it was awesome.
But it turns out that voice sucks as a user interface. The only time people use voice controls is when they can't use other controls, i.e. while driving. Nobody is voluntarily booking a flight with their Alexa. There's a reason every society on the planet shifted from primarily phone calls to texting once the technology was available!
It's similar with vibe coding. People like Yegge are extremely bought into the idea of being a hyperpowered coder, sitting in a dimly lit basement in front of 8 computer screens, commanding an army of agents with English, sipping coffee between barking out orders. "Agent 1, refactor that method to be more efficient. Agent 5, tighten up the graphics on level 3!"
Whether or not it's effective or better than regular software development is secondary, if it's a concern at all. The purpose is the process. It's the future. It's sci-fi. It's awesome.
AI is an incredible tool and we're still discovering the right way to use it, but boy, "Gas Town" is not it.
> "Agent 1, refactor that method to be more efficient. Agent 5, tighten up the graphics on level 3!"
I'm not sure it's even that; his description of his role in this is:
"You are a Product Manager, and Gas Town is an Idea Compiler. You just make up features, design them, file the implementation plans, and then sling the work around to your polecats and crew. Opus 4.5 can handle any reasonably sized task, so your job is to make tasks for it. That’s it."
And he says he isn't reviewing the code; from the look of it, he lets agents review each other's code. I am interested to see the specs/feature definitions he's giving them, since that seems to be one interesting part of his flow.
Yeah maybe the refactoring was a bad example because it implies looking at the code. It's more like "Agent 1, change the color of this widget. Agent 9, add a red dot whenever there's a new message. Agent 67, send out a marketing email blast advertising our new feature."
Assuming both agents are using the same model, what could the reviewer agent add of value to the agent writing the code? It feels like "thinking mode" but with extra steps, and more chance of getting them stuck in a loop trying to overcorrect some small inane detail.
"I implemented a formula for Jeffrey Emanuel’s “Rule of Five”, which is the observation that if you make an LLM review something five times, with different focus areas each time through, it generates superior outcomes and artifacts. So you can take any workflow, cook it with the Rule of Five, and it will make each step get reviewed 4 times (the implementation counts as the first review)."
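To make the description concrete, here is a minimal sketch of what a "Rule of Five" loop might look like: one implementation pass, then four review passes with a different focus area each time. The function names and focus areas are illustrative assumptions, not from Yegge's post; `call_llm` is a stand-in for whatever model API is actually used.

```python
# Hypothetical sketch of the "Rule of Five" workflow described above.
# The implementation pass counts as review #1, followed by four
# focused review/revise passes — five passes total.

FOCUS_AREAS = ["correctness", "security", "performance", "readability"]

def rule_of_five(task, call_llm):
    # First pass: produce the implementation itself.
    artifact = call_llm(f"Implement: {task}")
    # Four more passes, each constrained to a single focus area.
    for focus in FOCUS_AREAS:
        artifact = call_llm(
            f"Review and revise the following, focusing only on {focus}:\n"
            f"{artifact}"
        )
    return artifact
```

The point of constraining each pass to one focus area is to keep the model from skimming: a single "review everything" prompt tends to catch less than five narrow ones.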
And I guess more generally, there is a level of non-determinism in there anyway.
> Nobody is voluntarily booking a flight with their Alexa.
Rich people use voice because they have disposable income and they don't care if a flight is $800 or $4,000. They are likely buying business/first class anyways.
Tony Stark certainly doesn't care.
Elon Musk certainly uses voice to talk to his management team to book his flights.
The average person doesn't have the privilege of using voice because they don't have enough fuck-you money to not care about prices.
As someone who's friends with executive assistants: rich people use executive assistants (humans) because they are busy and/or value their time more than money and don't want to bother with the details. None of them are using voice assistants.
> Tony Stark certainly doesn't care. Elon Musk certainly uses voice to talk to his management team to book his flights.
Delegating to a human isn't the same as using a voice assistant; this should be obvious, unless you believe that managers do all the real work and every IC is a brainless minion. Maybe far in the future when there's AGI, but certainly not today.
> The average person doesn't have the privilege of using voice because it doesn't have enough fuck-you-money to not care for prices.
You can order crap off Amazon for the same price as you would through the website with your Alexa right now, but Amazon themselves have admitted approximately 0% of people actually do this, which is why the entire division ended up a minor disaster. It's just a shitty interface, in the same way that booking a flight through voice is a shitty interface.
Still the same. "Hey look, I got these crappy developers (LLMs) to actually produce working code! This is a game-changer!" When the working code is a very small, limited thing.
I don't know, you're talking about an incredibly talented engineer saying:
"In the past week, just prompting, and inspecting the code to provide guidance from time to time, in a few hours I did the following four tasks, in hours instead of weeks"
It's up to you to decide how to behave, but I can't see any reason to completely dismiss this. It ends with good guidance on what to do if you can't replicate it, though.
This is a good point, but with AI it's a little different, because both your process and the AI are getting better. You build processes that can aspirationally support inferior AIs, while at the same time the AIs themselves improve and meet you halfway. This thought does not help my mental well-being, unfortunately.
I think in the end people will realise AI is not a silver bullet that will solve all problems and make all software engineers obsolete. Instead it will just become an extra tool in our toolbelt, right alongside LSPs, linters/fixers, unit test frameworks, well-thought-out text editors and IDEs, etc.
When the bubble has burst in a few years, the managers will have moved on to the next fad.
It's definitely helpful for search and summarisation.
In terms of prototyping, I can see the benefits but they're negated by the absurd amount of work it takes to get the code into more maintainable form.
I guess you can just do really heavy review throughout, but then you lose a lot of the speed gains.
That being said, putting a textual interface over everything is super cool, and will definitely have large downstream impacts, but probably not enough to justify all the spending.
Replacing human computers with electronic computers is nothing like what LLMs do or how they work. The electronic computer is straight-up automation. The same input in gives you the same output out every time. Electronic computers are actually pretty simple. They just do simple mathematical operations like add, subtract, multiply, and divide. What makes them so powerful is that they can do billions of those simple operations a second.
LLMs are not simple deterministic machines that automate rote tasks like computers or compilers. People, please stop believing and repeating that they are the next level of abstraction and automation. They aren't.
This is amazing! Thank you for confirming what I've been suspecting for a while now. People that actually know very little about software development now believe they don't need to know anything about it, and they are commenting very confidently here on hn.
Compiling high level languages to assembly is a deterministic procedure. You write a program using a small well defined language (relative to natural language every programming language is tiny and extremely well defined). The same input to the same compiler will get you the same output every time. LLMs are nothing like a compiler.
Is there any compiler that "rolls the dice" when it comes to optimizations? Like, if you compile the exact same code with the exact same compiler multiple times you'll get different assembly?
And the Alan Kay quote is great but does not apply here at all? I'm pointing out how silly it is to compare LLMs to compilers. That's all.
Rolling the dice is accomplished by mixing optimization flags, PGO data, and what parts of the CPU get used.
Or by using a managed language with a dynamic compiler (aka JIT) and GC. They are also not deterministic when executed; what outcome gets produced is all based on heuristics and measured probabilities.
Yes, the quote does apply because many cannot grasp the idea of how technology looks beyond today.
But the compiler doesn't "roll the dice" when making those guesses! Compile the same code with the same compiler and you get the same result repeatedly.
Still using Google Assistant after trying Gemini on my Pixel about 6 months ago. It was not an assistant replacement; it couldn't even perform basic operations on my phone. It would just say something like, "I'm sorry, I'm just an LLM and I can't send text messages." Has that changed?
Italian here, and I never heard of the term either.
Everybody always used the term "floppy" for the 3.5-inch disks as well.
I guess that since it was a foreign word the physical connotation of the term was simply lost, and "a floppy" was just the disk that your computer used.
"It does not matter if AI companies will not be able to get their money back and the stock market will crash. All that is irrelevant, in the long run."
Seriously? If these were open source tools that anyone could run on their home PC that statement would make sense, but that's not what we are talking about here. LLMs are tools that cost massive amounts of money to operate, apparently. The tool goes away if the money goes away. Fossil fuels revolutionized the world, but only because the cost benefit made sense (at least in the relative short-term).
Adding to this, if AI goes away we are left with a generation of people that do not understand the code that AI wrote, and older generations that eventually retire out. This is nearly on par with the destruction of the thinking machines in the fictional Dune universe, where society essentially has to build religious groups and guilds to replace the functions the thinking machines once served.
Oh my gosh, everything you want to host comes with a docker compose file that requires you to tweak maybe two settings. Caddy as your web proxy has the absolute simplest setup possible. You don't need AI to help you with this. You got this. You want to make sure you understand the basics so you (or your LLM) don't do anything brain-dead stupid. It's not that hard, you can do it!
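As a rough illustration of how little there is to tweak, a minimal self-hosted setup might look like this. The service name, image, domain, and port below are placeholders, not from any particular project:

```yaml
# docker-compose.yml — a hypothetical app behind Caddy
services:
  myapp:                      # placeholder: swap in whatever you're hosting
    image: myapp:latest
    expose:
      - "8080"                # internal port only; Caddy fronts it
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
```

And the entire Caddyfile — Caddy provisions HTTPS certificates automatically:

```
example.com {
    reverse_proxy myapp:8080
}
```

That's roughly the "maybe two settings" in practice: the domain and the upstream port.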