That's a valid point - I largely agree with you: I can make it generate mostly correct & acceptable code with a solid level of quality, given small, well-scoped features and detailed prompts. I'm just not at all convinced that it's a net productivity boost - you have to spend time on those detailed prompts and then verify the output, which is less of an issue when you write the thing from scratch.
It certainly speeds up some things and slows down others; for learning - a great resource! For generating code I'm on the fence, still experimenting; for now I write some code manually and some with Claude, working on a hybrid setup. My intuition tells me that flexible use of this tool will prove optimal - writing some code manually, some with LLMs, depending on both the task and the programmer's knowledge, experience, and skills.
I think the bottom line is that the bottlenecks are the specific model you use and your own skills, experience, and reasoning capacity (intelligence); you control only the latter, so focus on that!
"Perhaps there are frontiers of digital addiction we have yet to reach. Maybe one day we’ll all have Neuralinks that beam Instagram Reels directly into our primary visual cortex, and then reading will really be toast."
Even then, smart people will care about dissecting ideas, exploring new concepts, and broadening their understanding - and for most of that, Text Is King.
ASCII is a standard, but it's also standard, to a really high degree: 99% of webpages are UTF-8.
You could make a similar argument about, say, H.264, but the dominance is not as compelling and drops massively if you account for different container formats.
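Part of why the text-encoding case is so lopsided is that UTF-8 was designed as a strict superset of ASCII, so the existing installed base never had to migrate. A tiny Python sketch of that compatibility (my own illustration):

```python
# Any pure-ASCII text encodes to the exact same bytes under both encodings,
# which is why decades of existing ASCII content is already valid UTF-8.
text = "plain old ASCII"
assert text.encode("ascii") == text.encode("utf-8")

# UTF-8 then extends cleanly to the rest of Unicode without breaking that.
mixed = "caf\u00e9 \u2603"
assert mixed.encode("utf-8").decode("utf-8") == mixed
```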
Hunger and drive can definitely lead to unexpectedly good initial results; they can't replace relevant experience, but if somebody has at least done something similar, it's often worth making a bet on them!
There's also an interesting paradox between experience and motivation: often the most experienced and best-on-paper people are unfortunately the least motivated and least hungry - burnout and boredom do their part.
I've often heard the idea that you can always teach someone how to code, but you can't teach them to want to be great at it.
At the same time, I think there's a limit to how great someone can get even with a lot of experience. We see that with sports, and there's probably a similar limit for cognitive activities too.
You can probably get the average, already smart person to be a pretty good 8/10 at just about anything, be that music, math, writing, or coding. But there are levels beyond that which may require natural wiring most of us just aren't born with. An extreme example, of course: there's no amount of experience I can acquire to get to a von Neumann level of genius - but fortunately we don't need that to build business web apps.
What makes you think they will just keep improving? It's not obvious at all; we might soon hit a ceiling, if we haven't already - time will tell.
There are lots of technologies that have been 99% done for decades; it might be the same here.
From the essay - quoted not because I agree (I'm still undecided), but because Dario's opinion is probably the most relevant here:
> My co-founders at Anthropic and I were among the first to document and track the “scaling laws” of AI systems—the observation that as we add more compute and training tasks, AI systems get predictably better at essentially every cognitive skill we are able to measure. Every few months, public sentiment either becomes convinced that AI is “hitting a wall” or becomes excited about some new breakthrough that will “fundamentally change the game,” but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.
> We are now at the point where AI models are beginning to make progress in solving unsolved mathematical problems, and are good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI. Three years ago, AI struggled with elementary school arithmetic problems and was barely capable of writing a single line of code. Similar rates of improvement are occurring across biological science, finance, physics, and a variety of agentic tasks. If the exponential continues—which is not certain, but now has a decade-long track record supporting it—then it cannot possibly be more than a few years before AI is better than humans at essentially everything.
> In fact, that picture probably underestimates the likely rate of progress. Because AI is now writing much of the code at Anthropic, it is already substantially accelerating the rate of our progress in building the next generation of AI systems. This feedback loop is gathering steam month by month, and may be only 1–2 years away from a point where the current generation of AI autonomously builds the next. This loop has already started, and will accelerate rapidly in the coming months and years. Watching the last 5 years of progress from within Anthropic, and looking at how even the next few months of models are shaping up, I can feel the pace of progress, and the clock ticking down.
I think the reference to scaling is a pretty big giveaway that things are not as they seem - it's pretty clear that we've run out of (human-produced) data, so there's nowhere to scale in that dimension. I'm pretty sure modern models are trained in novel ways that engineers have had to come up with.
It's quite likely they train on CC output too.
Yeah, there's synthetic data as well, but how you generate said data is very likely a good question, and one that many people have lost a lot of sleep over.
What convinces me is this: I live in SF and have friends at various top labs, and even ignoring architecture improvements, the common theme is that any time researchers have spent time improving understanding of some specific part of a domain (whether via SFT or RL or whatever), it's always worked. Not superhuman, but measurable, repeatable improvements. In the words of Sutskever, "these models.. they just wanna learn".
Inb4 all natural trends are sigmoidal or whatever, but so far the trend is roughly linear, and we haven't seen a trace of a plateau.
There's the common argument that "Ghipiti 3 vs 4 was a much bigger step change", but it's not if you consider the progression from much earlier, i.e. BERT and such - then it looks fairly linear with a side of noise (fries).
Technology is just a lever for humanity. Really would like an AI butler, but I guess that's too hard (?). So many things AI could do to make my life better, but instead the world is supposedly over because it can summarize articles, write passable essays, and generate some amount of source code. In truth we haven't even scratched the surface, there is infinite new work to be done, infinite new businesses, infinite existing and new desires to satisfy.
It's done when there is no need to improve it anymore. But you can still want to improve it.
A can opener from 100 years ago will open today's cans just fine. Yes, enthusiasts still make improvements; you can design ones that open cans easier, or ones that are cheaper to make (especially if you're in the business of making can openers).
But the main function (opening cans) has not changed.
Even if the technology doesn't get better, just imagine a world where all our processes are documented in a way that a computer can repeat them, and modifying a process requires nothing more than plain English.
What used to require specialized integration can now be accomplished by a generalized agent.
The trade-off is replacing deterministic code with probabilistic agents. I've found you still need a heavy orchestration layer—I'm using LangGraph and Celery—just to handle retries and ensure idempotency when the agent inevitably drifts. It feels less like removing complexity and more like shifting it to reliability engineering.
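Roughly what that looks like for me, as a stripped-down sketch - the Celery wiring below is illustrative, and `run_agent` is a made-up stand-in for the actual LangGraph graph (names and Redis keys are invented for the example):

```python
import hashlib
import json

import redis
from celery import Celery

app = Celery("agent_jobs", broker="redis://localhost:6379/0")
cache = redis.Redis()  # used here as a simple idempotency store

def run_agent(payload: dict) -> dict:
    """Stand-in for the real agent call (in my case a LangGraph graph)."""
    raise NotImplementedError("wire up your agent here")

def idempotency_key(payload: dict) -> str:
    # Key the result on a hash of the payload, so a re-delivered or retried
    # message doesn't make the agent redo (and possibly re-drift on) the work.
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return f"agent-result:{digest}"

@app.task(bind=True, max_retries=3)
def execute_agent_step(self, payload: dict) -> dict:
    key = idempotency_key(payload)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    try:
        result = run_agent(payload)
    except Exception as exc:
        # Probabilistic failures get a bounded number of retries with backoff.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)

    cache.set(key, json.dumps(result), ex=86400)  # keep results for a day
    return result
```

None of this is conceptually hard, but it's exactly the kind of reliability plumbing that the "just replace the integration with an agent" framing tends to gloss over.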
I don't know whether I would go that far, but I also often find myself faster writing code manually; for some tasks, though, and depending on context, AI-assisted coding is pretty useful - but you still have to be in the driver's seat at all times.
Are the guardrails - the CI/CD that keeps code at least compilable and enforces minimal quality standards - also changeable via PR, or are they managed somewhere else? If they can be changed that way too, it might indeed go into oblivion!
Ahhh, the problem with incentives; maybe it's more the case that, had the Linux Foundation behaved in a way more aligned with the pure open-source ethos, Linux would not be so widely used and working so well? The problem of incentives in open source is real - who should support your development, why, and how? Especially for things like OSes, which require constant work.
I don't know whether we have figured out the best models here just yet; the results are mixed.
Exactly this; for some tasks it can speed you up dramatically, 5-10x; for others, it actually makes you slower.
And yes, very often writing a prompt + verifying the results and possibly modifying them and/or following up takes longer than just writing the code from scratch, manually ;)