My opinion is that these are not analogous.

Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.

It's the same as saying that using genAI will make you a bad artist. In the sense that putting hands to the medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.

However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.

Even before LLMs, successful and capable were orthogonal traits for most programmers. We had people who made millions churning out a CRUD website over a few months, and others who can build game engines but are stuck in underpaid contracting roles.





Are you not getting practice working with an LLM? Why would that not also be a skill you can practice?

25+ year programmer who's been using an agentic IDE for 9 months chiming in.

Yes, I absolutely am. Yes, it's a skill. Some programmers I've discussed this with made up their minds before they tried it. If the programmer's goal is to produce valuable software that works and is secure and easy to maintain, then they will gravitate to LLM-assisted programming. If their goal is to program software, then they won't like the idea of the LLM doing it for them.

To make the distinction clearer: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.

So where people fall on this debate mostly depends on their values and priorities (wants).

The thing about wants is that they are almost never changed by logical arguments. Some people just want to write the code themselves. Some people also want other people to write the code themselves. I don't know why, but I know that logical arguments are unlikely to change these people's minds, because our wants are so entangled in our sense of self that they exist outside the context of pure logic, probably for valid evolutionary reasons.

To be clear, programmers working on very novel, niche use cases where LLMs genuinely aren't useful have a valid case of "it's not helpful to me yet", and these people are distinct from who I'm mostly referring to. If someone is open-minded, tried their best to make it work, and decided it's not good enough yet, that's totally fair and unrelated to my main point.


I kinda like your analogy, but I find it a bit misguided. I'll give another one that fits my experience better.

Consider a math/physics student taking a course. Using an LLM is like having all the solutions to the course's exercises and reading them. If the goal is to finish all the problems quickly, then an LLM is great. If the goal is to properly learn math/physics, then doing the thinking yourself and using the LLM as a last resort, or to double-check your work, is the way to go.

Back to the carpenter: I think there is a lot of value in not using power tools while you learn more about making chairs and get better at it.

I am using many LLMs for coding every day. I think they are great. I am more productive. I finish features quickly, make progress quickly, and the dopamine release is fantastic. I started playing with agents and I marvel at what they can do. I can also tell that I am learning less and becoming a lot more complacent when working like this.

So I ask myself what the goal should be (for me). Should it be producing more raw output, or producing less output while enriching my knowledge and expertise?


Ah yes, there is a distinction for students or anyone learning principles.

If the goal is learning programming, then some of that should be done with LLMs and some without. I think we are still figuring out how to use LLMs to optimize the rate of learning, but my guess is that the way they should be used is very different from how an expert should use them to be productive.

Again it comes back to the want though (learning vs doing vs getting done), so I think my main point stands.


> To make the distinction clearer: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.

I'd say it's closer to carpenters using CNC machines.

You can be a "successful" carpenter that sells tons of wood projects built entirely using a CNC and not even know how to hold a chisel. But nobody is going to call you a good woodworker. You're just a successful businessman.

For sure, there are gradients, where people use it for the hard parts and manually do what they can, e.g., cutting CNC templates and using those as router guides on their work. People will be impressed by your builds, but Paul Sellers is still going to be considered more talented.


From my thousands of hours working with LLMs since GPT-3, I strongly disagree.

From the media AI-hype perspective, your CNC analogy sounds right. From my perspective, grounded in real experience using these tools, the power tool analogy is far more apt.

If you treat an agentic IDE like a CNC machine, that's how you get problems.

Consider the range of opinions. One other reply to my comment is about how the LLM introduced a security flaw and repeated line after line of the same code, implying it's useless and can't be trusted. Now you're replying that LLMs are so capable and autonomous that they can be trusted with full automation, to the extent of a CNC machine.

My point is that the truth lies somewhere in between.

Maybe in the future your CNC analogy will be valid, but right now, with Windsurf/Cursor and Opus 4.5, we aren't there yet.

Lastly, even in your analogy, setting up and using a CNC machine is a skill. It's an engineering and design skill. So maybe the person doing that would be more of an engineer than a woodworker, but going as far as calling them a businessperson isn't accurate to me.


> If the programmer's goal is to produce valuable software that works and is secure and easy to maintain, then they will gravitate to LLM-assisted programming.

Just this week alone I had the LLMs:

- Introduce a serious security flaw.

- Decide it was better to duplicate the same 5 lines of code 20 times instead of making a function and calling that (a sketch of both failure modes is below).
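
To make those concrete, here's a minimal hypothetical sketch of both failure modes in TypeScript (all names here are invented for illustration, not taken from my actual codebase):

    // 1) The security flaw: SQL built by string interpolation is
    //    injectable if `userId` comes straight from the request.
    function unsafeQuery(userId: string): string {
      return `SELECT * FROM orders WHERE user_id = '${userId}'`;
    }

    // Safer shape: a parameterized query; the driver binds the value.
    const SAFE_QUERY = "SELECT * FROM orders WHERE user_id = $1";

    // 2) The duplication: the same check pasted once per field...
    function validateDuplicated(input: { name?: string; email?: string }): string[] {
      const errors: string[] = [];
      if (input.name == null || input.name.trim() === "") {
        errors.push("name is required");
      }
      if (input.email == null || input.email.trim() === "") {
        errors.push("email is required");
      }
      // ...imagine 18 more copies of the block above.
      return errors;
    }

    // ...versus extracting the repeated block into one helper and calling it.
    function requireField(value: string | undefined, label: string, errors: string[]): void {
      if (value == null || value.trim() === "") {
        errors.push(`${label} is required`);
      }
    }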

And that is just this week. To be clear, I am not making this up to prove a point; I use AI day in and day out, and it happens consistently. Which is fine, humans can do that too. The issue is when there is a whole new generation of "programmers" who have absolutely zero clue how to spot those issues when (not if) they come up.

And as AI gets better (which it will), it actually becomes more dangerous, because people start blindly trusting the code it produces.


If that's happening then you're most likely not using the best tools (best model and IDE) for agentic coding and/or not using them right.

How an experienced developer uses LLMs to program is different than how a new developer should use LLMs to learn programming principles.

I don't have a CS degree. I never programmed in assembly. Before LLMs, I could pump out functional, secure LAMP-stack and JS web apps productively after years of practice. Some curmudgeonly CS expert might scrutinize my code for not being optimally efficient or engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked and the users got what they wanted.

If you're not using the best tools and you're not using them properly and then they produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or the tools?

Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

Whether the inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.

I'm not arguing that people who don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.

I have extremely compelling evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results, then all it proves is that you're ideologically opposed to it or not capable.


First, I'm using frontier models with Cursor's agentic mode.

> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility, yet they do it anyway and ship it to the user.

And on the human side, that is precisely why procedures like code review have been standard for a while.

But my main objection to the parent post was not that LLMs can't be powerful tools, but that maintainability and security are (IMO) possibly the worst examples you can use, since 70k-line un-reviewable pull requests are not maintainable and probably also not secure (how would you know?).


Okay, I'm pretty sure we would heavily agree on a lot of this if we pulled it all apart.

It really boils down to who is using the LLM tool and how they are using it and what they want.

When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).
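
For illustration, a precise prompt in that style might look something like this (the feature, paths, and numbers are all hypothetical):

    Add rate limiting to the /login endpoint in src/routes/auth.ts.

    Constraints:
    - Follow the middleware pattern already used in src/middleware/.
    - Limit to 5 attempts per IP per 15 minutes; respond with HTTP 429 after that.
    - Do not touch the session logic.

    When done: list every file you changed, then review your own diff
    for injection, error-handling, and maintainability issues before
    presenting it.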

So we are mashing together a few dimensions. My GP comment was pointing out:

- A: competent developer wants software functionality produced that is secure and maintainable

- B: competent developer wants to produce software functionality that is secure and maintainable

The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use flaws as an excuse to do it themselves. Reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.

What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs, it's who is using them and how.

What just occurred to me, though, and I suspect you will appreciate, is that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem, and would probably be really frustrating to deal with. Reviewing a 70k-line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that as a good thing. Though surely there is some way to manage it, and figuring out how probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.

One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me, because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for seniors to review, maybe that's the real reason junior roles are harder to find?


> thousands of developers are using the tools productively,

There's at least one study suggesting that they are not, in fact, working more productively; they just feel that way.

Unfortunately for me personally, Claude Code on the latest models does not generally make me more productive, but it has absolutely led to several of my coworkers submitting absolutely trash-tier untested LLM code for review.

So until I personally see it give me output that meets my standards, or I see my coworkers do so, I'm not going to be convinced. Legions of anonymous HN commenters insisting they're 50-year veterans who have talked Claude into spitting out perfect code will never convince me.

(I spent over an hour working with Claude Code to write unit tests. I did eventually get code that met my standards, after dozens of rounds of feedback and many manual edits, and cleaning up quite a lot of hallucinatory code. Like most times I decide to "put in the effort" to get worthwhile results from Claude, I'm entirely certain I could have done it faster myself. I just didn't really feel like it at 4 on a Friday)


The seed of this thread was the premise that using these power tools requires skill. Skill that takes time and practice to become proficient at.

And my point was whether or not people take the time to develop the skill depends on their motivations, values and beliefs.

In this thread I have weighed both sides: cases where LLMs are productive and cases where they are not.

Your comment comes off as biased and evidence of my point.


It's the difference between looking at someone else's work and doing it yourself.

How much can you level up by watching other people do their homework?


Would you say that management is just a dead-end skill then, with no ability to level up your managerial experience whatsoever? Or is there a distinction I am missing?

Those are objectively different skills, though.

In the same way that using hand tools is a different skill than using power tools.

Or doing bookkeeping for a small business with pencil and paper is a different skill than using spreadsheets or dedicated bookkeeping software.



