py4's comments

I honestly think a better title would be "How to stay relevant in the LLM era".


Simple example: Claude Cowork was written entirely with Claude Code.


Did they make it faster than without Claude Code?

1. No, implementing well-defined requirements was not commoditized a decade ago. You still have to come up with the design and a proper (efficient, correct, ...) solution that respects the requirements. It was, and still is, the skill set of an L4/L5 SWE.

2. If you think LLMs cannot help with navigating ambiguity and requirements, you are wrong. They might not be able to crack it 100% (due to not having all the necessary context), but they still help a lot.


You realize you are arguing my point? We are in complete agreement about #1.

As far as #2, last year I came into a large project at my then-new company one week before having to fly out to a customer site. I threw everything I could find about the project into NotebookLM and started asking it questions like I would ask the customer. Tools like Gong are pretty good at summarizing calls, too. I agree with you on #2.

I am at a point now where I am the first technical person in after sales closes a deal; I lead (larger) projects and do smaller projects myself. But I realize that, working remotely, my coworkers from Latin America are now just as good as I am, and cheaper.

I’m working on moving to a sales role when I see the time coming. It’s high touch, and it’s the last thing that can’t be taken over.

I would never have trusted any L4 or L5 SWE I met at AWS anywhere near one of my customers (ProServe). But they also wouldn’t let me put code into a repo that ran an AWS service. Fair is fair.

If I remember correctly, the leveling guidelines were (oversimplifying):

An L4 should be able to handle a well-defined story.

An L5 should be able to handle a well-defined Epic, where the "what" is known but not the "how."

An L6 should be able to lead a more ambiguous, longer-term project made up of multiple Epics.


I was saying it was not commoditized a decade ago, but I feel it's getting commoditized *now*. So you seem to be basically saying that SWE is over and it's time to move on to something that is primarily based on human-human interaction?


Yes, it has to be. LLMs are getting to the point where they can do everything else. What they can’t do, cheaper non-US labor can.

For context, the software developer market in the US is very bimodal: most developers are on the enterprise dev side (including most startups, like YC companies). I’m referring to this side, not FAANG and equivalent.

By "commoditization" back then, I mean I knew there was nothing I could do on that side of the market that would let me make more than around $150K-$165K. My plan was to get out of enterprise dev and onto the other side of the market in 2020, after my youngest graduated.

“Commoditization” now means too many people chasing too few jobs. In 2016, I could throw my resume up in the air and get three or four random enterprise dev job offers within less than a month; now, not so much.

I discovered AWS belatedly later that year, and my thesis changed to wanting to do #1 as you said above: customer-focused, using AWS as a tool, and bringing a developer mindset to cloud implementations.

It just magically happened in June 2020 that both fell into my lap: a full-time cloud consulting opportunity at BigTech (no longer there, thankfully).


LLMs are not hype. They have made me and my colleagues, who are NOT working on CRUD, way more productive.


If you believe that, then what's the point of this thread? You've decided (wrongly, imo, but that's not the point, I guess) that the LLM is better than you and should be trusted to do the job. If you start from that position, then of what use are the skills you wish to keep fresh?


You don't have to think the AI is better than you. Many coding tasks are just repetitive boilerplate... pretty simple stuff. Sometimes you have to set 20 fields on an object, refactor 10 functions to return a default value, write a generic SQL statement that returns data in a specific shape, center a div, or any number of relatively simple tasks. I wouldn't use it for the high-level architectural decisions. It's just a fancy context-aware autocomplete. Even though I can spell just fine, I use autocomplete on my phone all the time just to save time. I think it's a similar thing for code, if you use it properly. Of course, many do just offload all the thinking and do not critically review its work, but I think that is the wrong approach.
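As a minimal sketch of the kind of mechanical mapping task I mean (the field names are hypothetical, not from any real codebase), where autocomplete shines once the first line or two establish the pattern:

    from dataclasses import dataclass

    # Hypothetical domain object; the fields are illustrative only.
    @dataclass
    class Invoice:
        customer_id: str
        amount_cents: int
        currency: str
        issued_at: str
        due_at: str

    def invoice_from_api(payload: dict) -> Invoice:
        # Field-by-field mapping: in real code this might run to 20+
        # fields, and a context-aware autocomplete fills in the rest
        # of the pattern once the first couple of lines establish it.
        return Invoice(
            customer_id=payload["customerId"],
            amount_cents=payload["amountCents"],
            currency=payload["currency"],
            issued_at=payload["issuedAt"],
            due_at=payload["dueAt"],
        )

No individual line requires thought, but typing them all out by hand is exactly the kind of time sink I'm describing.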


Yes it is hype.


What is it that you are working on?


LLM training/inference stack


Let's say you want to make an architectural change. There are two options:

1. Ask AI to come up with the different options and let you review it

2. You think about the options and ask AI for feedback

#1 is much faster but results in atrophy (you are not critically coming up with the architecture changes)

#2 uses both your skills and the AI's, but it's going to be slower.

Which one will you choose? Currently I'm doing #1.


I agree... I think the process of coming to a conclusion yourself is different than having that solution proposed to you and accepting it.


If it's able to do the interesting/engaging part faster than me, I don't see why I should not outsource that to it (the same argument as for using LLM-assisted programming at all: you don't want to miss the productivity boost).


What then is your interest in avoiding skill atrophy? It sounds like you realize that outsourcing your programming work to AI will likely result in skill atrophy, but you are so happy with the results that you are okay with this. (And so are a lot of people! Not saying it's a bad decision.)

What change are you after?


I'm trying to see what I can do to stay relevant.


Aren't you making yourself irrelevant, and part of the first group to be cropped out of the market? Or do you see the others, who don't use LLMs as much, as the dinosaurs who will be filtered out first because your method offers more productivity?

I personally have rarely been paid for productivity. How fast I can put out features rarely earns me extra money. What people want is someone who understands what they want, finds a way to deliver by the time we agreed to, and spots pitfalls along the way.


If LLMs are the best programmers, then programming is obsolete, so why do you want programming skills?


You need to be able to review what the LLM is writing.

Had missed it. Thanks.


It's not clear from the article whether it's a dense model or an MoE. This matters when comparing parameter counts with GPT-4, which is reported to be an MoE.


As far as I know, EVERY 1T+ LLM is an MoE: Switch-C-2048, Wu Dao 2.0, GLaM, Pangu-Σ, presumably GPT-4. Am I missing any?


What is MoE?

Edit: Ah, Mixture of Experts. I hadn't heard this one yet. Thanks!
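Roughly, the idea (a toy sketch, not any particular model's implementation): instead of one big feed-forward layer, the model has N expert layers plus a learned router that sends each token to only the top-k experts, so only a fraction of the total parameters is active per token.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Toy mixture-of-experts layer: 8 experts, top-2 routing.
    # Weights are random here; in a real model they are learned.
    rng = np.random.default_rng(0)
    d, n_experts, top_k = 16, 8, 2
    experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
    router = rng.standard_normal((d, n_experts))

    def moe_forward(token):
        scores = softmax(token @ router)      # gate score per expert
        chosen = np.argsort(scores)[-top_k:]  # indices of the top-k experts
        # Only the chosen experts run, so active params << total params.
        return sum(scores[i] * (token @ experts[i]) for i in chosen)

    out = moe_forward(rng.standard_normal(d))

This is also why raw parameter counts aren't directly comparable between dense and MoE models: a 1T+ MoE may activate only a small fraction of those weights for any given token.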


This. We have not exhausted all the techniques at our disposal yet. We do need to look for a new architecture too, but these are orthogonal efforts.


Aside from the sources mentioned by others (arxiv-sanity-lite, newsletters):

1. Deep Learning Monitor: https://deeplearn.org

2. Following folks on Twitter; the Twitter recommendation algorithm will take care of the rest.

