Hacker News | throw1235435's comments

The language is still ahead of C#, still receiving features, and by and large keeping up with the .NET ecosystem. Honestly, I don't get the sheer negativity; the same could be said for Gleam or any other functional language these days, especially with AI coming along, with respect to long-term support. Eventually things just work and are mostly complete; things don't have to get reinvented or keep getting better forever.

> As much as I had high hopes for F# I think its safe at this point, to not pursuit it any further

I find this attitude interesting; you wanted it to be more than it was. I don't have high hopes for any language, other than it building my software, which it and many others can do. Right tool for the right job. I'm not attached to my code, other than whether it can be maintained and changed, and has sane defaults/guardrails to introduce fewer defects, etc. F# can do this, as can many others. Interestingly, I've seen the same attitude eventually develop toward all languages of this class other than Rust: Scala, OCaml, etc. are in similar positions.

Funnily enough, Opus/CC has a number of times suggested Rust for my projects, and when that didn't fit (too much for the team) has gone to F# even over Java-based languages, assuming domain-modelling code and the need for more perf (e.g. value types, and other stuff), without explicit prompting. It has then generated .fsx scripts to run experiments, etc. that seem to be more reliable than the default Python ones it runs (mostly package errors and static-typing fixes). `dotnet fsi` fits well into the agentic workflow.

> Rescript and that being said, Rescript is probably more of a competitor to gleam

Depends on your target. F# at least has a backdoor into the C# ecosystem - from a risk perspective that makes it more palatable. New languages have an initial "excitement" about them; but generally they are just tools.

Pick something that fits your team and build. F# does work, and so do many other tools. In the age of AI, IMO the ecosystem and guardrails for the AI matter more than the language and its syntax, readability aside. In this regard F# is still "less risky" given its .NET interop.


> They are laughing at them.

Yes, but not for the reason you think - more that those are the future customers. If you look closely, most are pivoting slowly away from software and shifting more to AI + hardware. The slow layoffs and the pivoting of that capital to infra show this. All that "vibed" software needs to run somewhere. Also, the models that generate and power all that software need compute, which comes from somewhere.

If I (as a big tech company) can have:

- high-margin compute, since GPUs, power, data-centre setup, etc. are expensive, AND

- models that outperform the ones you can run at home, AND

- vibed software that derives a lot of its functionality from that AI compute and wants to be hosted on it,

then the incentive is obvious.

The big companies are pivoting away from software toward being more infrastructure-like for the democratized software that is projected to be made. They will be fine, but in 10 years they will be more cloud hyperscalers and AI compute providers than software businesses. Any software they write will be more to package up their compute as higher-margin products.

None of this, in my view, gives much hope to current SWEs.


A lot of this forum is in exactly that position. The fear is real; there is a real risk this AI destroys families and people's lives in the disruption.

This is what will occur - the bad scenario, that is. Labor and its knowledge distribute (knowledge is hard to contain); capital centralises and compounds. It has always been that way. With AI there will of course be a tension between the two.

The root question is: Will AI decentralise quicker than the disruption to this profession? I don't think so.

I've noticed us techies don't really understand economics and game theory all that well - we just see an awesome toy, want to play with it, and want others to enjoy it too. We have worked to democratize computing for years (e.g. OSS), now to our detriment. No one in society, long term, respects people who do this in a capitalist system; they find them naive. I can now understand why other professions find us a little immature, like kids playing with tech toys.

I love solving problems with technology and love the field, but as I've gotten older I look back on a less technological life with nostalgia. Technology, for all its benefits, has disrupted the one thing humans need and had for millions of years of our evolution: relative stability within a lifetime. The mental health benefits of stability are massive and usually unmeasured. Technology, as evidenced by this thread, creates more and more anxiety about our future and our place in the community (e.g. social media, AI, and others). "Adaptability" isn't just a psychological trait; a wealthy and secure person is by definition more adaptable too.


Sure; I absolutely agree, and more to the point, SWEs and their ideologies, compared to other professions, have meant they are first on the chopping block. But what do you tell those people - that they no longer matter? Do they still matter? How will they matter? They are no different from practitioners of any other craft; humans in general derive value partly from the value they can give to their fellow man.

If the local unskilled job now matters more than a SWE, these people have gone from being worth something to society to being worth less than someone unskilled with a job. At that point, following your logic, I can assume their long-term value is that of an unemployed person, which to some people is negative. That isn't just an identity crash; it's potentially a crash of their whole lives and livelihoods. Even smart people can be in situations where it is hard to pivot (as you say: mortgages, families, lives, etc.).

I'm sure many of the SWEs here (myself included) are asking the same questions, and the answers are too pessimistic to admit publicly, or even privately. For me, the joy of coding is taken away with AI in general, in that there is no joy in doing something a machine will soon be able to do better, for me at least.


I agree with you that the implications are bleak. For many people they are not abstract or philosophical. They are about income, stability, and the ability to keep a life intact. In that sense the fear is completely rational.

What stands out to me is that there seems to be a threshold where reality itself becomes too pessimistic to consciously accept.

At that point people do not argue with conclusions. They argue with perception.

You can watch the systems work. You can see code being written, bugs being fixed, entire workflows compressed. You can see the improvement curve. None of this is hidden. And yet people will look straight at it and insist it does not count, that it is fake, that it is toy output, that it will never matter in the real world. Not because the evidence is weak, but because the implications are unbearable.

That is the part that feels almost surreal. It is not ignorance. It is not lack of intelligence. It is the mind refusing to integrate a fact because the downstream consequences are too negative to live with. The pessimism is not in the claim. It is in the reality itself.

Humans do this all the time. When an update threatens identity, livelihood, or future security, self deception becomes a survival mechanism. We selectively ignore what we see. We raise the bar retroactively. We convince ourselves that obvious trend lines somehow stop right before they reach us. This is not accidental. It is protective.

What makes it unsettling is seeing it happen while the evidence is actively running in front of us. You are holding reality in one hand and watching people try to look away without admitting they are looking away. They are not saying “this is scary and I do not know how to cope.” They are saying “this is not real,” because that is easier.

So yes, the questions you raise are the real ones. Do people still matter? How will they matter? What happens when economic value shifts faster than lives can adapt? Those questions are heavy, and I do not think anyone has clean answers yet.

But pretending the shift is not happening does not make the answers kinder. It just postpones the reckoning.

The disturbing thing is not that reality is pessimistic. It is that at some point reality becomes so pessimistic that people start editing their own perception of it. They unsee what is happening in order to preserve who they think they are.

That is the collision we are watching. And it is far stranger than a technical debate about code quality.


Whether you look away or embrace it doesn’t matter though. We’re all going to be unemployed. It sucks.

Yeah I'm talking about HN, where the viewpoints are so divided. There are people here who are telling you not to worry and that it doesn't suck.

Software devs training the model with their code, making themselves obsolete, is encouraged, not banned.

Claude Code making itself obsolete is banned.


If that is true, then all the commentary about software people still having jobs due to "taste" and other nice words is just that: commentary. In the end, the higher-level stuff still needs someone to learn it (e.g. learning ASX2 architecture, knowing what tech to work with), but it requires IMO significantly less practice than coding, which in itself was a gate. The skill morphs into being a tech expert rather than a coding expert.

I'm not sure what this means for the future of SWEs yet, though. I don't see higher levels of staff in big businesses bothering to do this, and at some scale I don't see founders still wanting to manage all of these agents and processes (they have better things to do at higher levels). But I do see the barrier of learning to code gone, meaning it probably becomes just like any other job.


Not entirely disagreeing with your point, but I think they've mostly been forced to pivot recently for their own sakes; they will never say it, though. As much as they may seem eager, the most public people tend to be better at outside communication and at knowing what to say in public to enjoy more opportunities, remain employed, or, for the top engineers, still seem relevant to the communities they are part of. It's less about money and more about respect there, I think.

The "sudden switch" since Opus 4.5, where many who were saying just a few months ago "I enjoy actual coding" are now praising LLMs, isn't a one-off occurrence. I do think underneath it is somewhat motivated by fear - not for the job, however, but for relevance, i.e. remaining relevant to discussions, tech talks, new opportunities, etc.


Not sure. As software becomes a commodity, I can see "old school" tech slowing down (e.g. programming languages, frontend and backend frameworks, etc.). The need for a better programming language is less now, since LLMs are increasingly the ones writing the code - the pain of wanting to be more concise/expressive isn't necessarily felt by the writer of the code. The languages that do come out will probably have more specific communities around them (e.g. AI).

The question is how rapid the adoption is. The price of failure in the real world is much higher ($$$, environmental, physical risks) vs just "rebuild/regenerate" in the digital realm.


Military adoption is probably a decent proxy indicator - and they are ready to hand the kill switch to autonomous robots.


Maybe. There, the cost of failure again is low. It's easier to destroy than to create. Economic disruption to workers will take a bit longer, I think.

Don't get me wrong; I hope we do see it in physical work as well. There is more value to society there, and it consists of work that is risky and/or hard to do - and is usually needed (food, shelter, etc.). It also means the disruption becomes an "everyone" problem rather than something that just affects the "intellectual" types.

