
I generally agree with you, but am trying to see the world through the new AI lens. Having a machine make human-like errors isn't the end of the world; it just completely changes the class of problems the machine should be deployed to. It definitely should not be used for things that need strict, verifiable processes. But it can be used for processes where human errors are acceptable, since it will inevitably make those same classes of errors... just without needing a human to do so.

Up until modern AI, problems typically fell into two disparate classes: things a machine can do, and things only a human can do. There's now this third fuzzy/brackish class in between that we're just beginning to explore.



I can agree with you, and in a discussion among adults working together to address our issues, I will.

The issue is that we have no real proof that AI is suitable for these tasks, yet the people who were doing them have already been laid off.

The economy is now propped up largely by the belief that AI will be so successful that it eliminates most of the workforce. I just don't see how this ends well.

Remember, regulations are written in blood. And I think we're about to write many brand new regulations.


Yeah, I'm not attempting to make any broad statements about regulations or who has or hasn't been laid off. My point is only that a common mistake I see a lot of people make is applying AI/LLMs to tasks that need to be deterministic and, predictably, getting bad results.

There is a class of tasks that is well suited to current-gen AI models: things that are repetitive, tedious, and can absorb some degree of error. But I agree that this class of tasks is significantly narrower than what the market is betting on AI being able to accomplish.
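To make that split concrete, here's a minimal sketch in Python. The names are all hypothetical (call_llm is just a stand-in for whatever model client you'd actually use): the LLM handles a fuzzy, error-tolerant step, while everything deterministic stays in plain code behind a guardrail.

    # Sketch of the principle above: LLM for the fuzzy step,
    # plain code for anything that must be deterministic.

    ALLOWED_CATEGORIES = {"bug", "feature-request", "praise", "other"}

    def call_llm(prompt: str) -> str:
        """Hypothetical model call; replace with your actual client."""
        raise NotImplementedError

    def categorize_feedback(text: str) -> str:
        """Fuzzy step: a wrong category is annoying, not catastrophic."""
        raw = call_llm(
            "Answer with exactly one of: bug, feature-request, praise, other.\n"
            f"Feedback: {text}"
        )
        category = raw.strip().lower()
        # Deterministic guardrail: never let an out-of-vocabulary answer
        # leak downstream; degrade to a safe default instead.
        return category if category in ALLOWED_CATEGORIES else "other"

    def compute_refund_cents(price_cents: int, rate: float) -> int:
        """Deterministic step: money math stays out of the LLM entirely."""
        return round(price_cents * rate)

The design choice is that an LLM mistake here is bounded (a misfiled ticket), while the refund calculation never touches the model at all.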



