
This op-ed suggests that it’s easier to audit a huge amount of code before merging it in than it is to write the code from scratch. I don’t know about anyone else, but I generally find it easier to write exactly what I want than to mentally model what a huge volume of code I’ve never seen before will do?

(Especially if that code was spit out by an alien copypasta that is really good at sounding plausible with zero actual intelligence or intent?)

Like, if all I care about is: does it have enough unit tests and do they pass, then yeah I can audit that.

But if I was trying to solve truly novel problems like modeling proteins, optimizing travel routes, or new computer rendering techniques, I wouldn’t even know where to begin; it would take tons of arduous study to understand how a new project full of novel algorithms is going to behave?


Feel similarly, but even if it is wrong 30% of the time, you can (as the author of this op-ed points out) pour an ungodly amount of resources into getting that error rate down by chaining calls together so that you have many chances to catch the error. And as long as that only destroys the environment and doesn’t cost more than a junior dev, then they’re going to trust their codebases with it. Yes, it’s the competitive thing to do, and we all know competition produces the best outcome for everyone… right?
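
As a rough illustration of the chaining argument (a back-of-the-envelope sketch only, assuming each review pass is an independent check with the same 30% miss rate, which real LLM passes are not):

    # Sketch: if a single pass misses an error 30% of the time,
    # and we run k independent review passes, the chance that
    # every pass misses it is 0.3 ** k.
    miss_rate = 0.3
    for k in range(1, 6):
        print(f"{k} passes -> residual miss rate: {miss_rate ** k:.4f}")

Of course the whole question is whether those passes are actually independent; correlated failure modes are exactly where this falls apart.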


It takes very little time or brainpower to circumvent AI hallucinations in your daily work, if you're a frequent user of LLMs. This is especially true of coding using an app like Cursor, where you can @-tag files and even URLs to manage context.


> it’s the competitive thing to do

I'm expecting there's at least some senior executive who realizes how incredibly destructive this is to their products.

But I guess time will tell.


I remember the first time I played with GPT and thought “oh, this is fully different from the chatbots I played with growing up, this isn’t like anything else I’ve seen.” (Though I suppose it is implemented much like predictive text; the difference in experience is that predictive text is usually wrong about what I’m about to say, so it feels silly by comparison.)


> I suppose it is implemented much like predictive text

Those predictive text systems are usually Markov models. LLMs are fundamentally different. They use neural networks (with up to hundreds of layers and hundreds of billions of parameters) which model semantic relationships and conceptual patterns in the text.
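
For the curious, here's a toy next-word predictor in the Markov style (illustrative only; real keyboard prediction is more sophisticated, but the basic idea of replaying observed transitions is the same):

    import random
    from collections import defaultdict

    # First-order Markov model: map each word to the words that
    # follow it in the training text, then sample from that.
    text = "the cat sat on the mat the cat ate the fish".split()
    next_words = defaultdict(list)
    for a, b in zip(text, text[1:]):
        next_words[a].append(b)

    def predict(word):
        # Suggest a likely next word. Nothing here "understands" the text;
        # it only replays word-to-word transitions it has already seen.
        options = next_words.get(word)
        return random.choice(options) if options else None

    print(predict("the"))  # e.g. "cat", "mat", or "fish"

An LLM, by contrast, conditions on the whole preceding context through many layers of learned representations rather than a single lookup table, which is where the qualitative difference in experience comes from.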

