
Yes, but none of the current LLMs are even remotely useful at that kind of work on anything moderately complex. I have a 2k LOC project that no LLM even "understands"*. They can't grasp what it is: it's a mostly React-compatible implementation of "hooks" meant for a non-DOM application. Every code assistant assumes it's a project using React.
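
For a rough idea of what that means (a toy sketch, not my actual code; names like runHooked are made up), think of React's useState driven by a plain function runner rather than the React/DOM renderer:

    // Hypothetical sketch: React-style hooks with no React and no DOM.
    type Setter<T> = (next: T) => void;

    let slots: unknown[] = [];       // state cells for the current "component"
    let cursor = 0;                  // which cell the next useState call reads
    let rerender: () => void = () => {};

    function useState<T>(initial: T): [T, Setter<T>] {
      const i = cursor++;
      if (slots.length <= i) slots.push(initial);
      const set: Setter<T> = (next) => {
        slots[i] = next;
        rerender();                  // re-run the function, like a React re-render
      };
      return [slots[i] as T, set];
    }

    // Run a hook-using function; "rendering" is whatever the function does.
    function runHooked(fn: () => void): void {
      rerender = () => { cursor = 0; fn(); };
      rerender();
    }

    // Usage: a "component" that writes to stdout instead of the DOM.
    runHooked(() => {
      const [count, setCount] = useState(0);
      console.log(`count is ${count}`);
      if (count < 3) setCount(count + 1);   // re-runs until count reaches 3
    });

Nothing in that shape imports react or react-dom, which is exactly the part the assistants keep missing.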

Any documentation they write at best restates what is immediately obvious from the surrounding code (useless: I need it to explain the why), or is some hallucination pretending the project is a React app.

To their credit, they've slowly gotten better now that a lot of documentation already exists, but that was me doing the work for them. What I needed was for them to understand the project from the existing code and then write the documentation for me.

Though I guess once AI is that good, we won't need to write documentation anymore, since every dev can just generate it for themselves with their favorite AI, in whatever form they prefer to consume it.

* They'll pretend they understand by re-stating what is written in the README, then proceed to produce nonsense.



I've found "Claude 3.7 Sonnet (Thinking)" to be pretty good at moderately complex code bases, after going through the effort to get it to be thorough.

Without that effort it's a useless sycophant and functionally extremely lazy (i.e. it takes shortcuts all the time).

Don't suppose you've tried that particular model, after getting it to be thorough?




