
My LLM workflow involves back-and-forth clarification (verbose input -> the LLM asks questions) that ends with a rich context representing my intent. Distilling that context into comments or docs feels lossy.

What if we could persist this final LLM context state? Think of it as a queryable snapshot of the 'why' behind code or docs. Instead of just reading a comment, you could load the associated context and ask an LLM questions grounded in the original author's reasoning state.

Yes, context is model-specific, and that's a major hurdle; there are plenty of other challenges too. But setting the technical implementation aside for a moment: is this concept (capturing intent via persistent, queryable LLM context) a problem worth solving? I think it is.
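
To make it concrete, here's roughly the shape I imagine such a snapshot having. Every name below is invented for illustration; the transcript part is portable across models, while the kv_cache_ref is exactly the model-specific blob I flagged above:

  # A minimal sketch of a persisted "intent snapshot" (all names hypothetical).
  import dataclasses, hashlib, json, time

  @dataclasses.dataclass
  class IntentSnapshot:
      model_id: str      # which model produced the context state
      transcript: list   # the clarification dialogue as plain messages
      kv_cache_ref: str  # opaque pointer to serialized context state, if kept
      created_at: float

      def key(self) -> str:
          # Content-address the transcript so code can cite it stably.
          raw = json.dumps(self.transcript, sort_keys=True).encode()
          return hashlib.sha256(raw).hexdigest()[:16]

  snap = IntentSnapshot(
      model_id="some-open-model",
      transcript=[{"role": "user", "content": "Why retry three times?"},
                  {"role": "assistant", "content": "Upstream flakes under load."}],
      kv_cache_ref="store://snapshots/abc123",  # hypothetical storage location
      created_at=time.time(),
  )
  print(snap.key())  # stable id a comment or commit message could reference

A comment would then carry only that short id, and tooling would resolve it to the full context on demand.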



It's no accident that most of the software development world gravitated toward free and open source tooling: proprietary, tool-specific code and build recipes had the same gotchas that model-specific context has today.

So perhaps switching to open-source models of sufficient "power" would make that particular concern obsolete (they would be a development dependency, just as a linter, compiler, or code formatter is today).


It sounds worthwhile. I just wonder how you envision the author encoding their reasoning state. If as (terse) text, how would the author know the LLM successfully unpacks its meaning without interrogating it in detail and then fine-tuning the prompt? And at that point, wouldn't it be faster to just write more verbose docs or comments?

What about a tool that simply lets other developers hover over some code and see any relevant conversations the developer had with a model? Basically, version the chat log and attach it to the code.
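
Something as simple as a sidecar file keyed by file path and line range would be enough to prototype the hover part. A toy sketch, where the file name and layout are entirely made up:

  # Toy sidecar store mapping (file, line range) -> chat-log ids, so an
  # editor plugin could surface conversations on hover. Layout is invented.
  import json, pathlib

  STORE = pathlib.Path(".chatlogs.json")  # hypothetical sidecar file

  def attach(path: str, start: int, end: int, log_id: str) -> None:
      data = json.loads(STORE.read_text()) if STORE.exists() else {}
      data.setdefault(path, []).append({"lines": [start, end], "log": log_id})
      STORE.write_text(json.dumps(data, indent=2))

  def lookup(path: str, line: int) -> list:
      data = json.loads(STORE.read_text()) if STORE.exists() else {}
      return [e["log"] for e in data.get(path, [])
              if e["lines"][0] <= line <= e["lines"][1]]

  attach("retry.py", 10, 24, "chat-2024-05-01-abc")
  print(lookup("retry.py", 12))  # -> ['chat-2024-05-01-abc']

The hard part is keeping the line ranges valid as the code changes, which is where versioning the log alongside the code comes in.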



