I'm generally positive on LLMs, but I've become convinced that the long-term memory features LLM chat providers have implemented are just too hard to keep on track.
They create a "story drift" that is hard for users to escape. Many users don't – and shouldn't have to – understand the nature and common pitfalls of context. I think that in the case of the original story here, the LLM was pretty much in full RPG mode.
I turned off conversation memory months ago. In most cases I appreciate knowing I'm working with a fresh context window; I want to know what the model thinks, not what it guesses I'd like to hear. I think conversations with memory enabled should have a clear warning message on top.
Thinking more about this, a right to inspect the full context window, guaranteed by consumer protection law, wouldn't be a bad thing.
If there's one place to implement a PsyOp, the context is it. Users should be allowed to see what, on top of the training data, influenced the message they're reading.
The reasoning option was good for this: it used to reveal the LLM's motivations for saying what it said: "the user is expressing concern about X topic, I have to address this with compassion..."
I've considered turning off memory and just using the custom addition to the system prompt (or whatever they call it in the UI) to feed it certain context: languages, frameworks, etc. that I prefer or am often working with. A sketch of the same idea via the API is below.
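For anyone scripting this instead of using the UI, here's a minimal sketch of the "no memory, fixed system prompt" approach, using the OpenAI Python client as one example. The model name and preference text are placeholders, not anything from the original discussion; any chat-completion API with a system role works the same way.

```python
# Sketch: carry standing preferences in a system prompt instead of provider memory.
# Every call starts from a fresh context; the system prompt is the only state.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The standing context you would otherwise trust memory to accumulate.
PREFERENCES = (
    "I mostly work in Rust and TypeScript. "
    "Prefer standard-library solutions over extra dependencies."
)

def ask(question: str) -> str:
    """One-shot question: nothing from earlier conversations can leak in,
    because the only carried-over context is PREFERENCES."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": PREFERENCES},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What's an idiomatic way to parse JSON?"))
```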
I often do not like it when it references another conversation, since I created a fresh convo precisely to avoid poisoning the context. I sometimes switch providers to ask a "clean" question, or use incognito to be sure my previous conversations aren't tainting the response. Memory can be cool, but sometimes it ties things together it shouldn't, or brings in context I don't want.