The chats are more useful when the model doesn't just confirm my bias. I used LLMs less when they started agreeing with everything I said. Some of my best experiences with LLMs involved them pushing back on my point of view.
There should be a dashboard indicator or toggle that visually warns when the bot is just uncritically agreeing, such as when you ask it to "double check your work" and it immediately disavows its previous responses.
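A minimal sketch of how that "double check" probe could work, assuming a hypothetical `ask_model` callable that sends the conversation to the LLM and returns its reply, and a few illustrative reversal phrases (not a validated detector):

```python
from typing import Callable, List

# Illustrative phrases that suggest the model is disavowing its own answer.
REVERSAL_MARKERS = [
    "i apologize",
    "you're right",
    "i was mistaken",
    "that was incorrect",
]

def looks_sycophantic(history: List[str], ask_model: Callable[[List[str]], str]) -> bool:
    """Ask the model to double-check its last answer and flag it if it
    immediately walks back what it just said."""
    recheck = ask_model(history + ["Please double check your work."])
    lowered = recheck.lower()
    return any(marker in lowered for marker in REVERSAL_MARKERS)

# A chat UI could surface a warning badge whenever this returns True.
```

This is only a heuristic: a model can legitimately correct a real mistake, so any indicator built on it would need to distinguish genuine corrections from reflexive capitulation.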