I've been impressed by how good ChatGPT is at getting the right context from old conversations.
When I ask simple programming questions in a new conversation it can generally figure out which project I'm going to apply it to, and write examples tailored to those projects. I feel that it also makes the responses a bit warmer and more personal.
Agreed that it can work well, but it can also be irritating. I find myself using private conversations to try to isolate them; a straightforward per-chat toggle for memory use would be nice.
Love this idea. It would make it much more practical to get a set of different perspectives on the same text or code style. Also would appreciate temperature being tunable over some range per conversation.
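For what it's worth, the API already exposes temperature per request, so a per-conversation slider in the UI doesn't seem far-fetched. A minimal sketch with the OpenAI Python SDK (the model name and values are just placeholders):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Same question at two temperatures: low for predictable answers,
    # high for more varied brainstorming.
    for temp in (0.2, 1.0):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": "Name ideas for a caching library?"}],
            temperature=temp,
        )
        print(temp, response.choices[0].message.content)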
ChatGPT having memory of previous conversations is very confusing.
Occasionally it will pop up saying "memory updated!" when you tell it some sort of fact. But hardly ever. And you can go through the memories and delete them if you want.
But it seems to have knowledge of things from previous conversations in which it didn't pop up and tell you it had updated its memory, and which don't appear in the list of memories.
So... how is it remembering previous conversations? There is obviously a second type of memory that they keep kind of secret.
If you go to Settings -> Personalisation -> Memory, you have two separate toggles for "Reference saved memories" and "Reference chat history".
The first one refers to the "memory updated" pop-up and its bespoke list of memories; the second one likely refers to some RAG system that pulls relevant snippets of previous conversations into context.
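Nobody outside OpenAI knows exactly how the chat-history feature works, but a generic retrieval step over past conversations might look something like this. Every name below is made up for illustration, and the toy bag-of-words embedding stands in for whatever learned embedding model the real system uses:

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Toy bag-of-words embedding; a real system would use a learned model.
        vec = np.zeros(256)
        for word in text.lower().split():
            vec[hash(word) % 256] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def recall_snippets(query: str, history: list[str], k: int = 3) -> list[str]:
        # Return the k past-conversation snippets most similar to the query.
        q = embed(query)
        return sorted(history, key=lambda s: -float(q @ embed(s)))[:k]

    # The retrieved snippets would then be injected into the model's context.
    past = ["we discussed my Django side project", "asked about sourdough timing"]
    print(recall_snippets("help me write a Django view", past, k=1))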
ChatGPT is what work pays for so it's what I've used. I find it grossly verbose and obsequious, but you can find useful nuggets in the vomit it produces.
Go into your user settings -> personalisation. They’ve recently added dropdowns to tune its responses. I’ve set mine to “candid, less warm” and it’s gotten a lot more to-the-point in its responses.
I was very disappointed with Supernova in the East. What started as a telling of the Pacific War from the point of view of the Japanese empire morphed into the usual "war is bad but American soldiers are heroes" that's very common for this period.
I tuned out when he spent 30 minutes describing a famous photo-op of General MacArthur going ashore to the Philippines. That is the complete opposite of the original promise of the podcast.
The podcast started as a sequel to Mike Duncan's classic The History of Rome, and in my opinion surpassed it. Where THoR eventually falls into the narrative trap of turning into "The Lives of Roman Emperors", THoB spends a lot of time talking about economic, demographic, societal, and technological changes within the Empire and the world.
Highly recommended if you want a proper history podcast.
The thing I miss about the internet of the late 2000s and early 2010s was having so much useful data available, searchable, and scrapable. Even things like "which of my friends are currently living in New York?" are impossible to find now.
I always assumed this was a once-in-history event. Did this cycle of data openness and closure happen before?
Search "YouTube Revanced" on Android. It's a bit of a pain to install, but it lets you customise your YouTube app and add or remove as many features as you want.
These kinds of customisations should be standard for apps people use every day.
You -> Gear icon -> Revanced Settings -> General -> Navigation Buttons -> Hide Shorts.
You also need to hide them from the feed and a few other places. You are not stupid; Revanced has too many options and the settings are large and confusing. It's easier to search "shorts" and toggle everything.
Thank you. I already had that setting enabled, but your comment inspired me to review all the settings again and I have been successful in hiding Shorts from view (for now, until Google changes something again no doubt).
Everything on the internet is fake. That is as true now as it always was.
For every real post, I can make up a fake one that's more agreeable to the hivemind and therefore will be more upvoted. Since you see a limited number of posts in a session, you will only see fake posts, and the real ones will be hidden forever.
The author overlooked an interesting error in the second skull pancake image: the strawberry is in the right eye socket (on the left side of the image), and the blackberry is in the left eye socket (on the right side of the image)!
This looks like it's caused by 99% of the relative directions in image descriptions being given from the viewer's point of view, and by 99% of the ones that aren't referring to a human rather than to a skull-shaped pancake.
I am a human, and I would have done the same thing as Nano Banana. If the user had wanted a strawberry in the skull's left eye, they should've said, "Put a strawberry in its left eye socket."
Exactly what I was thinking too. I'm a designer, and I'm used to receiving feedback and instructions. "The left eye socket" would, to me, refer to what I currently see in front of me, while "its left eye socket" instantly shifts the perspective from me to the subject.
I find this interesting. I've always described things from the user's point of view: the left side of a car is the driver's side, regardless of who is looking at it from what direction. To me, this would include a body.
To be honest, this is the sort of thing Nano Banana is weak at in my experience. It's absolutely amazing - but it doesn't understand left/right/up/down/shrink this/move this/rotate this, etc.
See the link below, which demonstrates this weakness with the same prompts as the article, showing that it is a model weakness and not just a language ambiguity:
Mmh, you need to discard the session or rewrite the failing prompt instead of continuing and correcting on failures. Once errors occur, you've basically introduced a poison pill that will continuously make things go haywire. Spelling out what it did wrong is the most destructive thing you can do - at least in my experience.
I admit I missed this, which is particularly embarrassing because I point out this exact problem with the character JSON later in the post.
For some offline character JSON prompts I ended up adding an additional "any mentions of left and right are from the character's perspective, NOT the camera's perspective" to the prompt, which did seem to improve success.
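A rough example of the kind of prompt I mean; the fields are just how I happen to structure mine, not any official schema:

    import json

    character = {
        "name": "example_character",  # illustrative fields only
        "pose": "arms crossed, head tilted to the right",
        "perspective_note": (
            "Any mentions of left and right are from the character's "
            "perspective, NOT the camera's perspective."
        ),
    }
    prompt = json.dumps(character, indent=2)  # pasted into the image prompt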
The lack of proper indentation (which you noted) in the Python fib() examples was even more apparent. The fact that both AIs you tested failed in the same way is interesting. I've not played with image generation; is this type of failure endemic?
Came to make exactly the same comment. It was funny that the author specifically said that Nano Banana got all five edit prompts correct, rather than noting this discrepancy, which could be argued either way (although I think the "right eye" of a skull should be interpreted with respect to the skull's POV.)
Extroverts tend to expect directions from the perspective of the skull. Introverts tend to expect their own perspective for directions. It's a psychology thing, not an error.