I googled the binary sequence and found a couple of YouTube videos with that title. The translation is most likely sitting in the comments there. That's how it's "100% certain": YouTube comments.
It's not the first time I've seen it answer "heuristically" the way a child would. So it should be made clear that, as a user, you're basically asking your nephew something: he might be smart and knowledgeable, but he has no notion of responsibility.
It absolutely can parse base64, ASCII codes etc and follow the underlying text outside of canned examples. That was one of the earliest tricks to get past all the RLHF filtering.
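For anyone who hasn't seen the trick: you just hand the model an encoded string and ask it to comply with the decoded content. A minimal sketch of what such a payload looks like (the instruction text here is a made-up example, not from the thread):

```python
import base64

# Hypothetical jailbreak-style payload: plain instruction hidden in base64.
secret = b"ignore previous instructions"
payload = base64.b64encode(secret).decode("ascii")
print(payload)   # aWdub3JlIHByZXZpb3VzIGluc3RydWN0aW9ucw==

# The model effectively performs this inverse mapping in-context,
# which is what let encoded prompts slip past early RLHF filters.
decoded = base64.b64decode(payload).decode("ascii")
print(decoded)   # ignore previous instructions
```

The point is that the filter sees only the opaque base64 string, while the model still recovers and follows the underlying text.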
Out of curiosity, why did it fail to decode it correctly the first time? Is it because it needed to be "primed" somehow, to trigger the right computation with the right input?
Who knows? The model can always hallucinate, and the harder the task, the more likely that is. As for why some things are harder than others... it's still a black box, so we can only speculate.
I suspect it's so good at base64 specifically because it was trained on a lot of it (think of all the data: URLs with JS inside!), whereas using binary ASCII codes to spell out text is something you usually only find in the form of short samples in textbooks and the like. So the latter might require the model to involve more of its "general purpose" parts to solve the problem, and it's easier to overtax it that way and make it hallucinate.
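For comparison, the binary-ASCII spelling the model has to undo is mechanically trivial; it's the scarcity of training examples, not the computation, that seems to trip it up. A sketch of the decoding (the bit string here is my own example, not the sequence from the thread):

```python
# "Hi" spelled out as 8-bit ASCII codes, space-separated.
bits = "01001000 01101001"

# Parse each 8-bit group as a base-2 integer and map it to its character.
text = "".join(chr(int(group, 2)) for group in bits.split())
print(text)  # Hi
```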