I think the problem is actually that it always answers confidently.
Ask it why World War II started, or how to make a cake, or where to go for dinner, or anything else, and it gives you a confident, reasonable-sounding answer. A lot of the answers are simply whatever it's already seen, mashed up. You can think of it as a kind of search. But it doesn't actually think about what it's saying; it's stringing words together to make you think it's smart.
So when it makes something up, it sounds, to a reader who only ever sees it answer in perfect English with decent answers, as though it found an article about this professor in its dataset and is merely summarizing it.
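To make that concrete, here's a deliberately tiny toy sketch of the mechanism: an autoregressive model just keeps picking a plausible next word given what came before. The bigram table below is made up for illustration (real models learn billions of parameters; this is nothing like an actual implementation), but it shows the structural point: there's no "I don't know" branch anywhere, so every output comes out equally fluent and equally confident.

```python
import random

# Toy bigram "model": for each word, a list of words that plausibly follow it.
# Entirely made-up data, purely for illustration.
bigrams = {
    "the":        ["professor", "war", "recipe"],
    "professor":  ["wrote", "argued", "discovered"],
    "wrote":      ["the"],
    "argued":     ["the"],
    "discovered": ["the"],
    "war":        ["started", "ended"],
    "started":    ["because"],
    "ended":      ["because"],
    "because":    ["the"],
    "recipe":     ["calls"],
    "calls":      ["for"],
    "for":        ["the"],
}

def generate(start, length=10):
    """Keep appending a plausible next word. Note what's missing:
    no fact check, no uncertainty estimate, no way to say "I don't know"."""
    words = [start]
    for _ in range(length):
        words.append(random.choice(bigrams.get(words[-1], ["the"])))
    return " ".join(words)

print(generate("the"))
# e.g. "the war started because the professor argued the recipe calls for"
# Fluent-looking, confidently delivered, and not anchored to anything true.
```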
I was showing a colleague a few instances where ChatGPT was confidently wrong, and he picked up on something I never had. He said, "Oh, so it's doing improv!" He explained to me that the standard response in improv is to say "Yes, and..." and just run with whatever the audience suggests. He's completely right! ChatGPT constantly responds with "Yes, and..." It's just always doing improv!
And people are trying to replace doctors with LLMs. It's like "ER" meets "Whose Line?"
ChatGPT is the Mandela Effect, personified. It's going to go for whatever seems like it SHOULD be true. Sometimes that will go horribly wrong, except that, by its very nature, it will seem like it's probably not wrong at all.
> I think the problem is actually that it always answers confidently
This isn't a problem restricted to ChatGPT; there are humans who display this trait too. It might be appealing at a superficial level, but if you start believing speakers with this trait, it's a slippery slope. A very slippery slope.
I'm trying really hard to avoid Godwin's law, so let me suggest that Elizabeth Holmes could be one example of this.