Hacker News

To be clear, I see a lot of "magical thinking" among people who promote AI. They imagine a "perfect" AI tool that can basically do everything better than a human can.

Maybe this is possible. Maybe not.

However, it's a fantasy. Granted, it's a compelling fantasy. But it's not one based on reality.

A good example:

"AI will probably be smarter than any single human next year. By 2029, AI is probably smarter than all humans combined." -- Elon Musk

This is, of course, ridiculous. But, why should we let reality get in the way of a good fantasy?

> AI will probably be smarter than any single human next year.

Arguably that's already so. There's no clear single dimension for "smart"; even within exact sciences, I wouldn't know how to judge e.g. "Who was smarter, Einstein or Von Neumann?". But for any particular "smarts competition", especially if it's time limited, I'd expect Claude 4.5 Opus and Gemini 3 Pro to get higher scores than any single human.


So we are back to the original comment that generated this thread: why hasn't AI generated a new and better compression algorithm, for example?

As I see it, it's because it didn't want to.

Hear me out: let's say that generating a new and better compression algorithm is something that might take a dedicated researcher about a year of their life, and that person is being paid to work on it, in industry or via a grant. Has anyone actually run Claude Code instances in a loop for a human-year, with the instruction to keep trying different approaches until it produces a better compression algorithm?
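For what it's worth, the easy half of that loop already exists: "better compression algorithm" is mechanically checkable. Here's a minimal sketch of the kind of harness such a loop would need, where `candidate_compress` / `candidate_decompress` are hypothetical stand-ins for whatever the agent produces (here they just wrap zlib at its highest level, so don't expect a win), scored against a zlib-default baseline:

```python
import zlib

# Hypothetical candidate codec (assumption: in a real loop, the agent
# would swap in its own implementation here).
def candidate_compress(data: bytes) -> bytes:
    return zlib.compress(data, level=9)

def candidate_decompress(blob: bytes) -> bytes:
    return zlib.decompress(blob)

def beats_baseline(corpus: list[bytes]) -> bool:
    """True if the candidate round-trips every input losslessly AND
    compresses the corpus to fewer total bytes than default zlib."""
    baseline = sum(len(zlib.compress(d)) for d in corpus)
    total = 0
    for d in corpus:
        blob = candidate_compress(d)
        if candidate_decompress(blob) != d:  # lossy? doesn't count
            return False
        total += len(blob)
    return total < baseline

corpus = [b"the quick brown fox " * 200, bytes(range(256)) * 50]
print(beats_baseline(corpus))
```

The hard half, of course, is the search itself: the verifier is a few lines, but nobody has (publicly) paid for the year of compute to run the generate-and-check loop.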



