Hacker News | kranner's comments

> "Crystallized intelligence" and "emotional intelligence" are the consolation prizes no one really wants.

Strongly disagree.

Crystallized intelligence lets me see analogies and relations between disparate domains, abstract patterns that repeat everywhere, widening my view from a blinkered must-finish-this-task to a broader what-the-hell-is-this-world-I'm-in. I'm old enough to realise life is finite. Nothing satisfies like understanding.

Emotional intelligence lets me actually behave more like the sane person I know I should be. It lets me see I don't have to act on every passing whim and fancy, which are more like external noise than some essential expression of my inner self (itself a culturally instigated fantasy). It lets me see how I'm connected to everyone else and everything in the world. Why I shouldn't stuff my own pockets at everyone else's expense. Why making other people unhappy ultimately makes me unhappy. None of this would have been that hard to spot if I hadn't been caught up in fluid-intelligence feats of strength.

These are the real rewards of middle age, not anyone's consolation prizes.

That said, I respect your right to disagree. This is just how I feel about it.


> I just don't think there was a great way to make solved problems accessible before LLMs. I mean, these things were on github already, and still got reimplemented over and over again.

I'm not sure people wrote emulators, of all things, because they were trying to solve a problem in the commercial sense, or because they were unaware of existing GitHub projects and didn't think to search for them.

It seems much more like a labour of love. For something that holds that kind of appeal for you, you don't always want to take the shortcut. It's like solving a puzzle game by reading all the hints on the internet: you got through it, but you also ruined it for yourself.


I'm seeing the same thing with my own little app that implements several new heuristics for functionality and optimisation over a classic algorithm in this domain. I came up with the improvements by implementing the older algorithm and just... being a human and spending time with the problem.

The improvements become evident from the nature of the problem in the physical world. I can see why a purely text-based intelligence could not have derived them from the specs, and I haven't been able to coax them out of LLMs with any amount of prodding and persuasion. They reason about the problem in some abstract space detached from reality; they're brilliant savants in that sense, but you can't teach a blind person what it feels like to see the colour red.


How is it out of distribution? There are plenty of Python libraries for sound and music generation; it would be surprising if they were not in the training set.

There's a general pattern becoming evident: people are surprised by AI capabilities because they didn't realise (and none of us fully do) how broad the training set is, or the sheer variety of human output AI companies were able to harvest.

Even if all AI does is remix and regurgitate, there's a segment of the audience that is going to find some particular output brilliantly creative and totally original.


This is still a surprising composition of low-level in-distribution things, then. I would not have expected it to generate the waveforms from scratch, and be able to piece them together so well. If it had just plugged some kind of notation into a pre-existing API in its code, then I would probably agree with you.

> Guess what, they got over it. You will too.

Prediction is difficult, especially of the future.


It ain't over 'til it's over. And when you come to a fork in the road, take it.

> my hunch is if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

Presumably humanity still has room to grow and not everything is already in the training set.


Assuming the prototypes are functional.

There must be another table. I got "Are you Australian?" for "dingo", and "don't you love their songs?" for "cicada".

edit: https://rose.systems/animalist/eggs.js


Hmm, what's this one?

  var h = hash(guess);
  if (h==7182294905658010 || h==6344346315172974) { return "Adorable guess, but it's spelled “rosy”."; }

I'm guessing they're hashes for "<something> rosie" or "<something> rosey", but what?
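For anyone curious, one way to identify the strings behind constants like these is to reuse the page's own hash function rather than trying to invert it. A minimal sketch, assuming eggs.js is loaded so its hash() is in scope (e.g. run in the browser console on that page); the candidate list is purely illustrative:

  // Check which candidate guesses map to the two easter-egg constants.
  // Assumes hash() from eggs.js is available in the current scope.
  var targets = [7182294905658010, 6344346315172974];
  var candidates = [
    "rosy maple moth",
    "rosey maple moth",
    "rosie maple moth",
    "rose maple moth"
  ];
  candidates.forEach(function (c) {
    if (targets.indexOf(hash(c)) !== -1) {
      console.log(hash(c) + " <- " + JSON.stringify(c));
    }
  });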

7182294905658010 is "rosey maple moth" and 6344346315172974 is "rose maple moth".

Yeah, there's a bunch of easter eggs. And then there's the table for the taxonomy tree.

It's a reference to this article about entirely automated software production (eventual and hypothetical): https://news.ycombinator.com/item?id=46739117

Film studies is kind of well known for being a vanity degree for rich people's kids. At least, all the film studies students I know from India who are studying in the US are, without exception, the kids of super-rich businessmen and politicians.

The serious ones are all either already working in the industry or studying at the super-competitive National School of Drama.

