It's a reference to Murnau's "Nosferatu" where a letter from Orlok is written in a strange ideographic language that includes a picture of a house, evidently relating Orlok's request to buy a house.
>somehow turn an intellectually handicapped gardener into a psychic genius
waddya mean somehow? They got him to understand the Sacred Geometry. I saw it in San Francisco when it came out and the guy next to me said "oh shit! it's the SACRED GEOMETRY!"
Reading his sheet music can be almost as much fun as watching him perform. I can remember seeing one piece with a tempo note that read, "Mit Schlag" [1]. I laughed so hard I couldn't see.
[1] For those who don't speak German, this phrase literally means "with a blow," but idiomatically it means "with whipped cream." It's generally used when ordering dessert.
An amusing moment in my college years was when a roommate was trying to broaden his mind by listening to classical — but unwittingly downloaded PDQ. It sounded _almost_ authentic, but had the classic Schickele twists that gave it away.
I’ve not heard much of his music (I must remedy that), but I remember hearing one of his pieces (perhaps on Prairie Home Companion?) which was an operatic setting of a joke (“transporting gulls across a staid lion for immortal porpoises”) and it elucidated so much of how opera worked structurally that I never got from listening to actual operas. There’s a lot of education hidden in the jokes.
I played his Quartet for Clarinet, Violin, Cello, and Piano in college. At the time I knew all about PDQ Bach, but I wasn't aware of his serious compositions until I played the piece. It was really quite lovely.
and why not?
Both are neural nets.
Carbon versus Silicon.
It's just a matter of scale at this point. Maybe human neurons and connections are more complex than what can be done in a machine, but not for long.
Humans only think they are special, that a machine can't feel.
But that is only our subjective experience of ourselves, which itself is also just interpretation. It seems like everyone is in agreement that our perception of the outer world is fallible. But so is our perception of our internal processes. Our inner thoughts are just as opaque to us as the outer world.
"Man can do what he wills but he cannot will what he wills." Schopenhauer.
Neural nets and brains are fundamentally different. If you cut open a brain you won't find attention blocks. There's no evidence as far as I know that back propagation actually happens. It's a nice model but it is a model.
Maybe if you cut open a brain, you won't find a printout with backpropagation code.
But you do find neurons, in a network.
Humans do learn, right? They take in information and encode it in their brain.
Why does it have to be backpropagation to qualify? Bayesian? Minimum entropy?
There are a lot of forms a machine neural network can take. There are a lot of theories on exactly how the brain 'calculates'/'processes'. With all of the advancements in neuroscience and AI in the last 5 years, it is a bit of hubris to say we'll never be able to figure out the brain, and also be able to model it.
There are a lot more like this. The field is moving too rapidly for me to go find every paper today. But there are dozens, and it's not even so cutting edge that there aren't already books on it.
To add, there is a whole field of biologically-plausible learning. Predictive coding is one such method. It is isomorphic to SGD [1] and doesn't require global computation, i.e. it's backprop-free optimization. Although slower than SGD on silicon, it's massively parallelizable, which ends up making it more biologically plausible rather than less.
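To make "no global computation" concrete, here's a minimal sketch of the predictive-coding idea (my own illustration, not the implementation from the cited work): a latent state predicts an observation through weights, inference relaxes the latent state against the local prediction error, and the weight update uses only that local error times local activity — no backward pass through the whole network.

```python
import numpy as np

# Toy predictive coding: latents x generate a prediction W @ x of the
# observation y. All updates below use only locally available signals
# (the prediction error eps and the adjacent activities), which is the
# property that makes this family of methods "backprop-free".

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 2))  # generative weights: latent -> obs
y = rng.normal(size=4)                  # observed data
x = np.zeros(2)                         # latent estimate, inferred at runtime

for _ in range(200):                    # inference phase: settle the latents
    eps = y - W @ x                     # local prediction error
    x += 0.1 * (W.T @ eps)              # error nudges the latent state

eps = y - W @ x
W += 0.05 * np.outer(eps, x)            # local (Hebbian-like) weight update

print(np.linalg.norm(y - W @ x))        # residual prediction error
```

Each latent unit's update depends only on its incoming weights and the errors it helps explain, so in principle every unit can update in parallel — which is the biological-plausibility argument in a nutshell.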
Because human cognition is fundamentally affected and controlled by its body and its sensorimotor interactions with the environment (read: "Six Views of Embodied Cognition" by Margaret Wilson), which LLMs don't have and never will.
Agree with the embodiment issue. Humans have a large amount of sensory input from the 'body'. I'd disagree that AI will never have this, considering the large number of sensory technologies that already exist and are being developed.
It only takes wiring them together. Sight, smell, touch: these all already exist and are being refined.
LLMs? AI research is far more vast than just LLMs. LLMs just happen to be the latest shiny thing.
"Never"? That is bold. 5 years ago people said the abilities of current LLMs were a "never", yet here we are.
Queinnec's attitude was that ALL forms of scoping are useful. In the '80s there was a class struggle between lexical scoping, which prevailed, and everything else, which Q. detailed.