
It's a reference to Murnau's "Nosferatu" where a letter from Orlok is written in a strange ideographic language that includes a picture of a house, evidently relating Orlok's request to buy a house.


how did this garbage wind up here?


I thought this was going to be about Yggdrasil. Why must every name in the world be destroyed by software labellers?


> Leibniz's monads [...] have nothing in common with category theory monads except the name.[0]

The monad is something about which you can reason but you cannot look inside because it is windowless. No quibbling, please.


>somehow turn an intellectually handicapped gardener into a psychic genius

waddya mean somehow? They got him to understand the Sacred Geometry. I saw it in San Francisco when it came out and the guy next to me said "oh shit! it's the SACRED GEOMETRY!"


sure it did: the stuff around SOAR and John Anderson, production systems that aimed to model cognitive load, case-based reasoning, etc.


He is remembered for laffs, but he was a very very compelling composer.


Reading his sheet music can be almost as much fun as watching him perform. I can remember seeing one piece with a tempo note that read, "Mit Schlag" [1]. I laughed so hard I couldn't see.

[1] For those who don't speak German, this phrase literally means "with impact" but idiomatically it means "with whipped cream." It's generally used when ordering dessert.


An amusing moment in my college years was when a roommate was trying to broaden his mind by listening to classical — but unwittingly downloaded PDQ. It sounded _almost_ authentic, but had the classic Schickele twists that gave it away.


I’ve not heard much of his music (I must remedy that), but I remember hearing one of his pieces (perhaps on Prairie Home Companion?) which was an operatic setting of a joke (“transporting gulls across a staid lion for immortal porpoises”) and it elucidated so much of how opera worked structurally that I never got from listening to actual operas. There’s a lot of education hidden in the jokes.


I played his Quartet for Clarinet, Violin, Cello, and Piano in college. At the time I knew all about PDQ Bach, but I wasn't aware of his serious compositions until I played the piece. It was really quite lovely.

https://www.youtube.com/watch?v=fMbwNtWZABA


> Emotions are basically very fast predictions by the brain

This predicts that ChatGPT basically has emotions.


and why not? Both are neural nets: carbon versus silicon.

It's just scale at this point. Maybe human neurons and connections are more complex than what can be done in a machine, but not for long.

Humans only think they are special, that a machine can't feel.

But that is only our subjective experience of ourselves, which itself is also just interpretation. It seems like everyone is in agreement that our perception of the outer world is fallible. But so is our perception of our internal processes. Our inner thoughts are just as opaque to us as the outer world.

"Man can do what he wills but he cannot will what he wills." Schopenhauer.


Neural nets and brains are fundamentally different. If you cut open a brain you won't find attention blocks. There's no evidence, as far as I know, that backpropagation actually happens in the brain. It's a nice model, but it is a model.


I think the latest neuroscience would disagree.

Maybe if you cut open a brain, you won't find a printout of backpropagation code.

But you do find neurons, in a network.

Humans do learn, right? They take in information and encode it in their brains.

Why does it have to be backpropagation to qualify? Bayesian updating? Minimum entropy?

There are a lot of forms a machine neural network can take, and there are a lot of theories on exactly how the brain 'calculates' or 'processes'. With all of the advancements in neuroscience and AI in the last 5 years, it is a bit of hubris to say we'll never be able to figure out the brain, and never be able to model it.

There are a lot more papers like this. The field is moving too rapidly for me to go find every paper today, but there are dozens, and the work is not even so cutting edge that there aren't already books on it.

https://www.quantamagazine.org/some-neural-networks-learn-la...


To add, there is a whole field of biologically plausible learning. Predictive coding is one such method: it approximates the weight updates of backprop[1] while requiring only local computation, i.e. it's backprop-free optimization. Although slower than SGD on silicon, it's massively parallelizable, which ends up making it more biologically plausible rather than less.

[1] "Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs" (2020), https://arxiv.org/pdf/2006.04182.pdf
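
To make that concrete, here's a minimal toy sketch of predictive coding (my own illustration, not the paper's code; the layer sizes, tanh activation, and learning rates are arbitrary choices). Hidden "value" nodes are first relaxed to reduce local prediction errors, then each weight matrix is updated using only quantities available at its own layer:

    import numpy as np

    def f(v):  return np.tanh(v)             # activation
    def df(v): return 1.0 - np.tanh(v) ** 2  # its derivative

    rng = np.random.default_rng(0)
    sizes = [4, 16, 2]  # input, hidden, output (arbitrary)
    W = [rng.normal(0, 0.5, (sizes[l + 1], sizes[l]))
         for l in range(len(sizes) - 1)]

    def pc_train_step(x_in, y_target, W, n_relax=30, lr_x=0.1, lr_w=0.05):
        L = len(W)
        # Value nodes: initialize with a forward pass, then clamp the output.
        x = [x_in]
        for l in range(L):
            x.append(W[l] @ f(x[l]))
        x[L] = y_target

        # Inference phase: relax hidden activities to shrink the local
        # prediction errors e[l] = x[l+1] - W[l] f(x[l]). Each update only
        # touches neighboring layers; there is no global backward pass.
        for _ in range(n_relax):
            e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
            for l in range(1, L):  # input and output stay clamped
                x[l] += lr_x * (-e[l - 1] + df(x[l]) * (W[l].T @ e[l]))

        # Learning phase: a purely local, Hebbian-style weight update.
        e = [x[l + 1] - W[l] @ f(x[l]) for l in range(L)]
        for l in range(L):
            W[l] += lr_w * np.outer(e[l], f(x[l]))
        return W

    # one training step on a random input/target pair
    W = pc_train_step(rng.normal(size=4), np.array([1.0, 0.0]), W)

After relaxation, that final weight update approximates what backprop would compute for the same squared-error loss, which is the paper's central result.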


Because human cognition is fundamentally affected and controlled by its body and its sensorimotor interactions with the environment (read: "Six Views of Embodied Cognition" by Margaret Wilson), which LLMs don't have and will never have.


1.

Agree with the embodied issue. Humans have a large amount of sensory input from the 'body'. I'd disagree that AI will never have this, considering the large number of sensory technologies that already exist and are being developed.

It only takes wiring them together. Eyes, smell, touch: these all exist and are being refined.

2.

LLMs? AI research is far more vast than just LLMs. LLMs just happen to be the latest shiny thing.

"never"?. That is bold, 5 years ago people said the abilities in current LLM's were a "never", yet here we are.


>Capitalization Of Boilerplate Oriented Language

this is not only true of COBOL, my friend.


Queinnec's attitude was that ALL forms of scoping are useful. In the 80s there was a class struggle between lexical scoping, which prevailed, and everything else, which Q. detailed.
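
For anyone who hasn't seen the distinction, here's a toy Python sketch (mine, not Queinnec's; his examples are in Scheme) contrasting lexical scope, where a free variable is resolved in the defining environment, with dynamic scope, emulated below with an explicit binding stack, where it is resolved in the calling environment:

    x = "global"

    def make_reader():
        x = "lexical"  # free variable captured from the defining scope
        return lambda: x

    print(make_reader()())  # -> "lexical": the defining environment wins

    # Dynamic scope resolves free variables in the calling environment.
    dynamic_env = [{"x": "global"}]

    def dyn_lookup(name):
        for frame in reversed(dynamic_env):  # newest binding wins
            if name in frame:
                return frame[name]
        raise NameError(name)

    def dyn_reader():
        return dyn_lookup("x")

    def caller():
        dynamic_env.append({"x": "dynamic"})  # lives for this call's extent
        try:
            return dyn_reader()
        finally:
            dynamic_env.pop()

    print(caller())  # -> "dynamic": the calling environment wins

Early Lisps defaulted to the dynamic behavior; Scheme's lexical choice is the one that prevailed.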


I'm definitely curious about the everything else. I will look into this book.

