
There are actually some ideas now in the crypto space to use stablecoins pegged to stock/coin indexes instead of fiat, so...


Why not corporations? They would be owned by the market and compete against each other to offer the best-quality machinery.

Of course you still have the leadership class of people that could be corrupt, but that can be solved e.g. by sortition https://en.wikipedia.org/wiki/Sortition

+ you need anti-monopoly laws and some solutions preventing a race to the bottom (e.g. if the marginal cost of a service becomes 0, it may no longer make sense to compete for it, so it needs to enter some sort of maintenance mode to keep quality in check)


Sortition could possibly be used, but it's unlikely to be successful. Yes, there is always corruption in every institution; it's part of human nature. It's not going away, no matter what ruling class is in power.

My point is - how can someone express true happiness when they are controlled by others? With all these services, there would be no justification for betterment. It all sounds like an advertisement for centralized authoritarianism - "We'll take care of your wants and dreams; you just give us your all".


That perhaps it was already done before...


I've been thinking along similar lines at a high level - this approach will likely be very general and can be used to create different kinds of media, as well as, potentially, behaviours at a later stage.

https://metapresent.org/creation-engine

It will be interesting to base it on a decentralised open platform that could be "built in" to the Internet.


Hmm, actually now I'm not certain which one of these I have.

I'm a software developer and I have kind of like two different "address spaces".

There is the normal visual + auditory address space. And there is the "intuition space", which is similar to the first one (e.g. I can imagine a 3D object and rotate my viewpoint around it, simulate conversations, etc.), but with limited detail, more like black and white unless I concentrate harder. I can "hear" there, but it's separate from normal hearing.

It's super useful in programming, as I can imagine code in some kind of 3D space, where I can move in and out of different functions calling each other (I still imagine them mostly as text, though), so I can remember them pretty well.


Visualizing code is a pretty useful tool. I can't do it as easily for problems that I have yet to solve, but I can do it for problems that I understand well.

Interestingly enough, however, it's the playing around with the 3D structure in my head that makes it fun for me to solve problems. It's a pleasurable activity to noodle on problems that way.


Great to see these views are becoming more mainstream. The talk does not mention it, but it's likely convergent with neuroscience ideas of:

- Bayesian Brain

- Theory of constructed emotions (https://www.youtube.com/watch?v=0gks6ceq4eQ)

- Free energy principle and Active Inference (https://www.youtube.com/watch?v=Y1egnoCWgUg)

A good overview is also here: https://towardsdatascience.com/why-intelligence-might-be-sim...

One issue to reconcile, though, is that some of those ideas talk about "keeping uncertainty in the sweet spot, not too high or low", while others talk about "minimising uncertainty/prediction error". I think the difference will turn out to lie only in how far into the future the prediction reaches: optimising for the long term vs. the short term.


This is a similar view to the emerging theory of Bayesian Brain, which views the brain as a system that tries to minimise the prediction error (which might be the same thing as "free-energy" in some related publications) by comparing expectations with actual information coming from the senses.

https://towardsdatascience.com/the-bayesian-brain-hypothesis...

So far it seems to explain quite a lot of data, including many mental illnesses (e.g. many disorders can be thought of as the brain under-correcting or over-correcting for the prediction error).

By under-correcting, the brain does not learn enough from its mistakes, which may lead to delusions of superiority (e.g. being stuck in usual habits, or an inability to change one's world-view based on new information). On the other hand, when over-correcting, the world may seem unpredictable and frightening, leading to self-doubt, anxiety and negative thoughts.

Being wrong around 15% of the time might actually be the optimal rate for learning... https://www.independent.co.uk/news/science/failing-study-suc...
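
As a toy illustration of under- vs over-correcting (a sketch of my own, not from any of the linked papers; the learning rate stands in for how strongly the brain weights prediction errors):

    import random

    def run_agent(learning_rate, true_value=1.0, noise=0.2, steps=200):
        """Track a noisy signal by correcting an internal estimate
        with a fraction (learning_rate) of each prediction error."""
        estimate, errors = 0.0, []
        for _ in range(steps):
            observation = true_value + random.gauss(0, noise)
            prediction_error = observation - estimate
            estimate += learning_rate * prediction_error  # the "correction"
            errors.append(abs(prediction_error))
        return sum(errors) / len(errors)

    # Under-correcting (0.01) barely learns; over-correcting (1.5) chases
    # every noisy sample; something in between tracks the world well.
    for lr in (0.01, 0.3, 1.5):
        print(lr, run_agent(lr))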


There is a fascinating Bayesian explanation for schizophrenia: the story goes that schizophrenic people have a much sharper prior/posterior than non-schizophrenic people, which makes it more difficult for them to correct their internal models when the environment diverges from predictions, causing them to drift off into their own realities.

For example, if you run the rubber hand experiment with non-schizophrenic people, even if you don't stroke their hand and the rubber hand at exactly the same time (say the timing offset is Gaussian with standard deviation sigma), with enough repeated exposures to the stimuli they will recognize the rubber hand as their own. In contrast, if you repeat the same experiment with schizophrenic people, it takes a smaller standard deviation or substantially more trials for them to recognize the rubber hand as their own.
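
A toy sketch of that intuition (my own illustrative numbers, nothing from the actual studies): with repeated conjugate Gaussian updates, a sharper prior needs far more trials of the same evidence before the belief shifts.

    def trials_to_shift(prior_mean, prior_var, obs_value, obs_var, target):
        """Repeated conjugate Gaussian updates: how many identical
        observations does it take to move the posterior mean to within
        10% of the distance from the prior mean to `target`?"""
        mean, var = prior_mean, prior_var
        for trial in range(1, 100_000):
            precision = 1 / var + 1 / obs_var
            mean = (mean / var + obs_value / obs_var) / precision
            var = 1 / precision
            if abs(mean - target) < 0.1 * abs(prior_mean - target):
                return trial

    # Prior belief 0.0 = "not my hand"; the stimuli push towards 1.0.
    print(trials_to_shift(0.0, 1.00, 1.0, 0.5, 1.0))  # broad prior: 5 trials
    print(trials_to_shift(0.0, 0.01, 1.0, 0.5, 1.0))  # sharp prior: 451 trials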

I wish I had the references lying around, but I dug into the literature for this a few years back and found this hypothesis to be surprisingly well supported.


I agree; Karl Friston's work is among the most interesting I have ever read, period. Interestingly, his 2009 paper (https://www.fil.ion.ucl.ac.uk/~karl/The%20free-energy%20prin...) / free-energy principle makes use of reinforcement learning, gradient descent, Markov blankets, Helmholtz machines, and other foundational tenets of modern machine learning ... In that regard, Geoff Hinton (a foundational figure in modern machine learning) overlapped with Friston while Hinton was in England at that point in his career.


Yes, an interesting man. I encountered him at a workshop by Bert Kappen on stochastic optimal control, which shows that there are different control strategies for different noise levels, separated by phase transitions.

I checked Friston again. He now also has this article: https://www.frontiersin.org/articles/10.3389/fncom.2012.0004...

CLE = Conditional Lyapunov Exponents.

"In short, free energy minimization will tend to produce local CLE that fluctuate at near zero values and exhibit self-organized instability or slowing."

I still have to study what he means by self-organized instability.


Oh interesting - thanks for the pointers.


Note that Geoffrey Hinton's first Restricted Boltzmann Machines were designed to minimize free energy. The first Restricted Boltzmann Machine, however, was Paul Smolensky's Harmonium. It maximized a metric called harmony, which was essentially the inverse of free energy. When Hinton and Smolensky collaborated with Rumelhart on a publication, they settled on calling it "goodness of fit".

My point is that saying that the brain is maximizing harmony is quite reasonable -- and much easier to understand.

Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and Sequential Thought Processes in PDP Models. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 2. The MIT Press, Cambridge, MA.
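
For concreteness, the free energy of a binary RBM has a simple closed form, and harmony is just its negative. A minimal sketch with made-up weights:

    import numpy as np

    def rbm_free_energy(v, W, a, b):
        """Free energy of visible vector v in a binary RBM:
        F(v) = -a.v - sum_j log(1 + exp(b_j + v.W_j)).
        Low free energy = high probability under the model."""
        return -v @ a - np.sum(np.log1p(np.exp(b + v @ W)))

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))                    # visible-to-hidden weights
    a, b = rng.normal(size=4), rng.normal(size=3)  # visible and hidden biases
    v = np.array([1.0, 0.0, 1.0, 1.0])

    F = rbm_free_energy(v, W, a, b)
    print("free energy:", F, "harmony:", -F)  # maximizing harmony == minimizing F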


"the brain" Can any of this grand top-down 'delusions and personality traits are thermodynamic xyz' theorising about "the brain" apply to, say a bee's brain or a worm's?


Two facile answers:

1) Yes. Why couldn't it?

2) No, it requires a certain level of brain complexity.


Regarding complexity: Probably anything with a brain has the problem of balancing lots of different sources of information and maintaining (enough) coherence of behavior. So even very simple worms will need a simple version of this...


Hard to identify delusions in worms. But the entropy / free energy models do work for the best-understood worm brain (C. elegans).


Specifically, "Signatures of criticality in a maximum entropy model of the C. elegans brain during free behaviour" <http://cognet.mit.edu/proceed/10.7551/ecal_a_010> and I'm sure many more.


Science has been sticking to that view since around Newton. But it's actually beginning to change slowly now, as time-independent physics might be the reason why our theories work very well in specific areas but tend to break down in a more holistic approach.

See the work of physicist Lee Smolin in Time Reborn (https://www.amazon.co.uk/Time-Reborn-Crisis-Physics-Universe...) on why time and information might be the most fundamental concepts.

This does not mean, though, that we experience time in the same way - far from it. Just that the passage of time is fundamental to information processing.


Thanks for the recommendation. Indeed, information and mutations are the most fundamental concepts. If you want to call time a causal force that produces these mutations, that's fine. But it's wrong to think of time as a dimension in which you can travel.


Yes. You might find interesting some highly rated ideas, such as this article on the Bayesian Brain. In this view, the brain is a complex system that learns to adapt to and predict its environment, similar to other complex systems, even companies.


Theories are now emerging that the Universe is running one large Bayesian learning algorithm (Bayesian inference itself has been proven to be an optimal knowledge-creation method).

See e.g. Bayesian Brain and Universal Darwinism


See also: brain as hydraulics, brain as a system of cogs, brain as a computer.[1]

[1] https://aeon.co/essays/your-brain-does-not-process-informati...


Argh, that article! Those are metaphors at some level of representation!!! You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum mechanical rules, so it can't really implement deterministic algorithms and error-free information storage. The three conditions stated in the article are basically the setup for reinforcement learning (which can be implemented, at some level of abstraction, on a computer), and the question of whether a representation is required is a mathematical one. For linear systems with Gaussian noise this is an answered question: yes, you optimally estimate the state, then you base your controller on the optimal estimate. For more complicated systems it is unclear whether a representation is required, but it sure seems reasonable that some level of representation is. In the baseball example they are still talking about keeping a constant optical line, not about what happens to raw optic nerve inputs. That is a reduced-dimensionality representation!
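
To make the linear-Gaussian claim concrete (this is the classic LQG "separation principle"), here is a scalar toy sketch, with hand-picked constants rather than gains solved from Riccati equations: a Kalman filter estimates the state, and the controller acts only on that estimate.

    import random

    # Toy scalar system: x' = a*x + b*u + w,  y = x + v  (w, v Gaussian).
    a, b, q, r = 1.0, 1.0, 0.1, 0.1  # dynamics and noise variances (toy values)
    K = 0.8                          # feedback gain (illustrative)

    x, x_hat, p = 1.0, 0.0, 1.0      # true state, estimate, estimate variance
    for _ in range(50):
        u = -K * x_hat                        # control uses only the ESTIMATE
        x = a * x + b * u + random.gauss(0, q ** 0.5)
        y = x + random.gauss(0, r ** 0.5)     # noisy measurement
        x_hat, p = a * x_hat + b * u, a * a * p + q      # Kalman predict
        k = p / (p + r)                                  # Kalman gain
        x_hat, p = x_hat + k * (y - x_hat), (1 - k) * p  # Kalman update
    print("true state:", x, "estimate:", x_hat)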


> You could argue in the same specious way that a computer is not a computer because it is really a bunch of atoms interacting via non-deterministic quantum mechanical rules, so it can't really implement deterministic algorithms and error-free information storage.

Jaron Lanier argues that computers and computation are cultural. Why give them a special ontological status? Yeah, we can think of the universe as computational or informational. We can also think of it as mathematical, mental, or just whatever physics posits (fields, strings, higher dimensions, etc.). Whatever the case, when someone, like the article in the OP, states that reality is X, an ontological claim is being made. It's metaphysics.

One could instead argue that the world just is itself, and anything we say about it is our human model or best approximation. Which would be a combination of realism (the world itself) and idealism (how we make sense of it). Then it's just a matter of not mistaking the map for the territory. Instead of saying that the brain or the universe is X, we say that X is our best current map for Y. We don't say that London is a map. That would be making a category error.


I think the article's overall thrust is not so much about how these things are represented in neuronal mapping per se, and more that we shouldn't apply the idea of computer mechanics to organisms.

Less load/process/store of absolute data, and more like natural processes, such as erosion creating rivers. The analogy is of the environment as a "lock", with organisms as just the most fit "keys" to success in a particular environment.

So the computer analogy is bad because organisms are more a matrix of interactions, feedbacks and responses that work well enough but don't follow a "logical" design. This can easily be replicated within a computer, and the result is evolutionary computation & hardware, genetic algorithms and evolutionary neural networks. The problem in understanding the result of evolutionary systems is that they're blind to design and only respond to fitness, and therefore create systems so tightly coupled that it's a quest to understand how the model even works.
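
To make the genetic-algorithm point concrete, here's a bare-bones sketch (the parameters and the toy one-max fitness are my own, purely for illustration):

    import random

    def evolve(fitness, n_bits=12, pop_size=30, gens=60, mut=0.02):
        """Minimal genetic algorithm: tournament selection, one-point
        crossover, bit-flip mutation. No explicit design, only fitness."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(pop_size)]
        for _ in range(gens):
            def pick():  # 2-way tournament selection
                c1, c2 = random.sample(pop, 2)
                return c1 if fitness(c1) >= fitness(c2) else c2
            nxt = []
            while len(nxt) < pop_size:
                p1, p2 = pick(), pick()
                cut = random.randrange(1, n_bits)  # one-point crossover
                child = p1[:cut] + p2[cut:]
                nxt.append([g ^ (random.random() < mut) for g in child])
            pop = nxt
        return max(pop, key=fitness)

    # Toy "one-max" fitness: count the 1-bits. The GA converges on all-ones
    # without any blueprint, purely through selection pressure.
    print(evolve(lambda bits: sum(bits)))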

So the article is suggesting we shouldn't apply human design principles to evolved solutions. Perhaps we need some kind of "messy science" to make sense of it all.

Going forward, machine learning running evolutionary algorithms on neural networks should be able to produce sufficient incomprehensibility for us to be studying our own inventions for years to come.


Thank you for this. I spend a non-trivial amount of time telling people working in AI and machine learning (which I also do) that the brain isn't some parameter-optimization machine, and that analogies from whatever technology or math people are excited about aren't very useful. I wish some neuroscience education, and articles like these, were part of the ML canon.


I couldn't disagree with you more. The article referenced by GP mistakes the form for the function. Just because the computer uses different technology than human tissue doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies.

And even if we don't have the correct algorithms in sight today, there is every reason to believe that whatever processes are occurring in our brains and bodies can indeed be simulated and replicated virtually.

The only way to argue against this idea is to claim that there is some special magical non-material aspect to our existence... which no article or neuroscience education has yet demonstrated.


The comment was about universal Bayesian brains and other things that are quite a stretch to say the least. Of course, since our brains are made of physical matter, they must perform computations that other physical matter can perform.

The trap is to think about the brain in terms of things we find impressive, and about things we find impressive as being like brains somehow. Therefore analogies to steam engines, computers and deep learning. And these analogies have always turned out to be silly.


> Just because the computer uses different technology than human tissue doesn't mean it isn't emulating the same ultimate processes that are happening in our bodies

BUT: at least I think we are far from it. Very far. In the sense that we don't just need more computing power for the current approaches to get e.g. AGI; we need radically new ones. And I actually don't see why this would be opposed to more neuroscience education (instead of excitement for cool but still quite limited models), or why this would amount to pretending that there is some "special magical non-material aspect to our existence".

How much can you compress the essential structure and complexity of an intelligent brain? It is an open question, but if in the end you cannot compress it "enough", then the fact that it is also, theoretically, a mathematical object does not have many practical consequences. And on top of that: we already know how to make new ones...


Define intelligent.

Very tiny animal life shows what we would consider intelligent behavior. There is no particular reason to believe that evolution has come anywhere close to the minimal size that intelligence can be reduced to, as it is working on a large number of other dimensions at the same time, survival being the big one.


>> which no article or neuroscience education has yet demonstrated.

True, but there are some pretty interesting ideas out there. I'm going to have to start putting together a list of articles. From the proof that if we have free will, so do particles to some extent, to the notion that quantum computation may happen in the brain. Not saying I believe these things, but the people behind them are pretty smart.


There is no real evidence that we have free will, and the general "suspicion" in the field is that we don't. Yes, the brain is made of particles, but their arrangement is very particular and very complex, so cognition and all the other things the brain does are almost certainly emergent phenomena. Boiling it down to single particles is like trying to reverse-engineer a Tesla by focusing on the fact that it has iron atoms in it.


> From the proof that if we have free will, so do particles to some extent, to the notion that quantum computation may happen in the brain.

The question remains: what reason do we have to believe that only a living brain, and not a silicon analogue, can tap into those features of reality?


I agree with you that humans are very different from optimization machines, in that they have some freedom in what they choose to optimize. Allen Newell made this point a long time ago, back when attempts were made to describe humans in terms of control theory. It works up to a point, but autonomous behavior needs the faculty to set goals independently of pre-programmed optimization points as well as of current situational factors. Humans, Newell argued, should be understood as knowledge systems that operate on their representations of the world, but are equally adept at simulating the world in their heads and creating knowledge beyond current representations.

The article, however, is rubbish. As a psychologist, I cringed throughout. It is a blur of half-baked ideas and ill-understood controversies from cognitive science. The author manages to write an entire article with information at its core without ever properly defining information, not to speak of representation. In the sense of Shannon, of course neurons are channels transmitting information. What else would they do?

And of course we can decode that information even from the outside, even down to discrete processing stages during the execution of mental tasks (https://onlinelibrary.wiley.com/doi/epdf/10.1111/ejn.13817). And if there are truly no representations in the brain, as the author states, how do we plan for future events that are far beyond the horizon? And even if you reject all that, there is DNA in the brain, which is literally information and is expressed (decoded and made into protein) ALL THE TIME.

Regarding cognition, the good Mr. Epstein has not grasped the difference between computers and computability. I don't think anybody is looking for silicon in the brain. The smart people are asking how it is possible for a complex system to operate in a complex world without an outside unit directing its behavior. They ask "How can the human mind occur in the physical universe?" (http://act-r.psy.cmu.edu/?post_type=publications&p=14305). How is it that we can do the things we do? How do we set goals, plan steps to achieve them, and choose the right actions for implementation?

I get where you are coming from, and I agree with you regarding a dangerous misunderstanding of AI, especially ML. But this article is not helping put things in perspective. I am willing, however, to concede one point to Mr. Epstein: his brain is dearly lacking information, representation, algorithms, or any such marker usually signifying intelligent life.


The article is bad, but the point about silly analogies to various technologies remains.

Regarding the questions you raised, my suspicion is that the brain's primary trick is to model the organism and the environment. Planning ahead, reasoning and synthesizing knowledge can all happen if you can do that. I'd argue (and of course I'm biased) that, in that light, control theory is probably a better place to start thinking about the brain, insofar as building models of the world is important.


Information processing as the basis for human existence is not an analogy once you accept a very basic premise of what information is. It is the literal description of what is going on, even at the biological level. I've mentioned DNA; the immune system is another example.

If you want to be successful in a complex world, to survive and replicate, you will profit massively if you know what is going on around you better than that other thing that wants to eat you. If you can grasp the structure of the physical world and predict its changes, you will come out on top. Information processing is an evolutionary necessity, because we are grounded in a physical world. Information is the successful way to deal with the world, because it gives the organism a choice.

Control theory is great if you want to describe real-valued inputs and outputs and their relationship over time, like throwing a ball. But at some point we need to become discrete and abstract the real-valued domain of space and time into symbols.


> But the IP metaphor is, after all, just another metaphor

With no apologies to the UNIX metaphor of "everything is a file", my favorite starting point for [pretending at] explaining intelligence/understanding/recognition has long been "everything is a metaphor" ;)


Everything is a file if you are brave enough.


Yes, this line of thought is pretty interesting. There is also an equivalence between quantum physics and a kind of machine learning model called the restricted Boltzmann machine, in that they can efficiently simulate each other.

