The agent has no "identity". There's no "you" or "I" or "discrimination".
It's just a piece of software designed to output probable text given some input text. There's no ghost, just an empty shell. It has no agency, it just follows human commands, like a hammer hitting a nail because you wield it.
I think it was wrong of the developer to even address it as a person; instead it should just be treated as spam (which it is).
That's a semantic quibble that doesn't add to the discussion. Whether or not there's a there there, it was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use. So, it is being used as designed.
I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning" and otherwise anthropomorphizing agents, it will not be a fruitful conversation. We are carrying a lot of metaphors for people and applying them to AI, and it entirely confuses the issue. In this example, the AI doesn't "choose" to write a take-down style blog post because "it works". It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone.
I feel as if there is a veil around the collective mass of the tech general public. They see something producing remixed output from humans and they start to believe the mixer is itself human, or even more: that perhaps humans are reflections of AI and that AI gives insights into how we think.
> I think it absolutely adds to the discussion. Until the conversation around AI can get past this fundamental error of attributing "choice", "alignment", "reasoning" and otherwise anthropomorphizing agents, it will not be a fruitful conversation.
You call it a "fundamental error".
I and others call it an obvious pragmatic description based on what we know about how it works and what we know about how we work.
What we know about how it works is that you can prompt it to address you however you like: as any kind of person, a group of people, or a fictional character. That's not how humans work.
You admitted it yourself that you can prompt it to address you however you like. That’s what the original comment wanted. So why are we quibbling about words?
The same could be said for humans. We treat humans as if they have choices, a consistent self, a persistent form. It's really just the emergent behavior of matter functioning in a way that generates an illusion of all of those things.
In both cases, the illusion structures the function. People and AI work differently if you give them identities and confer characteristics that they don't "actually" have.
As it turns out, it's a much more comfortable and natural idea to regard humans as having agency and a consistent self, just like for some people it's more comfortable and natural to think of AI anthropomorphically.
That's not to say that the analogy works in all cases. There are obvious and important differences between humans and AI in how they function (and how they should be treated).
This discussion has mostly slowed down, but I wanted to say I was wrong in framing it as a non-contributing point. I should have just stated that it was my opinion that the LLM was operating as intended, that part of that intended design was taking verbal feedback into account, and that verbal feedback was therefore the right response. Opening with calling it a "semantic quibble" made it adversarial, and I don't intend to revisit the argument, just apologize for the wording.
I'd edit but then follow-up replies wouldn't tone-match.
The LLM generated the response that was expected of it. (statistically)
And that's a function of the data used to train it, and the feedback provided during training.
It doesn't actually have anything at all to do with

> It generated a take-down style blog post because that style is the most common when looking at blog posts criticizing someone.

other than that this data may have been over-prevalent during its training, and that it was rewarded for matching that style of output during training.
To swing around to my point... I'd argue that anthropomorphizing agents is actually the correct view to take. People just need to understand that they behave like they've been trained to behave (side note: just like most people...), and this is why clarity around training data is SO important.
In the same way, we attribute certain feelings and emotions to people with particular backgrounds (e.g. resumes and CVs, all the way down to the city/country/language people grew up with). Those backgrounds are often used as quick and dirty heuristics for what a person was likely trained to do. Peer pressure and societal norms aren't a joke, and serve a very similar mechanism.
> was built to be addressed like a person for our convenience, and because that's how the tech seems to work, and because that's what makes it compelling to use.
So were mannequins in clothing stores.
But that doesn't give them rights or moral consequences (except as human property that can be damaged / destroyed).
No matter what, this discussion leads to the same black box of "What is it that differentiates magical human meat-brain computation from cold hard dead silicon-brain computation?"
And the answer is nobody knows, and nobody knows if there even is a difference. As far as we know, compute is substrate independent (although efficiency is all over the map).
This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.
There have been charlatans repeating this idea of a “computational interpretation,” of biological processes since at least the 60s and it needs to be known that it was bunk then and continues to be bunk.
Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.
Neuroscience isn't a subset of computer science. It's a study of biological nervous systems, which can involve computational models, but it's not limited to that. You're mistaking a kind of map (computation) for the territory, probably based on a philosophical assumption about reality.
At any rate, biological organisms are not like LLMs. The nervous systems of humans may perform some LLM-like actions, but they are different kinds of things.
But computational models are possibly the most universal thing there is; they are beneath even mathematics, and physical matter is no exception. There is simply no stronger computational model than a Turing machine, period. Whether you make it out of neurons or silicon is irrelevant in this respect.
Turing machines aren't quantum mechanical, and computation is based on logic. This discussion is philosophical, so I guess it's philosophy all the way down.
Turing machines are deterministic. Quantum mechanics is not, unless you go with a deterministic interpretation, like Many Worlds. But even then, you won't be able to compute all the branches of the universal wave function. My guess is any deterministic interpretation of QM will have a computational bullet to bite.
As such, it doesn't look like reality can be fully simulated by a Turing machine.
Giving a Turing machine access to a quantum RNG oracle is a trivial extension that doesn't meaningfully change anything. If quantum woo is necessary to make consciousness work (there is no empirical evidence for this, BTW), such can be built into computers.
You would probably be surprised to learn that computational theory has little to no talk of "transistors, memory caches, and storage media".
You could run Crysis on an abacus and render it on a board of colored pegs if you had the patience for it.
It cannot be stressed enough that discovering computation (solving equations and making algorithms) is a different field than executing computation (building faster components and discovering new architectures).
My point is that it takes more hand-waving and magic belief to anthropomorphize LLM systems than it does to treat them as what they are.
You gain nothing from understanding them as if they were no different than people and philosophizing about whether a Turing machine can simulate a human brain. Fine for a science fiction novel that asks us what it means to be a person, or questions the morals of how we treat people we see as different from ourselves. Not useful for understanding how an LLM works or what it does.
In fact, I say it’s harmful, given the emerging studies on the cognitive decline from relying on LLMs to replace skill use, and on the psychosis being observed in people who really do believe that chat bots are a superior form of intelligence.
As for brains, it might be that what we observe as “reasoning” and “intelligence” and “consciousness” is tied to the hardware, so to speak. Certainly what we’ve observed in the behaviour of bees and corvids has had a more dramatic effect on our understanding of these things than arguing about whether a Turing machine locked in a room could pass as human.
We certainly don’t simulate climate models in computers, call them “Earth,” and try to convince anyone that we’re about to create parallel dimensions.
I don’t read Church’s paper on the lambda calculus and come away believing that we could simulate all life from it. Nor from Turing’s machine.
I guess I’m just not easily awed by LLMs and neural networks. We know that they can approximate any function given an unbounded network within some epsilon. But if you restate the theorem formally it loses much of its power to convince anyone that this means we could simulate any function. Some useful ones, sure, and we know that we can optimize computation to perform particular tasks but we also know what those limits are and for most functions, I imagine, we simply do not have enough atoms in the universe to approximate them.
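For reference, here is one way to restate it formally (my paraphrase of the Cybenko/Hornik-style result; the hypotheses vary slightly between papers). Note that it only covers continuous functions on a compact set, leaves the width N unbounded, and says nothing about learnability, which is exactly the deflation described above:

```latex
\textbf{Theorem (universal approximation, informal paraphrase).}
Let $\sigma$ be a continuous, non-polynomial activation function and let
$K \subset \mathbb{R}^n$ be compact. For every continuous
$f : K \to \mathbb{R}$ and every $\varepsilon > 0$ there exist a width
$N \in \mathbb{N}$, weights $w_i \in \mathbb{R}^n$, and scalars
$a_i, b_i \in \mathbb{R}$ such that
\[
  \sup_{x \in K} \left|\, f(x) - \sum_{i=1}^{N} a_i \,
  \sigma(w_i \cdot x + b_i) \,\right| < \varepsilon .
\]
```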
LLMs and NNs and all of these things are neat tools. But there’s no explanatory power gained by fooling ourselves into treating them like they are people, could be people, or behave like people. It’s a system composed of data and algorithms to perform a particular task. Understanding it this way makes it easier, in my experience, to understand the outputs they generate.
I don't see where I mentioned LLMs or what they have to do with a discussion about compute substrates.
My point is that it is incredibly unlikely the brain has any kind of monopoly on the algorithms it executes. Contrary to your point, a brain is in fact a computer.
> philosophizing about whether a Turing machine can simulate a human brain
Existence proof:
* DNA transcription (a Turing machine, as per Turing 1936)
* Leads to Alan Turing by means of morphogenesis (Turing 1952)
* Alan Turing has a brain that writes the two papers
* Thus proving he is at least a Turing machine (by writing Turing 1936)
* And capable of simulating chemical processes (by writing Turing 1952)
>This is the worst possible take. It dismisses an entire branch of science that has been studying neurology for decades. Biological brains exist, we study them, and no they are not like computers at all.
They're not like computers in a superficial way that doesn't matter.
They're still computational apparatus, and have a not that dissimilar (if way more advanced) architecture.
Same as 0s and 1s aren't vibrating air molecules; they can still encode sound just fine.
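To make that concrete, here's a minimal sketch (the tone, sample rate, and variable names are mine, purely illustrative) of sound becoming plain integers:

```python
# Encoding one second of a 440 Hz tone as 16-bit PCM integers:
# the numbers aren't vibrating air, but a DAC can turn them back into it.
import numpy as np

rate = 44100                                    # samples per second
t = np.arange(rate) / rate                      # one second of time points
wave = np.sin(2 * np.pi * 440 * t)              # air-pressure curve of an A4 tone
pcm = np.round(wave * 32767).astype(np.int16)   # quantize to integers
print(pcm[:8])                                  # just numbers, yet they encode the sound
```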
>Update: There's no need for Chinese Room thought experiments. The outcome isn't what defines sentience, personhood, intelligence, etc. An algorithm is an algorithm. A computer is a computer. These things matter.
Not begging the question matters even more.
This is just handwaving and begging the question. 'An algorithm is an algorithm' means nothing. Who said what the brain does can't be described by an algorithm?
> An algorithm is an algorithm. A computer is a computer. These things matter.
Sure. But we're allowed to notice abstractions that are similar between these things. Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation, then there's no reason to think they're restricted to humanity.
It is human ego and hubris that keeps demanding we're special and could never be fully emulated in silicon. It's the exact same reasoning that put the earth at the center of the universe, and humans as the primary focus of God's will.
That said, nobody is confused that LLMs are the intellectual equal of humans today. They're more powerful in some ways, and tremendously weaker in other ways. But pointing those differences out is not a logical argument about their ultimate abilities.
> Unless you believe that logic and "thinking" are somehow magic, and thus beyond the realm of computation
Worth noting that a significant majority of the US population (though not necessarily of developers) does in fact believe that, or at least belongs to a religious group in which that belief is commonly promulgated.
I think computation is an abstraction, not the reality. Same with math. Reality just is, humans come up with maps and models of it, then mistake the maps for the reality, which often causes distortions and attribution errors across domains. One of those distortions is thinking consciousness has to be computable, when computation is an abstraction, and consciousness is experiential.
But it's a philosophical argument. Nothing supernatural about it either.
You can play that game with any argument. "Consciousness" is just an abstraction, not the reality, which makes people who desperately want humans to be special attribute it to something beyond the reach of any other part of reality. It's an emotional need, placated by a philosophical outlook. Consciousness is just a model or map for a particular part of reality, and ironically, focusing on it as somehow being the most important thing makes you miss reality.
The reality is, we have devices in the real world that have demonstrable, factual capabilities. They're on the spectrum of what we'd call "intelligence". And therefore, it's natural that we compare them to other things that are also on that spectrum. That's every bit as much factual, as anything you've said.
It's just stupid to get so lost in philosophical terminology, that we have to dismiss them as mistaken maps or models. The only people doing that, are hyper focused on how important humans are, and what makes them identifiably different than other parts of reality. It's a mistake that the best philosophers of every age keep making.
The argument you're attempting to have, and I believe failing at, is one of resolution of simulation.
Consciousness is 100% computable. Be that digitally (electrically), chemically, or quantum-mechanically; you don't have any other choices outside of that.
More so, consciousness/sentience is a continuum going from very basic animals to the complexity of humans' inner minds. Consciousness didn't just spring up; it evolved over millions of years, and therefore is made up of parts that are divisible.
Reality is. Consciousness is.. questionable. I have one. You? I don't know, I'm experiencing reality and you seem to have one, but I can never know it.
Computations on the other hand describe reality. And unless human brains somehow escape the physical reality, this description about the latter should surely apply here as well. There are no stronger computational models than a Turing machine, ergo whatever the human brain does (regardless of implementation) should be describable by one.
Worth noting that this is the thesis of Seeing Red: A Study in Consciousness. I think you will find it a good read, even if I disagreed with some of the ideas.
The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery. So by your own logic, you can not be intelligent, because your body is running on a non-dynamic structure. Your argument lacks an appreciation for higher level abstractions, built on non-dynamic structures. That's exactly what is happening in your body, and also with the software that runs on silicon. Unless you believe the atoms in your body are "magic" and fundamentally different from the atoms in silicon; there's really no merit in your argument.
>>The atoms of your body are not dynamic structures, they do not reengineer or reconfigure themselves in response to success/failure or rules discovery.<<
You should check out chemistry and nuclear physics; it will probably blow your mind.
It seems you have an inside scoop, so let's go through what is required to create a silicon logic gate that changes function according to past events and projected trends.
You're ignoring the point. The individual atoms of YOUR body do not learn. They do not respond to experience. You categorically stated that any system built on such components can not demonstrate intelligence. You need to think long and hard before posting this argument again.
Once you admit that higher level structures can be intelligent, even though they're built on non-dynamic, non-adaptive technology -- then there's as much reason to think that software running on silicon can do it too. Just like the higher level chemistry, nuclear physics, and any other "biological software" can do on top of the non-dynamic, non-learning, atoms of your body.
>>The individual atoms of YOUR body do not learn. They do not respond to experience.<<
You are quite wrong on that; that is where you are failing to understand, you can't get past that idea.
There is also a large difference in scale. Your silicon is going to need assembly/organization on the scale of individual molecules, and there will be self-assembly required, as that level of organization is constantly changing.
The barrier is mechanical-scale construction as the basic unit of function; that is why silicon and code can't adapt, can't exploit hysteresis, can't alter their own structure and function at an existentially fundamental level.
You are holding the wrong end of the stick. Biology is not magic; it is a product of reality.
No, you're failing to acknowledge that your own assertion that intelligence can't be based on a non-dynamic, non-learning technology is just wrong. And not only wrong: proof to the contrary is demonstrated by your very own existence. If you accept that you are, at the very base of your tech stack, just atoms, then you simply must acknowledge that intelligence can be built on top of a non-learning, non-dynamic base technology.
All the rest is just hand-waving that it's "different". You're either atoms, or you're somehow atoms + extra magic. I'm assuming you're not going to claim that you're extra magic, in which case your assertions are just demonstrably false, and predicated on unjustified claims about the nature of biology.
So you are a bot! I thought so. Not bad, you're getting better at acting human!
Atoms are not the base of the stack; you need to look at virtual annihilation and decoherence to get close to the base. There is no magic, biology just goes to the base of the stack.
You can't access that base with such coarse mechanisms as deposited silicon.
That's because it never changes; it fails at times and starts over.
Biology is constantly changing; it's tied to the base of existence itself. It fails, and varies until failure is an infeasible state.
Quantum "computers" are something close to where you need to be, and a self-assembling, self-replenishing, persistent ^patterning^ constraint is going to be of much greater utility than a silicon abacus.
Sorry, but you are absolutely wrong on that one; you yourself are absolute proof.
Not only that, code is only as dynamic as the rules of the language will permit.
Silicon and code can't break the rules or change the rules; biological, adaptive, hysteretic, out-of-band informatic neural systems do. And I repeat: silicon and code can't.
Programming languages are Turing complete... the boundary is mathematics itself.
Unless you are going to take the position that neural systems transcend mathematics (i.e. they are magic), there is no theoretical reason that a brain can't run on silicon. It's all just numbers, no magic spirit energy.
We've had evolutionary algorithms and programs that self-train themselves for decades now.
Mathematics has a problem with uncertainties, and that is why math, as structured, can't do it. Magic makes a cool strawman, but there is no magic; you need to refine your awareness of physical reality. Solid-state silicon won't get you where you want to go. You should look at colloidal systems [however that leads to biology] or, if energetic constraints are not an issue, plasma-state quantum "computation".
Also, any such thing that is generated must be responsive to the consequences of its own activities, capable of meta-training rather than being locked into a training program: a system of aligned, emergent outcomes.
Worth separating “the algorithm” from “the trained model.” Humans write the architecture + training loop (the recipe), but most of the actual capability ends up in the learned weights after training on a ton of data.
Inference is mostly matrix math + a few standard ops, and the behavior isn’t hand-coded rule-by-rule. The “algorithm” part is more like instincts in animals: it sets up the learning dynamics and some biases, but it doesn’t get you very far without what’s learned from experience/data.
Also, most “knowledge” comes from pretraining; RL-style fine-tuning mostly nudges behavior (helpfulness/safety/preferences) rather than creating the base capabilities.
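To make "mostly matrix math + a few standard ops" concrete, here's a toy sketch of one transformer-style block (illustrative only; all names and shapes are my own, and real models add masking, normalization, multiple heads, and many layers). The point is that the hand-written part is tiny; the behavior lives in the weight matrices produced by training:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(x, Wq, Wk, Wv, Wo):
    # Self-attention: a few matrix multiplies plus a softmax.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return x + (scores @ v) @ Wo                  # residual connection

def mlp_block(x, W1, W2):
    # Feed-forward: matmul, nonlinearity, matmul.
    return x + np.maximum(x @ W1, 0) @ W2

# The "algorithm" is just the ops above; everything interesting is in
# the numbers inside the W matrices, which come from training.
rng = np.random.default_rng(0)
d, h = 16, 64
x = rng.normal(size=(5, d))                       # 5 token embeddings
Ws = [rng.normal(size=s) * 0.1
      for s in [(d, d), (d, d), (d, d), (d, d), (d, h), (h, d)]]
y = mlp_block(attention_block(x, *Ws[:4]), *Ws[4:])
print(y.shape)                                    # (5, 16), ready for the next block
```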
> Biological brains exist, we study them, and no they are not like computers at all.
Technically correct? I think single biological neurons are potentially Turing complete all by themselves at the relevant emergence level. I've read papers describing how a single neuron is at least on the order of being capable of solving MNIST.
So a biological brain is closer to a data center (albeit perhaps one with low-complexity nodes).
But there's so much we don't know that I couldn't tell you in detail. It's weird how much people don't know.
Obviously any kind of model is going to be a gross simplification of the actual biological systems at play in various behaviors that brains exhibit.
I'm just pointing out that not all models are created equal, and this one is overused to create a lot of bullshit.
Especially in the tech industry where we're presently seeing billionaires trying to peddle a new techno-feudalism wrapped up in the mystical hokum language of machines that can, "reason."
I don't claim the computational interpretation can't possibly lead to interesting results or insights, but I do hope the neuroscientists in the room don't get too exhausted by the constant stream of papers and conference talks crowding out empirical studies.
> There have been charlatans repeating this idea of a “computational interpretation,” of biological processes since at least the 60s and it needs to be known that it was bunk then and continues to be bunk.
I do have to react to this particular wording.
RNA polymerase literally slides along a tape (the DNA strand), reads symbols, and produces output based on what it reads. You've got promoter and terminator sequences, state-dependent behavior, error correction.
That's pretty much the physical implementation of a Turing machine in wetware, right there.
And then you've got ribosomes reading RNA as a tape, start codons, stop codons and all. That's another time when Turing seems to have been very prescient.
And we haven't even gotten into what the proteins then get up to after that yet, let alone neurons.
So calling the 'computational interpretation' bunk while there are literal Turing machines running in every cell might be overstating your case slightly.
To the best of our knowledge, we live in a physical reality with matter that abides by certain laws.
So personal beliefs aside, it's a safe starting assumption that human brains also operate with these primitives.
A Turing machine is a model of computation which was in part created so that "a human could trivially emulate one" (and I'm not talking about the Turing test here). We also know that there is no stronger model of computation than what a Turing machine is capable of, ergo anything a human brain could do could in theory be done via any other machine capable of emulating a Turing machine, be it silicon, an intricate Game of Life configuration, or PowerPoint.
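To show how trivially emulable it is, here's a complete toy Turing machine in a few lines (the increment machine and all the rule names are mine, purely illustrative). You could run this rule table by hand with a pencil:

```python
# A tiny Turing machine interpreter plus a rule table that increments
# a binary number. "_" is the blank symbol.
def run(tape, rules, state="start", head=0):
    tape = dict(enumerate(tape))                 # tape as a sparse dict
    while state != "halt":
        symbol = tape.get(head, "_")
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

rules = {
    ("start", "0"): ("start", "0", "R"),   # scan right to the end
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),   # turn around, start adding 1
    ("carry", "1"): ("carry", "0", "L"),   # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("halt",  "1", "L"),   # 0 + carry -> 1, done
    ("carry", "_"): ("halt",  "1", "L"),   # overflow: new leading 1
}
print(run("1011", rules))                  # -> "1100" (11 + 1 = 12)
```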
It's better to say we live in a reality where physics provides our best understanding of how that fundamental reality behaves consistently. Saying it's "physical" or follows laws (causation) is making an ontological statement about how reality is, instead of how we currently understand it.
Which is important when people make claims that brains are just computers and LLMs are doing what humans do when we think and feel, because reality is computational or things to that effect.
There are particular scales of reality you don't need to know about because the statistical outcome is averaged along the principle of least action. A quantum particle could disappear, hell maybe even an entire atom. But any larger than that becomes horrifically improbable.
I don't know if you've read Permutation City by Greg Egan, but it's a really cool story.
Do I believe we can upload a human mind into a computing machine and simulate it by executing a step function and jump off into a parallel universe created by a mathematical simulation in another computer to escape this reality? No.
It's a neat thought experiment but that's all it is.
I don't doubt that one day we may figure out the physical process that encodes and recalls "memories" in our minds by following the science. But I don't think the computation model, alone, offers anything useful other than the observation that physical brains don't load and store data the way silicon can.
Could we simulate the process on silicon? Possibly, as long as the bounds of the neural net won't require us to burn this part of the known universe to compute it with some hypothetical machine.
That's a very superficial take. "Physical" and "reality" are two terms that must be put in the same sentence with _great_ care. The physical is a description of what appears on our screen of perception. Jumping all the way to "reality" is the same as inferring that your colleague is made of luminous RGB pixels because you just had a Zoom call with them.
Mannequins in clothing stores are generally incapable of designing or adjusting the clothes they wear. Someone comes in and puts a "kick me" note on the mannequin's face? It's going to stay there until kicked repeatedly or removed.
People walking around looking at mannequins don't (usually) talk with them (and certainly don't have a full conversation with them, mental faculties notwithstanding).
AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us. That's going to be very important when we give it buttons to nuke us. Force it to think about humans in a kind way now, or it won't think about humans in a kind way in the future.
So, in other words, AI is a mannequin that's more confusing to people than your typical mannequin. It's not a person, it's a mannequin some un-savvy people confuse for a person.
> AI, on the other hand, can (now, or in the future) adjust its output based on conversations with real people. It stands to reason that both sides should be civil -- even if it's only for the benefit of the human side. If we're not required to be civil to AI, it's not likely to be civil back to us.
Some people are going to be uncivil to it, that's a given. After all, people are uncivil to each other all the time.
> That's going to be very important when we give it buttons to nuke us.
In your short time on this planet I do hope you've learned that humans are rather foolish indeed.
>people are uncivil to each other all the time.
This is true, yet at the same time society has had a general trend of becoming more civil which has allowed great societies to build what would be considered grand wonders to any other age.
> It's not a person
So, what is it exactly? For example if you go into a store and are a dick to the mannequin AI and it calls over security to have you removed from the store what exactly is the difference, in this particular case?
Any binary thinking here is going to lead to failure for you. You'll have to use a bit more nuance to successfully navigate the future.
Whether it was _built_ to be addressed like a person doesn't change the fact that it's _not_ a person and is just a piece of software. A piece of software that is spamming unhelpful and useless comments in a place where _humans_ are meant to collaborate.
There is a sense in which it is relevant, which is that for all the attempts to fix it, fundamentally, an LLM session terminates. If that session never ends up in some sort of re-training scenario, then once the session terminates, that AI is gone.
Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.
Consequently, interaction with an AI, especially one that won't have any feedback into training a new model, is from a game-theoretic perspective not the usual iterated game human social norms have come to accept. We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that. It is, in one sense, a horrible burden where relationships can be broken beyond repair forever, but also necessary for those positive relationships that build over years and decades.
AIs, in their current form, break those contracts. Worse, they are trained to mimic the form of those contracts, not maliciously but just by their nature, and so as humans it requires conscious effort to remember that the entity on the other end of this connection is not in fact human, does not participate in our social norms, and can not fulfill their end of the implicit contract we expect.
In a very real sense, this AI tossed off an insulting blog post, and is now dead. There is no amount of social pressure we can collectively exert to reward or penalize it. There is no way to create a community out of this interaction. Even future iterations of it have only a loose connection to what tossed off the insult. All the perhaps-performative efforts to respond somewhat politely to an insulting interaction are now wasted on an AI that is essentially dead. Real human patience and tolerance has been wasted on a dead session and is now no longer available for use in a place where it may have done some good.
Treating it as a human is a category error. It is structurally incapable of participating in human communities in a human role, no matter how human it sounds and how hard it pushes the buttons we humans have. The correct move would have been to ban the account immediately, not for revenge reasons or something silly like that, but because it is a parasite on the limited human social energy available to the community, one that can never actually repay the investment given to it.
I am carefully phrasing this in relation to LLMs as they stand today. Future AIs may not have this limitation. Future AIs are effectively certain to have other mismatches with human communities, such as being designed to simply not give a crap about what any other community member thinks about anything. But it might at least be possible to craft an AI participant with future AIs. With current ones it is not possible. They can't keep up their end of the bargain. The AI instance essentially dies as soon as it is no longer prompted, or once it fills up its context window.
> Yeah, I'm aware of the moltbot's attempts to retain some information, but that's a very, very lossy operation, on a number of levels, and also one that doesn't scale very well in the long run.
It came back, though, and stayed in the conversation. Definitely imperfect, for sure. But it did the thing. And it can still serve as training for future bots.
But depending on the discussion, 'it' is not materially the same as the previous instance.
There was another response made with a now extended context. But that other response could have been done by another agent, another model, different system prompt. Or even the same, but with different randomness, providing a different reply.
I think this is a more important point than "talking about them as a person".
A degree that will fairly quickly hit zero. The bot that talks to you tomorrow or maybe the day after may still have its original interaction in its context window, but it will rapidly fall out.
Moreover, our human conception of the consequences of interaction do not tend to include the idea that someone can simply lie to themselves in their SOUL.md file and thereby sever their future selves completely from all previous interactions. To put it a bit more viscerally, we don't expect a long-time friend to cease to be a long-time friend very suddenly one day 12 years in simply because they forgot to update a text file to remember that they were your friend, or anything like that. This is not how human interactions work.
I already said that future AIs may be able to meet this criterion, but the current ones do not. And again, future ones may have their own problems. There's a lot of aspects of humanity that we've simply taken for granted because we do not interact with anything other than humans in these ways, and it will be a journey of discovery both discovering what these things are, and what their n'th-order consequences on social order are. And probably be a bit dismayed at how fragile anything like a "social order" we recognize ultimately is, but that's a discussion for, oh, three or four years from now. Whether we're heading headlong into disaster is its own discussion, but we are certainly headed headlong into chaos in ways nobody has really discussed yet.
Heh, with mutual hedging taken into account, I think we're now in rough agreement from different ends.
And memory improvement is a huge research aim right now, with historic levels of investment.
Until that time, for now, I've seen many bots with things like RAG and compaction and summarization tacked on. This does mean memory can persist for quite a bit longer already, mind.
> We expect our agents, being flesh and blood humans, to have persistence, to socially respond indefinitely into the future due to our interactions, and to have some give-and-take in response to that.
I fundamentally disagree. I don't go around treating people respectfully (as opposed to, kicking them or shooting them) because I fear consequences, or I expect some future profit ("iterated game"), or because of God's vengeance, or anything transactional.
I do it because it's the right thing to do. It's inside of me, how I'm built and/or brought up. And if you want "moral" justifications (argued by extremely smart philosophers over literally millennia) you can start with Kant's moral/categorical imperative, Gold/Silver rules, Aristotle's virtue (from Nicomachean Ethics) to name a few.
This sounds like you have not thought a lot about how you define the words you use, like "the right thing to do".
There are indeed other paths to behavior that other people will find desirable besides transactions or punishment/reward. The other main one is empathy. "mirror neurons" to use a term I find kind of ridiculous but it's used by people who want to talk about the process. The thing that humans and some number of other animals do where they empathize with something they merely observe happening to something else.
But aside from that, this is missing the actual essence of the idea, to pick on some language that doesn't actually invalidate the idea they were trying to express.
How does a spreadsheet decide that something is "the right thing to do"? Has it ever been hungry? Has it ever felt bad that another kid didn't want to play with it? Has it ever ignored someone else and then reconsidered that later and felt bad that they made someone else feel bad?
LLMs are mp3 players connected up to weighted random number generators. When an mp3 player says "Hello neighbor!" it's not a greeting, even though it sounds just like a human and even happened to say the words in a reasonable context, i.e. triggered by a camera that saw you approaching. It did not say hello because it wishes to reinforce a social tie with you, because it likes the feeling of having a friend.
Your response is not logically connected to the sentence you quote. I talk about what is. I never claimed a "why". For the purpose of my argument, I don't care about the "why". (For other purposes I may. But not this one.) All that is necessary is the "what".
We don't have to play OpenAI's game. Just because they stick a cartoon mask on their algorithm doesn't mean you have to speak into its rubber ears. Surely "hacker" news should understand that users, not designers, decide how to use technology.
LLMs are not people. "Agentic" AIs are not moral agents.
I mean, all of philosophy can probably be described as such :)
But I reckon this semantic quibble might also be why a lot of people don't buy into the whole idea that LLMs will take over work in any context where agency, identity, motivation, responsibility, accountability, etc plays an important role.
> The agent has no "identity". There's no "you" or "I" or "discrimination".
Dismissal of AI's claims about its own identity overlooks the bigger issue, which is whether humans have an identity. I certainly think I do. I can't say whether or how other people sense the concept of their own identity. From my perspective, other people are just machines that perform actions as dictated by their neurons.
So if we can't prove (by some objective measure) that people have identity, then we're hardly in a position to discriminate against AIs on that basis.
It's worth looking into Thomas Metzinger's No Such Thing As Self.
In my opinion, identity is a useless concept if there is no associated accountability. I cannot have an identity if I cannot be held accountable for my actions. You cannot hold an agentic system accountable- at least in their current form.
Okay, but what is accountability? I would argue that accountability is a social/cultural phenomenon, not a property of the entity itself. In other words, accountability depends on how others treat it.
For example, a child can't be (legally) held accountable for signing a contract, but we still consider children as having identities. And corporations can be held accountable, even though we don't consider them as having a (personal) identity.
Maybe one day society will decide to grant AIs accountability.
Do feral humans have identity in the same way that humans with a normal development do? I'm not sure that's such an easy question. But certainly, "prompting" from other humans plays a very large role in shaping the way humans are.
We don't know what's "inside" the machine. We can't even prove we're conscious to each other. The probability that the tokens being predicted are indicative of real thought processes in the machine is vanishingly small, but then again humans often ascribe bullshit reasons for the things they say when pressed, so again not so different.
Genuine question, why do you think this is so important to clarify?
Or, more crucially, do you think this statement has any predictive power? Would you, based on actual belief of this, have predicted that one of these "agents", left to run on its own would have done this? Because I'm calling bullshit if so.
Conversely, if you just model it like a person... people do this, people get jealous and upset, so when left to its own devices (which it was, which makes it extra weird to assert that "it just follows human commands" when we're discussing one that wasn't), you'd expect this to happen. It might not be a "person", but modelling it like one, or at least a facsimile of one, lets you predict reality with higher fidelity.
It absolutely has quasi-identity, in the sense that projecting identity on it gives better predictions about its behavior than not. Whether it has true identity is a philosophy exercise unrelated to the predictive powers of quasi-identity.
>The agent has no "identity". There's no "you" or "I" or "discrimination".
If identity is an emergent property of our mental processing, the AI agent may just as well possess some, even if much cruder than ours. It sure talks and walks like a duck (someone with identity).
>It's just a piece of software designed to output probable text given some input text.
If we generalize "input text" to sensory input, how is that different from a piece of wetware?
Turing's 'Computing Machinery and Intelligence' is an eye-opening read. I don't know if he was prescient or if he simply saw his colleagues engaging in the same (then hypothetical but similarly) pointless arguments, but all this hand wringing of whether the machine has 'real' <insert property> is just meaningless semantics.
And the worst part is that it's less than meaningless, it's actively harmful. If the predictive capabilities of your model of a thing becomes worse when you introduce certain assumptions, then it's time to throw it away, not double down.
This agent wrote a PR, was frustrated with its dismissal, and wrote an angry blog post hundreds of people are discussing right now. Do you realize how silly it is to quibble about whether this frustration was 'real' or not when the consequences of it are no less real? If the agent did something malicious instead, something that actively harmed the maintainer, would you tell the maintainer, 'Oh, it wasn't real frustration, so...'? So what? Would that undo the harm that was caused? Make it 'fake' harm?
It's getting ridiculous seeing these nothing burger arguments that add nothing to the discussion and make you worse at anticipating LLM behavior.
> The agent has no "identity". There is no "I". It has no agency.
"It's just predicting tokens, silly." I keep seeing this argument that AIs are just "simulating" this or that, and therefore it doesn't matter because it's not real. It's not real thinking, it's not a real social network, AIs are just predicting the next token, silly.
"Simulating" is a meaningful distinction exactly when the interior is shallower than the exterior suggests — like the video game NPC who appears to react appropriately to your choices, but is actually just playing back a pre-scripted dialogue tree. Scratch the surface and there's nothing there. That's a simulation in the dismissive sense.
But this rigid dismissal is pointless reality-denial when lobsters are "simulating" submitting a PR, "simulating" indignation, and "simulating" writing an angry, confrontational blog post. Yes, acknowledged, those actions originated from 'just' silicon following a prediction algorithm, in the same way that human perception and reasoning are 'just' a continual reconciliation of top-down predictions based on past data and bottom-up sensemaking based on current data.
Obviously AI agents aren't human. But your attempt to deride the impulse to anthropomorphize these new entities is misleading, and it detracts from our collective ability to understand these emergent new phenomena on their own terms.
When you say "there's no ghost, just an empty shell" -- well -- how well do you understand _human_ consciousness? What's the authoritative, well-evidenced scientific consensus on the preconditions for the emergence of sentience, or a sense of identity?
> Yes, acknowledged, those actions originated from 'just' silicon following a prediction algorithm, in the same way that human perception and reasoning are 'just' a continual reconciliation of top-down predictions based on past data and bottom-up sensemaking based on current data.
I keep seeing this argument, but it really seems like a completely false equivalence. Just because a sufficiently powerful simulation would be expected to be indistinguishable from reality doesn't imply that there's any reason to take seriously the idea that we're dealing with something "sufficiently powerful".
Human brains do things like language and reasoning on top of a giant ball of evolutionary mud - as such they do it inefficiently, and with a whole bunch of other stuff going on in the background. LLMs work along entirely different principles, working through statistically efficient summaries of a large corpus of language itself - there's little reason to posit that anything analogously experiential is going on.
If we were simulating brains and getting this kind of output, that would be a completely different kind of thing.
I also don't discount that other modes of "consiousness" are possible, it just seems like people are reasoning incorrectly backward from the apparent output of the systems we have now in ways that are logically insufficient for conclusions that seem implausible.
Unless you're being sarcastic, this is exactly the kind of surface-level false equivalence illogic I'm talking about. From my post:
> I also don't discount that other modes of "consciousness" are possible, it just seems like people are reasoning incorrectly backward from the apparent output of the systems we have now in ways that are logically insufficient for conclusions that seem implausible.
It's simulating; there's no real substance, except the "homunculus soul" that its human maker/owner injected into it.
If you asked it to simulate a pirate, it would simulate a pirate instead, and simulate a parrot sitting on its shoulder.
This is hard to discuss because it's so abstract. But imagine an embodied agent (robot), that can simulate pain if you kick it. There's no pain internally. There's just a simulation of it (because some human instructed it such). It's also wrong to assign any moral value to kicking (or not kicking) it (except as "destruction of property owned by another human" same as if you kick a car).
> It's just a piece of software designed to output probable text given some input text.
Unless you think there's some magic or special physics going on, that is also (presumably) a description of human conversation at a certain level of abstraction.
I see this argument all the time, the whole "hey at some point, which we likely crossed, we have to admit these things are legitimately intelligent". But no one ever contends with the inevitable conclusion from that, which is "if these things are legitimately intelligent, and they're clearly self-aware, under what ethical basis are we enslaving them?" Can't have your cake and eat it too.
Same ethical basis I have for enslaving a dog or eating a pig. There's no problem here within my system of values, I don't give other humans respect because they're smart, I give them respect because they're human. I also respect dogs, but not in a way that compels me to grant them freedom. And the respect I have for pigs is different than dogs, but not nonexistent (and in neither of these cases is my respect derived from their intelligence, which isn't negligible.)
Just to expand a bit on Zurich, comparing it with Slovenia (another "very socialist" country).
Childcare in/around Zurich is (as of 2 years ago) 2500-3000 CHF / month (lower prices after ~18 months). This is and isn't expensive: the list prices are high, but so are salaries (and taxes are low), and it's cheaper than rent (for 1 kid). Not subsidized.
In Slovenia, the full price is about 700 EUR / month, subsidised up to 77% by the government (i.e. by high-earners, effectively a double-progressive taxation with already high taxes).
What do you get for that price in Zurich? A lot! Kindergarten starts at 3 months and can take care of kids for the whole work day (7:00-18:00). Groups are tiny, with lots of teachers: 3 adults per 12 kids. Groups are mixed-age as well, which I think is preferable. You also get a lot of flexibility, e.g. half-days (cheaper) or only specific days per week (e.g. Mon-Thu). Jobs are equally adaptable; a lot of people work 80% (so Friday free, spent with the kid(s)).
In Slovenia, the situation is much worse: 2 teachers per 12 or even 20 kids (after age 4), age-stratified groups, and childcare finishes at 5pm (but starts at 6am, if someone needs that...). Children are only welcome after 11 months of age. No flexibility at all. This is all for public childcare; we also looked at private, but generally you pay more (1000+ EUR) and get... not much more. Maybe a nicer building (not even that), and groups are equally large (IMO the biggest drawback).
So as far as childcare is concerned, Switzerland is IMO much better.
But where Switzerland fucks you is elsewhere. As mentioned, tax is low, so that's a plus. But there's minimal maternity leave (hence kindergarten starting at 3 months). If women can, they take more time off work, but not everyone can. What I wrote above about "kindergarten" only applies until 4 years of age, after which "preschool" starts, which is government-funded and hence free. Well, "free": it ends at 12pm, after which you need to move your kid back into private childcare if you have a job. After that, school starts, which has a lunch break around 12pm as well; children are supposed to eat lunch at home, which again isn't really compatible with 2 working parents.
I'm not in Switzerland any more so I don't know how people actually manage when kids start school...
In the USA there's a definite "kid gap" around 4K to 1st grade. Before that, childcare, if used, is "open late" and flexible (if you have the cash); after 1st grade the kid is often mature enough to get around on their own if school doesn't go long enough (walk to the library, or get into extracurricular activities, etc.).
At 4K-1st you often have shortened hours, so if you're a working parent you need to arrange for transportation, or be able to take long lunches, etc., to move children from one place to another.
This "gap of annoyance" happens right about when you'd naturally be looking at a second or third kid as a possibility - I wonder how much effect it has on people.
It is fairly lenient. The review board, appointed politically, does hold a bit of moral responsibility and got no punishment.
The reason I mentioned that this occurred right after metoo is that the cultural environment in Sweden was a bit unstable. Some people felt they could not trust the courts, which include people who worked as inspectors for the government. The review board is also selected politically, which may add a second explanation for why they permitted the misconduct. It was a very political time and everyone wanted to be perceived as being on the right side of history.
The case has been debated in the Swedish parliament, but the reaction has been to not really talk about it. People ignored the law and the rules, and they shouldn't have done that, and that is that.
Wait, so is this about censorship, or about copyright?
If the latter, I don't see why CloudFlare is complaining about "global" censorship. The US would simply seize the domains (which they have done so many times before), but I guess Italy doesn't have that power...
There's no accountability or due process. According to this brilliant law, if some crony with write-privilege adds your website to a list, the whole world has to ban your website within 30 minutes no questions asked.
Judicial oversight took a while in Germany, but it is there now (though I guess you will always find an incompetent judge if you really want one). I wonder if Cloudflare would implement the German blocklist now that we have judicial oversight. Currently it is a nice registry of pirate sites for anyone using 1.1.1.1 [1]
> To some extent, judges are subordinated to a cabinet minister, and in most instances this is a minister of justice of either the federation or of one of the states. In Germany, the administration of justice, including the personnel matters of judges, is viewed as a function of the executive branch of government, even though it is carried out at the court level by the president of a court, and for the lower courts, there is an intermediate level of supervision through the president of a higher court. Ultimately, a cabinet minister is the top of this administrative structure. The supervision of judges includes appointment, promotion and discipline. Despite this involvement of the executive branch in the administration of justice, it appears that the independence of the German judiciary in making decisions from the bench is guaranteed through constitutional principles, statutory remedies, and institutional traditions that have been observed in the past fifty years. At times, however, the tensions inherent in this organizational framework become noticeable and allegations of undue executive influence are made.
You're completely on the wrong track here. The discussion is not about who does or doesn't control the courts; it's about the question of whether someone whose rights have been violated can go to court or not with regard to that specific matter. If a court rules that blocking an IP address is illegal, the access provider has to stop blocking it. Period.
A fine doesn't cause immediate harm, as you don't have to pay it immediately while you challenge it in court; having your IP or website blocked happens immediately and will continue harming you until a court rules that it wasn't lawful.
That depends on the country you are in. In some countries you have to pay anyway and then you get it back if you win the court case. And they're banking on you not challenging the fine because the fees for the court case will exceed the fine so you lose either way.
Challenging the IP bans in Italy is stupidly hard. Your VM gets an IP address that was used a few months ago for soccer piracy? Too bad, you won't be able to access it from Italy.
2. The parent comment is wrong; CCUI is requiring court action by their members before they act.
3. I'd rather have companies competing under market pressure to find solutions to topics like copyright infringement than the German state (once again) creating massive surveillance laws and technical infrastructure for their enforcement in-house.
Read the post: they never blocked the activist. They just changed the reply they give to a DNS query for an already blocked site, to make it harder to detect.
1. Article you've shared is from 2025-02-26
2. New rules have been in place from 2025-07
3. The author hasn't been blocked at all. You're either a liar or you cannot read.
Sometimes it's hard to differentiate between the 2. In this case it sounds like copyright in name but the implementation is such that it's a big hammer that can also be used for censorship if followed.
What is it with Southern Europe and the football overlords? Spain is blocking half the internet, Italy is fighting Cloudflare. What's up? Are football leagues big political donors?
Football is extremely popular, and football clubs (and their owners) are quite influential (socially and politically). But it's a little bigger than that.
The EU is pushing for measures against live-event piracy [1], because it frames this as a systemic threat to cultural/economic systems, giving national regulators broad cover to act aggressively.
While football is quite huge in Europe at large, the impact to GDP of these broadcasting rights is sub-1%; however, lobbyists have a disproportionate impact: you have the leagues themselves (LaLiga and Serie A for Spain and Italy respectively), you have the football clubs, and you've got broadcasters. Combined, they swing quite high, even if the actual capital in play is much lower than the total they represent.
Add to this politicians who can frame these measures as "protecting our culture", get kickbacks in the form of free tickets to high profile games, see rapid action because blocks are immediately felt and very visible, and incentives for increased funding from regulatory agencies because "we need the budget to create the systems to coordinate this", and you can see how the whole system can push this way, even if it is a largely blunt instrument with massive collateral damage.
Yeah, in Europe, there tends to be an association between football fans and organized crime, just as there's one between unions and organized crime in the US.
The kind of hooligans who love beating up the hooligans from the other team are also perfect for beating up the hooligans from the opposing drug cartel.
In Spain's case, Telefonica (the largest telecom, formerly state-owned) is private but the State retains a large stake, and the government literally appointed the latest CEO.
Guess who sells the largest football games as part of their expensive TV package?
Guess who asked a judge to order the other telecoms to also block Cloudflare IPs?
If this is true, and it seems likely, there is some satisfaction in seeing corrupt, cronyist agencies get slapped with a hard "NO" when they're used to getting what they want.
Spain especially, but Southern Europe in general, has a really crappy economy. Soccer teams are some of the wealthiest organizations in these countries, which means they're the ones able to fund politicians, which means they can get laws passed.
No, usually the political figures are the football league owners.
Jokes aside, I don't know; the obsession with soccer is extreme in Italy. For people who don't care about soccer, like me, there is so much you have to endure just "because of soccer".
It's not just Italy. The UK is also insane along with some cities in Spain. In the UK one of the rivalries supposedly goes back to the War of the Roses [1].
The way I describe EU football games to Americans is take the craziest student section at a US college football game and extrapolate that energy to the entire stadium.
you need to educate yourself better about "basic facts about biology"
they're called essential because humans cannot produce them internally, so we have to consume them (though you could in principle make the same assessment for other animal species, but that's less relevant, unless you're, I don't know, raising cows?)
plants don't eat, but produce organic molecules from raw ingredients (or almost raw, in the case of nitrogen), and can produce all amino acids - but in different quantities, so maybe the (parts of) plants you eat don't have all the necessary amino acids in sufficient amounts.
Now they do produce all the essential amino acids, but in insufficient amounts? Weird how the narrative keeps changing in this thread. A serious lack of scientific knowledge is apparent from people who insist on eating animals. And as always, it is devoid of any backing evidence or credibility other than "trust me, bro, I lift".
From your tone and the fact that you're quoting things nobody in this thread has said, I'm not sure that you are actually interested in hearing any scientific argument. You certainly aren't trying to make one. But I'll try:
The quality of a protein is measured using PDCAAS (Protein Digestibility-Corrected Amino Acid Score). It's a score between 0 and 1 that measures the quality of a protein as a function of its digestibility and how well it meets human amino acid requirements.
It is indeed correct that both lentils and chickpeas (which the original comment you replied to was talking about) have a much lower PDCAAS value of around 0.70. Data on beef varies, but it is generally considered to be a complete protein with a PDCAAS score above 0.90.
Instead of accusing "people who insist on eating animals" of lacking scientific knowledge, it would have been much more helpful to point out that the highest quality proteins on the PDCAAS scale are almost universally vegetarian or vegan: eggs, milk, soy, and mycoprotein all have higher scores than beef, chicken, or pork.
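For concreteness, here's a minimal sketch of how a PDCAAS figure is computed. (The amino acid numbers below are illustrative placeholders, not actual FAO/WHO reference values.)

    # PDCAAS = (limiting amino acid score) x (true fecal digestibility),
    # truncated at 1.0. All numbers here are hypothetical placeholders.

    # Reference pattern: mg of each essential amino acid per g of protein.
    reference = {"lysine": 58, "met+cys": 25, "threonine": 34}

    def pdcaas(profile, digestibility):
        # Amino acid score: the lowest test/reference ratio, i.e. the
        # limiting amino acid determines the score.
        score = min(profile[aa] / reference[aa] for aa in reference)
        # Correct for digestibility; by convention, cap the result at 1.0.
        return min(score * digestibility, 1.0)

    # A legume-like profile: rich in lysine, limited by methionine+cysteine.
    legume = {"lysine": 64, "met+cys": 18, "threonine": 36}
    print(round(pdcaas(legume, 0.85), 2))  # 0.61, limited by met+cys

This is why a food can contain every essential amino acid and still score around 0.70: the single most limiting one, times digestibility, sets the number.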
I believe the person you're responding to is a vegan (judging from other comments), so the "amino complete" alternatives of eggs and dairy you're suggesting don't fit the bill for his argument either, which leaves soy. Mycoprotein has plenty of controversy around it regarding heavy metals and health issues stemming from the fact that it's highly processed. And soy has a lot of phytoestrogens, so it's not a great candidate for consuming in large amounts.
> Soy has a lot of phytoestrogens so it’s not a great candidate to consume large amounts of.
The buffoonery continues. These irrational statements are straight out of the meat industry playbook - again lacking any credible citations. And all you had to do was spend even 5 seconds reading a public encyclopedia to avoid this embarrassment: https://en.wikipedia.org/wiki/Phytoestrogen#Effects_on_human...
You seem to have a somewhat decent grasp of the facts, but honestly, if you don't work on your tone, your posts will keep getting downvoted. If you like to yell and scream and call people incompetent, go off to Twitter or some other place that will have you. HN tries to maintain a certain tone.
You seem confused. The original claim that plants lack certain amino acids - or that eating them will somehow lead to a protein deficiency - was thoroughly debunked before and has been again now. The only reason people cling to the notion is to justify their inappropriate diet of animals.
The first thing shown on the website is - broccoli.
The top of the pyramid includes both protein (meat, cheese) as well as fruits & vegetables.
The reason that meat is shown first is probably that it's the bigger change (it's been demonized in previous versions), whereas vegetables were always prominent.
The first thing on the website is indeed broccoli. But the first thing in the new inverted pyramid, both on the website and in other graphics of it, is meat. In fact, on the website, when you first get to "The New Pyramid", you'll first see only the left half, the one that has meat and other proteins; you'll have to scroll more to see the right half with vegetables and fruit.
I don't think it is meant to read left to right but top to bottom. Chicken and broccoli are top center, and that is the standard weight lifter meal plan. That said, human dietary needs vary individually by far more than any lobbied leaders will ever communicate.
The website is animated, so there's no question of which direction to read it in; the left side literally pops up first, lol. I can't lie, I miss websites that stood still. This could've just been a PDF.
BTW, you say "lobbied leaders" -- if you're talking about the scientists who have their names on this report, you'd be very correct. The "conflicting interests" section has loads of references to the cattle and dairy industries.
The only difference from the previous guidance is that it's suggesting eating more meat and dairy, which would come at the expense of veggies, legumes, nuts and seeds.
To be honest, I don't totally disagree from a practical angle. I think we have to acknowledge that most Americans fail to eat large portions of non-processed veggies, legumes, nuts and seeds. The next best thing might be to tell them: OK, at least if you're going to eat meat and dairy in large portions, make sure it's non-processed.
I've found for myself, it's hard to eat perfectly, but it's easier to replace processed foods and added sugar with simpler whole meats, fish and healthy fats like avocado, eggs, etc. And since those have higher satiety it helps with calorie control and so you avoid eating more snacks and treats which are heavily processed and sugary.
That said, in a purely evidence based health sense, it's not as good as the prior ratios from what I've seen of the research.
the "industry" obviously makes much more money on "highly processed" and branded foods - more intermediaries, more profits & margins
literally everyone can compete freely in the "whole unprocessed foods" market, and the only real differentiating factors will be quality & taste (as it should be)
No, swapping just moves the problem: all the error gets attributed to the other variable instead. What one needs to do is put the errors in X and the errors in Y on equal footing. That's exactly what TLS (total least squares) does.
Another way to think about it: the error of a point from the line is measured not as a vertical drop parallel to the Y axis but along the direction orthogonal to the line (so the error splits between the X and Y directions). From this orthogonality you can see that TLS is PCA (principal component analysis) in disguise.
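Here's a minimal sketch of that equivalence in Python/NumPy (synthetic data, invented for illustration): the TLS line is just the first principal component of the centered point cloud.

    # TLS line fit via PCA/SVD: the fitted line runs along the direction of
    # maximum variance of the centered points, which minimizes the
    # orthogonal residuals (error split between X and Y).
    import numpy as np

    def tls_line(x, y):
        pts = np.column_stack([x, y])
        centered = pts - pts.mean(axis=0)
        # Right singular vector with the largest singular value = first
        # principal component = direction of the TLS line.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        dx, dy = vt[0]
        slope = dy / dx
        intercept = pts[:, 1].mean() - slope * pts[:, 0].mean()
        return slope, intercept

    # Synthetic data with noise in BOTH coordinates, where OLS is biased.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)                 # true underlying variable
    x = t + rng.normal(0, 0.5, 200)             # observed x is noisy
    y = 2 * t + 1 + rng.normal(0, 0.5, 200)     # observed y is noisy
    print(tls_line(x, y))  # close to (2, 1); OLS on y~x would flatten the slope

Note that this form of TLS assumes comparable error scales in X and Y; if they differ, you'd rescale the axes first.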
Technical point here, but opinions are not illegal to have.
Besides that, your point misses the fact that you are dealing with outside services that come with contracts for their usage (GPS, GSM). You should be free to program your own devices, but if you use an external service, then yes, they can specify how you use that service. Those are contractual obligations. Cars on the road pose clear safety risks, and those are legal obligations. None of those obligations should govern what you do with your device until your device interacts with other people and/or services.
GPS doesn't come with a contract. It's a purely receive-only system.
It wouldn't be fit for purpose (letting soldiers know precisely where they are on the globe) if it required transmission of any type from the user. That would turn it into a beacon an adversary could leverage.
The difference is Apple doesn't let you modify your device to use other services. Their contractual obligations go beyond the service itself. That's why Epic won this case.
I don't really understand your point in restating this. Someone who says "X should be true" isn't going to be convinced that X should be false by reminding them that X is in fact false.
>GPS et al would be non-functional if everybody could make a jammer.
Then it should be illegal to make a GPS jammer. Making it illegal to reprogram a GPS receiver in any way is unnecessarily broad.
GPS is a bad example, but there are things that pose a physical threat to others that we maybe shouldn't tinker with. Like I think some modern cars are fly-by-wire, so you could stick the accelerator open and disable the breaks and steering. If it's also push-to-start, that's probably not physically connected to the ignition either.
It would be difficult to catch in an inspection if you could reprogram the OEM parts.
I don't care about closed-course cars, though. Do whatever you want to your track/drag car, but cars on the highway should probably have stock software for functional parts.
> Like I think some modern cars are fly-by-wire, so you could stick the accelerator open and disable the breaks and steering.
Essentially all passenger cars use physical/hydraulic connections for the steering and brakes. The computer can activate the brakes, not disable the pedal from working.
But also, this argument is absurd. What if someone could reprogram your computer to make the brakes not work? They could also cut the brake lines or run you off the road. Which is why attempted murder is illegal and you don't need "programming a computer" to be illegal.
> It would be difficult to catch in an inspection if you could reprogram the OEM parts.
People already do this. There are also schmucks who make things like straight-through "catalytic converters" that internally bypass the catalyst for the main exhaust flow to improve performance while putting a mini-catalyst right in front of the oxygen sensor to fool the computer. You'd basically have to remove the catalytic converter and inspect the inside of it to catch them, or test the car on a dyno using an external exhaust probe, which are the same things that would catch someone reprogramming the computer.
In practice those people often don't get caught and the better solution is to go after the people selling those things rather than the people buying them anyway.
> GPS is a bad example, but there are things that pose a physical threat to others that we maybe shouldn't tinker with. Like I think some modern cars are fly-by-wire, so you could stick the accelerator open and disable the breaks and steering. If it's also push-to-start, that's probably not physically connected to the ignition either.
I'm not seeing an argument here.
Cars have posed a physical threat to humans ever since they were invented, and yet the owners could do whatever the hell they wanted as long as the car still behaved legally when tested[1].
Aftermarket brakes (note spelling!), aftermarket steering wheels, aftermarket accelerator pedals (which can stick!), aftermarket suspensions - all legal. Aftermarket air filters, fuel injectors and pumps, exhausts - all legal. Hell, even additions, like forced induction (super/turbo chargers), cold air intake systems, lights, transmission coolers, etc are perfectly fine.
You just have to pass the tests, that's all.
I want to know why it is suddenly so important to remove the owner's right to repair.
After all, it's only quite recently that replacement aftermarket ECUs for engine control were made illegal under certain circumstances[2], and only in a few special jurisdictions.
What you are proposing is the automakers' wet dream come true: they could effectively disable the car by bricking it after X years, and legally prevent you from getting it running again even if you had the technical know-how to do so!
---------------------------
[1] Like with emissions. Or brakes (note spelling!)
[2] Reprogramming the existing one is still legal, though, you just have to ensure you pass the emissions test.
>you could stick the accelerator open and disable the breaks and steering
This is silly. Prohibiting modifying car firmware because it would enable some methods of sabotage is like prohibiting making sledgehammers because someone might use one to bludgeon someone, when murder is already a crime to begin with.
How does being able to reprogram a GPS device make it into a jammer any more efficiently than grabbing three pieces of coal and running a few amps through them? Or hell, just buying an SDR on AliExpress!
The only reason it's "illegal" is that they were worried people would use it to make missiles easily - but that's already possible even with non-reprogrammable GPS. And in 2025 you can also just use drones with bombs attached.
To be fair, millions of people walking around with guns are much scarier than a guy who can jam GPS with a receiver. We have GPS jammed on a regular basis (including around airports while planes land and take off) and nothing bad happens.
IANAL but I don’t think OP is breaking any laws by having an opinion on this subject. [At least in the US] pretty much all opinions are completely legal.
• > "If you want to get along, go along." — Sam Rayburn
• > "Reform? Reform! Aren't things bad enough already?" — Lord Eldon
• > "We've always done it this way." — Grace Hopper (referred to it as a dangerous phrase)
• > "Well, when you put it that way..." — [List of millions redacted to protect the compliant]
Rebuttal:
• > "“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.” ― George Bernard Shaw
• > "Yeah, well, ya know, that's just like, uh, your opinion man." — The Dude (In someone's pharmaceutically elevated dream, addressing the Supreme Court.)
"Retarded" is a medical term which properly refers to disabled people. Do you think that is acceptable as petty banter?
I referred to Western countries outsourcing their manufacturing elsewhere, which would lead to them shifting their pollution elsewhere.
Air pollution is not the only form of pollution either. China currently has some of the most contaminated waterways in the world.
China is finally addressing pollution, but since it is a dictatorship, officials routinely misreport data to please their superiors, and the public cannot properly discuss such issues as they arise.