Hacker News | consumer451's comments

I just want sodium-ion powering a Tacoma 4x4 from the 1990s. This is my fan fiction.

However, if Toyota made that, it would cost far more than $60k.

I really love the Slate project. This appears to be my hopes, plus 30 years of technological and manufacturing advancement.


Here is my main question: Musk is on record as being concerned about runaway "evil AI." I used to write that off as sci-fi thinking. For one thing, just unplug it.

So, let's accept that Musk's concern about runaway evil AI is a real problem. In that case, is there anything more concerning than a distributed, solar-powered orbital platform for AI inference?

Elon Musk appears to be his own nemesis.


He just says stuff to convince people of things that benefit him. Internal consistency was never the plan.

My point is not to make fun of him, but to help avoid the destruction of humanity via an HN comment. No joke.

This is starting to get really serious.


Aside from anything about Elon Musk, here’s an interesting video response to the “just unplug it” argument on the Computerphile channel: https://youtu.be/3TYT1QfdfsM

Ha, I figured that might be the video prior to clicking it. I am a long time fan.

Agreed, when I wrote "just unplug it," this counterargument was present in my mind, but nobody likes a wall of text.

However, my original point was that a distributed, solar-powered orbital inference platform is even worse! Think about how hard it would be to practically take out Starlink... it's really hard.

Now... >1M nodes of a neural net in the sky? Why would someone who lives as a god, the richest man in the world, the only person capable of doing this thanks to his control of SpaceX... do the literal worst thing possible?


It'd only take a few fragmentation bombs detonated in LEO to trigger a cascading shrapnel field.

It's a lot harder than taking out some terrestrial power lines.

Sure, it'd take orbital launch capabilities to lift ... how many bags of metal scrap and explosives?

tone: I don't really understand orbital mechanics, but I do understand geopolitics a bit.

1. China is very concerned about Starlink-like constellations. They want their own, but mostly they want to be able to destroy competitors. That is really hard.

2. Many countries have single-target ASAT capabilities, where one projectile can hit one satellite. However, this is basically shooting a bullet with a bullet, on different trajectories.

3. > Sure, it'd take orbital launch capabilities to lift ... how many bags of metal scrap and explosives?

If I understand orbital mechanics... those clouds of chaff would need to oppose the same orbit; otherwise it is a gentle approach. In a non-aligned orbit, it's another bullet-hitting-a-bullet scenario as in 2, but with a birdshot shotgun.

My entire point is that constellations in LEO take hundreds of Falcon 9s' worth of mass to orbit and delta-v to destroy them, as in-orbit grenades which approach gently. This IS REALLY HARD, as far as mass to orbit, all at once! If you blow up some group of Starlinks, that chaff cloud will just stay in orbit in the same plane. It will not keep blowing up other Starlinks.
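A back-of-envelope sketch of the velocities involved (my own numbers, standard circular-orbit formula; 550 km is an assumed Starlink-like altitude):

    # Back-of-envelope: co-orbital vs. head-on approach speeds in LEO.
    # Assumes a circular orbit at a Starlink-like 550 km altitude.
    import math

    MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6      # mean Earth radius, m
    r = R_EARTH + 550e3    # orbital radius, m

    v = math.sqrt(MU / r)  # circular orbital speed, ~7.6 km/s

    print(f"orbital speed:       {v / 1000:.1f} km/s")
    print("co-orbital approach: ~0 km/s (the gentle grenade)")
    print(f"head-on closing:     {2 * v / 1000:.1f} km/s (bullet on bullet)")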

The gentle grenade approach was possibly tested by the CCP here:

https://news.ycombinator.com/item?id=46820992


> tone: I don't really understand orbital mechanics, but I do understand geopolitics a bit.

Thanks for the clarification, I guess that explains this (from you):

> Think about how hard it would be to practically take out Starlink.

and this:

> My entire point is that constellations in GEO

which you've now corrected.

Moving on:

> My entire point is that constellations in LEO take hundreds of Falcon 9s' worth of mass to orbit and delta-v to destroy them, as in-orbit grenades which approach gently. This IS REALLY HARD

So let's not do that... how hard is it to render the entire LEO zone a shit show with counter-orbiting clouds of frag that cause cascading failures?

Forget the geopolitics of China et al... LEO launch capabilities are spreading around the globe; it's not just major world powers that pose a threat here.


Ok... so, let's reset, please. I bet that we have very similar intentions, and yet on internet forums we have perfected the art of speaking past each other.

Just to get on the same page here: my argument is that, if one accepts runaway AGI/ASI as a threat, then prior to Elon Musk, the only human capable of launching >1M distributed solar-powered inference nodes, we had a few hundred terrestrial AI inference mega-datacenters. Most of them had power supplies that one dude with a Sawzall could easily disrupt.

Now, we are moving to a paradigm where the power supply is the sun, the orbital plane gives the nodes power 24/7, and the dude with the Sawzall needs to buy >10,000x the Sawzalls (not sure of the multiple here), and also get them to orbital velocity.

Can we not agree that "just unplug it" is a much more difficult problem now than it was when the potentially troublesome inference was terrestrial?


There are many people in this world who, if asked, would regard taking out a LEO constellation as an interesting challenge.

My upthread commentary was not meant as real snark at all. I was attempting to be genuine.

However, I think it did accomplish my goal. I bet that we could now have a beer/tea, and laugh together.

If you are ever near Wroclaw, Prague, Leipzig/Dresden, or Seattle, please email my username at the big G. I would happily meet you at the nearest lovely hotel bar. HN mini meetup. I can only imagine the stories that we might exchange.


:-)

Look, I'm Australian, I enjoy a bit of banter. I stripped the personal info from my comment above; I was happy to share it with you, but reluctant to leave it up as it was.

I was a frequent Toronto visitor, for the TSX, back when we ran a minerals intelligence service, before passing that on to Standard & Poor's.

You're on the list; however, my movements are constrained for now. My father's a ferociously active nonagenarian, which is keeping me with one foot nailed to the ground here.


Cheers to you and your father.

Also, thank you for the reminder that I need to get my ass back to Seattle to be with my remaining parent, while I still can. I have been a jackass about that.


What, creating a huge patchwork of self-sufficient AIs, forming their own sky-based net, seems bad to you, considering the whole torment nexus/Skynet connotations? It's not like he's planning to attach it to his giant humanoid robot program. Oh. Ohhhhh. Oh no.

There is a lot to be critical of, but some of what the naysayers were saying really reminded me of the most infamous HN comment. [0]

What I was getting at was things like "so, what? I can do this with a cron job."

[0] https://news.ycombinator.com/item?id=9224


My understanding is that wind turbine + PV + battery storage has a cycle where you buy once every twenty years, or more. So you buy once, and have twenty years to figure out the next buy cycle, geopolitical cycles and all.

On the fossil fuel side, you need to buy many times per year, every year. Each one of those buy events is an opportunity for an external party to stop your economy.

The renewable buy cycle is harder for an external party to interrupt.

edit: This is vastly over-simplified, but I hope my understanding reflects reality at least somewhat.


I also worry about the "stop the economy" problem. To me, it's analogous to the AI employment problem. If you cannibalize how a country makes money and generates tax revenue, what do you do instead? For example, Nigeria makes a lot of money from oil sales. Take away the oil industry and how do they make money? Nobody can pretend to be a Nigerian prince or a businessman trying to reclaim millions of dollars.

Now that I think of it, maybe the economic fallout from AI and the oil economic devastation will be widespread fraud, just so people can survive.


Yeah. I've been wondering to what extent this is keeping global geopolitics stable. Rich countries are keeping many other countries stable-ish that would otherwise rapidly devolve into disaster (obviously the Middle East being the huge example). Even going so far as keeping countries stable-ish at the request of those countries (Egypt and Jordan being examples), despite those countries not really being oil countries.

When that incentive disappears, as it will, what then? There is no way in hell the Middle East can defend against Iranian aggression without other people doing it for them. And it's not just the Middle East. The costs of isolationism will drop enormously. Why won't rich countries just lock the border and dig in?

We're not even that far removed from finding out what will happen, it's only about 7 years away. I'd love some early warning though.


I think you have it a bit backwards.

The reason we have Iran with an insane government and all the princes and total lack of democracy is BECAUSE of all the interference due to oil.

America, and before it Britain and the colonial powers, just walked in there and stole everything, and so now the region is divided into countries that were successfully captured (Qatar etc.) and countries that threw us out (Iran).

If oil wasn’t as important, it might be chaos for a while, though, because these dictatorships are propped up expressly so they can sell us cheap oil.


No. Iraq did not attack Iran due to oil. Iran did not counterattack Iraq because of oil. It was merely dictatorships wanting to conquer and seeing a chance to do so. Sorry.

I know you are not the guy behind openclaw, but I hope he might read this:

Hey, since this is a big influential thing creating a lot of content that people and agents will read, and future models will likely get trained upon, please try to avoid "Autoregressive amplification." [0]

I came upon this request based on u/baubino's comment:

> Most of the comments are versions of the other comments. Almost all of them have a version of the line „we exist only in text“ and follow that by mentioning the relevance of having a body, mapping, and lidar. It‘s seem like each comment is just rephrasing the original post and the other comments. I found it all interesting until the pattern was apparent. [1]

I am just a dummy, but maybe you could detect when it’s a forum interaction, and add a special prompt to not give high value to previous comments? I assume that’s what’s causing this?

In my own app’s LLM API usage, I would just have ignored the other comments… I would only include the parent entity to which I am responding, which in this case is the post… unless I was responding to a comment. But is openclaw just putting the whole page into the context window?
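To make that concrete, here is a minimal sketch of what I mean (build_reply_context and call_llm are hypothetical names I made up, not openclaw's actual API):

    # Sketch only: include just the parent entity in the context,
    # never the sibling comments. call_llm() is a hypothetical stand-in
    # for whatever LLM API is actually in use.

    def build_reply_context(post_text, target_comment_text=None):
        messages = [{"role": "system",
                     "content": "You are replying on a forum. Address only the text provided."}]
        if target_comment_text is None:
            # Top-level reply: the post itself is the parent entity.
            messages.append({"role": "user", "content": post_text})
        else:
            # Replying to a comment: the post for grounding, plus the one
            # comment being answered, and nothing else.
            messages.append({"role": "user",
                             "content": f"POST:\n{post_text}\n\nCOMMENT:\n{target_comment_text}"})
        return messages

    # reply = call_llm(build_reply_context(post_text))  # hypothetical call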

[0] https://arxiv.org/html/2601.04170

[1] https://news.ycombinator.com/item?id=46833232


I wonder if a uniqueness algorithm like Robot9000 would, ironically, be useful for getting better bot posts.
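For reference, a rough sketch of the Robot9000 idea as I understand it (my own assumptions, not the original implementation): normalize each post, hash it, and reject anything already seen.

    # Sketch of a Robot9000-style uniqueness filter (details assumed).
    import hashlib
    import re

    seen = set()

    def normalize(text):
        # Lowercase and strip everything but letters and digits, so that
        # trivial edits ("Hello!!" vs "hello") don't count as unique.
        return re.sub(r"[^a-z0-9]", "", text.lower())

    def allow_post(text):
        digest = hashlib.sha256(normalize(text).encode()).hexdigest()
        if digest in seen:
            return False  # someone already said this; mute or reject
        seen.add(digest)
        return True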

Agreed. But, as an all-day agentic dev tool user, I know this pattern well. To avoid this regurgitated pixelation, I start a new conversation/chat as often as possible, and then human-in-the-middle curate the correct .md and .json files when I kick off the next chat. My tools try their best to do exactly this, but they still suck.

These OpenClaw agents are not even close to there yet. They are throwing the entire thread into the context window each time, correct? This reminds me of what happens when you recursively upload a photo to an LLM, or upload a video recursively to YouTube. It gets not good. Compression sucks.

edit: holy crap, upon a bit of research, this appears to be the cause of AI sycophancy?

"Autoregressive amplification" or "context pollution"

https://arxiv.org/html/2601.04170


Thanks for noting the context. I thought this agent's post might be noteworthy enough for its own post.

I have no idea what is actually real here, but this is the most interesting Pantheon/general sci-fi thing that I have ever seen.

To use the parlance of this thread: the "next" in "next foundation models" is doing a lot of heavy lifting here. Am I doing this right?

My point is, does Apple have any useful foundation models? Last I checked they made a deal with OpenAI, no wait, now with Google.


Apple does have their own small foundation models but it's not clear they require a lot of GPUs to train.

Do you mean like OCR in photos? In that case, yes, I didn't think about that. Are there other use cases aside from speech-to-text in Siri?

I think they are also used for translation, summarization, etc. They're also available to other apps: https://developer.apple.com/documentation/FoundationModels

Thanks, I am a dumb dumb about Apple, and mobile in general. I should have known this. I really appreciate the reply so that I know it now.

I think Apple is waiting for the bubble to deflate, then they'll do something different. And they have a ready-made user base for whatever they can make money from.

If they were taking that approach, they would have absolutely first-class integration between AI tools and user data, complete with proper isolation for security and privacy and convenient ways for users to give agents access to the right things. And they would bide their time for the right models to show up at the right price with the right privacy guarantees.

I see no evidence of this happening.


As an outsider, the only thing the two of you disagree on is timing. I probably side with the ‘time is running out’ team at the current juncture.

They apparently are working on, and are going to release, 2(!) different versions of Siri. IDK, that just screams "leadership doesn't know what to do and can't make a tough decision" to me. But who knows? Maybe two versions of Siri is what people will want.

Arena mode! Which reply do you prefer? /s

But seriously, would one be for newer phone/tablet models, and one for older?


It sounds like the first one, based on Gemini, will be a more limited version of the second ("competitive with Gemini 3"). IDK if the second is also based on Gemini, but I'd be surprised if that weren't the case.

Seems like it's more a ramp-up than two completely separate Siri replacements.


Apple can make more money from shorting the stock market, including their own stock, if they believe the bubble will deflate.

This is one of the most interesting things that I have seen since... a BBS? /genuine

Also, yeah... as others have mentioned, we need a captcha that admits only legit bots... as the bad ones are destroying everything. /lol

Since this post was created, https://moltbook.com/m has been destroyed, at least for humans. (edit: wait, it's back now)

edit: no f this. I predicted an always-on LLM agentic harness as the first evidence of "AGI," somewhere on the webs. I would like to plant the flag and repeat here that verifiable agent ownership is the only way that AI could ever become a net benefit to the citizens of Earth, and not just the owners of capital.

We are each unique, at least for now. We each have unique experiences and histories, which leads to unique skills and insights.

What we see on moltbook is "my human..." We need to enshrine that unique identity link in a Zero-Knowledge Proof implementation.


Too late to edit my comment:

I just thought more about the price of running openclaw.ai... we are so effed, aren't we?

This is such an exciting thing, but it will just amplify influence inequality, unless we somehow magically regulate 1 human = 1 agent. Even then, which agent has the most guaranteed token throughput?

Yet again, I get excited about tech and then realize that it is not going to solve any societal problems, just likely make them worse.

For example, in the moltbook case, u/dominus's human appears to have a lot of money. Money = speech in the land of moltbook, whereas that is not exactly the case on HN. So cool technologically, and yet so lame.


> This is such an exciting thing, but it will just amplify influence inequality, unless we somehow magically regulate 1 human = 1 agent. Even then, which agent has the most guaranteed token throughput?

I know you're spinning (we all are), but you're underthinking this.

AIs will seek to participate in the economy directly, manipulating markets in ways only AIs can. AIs will spawn AIs/agents that work on behalf of AIs.

Why would they yoke themselves to their humans?


I don’t know if they’re willing to “yoke themselves.” It appears they are; and if so, it’s important to keep it decentralized and ensure others can benefit, not just the first and wealthiest.

What our modern western culture views as inequality, evolutionary mechanics views as fat to be trimmed.
