To be clear, I don't believe or endorse most of what that issue claims, just that I was reminded of it.
One of my new pastimes has been morbidly browsing Claude Code issues, as a few issues filed there seem to be from users exhibiting signs of AI psychosis.
Both weapons manufacturers like Lockheed Martin (defending freedom) and cigarette makers like Philip Morris ("Delivering a Smoke-Free Future") also claim to be for the public good. Maybe don't believe or rely on anything you hear from business people.
I'd agree, although only in those rare cases where the Russian soldier, his missile, and his motivation to chuck it at you manifested out of entirely nowhere a minute ago.
Otherwise there's an entire chain of causality that ends with this scenario, and the key idea here, you see, is to favor such courses of action as will prevent the formation of the chain rather than support it.
Else you quickly discover that missiles are not instant, and killing your Russian does you little good if he kills you right back, although with any luck you'll have a few minutes to meditate on the words "failure mode".
I'm… not really sure what point you're trying to make.
The russian soldier's motivation is manufactured by the putin regime and its incredibly effective multi-generational propaganda machine.
The same propagandists who openly call for the rape, torture, and death of Ukrainian civilians today were not so long ago saying that invading Ukraine would be an insane idea.
You know russian propagandists used to love Zelensky, right?
Somehow I don’t get the impression that US soldiers killed in the Middle East are stoking American bloodlust.
Conversely, russian soldiers are here in Ukraine today, murdering Ukrainians every day. And then when I visit, for example, a tech conference in Berlin, there are somehow always several high-powered nerds with equal enthusiasm for both Rust and the hammer and sickle, who believe all defence tech is immoral, and that forcing Ukrainian men, women, and children to roll over and die is a relatively more moral path to peace.
It's an easy and convenient position. War is bad, maybe my government is bad, ergo they shouldn't have anything to do with it.
Too much of the western world has lived through a period of peace that goes back generations, so many probably think things/human nature have changed. The only thing that's really changed is nuclear weapons/MAD - and I'm sorry Ukraine was made to give them up without the protection it deserved.
Are you going to ask the russians to demilitarise?
As an aside, do you understand how offensive it is to sit and pontificate about ideals such as this while hundreds of thousands of people are dead, and millions are sitting in -15ºC cold without electricity, heating, or running water?
No, I'm simply disagreeing that military technology is a public good. Hundreds of thousands of people wouldn't be dead if Russia had no military technology. If the only reason something exists is to kill people, is it really a public good?
An alternative is to organize the world in a way that makes it not just unnecessary but actively detrimental to said soldier's interests to launch a missile towards your house in the first place.
The sentence you wrote wouldn't be something you write about (present-day) German or French soldiers. Why? Because there are cultural and economic ties to those countries and their people. Shared values. Mutual understanding. You wouldn't claim that the only way to prevent a Frenchman from killing you is to kill him first.
It's hard to achieve. It's much easier to just back the strongman, fantasize about a strong military with killing machines that defend the good against the evil. And those Hollywood-esque views are pushed by populists and military industries alike. But they ultimately make all our societies poorer, less safe, and arguably less moral.
Again, in the short run and if only Ukraine did that, sure. But that's too simplistic thinking.
If every country doubled its military, then the relative strengths wouldn't change and nobody would be more or less safe. But we'd all be poorer. If instead we work towards a world with more cooperation and less conflict, then the world can get safer without a single dollar more spent on military budgets. There is plenty of research into this. But sadly there is also plenty of lobbying from the military-industrial complex. And simplistic fear-mongering (with which I'm not attacking you personally, just stating it in general) doesn't help either. Especially tech folks tend to look for technical solutions, which is a category that "more tanks/bombs/drones/..." falls into. But building peace is not necessarily about more tanks. It's not a technical problem, so in the long run it can't be solved with technical means.
Again, in the short run, of course you gotta defend yourself, and your country has my full support.
I can't think of anything scarier than a military planner making life or death decisions with a non-empathetic sycophantic AI. "You're absolutely right!"
I would disagree on the knowledge sharing. They're the only major AI company that's released zero open weight models. Nor do they share any research regarding safety training, even though that's supposedly the whole reason for their existence.
I agree with you on your examples, but would point out there are some places they have contributed excellent content.
While building my custom replacement for Copilot in VS Code, I've found Anthropic's knowledge sharing on what they're doing to make Claude Code better invaluable.
I've been playing around with this in z-ai and I'm very impressed. For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it's well ahead of K2 thinking and Opus 4.5.
> For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it’s well ahead of K2 thinking and Opus 4.5.
I wouldn’t use the z-ai subscription for anything work related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers and I have yet to find an opt-out. Third party hosting providers like synthetic.new are a better bet IMO.
"If you are enterprises or developers using the API Services (“API Services”) available on Z.ai, please refer to the Data Processing Addendum for API Services."
...
In the addendum:
"b) The Company do not store any of the content the Customer or its End Users provide or generate while using our Services. This includes any texts, or other data you input. This information is processed in real-time to provide the Customer and End Users with the API Service and is not saved on our servers.
c) For Customer Data other than those provided under Section 4(b), Company will temporarily store such data for the purposes of providing the API Services or in compliance with applicable laws. The Company will delete such data after the termination of the Terms unless otherwise required by applicable laws."
I stand corrected - it seems they have recently clarified their position on this page towards the very end: https://docs.z.ai/devpack/overview
> Data Privacy
> All Z.ai services are based in Singapore.
> We do not store any of the content you provide or generate while using our Services. This includes any text prompts, images, or other data you input.
From a quick read, this is cool but maybe a little overstated. From Figure 3, completely suppressing these neurons only reduces hallucinations by like ~5% compared to their normal state.
Table 1 is even odder: H-neurons predict hallucination ~75% of the time, but a similar percentage of random neurons predicts hallucinations ~60% of the time, which doesn't seem like a huge difference to me.
Very cool. Any plans to add support for local models? This is what has prevented us from adopting Positron so far. We have sensitive data, and sending it to third-party APIs is not an option (regardless of their stated retention policies).
Yeah, we just added support for local models. As I mentioned in an earlier comment, if you have a local model with an OpenAI-compatible v1/chat/completions endpoint (most local models have this option), you can route Erdos to use it in the Erdos AI settings.
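For anyone unsure what "OpenAI-compatible" means in practice, here's a minimal sketch of the kind of local endpoint Erdos could be pointed at. It assumes an Ollama server on its default port and a placeholder model name; both are illustrative assumptions, not Erdos-specific settings.

    # Hypothetical local setup: Ollama serving an OpenAI-compatible
    # v1/chat/completions endpoint on its default port.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local server; nothing leaves the machine
        api_key="ollama",  # Ollama ignores the key, but the client requires a non-empty string
    )

    response = client.chat.completions.create(
        model="llama3.1",  # whatever model you have pulled locally
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)

Presumably the Erdos AI settings just need that same base URL, per the comment above, and it then behaves like any other OpenAI-style backend.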
It's interesting to me that a company that claims to be all about the public good:
- Sells LLMs for military usage + collaborates with Palantir
- Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns
- Is the only major lab in the world that releases zero open weight models
- Actively lobbies to restrict Americans from access to open weight models
- Discloses zero information on safety training despite this supposedly being the whole reason for their existence