
Anthropic already has lower guardrails for DoD usage: https://www.theverge.com/ai-artificial-intelligence/680465/a...

It's interesting to me that a company that claims to be all about the public good:

- Sells LLMs for military usage + collaborates with Palantir

- Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns

- Is the only major lab in the world that releases zero open weight models

- Actively lobbies to restrict Americans from access to open weight models

- Discloses zero information on safety training despite this supposedly being the whole reason for their existence


This comment reminded me of a GitHub issue from last week on Claude Code's GitHub repo.

It alleged that Claude was used to draft a memo from Pam Bondi and in doing so, Claude's constitution was bypassed and/or not present.

https://github.com/anthropics/claude-code/issues/17762

To be clear, I don't believe or endorse most of what that issue claims, just that I was reminded of it.

One of my new pastimes has been morbidly browsing Claude Code issues, as a few issues filed there seem to be from users exhibiting signs of AI psychosis.


Wow. That's one of the clearest cases of AI psychosis I've seen.

Issue author does not even attempt to hide their obsession with Israel, damn

Both weapons manufacturers like Lockheed Martin ("defending freedom") and cigarette makers like Philip Morris ("Delivering a Smoke-Free Future") also claim to be for the public good. Maybe don't believe or rely on anything you hear from business people.

> Releases by far the least useful research of all the major US and Chinese labs, minus vanity interp projects from their interns

From what I've seen, the Anthropic interp team is the most advanced in the industry. What makes you think otherwise?


You just need to hear the guy's stance on Chinese open models to understand they're not the good guys.

Thanks for pointing these concerns out.

I had considered Anthropic one of the "good" corporations because of their focus on AI safety & governance.

I never actually considered whether their perspective on AI safety & governance actually matched my own. ^^;



"Actively lobbies to restrict Americans from access to open weight models"

Do you have a reference/link?


Military technology is a public good. The only way to stop a russian soldier from launching yet another missile at my house is to kill him.

I'd agree, although only in those rare cases where the Russian soldier, his missile, and his motivation to chuck it at you manifested out of entirely nowhere a minute ago.

Otherwise there's an entire chain of causality that ends with this scenario, and the key idea here, you see, is to favor such courses of action as will prevent the formation of the chain rather than support it.

Else you quickly discover that missiles are not instant, and killing your Russian does you little good if he kills you right back, although with any luck you'll have a few minutes to meditate on the words "failure mode".


I'm… not really sure what point you're trying to make.

The russian soldier's motivation is manufactured by the putin regime and its incredibly effective multi-generational propaganda machine.

The same propagandists who openly call for the rape, torture, and death of Ukrainian civilians today were not so long ago saying that invading Ukraine would be an insane idea.

You know russian propagandists used to love Zelensky, right?


I don't think U.S.-Americans would be quite so fond of this mindset if every nation and people their government needlessly destroyed thought this way.

Doesn't matter if it happened through collusion with foreign threats such as Israel or direct military engagements.


Somehow I don’t get the impression that US soldiers killed in the Middle East are stoking American bloodlust.

Conversely, russian soldiers are here in Ukraine today, murdering Ukrainians every day. And then when I visit, for example, a tech conference in Berlin, there are somehow always several high-powered nerds with equal enthusiasm for both Rust and the hammer and sickle, who believe all defence tech is immoral, and that forcing Ukrainian men, women, and children to roll over and die is a relatively more moral path to peace.


It's an easy and convenient position. War is bad, maybe my government is bad, ergo they shouldn't have anything to do with it.

Too much of the western world has lived through a period of peace that goes back generations, so people probably think things/human nature have changed. The only thing that's really changed is nuclear weapons/MAD - and I'm sorry Ukraine was made to give them up without the protection it deserved.


If there was less military technology, the Russian soldier wouldn't have yet another missile to launch at your house in the first place

Are you going to ask the russians to demilitarise?

As an aside, do you understand how offensive it is to sit and pontificate about ideals such as this while hundreds of thousands of people are dead, and millions are sitting in -15ºC cold without electricity, heating, or running water?


No, I'm simply disagreeing that military technology is a public good. Hundreds of thousands of people wouldn't be dead if Russia had no military technology. If the only reason something exists is to kill people, is it really a public good?

Unfortunately I think you're a few thousand years too late with your idea.

———

Come on. This is a forum full of otherwise highly intelligent people. How is such incredible naïveté possible?


I know such a thing isn't possible nowadays, I'm just saying that it isn't a good thing that it exists

It's not the only way.

An alternative is to organize the world in a way that makes it not just unnecessary but actively detrimental to said soldier's interests to launch a missile towards your house in the first place.

The sentence you wrote isn't something you would write about (present day) German or French soldiers. Why? Because there are cultural and economic ties to those countries and their people. Shared values. Mutual understanding. You wouldn't claim that the only way to prevent a Frenchman from killing you is to kill him first.

It's hard to achieve. It's much easier to play the strong man and fantasize about a strong military with killing machines that defend the good against the evil. And those Hollywood-esque views are pushed by populists and military industries alike. But they ultimately make all our societies poorer, less safe, and arguably less moral.


I'm in Ukraine now.

Tell me how your ideals apply to russia, today.


In the short run, today, you of course have to shoot back. What else can you do?

In the long run, just piling up more military is not the solution.


> In the long run, just piling up more military is not the solution.

Except it would have prevented the invasion in the first place.


Again, in the short run and if only Ukraine did that, sure. But that's too simplistic thinking.

If every country doubled its military, then the relative strengths wouldn't change and nobody would be more or less safe. But we'd all be poorer. If instead we work towards a world with more cooperation and less conflict, then the world can get safer without a single dollar more spent on military budgets. There is plenty of research into this. But sadly there is also plenty of lobbying from the military industrial complex. And simplistic fear mongering (with which I'm not attacking you personally, just stating it in general) doesn't help either.

Tech folks especially tend to look for technical solutions, a category that "more tanks/bombs/drones/..." falls into. But building peace is not necessarily about more tanks. It's not a technical problem, so it can't be solved with technical means. In the long run.

Again, in the short run, of course you gotta defend yourself, and your country has my full support.


Do you think the DoD would use Anthropic even with lower guardrails?

"How can I kill this terrorist in the middle of civilians with max 20% casualties?"

If Claude answers "sorry, can't help with that", it won't be useful, right?

Therefore the logic is they need to answer all the hard questions.

Therefore, as I've said many times already, they are sketchy.


I can't think of anything scarier than a military planner making life or death decisions with a non-empathetic sycophantic AI. "You're absolutely right!"


shot on target

Perfect!


Now imagine it spoken by Cortana from the Halo series for the full effect

Am I downvoted because the DoD would never need to ask that, or because Claude would never answer that? I'm curious

Because you are a ghoul

I think it's a very realistic scenario. You think the DoD wouldn't plan an assassination?

I would disagree on the knowledge sharing. They're the only major AI company that's released zero open weight models. Nor do they share any research regarding safety training, even though that's supposedly the whole reason for their existence.


I agree with you on your examples, but would point out there are some places they have contributed excellent content.

In building my custom replacement for Copilot in VS Code, Anthropic's knowledge sharing on what they are doing to make Claude Code better has been invaluable.


I've been playing around with this in z-ai and I'm very impressed. For my math/research-heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it's well ahead of K2 thinking and Opus 4.5.


> For my math/research heavy applications it is up there with GPT-5.2 thinking and Gemini 3 Pro. And it’s well ahead of K2 thinking and Opus 4.5.

I wouldn’t use the z-ai subscription for anything work related/serious if I were you. From what I understand, they can train on prompts + output from paying subscribers and I have yet to find an opt-out. Third party hosting providers like synthetic.new are a better bet IMO.


From their privacy policy:

"If you are enterprises or developers using the API Services (“API Services”) available on Z.ai, please refer to the Data Processing Addendum for API Services."

...

In the addendum:

"b) The Company do not store any of the content the Customer or its End Users provide or generate while using our Services. This includes any texts, or other data you input. This information is processed in real-time to provide the Customer and End Users with the API Service and is not saved on our servers.

c) For Customer Data other than those provided under Section 4(b), Company will temporarily store such data for the purposes of providing the API Services or in compliance with applicable laws. The Company will delete such data after the termination of the Terms unless otherwise required by applicable laws."


I stand corrected - it seems they have recently clarified their position on this page towards the very end: https://docs.z.ai/devpack/overview

> Data Privacy

> All Z.ai services are based in Singapore.

> We do not store any of the content you provide or generate while using our Services. This includes any text prompts, images, or other data you input.


From a quick read, this is cool but maybe a little overstated. From Figure 3, completely suppressing these neurons only reduces hallucinations by like ~5% compared to their normal state.

Table 1 is even more odd: H-neurons predict hallucinations ~75% of the time, but a similarly sized set of random neurons predicts them ~60% of the time, which doesn't seem like a huge difference to me.
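
For readers unfamiliar with the mechanics, "suppressing" a set of neurons usually just means zeroing their activations during the forward pass. Here's a minimal PyTorch sketch of that idea, using a toy layer and made-up neuron indices (nothing here is from the paper's code):

    import torch
    import torch.nn as nn

    h_neuron_idx = [3, 7]  # hypothetical indices of flagged "H-neurons"

    def suppress_hook(module, inputs, output):
        # Zero the flagged neurons' activations on every forward pass.
        output = output.clone()
        output[..., h_neuron_idx] = 0.0
        return output  # a non-None return replaces the layer's output

    layer = nn.Linear(16, 16)  # stand-in for one MLP layer inside an LLM
    handle = layer.register_forward_hook(suppress_hook)

    y = layer(torch.randn(1, 16))
    assert torch.all(y[..., h_neuron_idx] == 0)  # suppressed as intended
    handle.remove()  # restore normal behavior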


This is because Grok Code Fast is free via Kilo Code/Cline and has been for months


Thanks - I didn't know this but that makes sense!


Very cool. Any plans to add support for local models? This is what has prevented us from adopting Positron so far. We have sensitive data, and sending it to third-party APIs is not an option (regardless of their stated retention policies).


Yeah, we just added support for local models. As I mentioned in an earlier comment, if you have a local model with an OpenAI-compatible v1/chat/completions endpoint (most local models have this option), you can route Erdos to use it in the Erdos AI settings.
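
If it helps anyone setting this up, here's a quick way to sanity-check that a local OpenAI-compatible server is reachable before pointing Erdos at it. The URL and model id are placeholders for whatever your local server (llama.cpp, vLLM, Ollama, etc.) actually exposes:

    import requests

    BASE_URL = "http://localhost:8000/v1"  # placeholder: your local server

    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            "model": "local-model",  # placeholder model id
            "messages": [{"role": "user", "content": "Say hello"}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])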


Search: