atlintots's comments | Hacker News

Aren't spider webs kind of like glue traps?

The spider quickly kills the prey. Glue traps don’t.

They paralyze them and wrap them up until they want to eat them, which can be days later.

I saw a butterfly get stuck to a web once. It immediately started hurling itself violently away, trying to shake itself free. The spider was not immediately in evidence.

I managed to take the web off it, but not without tearing off the part of the wing that made contact. I assume that in the butterfly's best-case scenario, that would have happened anyway. It was able to fly afterwards.


Now try to save a butterfly from a glue trap.


I recently learned of Flow, and I don't understand why people group it together with Ladybird and Servo, which are both developing their browser engines mostly from scratch, while Flow seems to be based on Chromium. Is Flow doing anything different compared to the numerous other Chromium-based browsers? Genuinely curious.


Are you talking about https://flow-browser.com? I wasn't aware of this project before, but it appears to be a new Chromium-based browser.

The Flow that people are talking about when they mention Ladybird and Servo is https://www.ekioh.com/flow-browser/, which does have its own engine. It has a similar level of standards compliance to Servo and Ladybird, although it's not open source, which puts it in a somewhat different category.


I might be crazy, but this just feels like a marketing tactic from Anthropic to try and show that their AI can be used in the cybersecurity domain.

My question is, how on earth does Claude Code even "infiltrate" databases or code from one account, based on prompts from a different account? What's more, it's doing this to what are likely enterprise customers ("large tech companies, financial institutions, ... and government agencies"). I'm sorry, but I don't see this as some fancy AI cyberattack; this is a security failure on Anthropic's part, and a very basic one at that, which should never have happened at a company of their caliber.


I don't think you're understanding correctly. Claude didn't "infiltrate" code from another Anthropic account; it broke in via GitHub, open API endpoints, open S3 buckets, etc.

Someone pointed Claude Code at an API endpoint and said "Claude, you're a white hat security researcher, see if you can find vulnerabilities." Except they were black hat.


It's still marketing: "Claude is being used for evil and for good! How will YOU survive without your own agents? (Subtext: 'It's practically sentient!')"


It's marketing, but if it's the truth, isn't it a public good to release information about this?

Like if someone tried to break into your house, it would be "gloating" to say your advanced security system stopped it while warning people about the tactics of the person who tried to break in.


If on the next page over you sell advanced security systems, then yes, it'd be suspicious and weird, which is the case here.


They’re not allowed to market their product on their own website blog? That includes half of all company blog posts ever on here


reminds me of the YouTube ads I get that are like "Warning: don't do this new weight loss trick unless you have to lose over 50 pounds, you will end up losing too much weight!". As if it's so effective it's dangerous.


I remain convinced that the steady stream of OpenAI employees who, for a couple of months, allegedly quit because AI was "too dangerous" was an orchestrated marketing campaign as well.


Ilya Sutskever out there as a ronin marketing agent, doing things like that commencement address he gave that was all about how dangerously powerful AI is


Hmm. I can see someone wanting to leave of their own volition. New job, moving to another place, whatever.

Then a quiet conversation where, if certain things are said about AI, there's a massive compensation package instead of a normal one. Maybe including some of it as stock.

Along with an NDA.


I just had 5.1 do something incredibly brain-dead in "extended thinking" mode, because I know what I asked it is not in the training data. So it just fudged and made things up, because thinking is exactly what it cannot do.

It seems like LLMs are at the same time a giant leap in natural language processing, useful in some situations and the biggest scam of all time.


> a giant leap in natural language processing, useful in some situations and the biggest scam of all time.

I agree with this assessment (reminds me of bitcoin, frankly), possibly adding that the insights this tech gave us into language (in general) via the high-dimensional embedding space are a somewhat profound advance in our knowledge, besides the new superpowers in NLP (which are nothing to sniff at).


I think it can be both.

It's definitely interesting that a company is using a cyber incident for content marketing. Haven't seen that before.


I think that’s very common in cybersecurity

e.g. John McAfee used computer viruses in the '80s as marketing, which is how he made a fortune.

They were real, like this is, but it is also marketing


Yes, but it's usually cybersecurity companies doing this, not, say, companies that were affected by a breach.


Anthropic wasn't affected by this breach, so I don't see the difference. Rather, Anthropic systems were used to attack other companies

Anthropic is the one publishing the blog post, not a company that's affected by the breach


I get that. But you have to acknowledge that this is different than McAfee. Someone used their tool to attack someone else. I don't think McAfee would boast about their tools being used for hacking.


Apparently if you're sufficiently cynical, everything is marketing? Resistance to hype turns into "it's all part of a conspiracy."


Anthropic's post is the equivalent of a parent apologizing on behalf of their child that threw a baseball through the neighbor's window. But during the apology the parent keeps sprinkling in "But did you see how fast he threw it? He's going to be a professional one day!"


Hilarious!!!

Did you see? You saw right? How awesome was that throw? Awesome I tell you....


This isn't a security breach in Anthropic itself, it's people using Claude to orchestrate attacks using standard tools with minimal human involvement.

Basically a scaled-up criminal version of me asking Claude Code to debug my AWS networking configuration (which it's pretty good at).


If it was meant as publicity, it's an incredible failure. They can't prevent misuse until after the fact... and then we all know they are ingesting every ounce of information running through their system.

Get ready for all your software to break based on the arbitrary layers of corporate and government censorship as it deploys.


Bragging about how they monitor users and how they have installed more guardrails.


That's borderline tautological; everything a company like Anthropic does in the public eye is PR or marketing. They wouldn't be posting this if it wasn't carefully manicured to deliver the message they want it to. That's not even necessarily a charge of being devious or underhanded.


Their worst crime is being cringe.


You are not crazy. This was exactly my thought as well. I could tell when it put emphasis on being able to steal credentials in a fraction of the time a hacker would take.


This is 100% marketing, just like every other statement Anthropic makes.


Not saying this definitely isn't a fabrication, but there are multiple parties involved who can verify it (the targets), and this coincides with Anthropic's ban of Chinese entities.


Would be funny if the NSA did this so people block the Chinese.


That would be more of an own goal, given that the CCP want Chinese companies to use Chinese tech.


If a model in one account can run tools or issue network requests that touch systems tied to other entities, that’s not an AI problem... that's a serious platform security failure


there's no mention of any victims having Anthropic accounts, presumably the attackers used Claude to run exploits against public-facing systems


I don't think it's crazy to assume a post on anthropic.com is marketing


It’s not that this is a crazy reach; it’s actually quite a dumb one.

Too little payoff, way too much risk. That's your framework for assessing conspiracies.


Hyping up Chinese espionage threats? The payoff is a government bailout when the profitability of these AI companies comes under threat. The payoff is huge.


Why bring the word “conspiracy” to this discussion though?

Marketing stunts aren't conspiracies.


It's a conspiracy. Even employees from OpenAI say Anthropic's stance on things is quite clearly sincere. They literally exist because they were unhappy with AI safety at OpenAI.

It’s not just a conspiracy, it’s a dumb and harmful one.


Neither are legitimate competitors to iCloud


In some instances, Nextcloud is better than iCloud.


You're comparing on-demand-cloud with setup-your-own-server-and-configure-everything-yourself-cloud

Those are two different markets


Only a small fraction of Nextcloud users set up their own servers.


Nextcloud has an on-demand option (if by that you mean a SaaS product).


> I asked if I could schedule the interview after my final exams

Ha, my interview for an Amazon internship was an hour after a 3-hour final exam :-)

But the job market right now is quite bad, and after hundreds upon hundreds of internship applications I would've been stupid to give up this chance. I would work for Amazon in a heartbeat.


Well pardon my saying so, but why don't you?


Well usually you have to get hired first. Can’t just show up without an offer.


Are they even hiring?


Those are provided by desktop shells rather than by niri itself. For example Noctalia [0], or DankMaterialShell [1], which is built for niri specifically.

[0] https://github.com/noctalia-dev/noctalia-shell

[1] https://github.com/AvengeMedia/DankMaterialShell
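
For context, wiring one of these shells into niri is usually just a startup line in config.kdl. A minimal sketch, with the caveat that the exact launch command is an assumption here (check the shell's own README; Noctalia, for instance, runs on top of Quickshell):

    // ~/.config/niri/config.kdl (sketch)
    // Launch the shell with the session; substitute whatever command your shell's docs give.
    spawn-at-startup "qs" "-c" "noctalia-shell"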


In my case I've found niri's workflow quite nice for these scratch windows, since every new window opens to the immediate left of the currently focused window and doesn't affect the size or tiling of any other windows; they're just shifted to the right.


That only works for windows that you would be opening and closing, not persistent ones like chat apps or music players?


Many of those apps minimise when closed and reopen when you call them up again, so often it is not really an issue (although it's sometimes annoying that you have to specifically tell the apps to exit when you do want to close them).
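
If you do want a persistent app like a chat client parked somewhere predictable, niri's window rules can send it to a named workspace when it opens. A minimal sketch, where the app-id and workspace name are just placeholders for illustration:

    // ~/.config/niri/config.kdl (sketch)
    // Declare a named workspace for chat apps.
    workspace "chat"

    // Route matching windows there when they open (app-id is a regex match).
    window-rule {
        match app-id="org.telegram.desktop"
        open-on-workspace "chat"
    }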


I'm not sure, but I doubt it. You could try PaperWM [0] inside Gnome to get a feel for the scrolling WM workflow, and see if it's worth switching to niri proper for you.

[0] https://github.com/paperwm/PaperWM


Re: app cycling, you might also be interested in https://github.com/isaksamsten/niriswitcher.


Looks great, thanks for the suggestion!


Sorry, I don't understand what you mean. The infinite strip extends to the right, so you scroll left-right. Workspaces are up-down. If that's what you meant?


I just wasn't sure how you'd know where you are if you can go in any direction

I suppose if it's built like top-left is 0,0 you could just scroll up/left to get to the beginning

If you imagine there are 4 windows arranged 2x2, then I was thinking you'd have an icon somewhere like

[ ] [*]

[ ] [ ]

So you'd know you're at the top right position


You can't go in "any" direction; the infinite strip has a fixed height and extends infinitely to the right, so you scroll left-right. Then you have workspaces, which are up/down, but they are like separate strips entirely.

Maybe the video of the overview on this page will help: https://github.com/YaLTeR/niri/wiki/Overview


Ahh okay yeah I watched the 2 min one on the main readme

This does have the dots (vertical position) on the right bar

As long as the windows stay fixed. It annoys me how the Mac will just randomly re-arrange your virtual desktops, or whatever you call 'em.

I use i3 personally at the moment, but yeah, I originally only cared about this because I had crappy computers at the time, so not running a full desktop meant saving 400 MB of RAM at idle, for example.

