keeda's comments | Hacker News

I think we will enter some novel legal territory with cases like this. Intent is a crucial part of the law, and I wonder if we will see "Yes we built this thing, but we had no idea it could do THIS" as a legal defense.

Or, more formally, "these machines have an unprecedented, possibly unlimited, range of capabilities, and we could not have reasonably anticipated this."

There was a thread a few weeks ago (https://news.ycombinator.com/item?id=45922848) about the AI copyright infringement lawsuits where I idly floated this idea. Turns out, in these lawsuits, no matter how you infringed, you're still liable if infringement can be proved. Analogously, in cases with death, even without explicit intent you can still be liable, e.g. if negligence led to the death.

But the intent in these cases is non-existent! And the actions that led to this -- training on vast quantities of data -- are so abstracted from the actual incident that it's hard to make the case for negligence, because negligence requires some reasonable form of anticipation of the outcomes. For instance, it's very clear that these models were not designed to be "rote-learning machines" or "suicide-ideation machines", yet those turned out to be things they do! And who knows what weird failure modes will emerge over time (which makes me a bit sympathetic to the AI doomers' viewpoint.)

So, clearly the questions are going to be all about whether the AI labs took sufficient precautions to anticipate and prevent such outcomes. A smoking gun would be an email or document outlining just such a threat that they dismissed (which may well exist, given what I hear about these labs' "move fast, break people" approach to safety.) But absent that it seems like a reasonable defense.

While that argument may not work for this or other cases, I think it will pop up as these models cause more and more unexpected outcomes, and the courts will have to grapple with it eventually.


Proving even negligence may be hard, but I definitely think they are negligent.

A) This is not the first public incident of people being led down dark and deranged paths by talking with their AI.

B) They record and keep all chat logs, so they had the data to keep an eye out for this even if the AI itself couldn't be stopped in the moment.


And here's where it can get really freaky: what if someone intentionally released a model that led to harmful or illegal outcomes when targeted at specific contexts? Something like "Golden Gate Claude" except it's "Murder-Suicide Claude when the user is Joe Somebody of Springfield, NJ."

Do we have the tools to detect that intentionality from the weights? Or could we see some "intent laundering" of crimes via specialized models? Taken to extremes for the sake of entertainment, I can imagine an "Ocean's 23" movie where the crew orchestrates a heist purely by manipulating employees and patrons of a casino via tampered models...

Interpretability research seems more critical than ever.


This was not difficult to foresee. People were discussing the mental health risks of these models before OpenAI intentionally tuned it to be sycophantic in the pursuit of capturing users. This can easily be argued as gross negligence against OpenAI.

Hmm I don't know that it was that obvious in advance... I think the earliest recorded case of AI psychosis was that employee Google fired for being convinced LaMDA was sentient:

https://www.theguardian.com/technology/2022/jul/23/google-fi...

At that point, most of us only had experience with older chatbots and didn't know what GenAI chatbots were capable of, so I thought this was clearly a one-off with a gullible and unwell individual.

And then ChatGPT was released. I could see this was something else entirely, and I could see how people could get tricked into thinking it was sentient. However, I still didn't make the connection with how this would interact with other types of psychological issues.

This could be because I was an outside observer; being at the epicenter of things, these companies likely had way more early signals that they neglected, which is the sort of evidence I think these lawsuits will surface. To OpenAI's credit, when they realized ChatGPT 4o was overly sycophantic they did roll it back pretty quickly, but I'm pretty sure that, given the speed at which they've been moving, they've glossed over a lot of issues.


Oh boy are you going to love learning about felony murder. You can be convicted of felony murder without having:

1. Killed anyone

2. Been in the same location where the killing took place

3. Known about the crime taking place

John Oliver does an excellent segment on how batshit these laws are, but suffice to say you can absolutely be convicted without intent. https://www.youtube.com/watch?v=Y93ljB7sfco


TFA is directionally correct, though it repeats a few cliches that are no longer accurate. E.g. both anecdotal reports and some empirical data show improved productivity even on large, brownfield codebases, with the caveat that effectiveness seems to depend more on the quality processes around the code than on the code itself.

However, TFA is absolutely correct that it takes a long time to master this technology.

A second, related point is that users have to adapt themselves to the technology to fully harness it. This is the hardest part. As an example, after writing OO code for my entire career, I use a much more functional programming style these days because that's what gets the best results from AI for me.

In fact, if you look at how the most effective users of AI agents do coding, it is nothing like what we are used to. It's more like a microcosm of all the activities that happen around coding -- planning, research, discussions, design, testing, review, etc -- rather than the coding itself. The closest analogy I can think of is the workstyle of senior / staff engineers working with junior team members.

Similarly, organizations will have to rethink their workflows and processes from the ground up to fully leverage AI. As a trivial example, tasks that used to take days and meetings can now take minutes, but will require much more careful review. So we need support for the humans-in-the-loop to do this efficiently and effectively, e.g. being able to quickly access all the inputs that went into the AI's work product, and spot-check them or run custom validations. This kind of infra would be specific to each type of task; it doesn't exist yet and needs to be built.

Just foisting a chatbot on employees is not helpful at all, especially as a top-down mandate with no guidance or training or dedicated time to experiment AND empowerment to shake things up. Without that you will mostly get poor results and resentment against AI, which we are already seeing.

It's only 3 years since ChatGPT was released, so it is still very early days. Given how slow most organizations move, I'm actually surprised that any of them are reporting positive results this early.


Anecdote, but I've never been able to use Claude (directly) because their defense systems seem overly sensitive to your email address. I signed up for Claude using a relatively new Outlook email address that I set up for an independent purpose. My account got instabanned. Like, I couldn't proceed at all. I don't even know what the Claude UI looks like. All I could do was appeal using a Google Form.

I appealed and got a standard Google Forms response. There was no follow-up after that. It never got fixed and I never tried again... plenty of free, more accessible fish out there, and various agents like Copilot give me access to Sonnet anyway.

But now I wonder: what was it about the account that triggered this block? If it was because of the reputation of the account, how did Anthropic even know that this account was created a few weeks ago?


There are vendors like Emailage that somehow determine the age of email addresses. Very useful because fraudsters tend to buy credit cards and bank accounts, then need to complete the identity by registering an email address for that identity.

Historically, Outlook emails have been very easy to create for this, compared to Gmail addresses, which require phone numbers, etc.


One of the reasons "aged" account marketplaces got more popular. People buy from vendors that farm a ton of these accounts and wait to sell them, or those reselling compromised accounts (especially with EDU accounts before institutions actually implemented security controls).

Same here - though I used my personal email domain with claude as the local/username. They autobanned that one and then banned my actual personal email. The only one that worked was a Google login. My appeal had a boilerplate response.

Another anecdote, but I signed up for Claude using a brand spanking new iCloud private relay address that was created specifically for Claude and it let me in without any problems. We're talking 10 seconds or less from address creation to account creation.

It's worth checking the spam score if it's a new domain and seeing if there's anything on the Internet Archive. I learned this the hard way and now I check before buying.

How new was your email address? When I set up my work Claude account with my near fresh email (I had just set up Outlook to work with my domain) I had no issues.

IIRC it was about 3 weeks old. I hadn't used it for anything else in that time either, which probably also contributed to the lack of reputation.

I have never been able to use it because they require a phone number during registration and would always reject mine.

I don't think this is a requirement anymore

[flagged]


You made a throwaway account to say this?

Yes. Because it’s funny and I don’t have any other account.

I don't see the humour.

I can’t help you.

didn't ask

Technically (or, at least, historically), they should have used the indefinite pronoun "one" i.e. "...because their defense systems seem overly sensitive to one's email address". But I imagine that would've got more comments than using you/your.

No, using “your” in such a context is a manipulation tactic.

The way I would phrase it is: software engineering is the craft of delivering the right code at the right time, where "right code" means it can be trusted to do the "right thing."

A bit clunky, but I think that can be scaled from individual lines of code to features or entire systems, whatever you are responsible for delivering, and it encompasses all the processes that go into figuring out what code actually needs to be written and making sure it does what it's supposed to.

Trust and accountability are absolutely a critical aspect of software engineering and the code we deliver. Somehow that is missed in all the discussions around AI-based coding.

The whole phenomenon of AI "workslop" is not a problem with AI, it's a problem with lack of accountability. Ironically, blaming workslop on AI rather than organizational dysfunction is yet another instance of shirking accountability!


The folks at Stanford in this video have a somewhat similar dataset, and they account for "code churn" i.e. reworking AI output: https://www.youtube.com/watch?v=tbDDYKRFjhk -- I think they do so by tracking if the same lines of code are changed in subsequent commits. Maybe something to consider.
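For anyone curious what measuring churn might look like in practice, here's a rough sketch of one way to approximate it from git history. To be clear, this is just my guess at the general approach, not the Stanford team's actual methodology; the repo path and rework window are placeholder assumptions, and it ignores that line numbers shift between commits (a real analysis would track lines via blame or similar).

    # Rough sketch: count lines changed in a commit that get changed again
    # within the next WINDOW commits. Placeholder assumptions throughout;
    # merges and renames are ignored.
    import re
    import subprocess
    from collections import defaultdict

    WINDOW = 20   # subsequent commits that count as "rework"
    REPO = "."    # path to the repository under study

    def commits(repo):
        out = subprocess.run(["git", "-C", repo, "rev-list", "--reverse", "HEAD"],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def changed_lines(repo, sha):
        """Map each file touched by `sha` to the new-side line numbers it changed."""
        diff = subprocess.run(
            ["git", "-C", repo, "show", "--unified=0", "--format=", sha],
            capture_output=True, text=True, check=True).stdout
        touched, current = defaultdict(set), None
        for line in diff.splitlines():
            if line.startswith("+++ "):
                current = line[6:] if line.startswith("+++ b/") else None
            elif line.startswith("@@") and current:
                # Hunk header: @@ -old_start,old_count +new_start,new_count @@
                m = re.search(r"\+(\d+)(?:,(\d+))?", line)
                start, count = int(m.group(1)), int(m.group(2) or 1)
                touched[current].update(range(start, start + count))
        return touched

    history = [changed_lines(REPO, sha) for sha in commits(REPO)]

    churned = total = 0
    for i, changes in enumerate(history):
        for path, lines in changes.items():
            total += len(lines)
            later = set()
            for future in history[i + 1:i + 1 + WINDOW]:
                later |= future.get(path, set())
            churned += len(lines & later)

    print(f"{churned}/{total} changed lines were reworked within {WINDOW} commits")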


There was a huge discussion on this a few weeks ago, seems still far from settled: https://news.ycombinator.com/item?id=45503867

Personally I think "vibe-coding" has semantically shifted to mean any AI-assisted coding and we should just run with it. For the original meaning of vibe-coding, I suggest YOLO-Coding.


Everyone is reacting negatively to the focus on AI, but does Mozilla really have a choice? This is going to be a rehash of the same dynamic that has happened in all the browser wars: Leading browser introduces new feature, websites and extensions start using that feature, runner-up browsers have no choice but to introduce that feature or further lose marketshare.

Chrome and Edge have already integrated LLM capabilities natively, and webpages and extensions will soon start using them widely:

- https://developer.chrome.com/docs/ai/built-in

- https://blogs.windows.com/msedgedev/2025/05/19/introducing-t...

Soon you will have pages that are "Best viewed in Chrome / Edge" and eventually these APIs will be standardized. Only a small but passionate minority of users will run a non-AI browser. I don't think that's the niche Firefox wants to be in.

I agree that Mozilla should lead the charge on being THE privacy-focused browser, but they can also do so in the AI age. As an example, provide a sandbox and security features that prevent your prompts and any conversations with the AI from being exfiltrated for "analytics." Because you know that is coming.


Of course they have a choice. Just don't do it. Everything you said is a prediction of what may or may not happen in the future. The opposite could be true - the audience at large may get sick of AI tools being pushed on them and prefer the browser that doesn't. No one knows. But even if you are right, supporting a hypothetical API that extensions and websites may or may not use and pushing opt-out AI tooling in the browser itself are very different things.


Sure, these features may never catch on... but if they do, consider the risk to Firefox: an underdog with dwindling market share that is now years behind capabilities taken for granted in other browsers. On the other hand, if these features don't pan out, they could always be deprecated with little hit to marketshare.

Strategically I think Mozilla cannot take that risk, especially as it can get feature parity for relatively low cost by embracing open-source / open-weights models.

As an aside, a local on-device AI is greatly preferable from a privacy perspective, even though some harder tasks may need to be sent to hosted frontier models. I expect the industry to converge on a hybrid local/remote model, largely because it lets them offload inference to the users' device.

There's not much I could do about a hosted LLM, but at least for the local model it would be nice to have one from a company not reliant on monetizing my data.


> Everyone is reacting negatively to the focus on AI, but does Mozilla really have a choice?

Do these types of also-ran strategies actually work for a competitor the size of Mozilla? Is AI integration required for them to grow, or at least maintain, their market share?

My hunch is this will hurt Firefox more than help it. Even if I were to believe there was meaningful demand for these kinds of features in a browser, I doubt Mozilla is capable of competing with the likes of Google & Microsoft in any meaningful manner in the AI arena.


I think Mozilla can get pretty far with one of the smaller open source models. Alternatively, they could even just use the models that will inevitably come bundled with the underlying OS, although their challenge then would be providing a homogeneous experience across platforms.

I don't think Mozilla should get into the game of training their own models. If they did I'd bet it's just because they want to capitalize on the hype and try to get those crazy high AI valuations.

But at the rate at which even the smaller models are getting better, I think the only competitive advantage left for the big AI players will be in the hosted frontier models, which will be extremely jealously guarded and too big to run on-device anyway. The local, on-device models will likely converge to the same level of capabilities and would be comparable for any of the browsers.


I think you're right but there's also an opportunity to sell picks when everyone is digging for gold. Like AI-driven VS Code forks, you have AI companies releasing their own browsers left and right. I wonder if Mozilla could offer a sort of white-labeling and contracting service where they offer the engine and some customization services to whatever AI companies want their own in-house browsers. But continue to offer Firefox itself as the "dumb" (from an AI perspective) reference version. I'm not sure exactly what they could offer over just forking Chromium/Firefox without support but it would be a great way to have their cake and eat it too.


Of course they have a choice. Firefox started going downhill IMO because they kept copying Chrome. Vivaldi decided not to include AI until a good use case was found for it, and that announcement was met with a lot of positivity.


I think you're mixing up two separate concerns: functionality and standards. It seems to me that there could absolutely be a "dumb browser" that sticks to (and develops) web standards and is also relatively popular.


What is the use case with these? Even larger models skip details. Small models are terrible at summarizing and writing.


> I'm so surprised that I often find myself having to explain this to AI boosters but people have more value than computers.

That is true, but it does not survive contact with Capitalism. Let's zoom out and look at the larger picture of this simple scenario of "a creator creates art, another person enjoys art":

The creator probably spends hours or days painstakingly creating a work of art, consuming a certain amount of electricity, water, and other resources. The person enjoying that derives a certain amount of appreciation, say, N "enjoyment units". If payment is exchanged, it would reasonably be some function of N.

Now an AI pops up and, prompted by another human, produces a similar piece of art in minutes, consuming a teeny, teeny fraction of what the human creator would. This Nature study about text generation finds LLMs are 40 - 150x more efficient in terms of resource consumption, dropping to 4 - 16x for humans in India: https://www.nature.com/articles/s41598-024-76682-6 -- I would suspect the ratio is even higher for something as time-consuming as art. Note that the time taken by the human prompter is probably even less: just the time to imagine and type out the prompt and maybe refine it a bit.

So even if the other person derives only 0.1N "enjoyment" units out of AI art, in purely economic terms AI is a much, much better deal for everyone involved... including for the environment! And unfortunately, AI is getting so good that it may soon exceed N, so the argument that "humans can create something AI never could" will apply to an exceptionally small fraction of artists.
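To put toy numbers on that argument (these are illustrative assumptions on my part, not data from the study):

    # Back-of-the-envelope version of the argument above; all numbers are
    # illustrative assumptions, not measurements.
    human_cost = 100.0            # resources consumed by the human artist (arbitrary units)
    ai_cost = human_cost / 40     # low end of the ~40-150x efficiency range cited above
    human_enjoyment = 1.0         # "N" enjoyment units from the human-made piece
    ai_enjoyment = 0.1            # assume the AI piece delivers only 0.1N

    print(human_enjoyment / human_cost)  # 0.01 enjoyment per unit of resource
    print(ai_enjoyment / ai_cost)        # 0.04 -- ~4x better, even at the least
                                         # favorable end of the efficiency range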

There are many, many moral arguments that could be made against this scenario, but as has been shown time and again, the definition of Capitalism makes no mention of morality.


But it sounds like in this case the "morality" that capitalism doesn't account for is basically just someone saying "you should be forced to pay me to do something that you could otherwise get for 10x cheaper." It's basically cartel economics.


In isolation that makes sense, but consider that these AIs have been trained on a vast corpus of human creative output without compensating the human creators, and are now being used to undercut those same humans. As such there is some room for moral outrage that did not exist in prior technical revolutions.

Personally, I think training is "fair use", both legally and practically -- in my mind, training LLMs is analogous to what happens when humans learn from examples -- but I can see how those whose livelihood is being threatened can feel doubly wronged.

The other reason I'm juxtaposing Capitalism and morality is the disruption AI will likely cause to society. The scale at which this will displace jobs (basically, almost all knowledge work) and replace them with much higher-skilled jobs (basically, you need to be at the forefront of your field) could be rather drastic. Capitalism, which has already led to such extreme wealth inequality, is unsuited to solve for this, and as others have surmised, we probably need to explore new ways of operating society.


Heheh a bit tangential, but a long time ago, I had a similar thought: how much performance could we gain if we just compared hash values (typically integers) and avoided comparing actual keys -- and the pointer-chasing that entails -- as far as possible?

The problem is that for a regular hash table, eventually keys must be compared, because two keys could have the same hash value. So maybe we relegate key comparisons to only those cases where we encounter a collision.

The only case where this can work is when the set of keys that could ever be looked up is static. Otherwise we could always get a lookup for a new, non-existent key that creates a new collision and return the wrong value. Still, there are cases where this could be useful, e.g. looking up enum values, or a frozendict implementation. Something like minimal perfect hashing, but simpler.

So I came up with this silly project and benchmarked it in Java, C# and a hacky CPython "extension": https://github.com/kunalkandekar/Picadillo

A very micro optimization, but it turned out there was a 5% - 30% speedup on a PC of that era.
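If it helps, here's a minimal Python sketch of the core idea (not the actual Picadillo implementation, just an illustration): the key set is fixed at construction time and verified to be collision-free, so lookups only ever compare integer hashes.

    # Minimal sketch of the idea -- not the actual Picadillo implementation.
    # The key set is fixed at build time and checked to be collision-free,
    # so lookups compare only integer hashes and never touch the key objects.
    class HashOnlyDict:
        def __init__(self, items):
            self._table = {}
            for key, value in items.items():
                h = hash(key)
                if h in self._table:
                    # Two keys share a hash; the hash-only fast path is unsafe,
                    # so bail out (or fall back to a regular dict / reseed).
                    raise ValueError(f"hash collision on {key!r}")
                self._table[h] = value

        def __getitem__(self, key):
            # Only the integer hash is compared. This is correct only if
            # lookups are restricted to the original key set; an unknown key
            # could alias an existing hash and silently return a wrong value.
            return self._table[hash(key)]

    colors = HashOnlyDict({"RED": 0xFF0000, "GREEN": 0x00FF00, "BLUE": 0x0000FF})
    print(hex(colors["GREEN"]))  # 0xff00

The fast path obviously only holds if lookups stay within the original key set, which is exactly the "static keys" restriction mentioned above.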


This is a really fun project!

The main difference in SwissTable is that you only get a probabilistic signal of presence (kind of like a Bloom filter), but the core idea feels very similar.


While those patents are not enforceable in China (unless equivalents were also filed there -- unsure how much those would be worth), they would be enforceable once the products are imported into the US. This is one of the reasons the ITC exists, and it played a prominent role during the smartphone patent wars. So at least the US market would be protected from knock-offs.


The smartphone wars were fought among tech giants, not capital-intensive hardware startups. The problem with patents is that you need to already be financially successful enough to file them, able to pay to protect them in court, and able to float your company's operating costs long enough to see them enforced and rewarded, which may take years.


Yes and no -- filing patents is quite affordable (probably outdated info, but I recall the average cost for drafting and filing was ~$10K / patent, most of that being for the drafting rather than the filing.) Compared to all the other capital investments required for hardware startups, these costs are negligible.

But you're totally right that enforcing them is extremely expensive, slow and risky.

That said, Roomba isn't exactly a startup but wasn't a tech giant either, and did enforce their patents often.

And especially against imported infringing products, the ITC provides a much cheaper, faster mechanism to get protection via injunctions.


In theory, sure. In practice? Chinese companies ignore your patent, you waste money suing, it takes a long time.

If you win? Good luck collecting damages from China, and have fun suing the next brand that starts selling the same machine in different plastic


That's why the ITC is so relevant here: it is relatively speedy compared to regular patent trials and has the power to issue injunctions against imports (which is partly why it was relied on a lot during the smartphone patent wars.) So you may not collect damages from Chinese companies, but you can completely block their infringing imports into the US and deny them US revenue.


Why isn't Amazon liable?


$ -> Lobbyists. Legal firepower.

Or, said another way: unwillingness to enforce.

