"It’s a testament to their persistence that they’ve managed to keep this up for over 10 years, and I for one will be buying Denis/Masha/whoever a well-deserved cup of coffee."
Revealing publicly available information (actually publicly available, in the sense of "any person can easily look this up", not "publicly available" in the sense of "it appeared in a leaked database", which actual doxxers use as an excuse for their actions) isn't doxxing.
Doxxing has never been restricted to just leaked databases. I'd argue that any publishing of personal information in a context in which the individual clearly doesn't want to be identified counts.
The owner of the site is not identified anywhere on the site itself. And I think we can both agree that it's the sort of site whose owner would prefer to remain as anonymous as possible. The blog post digs up information about the owner from whois records, which do count as easily accessible public information, but then links to Kiwifarms of all places, and goes on to talk about identifying writing patterns and doing "detective work" involving cross-referencing profile pictures of accounts on various websites that were obviously not intentionally linked together by their owner. This is a textbook doxxing attempt.
I never would have read the article had archive.today not gone into a CAPTCHA loop on me; then I saw in developer tools that it was pinging this other site. Talk about the Streisand effect.
A regulatory duty to deal is the opposite of setting your own terms. Yes, citing a ToS is acceptable in this scenario. We could throw the ToS out if we all believed in a duty to deal.
Surprised to see this seemingly presented positively on HN.
Social media "feels" like it should be uniquely bad for children but the evidence is low-quality and contradictory. For example, high social media use is associated with anxiety and depression, but which direction does that relationship run? Meanwhile there are documented benefits especially for youth who are members of marginalized groups (e.g. LGBTQ). Don't get me wrong, I think there are a lot of problems with the big social media companies. I just think they affect adults too and that we should address them directly.
But setting that aside, the practical implications of age gate laws are terrible. The options are basically to have an LLM guess your age based on your face, or to upload sensitive identity documents to multiple sites and hope they are stored and processed securely and not reused for other purposes.
But OK let's assume social media is always bad for kids and also that someone invents a perfect age gate... kids are just going to find places to hang out online that are less moderated and less regulated and less safe. How is that not worse?
Put it this way: is it good for a child to spend an appreciable fraction of their day browsing social media? Did children previously just have free hours at hand to burn on this? The answer is of course no, there are not more hours in the day after the creation of social media, so its usage comes at the cost of something else in that child's life, usually their precious little downtime where they might plan and think about their own life. Or maybe at the cost of other activities that might be more engaging physically or mentally.
The difference is children back then actually did see their day expand as they were removed from the workforce, making comic book consumption "free" essentially in terms of what it might have replaced just a generation previous.
More so in the cities, not as much in rural areas, where tons of kids were still doing farm work alongside 10 brothers and sisters in the 40s-60s. Farm employment dropped quite a bit after that, but it wasn't until the late 70s that farms needed so little labor that most rural kids exited the agricultural sector entirely, aside from the direct sons and daughters of farmers.
It’s addictive like smoking. Addictive algorithms take away agency. I don’t think there are a lot of kids wishing they’d read fewer comic books or played less D&D (there may be some percentage wishing they’d played fewer video games). But it’s not like a classic generational divide where parents don’t understand it and teens are fighting for this stuff; a lot of teens are against it themselves!
Every generation seems to pick their moral panic and then engages in "unintentional concern trolling" over it. The people mean well, but low quality evidence shouldn't be good enough to condemn things.
Indeed. The question is, how good is the evidence?
Serious question, given it kinda feels like Meta's been acting like cigarette companies back in their heyday, while X is acting like it's the plot device of a James Bond villain.
I stopped engaging in such discussions. There are some people who are reasonable and make sense, but the rest are just outright batshit crazy. They want more restraints, more censorship, more anti-privacy crap? Or they equate "good" with addictive? Come on.
Every new generation of moral panic assumes that all the children would be using their time fantastically if it were not for the latest issue. Except we've gone through this like five times before, so if you did the math on that then probably no kids are using their time fantastically, and social media is probably taking away from time watching TV. Or maybe playing video games. Or, if you're a satan worshipper, Dungeons & Dragons.
I'd argue computer games are more mentally engaging than social media. Gaming is also completely different from opening the door to everyone from perverts to ad men to foreign influence agents into your mind.
> uploading sensitive identity documents to multiple sites and hope
Go to local liquor store. Present ID. Purchase $1 anonymous age verification card. Problem solved. (Card implementation left to reader.)
> kids are just going to find places to hang out online that are less moderated and less regulated and less safe. How is that not worse?
We used to have to visit a separate forum per community/topic/whatever. There was no realtime feed shoving posts in your face. No algorithm optimizing for engagement. How was that not better?
This is actually a great idea.
It is even compatible with having private companies run the system. The real issue is distribution (online code verification is trivial).
Tbf I believe that a fully government-owned anonymous system should be the goal. The government knows you already, so creating an anonymous proof-of-age token should also be somewhat trivial. Truth is, companies don’t want to forgo the potential profit in data mining, and governments don’t like the actual lack of control and full anonymity; otherwise we’d have this already, worldwide.
In theory I agree. In practice I have severe misgivings about directly incorporating government issued IDs into mundane online transactions.
I don't want "papers please" to be normalized. If the smart ID can do anonymous attestation of age then it can presumably also share various details with a requesting party. Next thing you know Facespace 365 is requiring you to provide your (attested) full legal name in order to register an account. I find that to be a highly objectionable outcome.
If things escalated beyond basic age checks that also adds hardware requirements. Would I find myself needing a smartcard reader to do anything online? The friction of needing to visit a bank in person seems like a feature to me.
What doesn't bother me is age restricted content guarded by a low fence. The bare minimum required to blunt the impact of something that appears to be analogous to an epidemic.
I may not have been following this topic closely, but I do like this suggestion. Truly anonymous age tokens should be a thing.
However, we're not going to get that because politicians would just say it is open to abuse. Anyone can go to a liquor store and supply alcohol to minors. The same would apply to anonymous age tokens. I don't know if it would be a big issue in practice, but it will in the minds of politicians.
Well, first of all, people shouldn't have to pay an extra tax to go online, no matter how small, because that $1 won't stay $1 for more than a year even if it were legislated to start that way.
Also if such identification cards are that easy to get, it is inevitable that the majority of kids are going to get access to them. I or somebody else could go across town to different stores, get 30 different ID cards, and then sell them for $15 a piece. And that is of course assuming foreign states and people don't break or circumvent the situation and sell ID codes online.
I wouldn't be too concerned about the price. Left to private business it should remain cost competitive. If monopolized by the government, legislation dictating that the price be set at cost plus a small percentage should be sufficient.
Sure, you could enter the black market. Presumably that would carry penalties similar to those for selling alcohol to minors. People certainly do that, but at least in my personal experience growing up in the US, it was substantially easier to come by narcotics in high school than alcohol.
Why would a foreign state bother to interfere with such a system? Violations are even less harmful than underage alcohol consumption. (Which is itself typically fairly benign. I will never understand why people in the US make such a big deal out of it.)
If you worry about every possible thing that can go wrong you will inevitably arrive at a surveillance state. Thus these eventualities need to balance the downsides imposed by any solution with the downsides of circumvention or other abuse. In this case all that's required is a very minor but legally enforced speedbump to force the hand of website operators and nudge cultural norms towards a healthier place.
Why would foreign bodies interfere? Because it is easy money. Unless you limit the number of, and therefore access to, ID verification proofs it will inevitably be sold on the open market to any and all people. And when the problem comes up, will the state admit failure and cancel the program? Unlikely. They will more likely spend far more effort on personally identifying each person online "for the children" which just puts us back at the same problem again, except minus the cost of this whole ID verification scheme that shouldn't exist to start with.
The government issuing everyone a smart ID that lets them attest anonymously to being of legal age would be better.
But there are age gate laws today, and calls to pass more of them. A hypothetical better way in the future shouldn’t excuse legally mandating a poor implementation today.
We could distribute scratch off cards to stores within a few months. It's incredibly low tech. I can't speak to elsewhere in the world but most (possibly all?) US states run lotteries.
If a given government body can't manage to stand up a web API to validate one time use codes within a few months then they clearly don't have the technical knowhow to manage smart IDs in a secure manner.
My point being that this either doesn't qualify as hypothetical, or if it does, then it indicates gross incompetence to an extent that precludes more complex solutions as a matter of course.
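The scratch-card scheme really is low-tech on the server side too. As a rough sketch (everything here is hypothetical, not any real system), the validation service needs little more than a store of hashed codes with consume-once semantics:

```python
import hashlib
import secrets

class CodeStore:
    """Hypothetical server side of a scratch-card scheme: each code
    printed on a card can be redeemed exactly once. Only unsalted
    SHA-256 hashes are stored, so a database leak doesn't reveal
    unused codes."""

    def __init__(self):
        self._unused = set()  # hashes of codes not yet redeemed

    def issue(self) -> str:
        """Mint a new code for printing on a card."""
        code = secrets.token_hex(8)
        self._unused.add(hashlib.sha256(code.encode()).hexdigest())
        return code

    def redeem(self, code: str) -> bool:
        """Validate a code; a second redemption of the same code fails."""
        h = hashlib.sha256(code.encode()).hexdigest()
        if h in self._unused:
            self._unused.remove(h)
            return True
        return False
```

Wrapping `redeem` in a one-endpoint web API is the trivial part; distribution of the physical cards is, as noted above, the real problem.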
A proposed solution that does not actually exist and is not proven to work is necessarily a hypothetical one. I'm not ruling out various state governments also being incompetent though.
Social media being bad for mental health in childhood is one of the most robust theories I've ever seen for these kinds of society-wide problems. You can peruse the After Babel Substack for the evidence if you're not convinced, but Jonathan Haidt has consistently done incredible work here.
All due respect, I do not think the substack of one of the world's leading proponents of the theory that screen time is harmful is a good source for evidence that runs contrary to that narrative.
Here's Nature reviewing his book:
> Hundreds of researchers, myself included, have searched for the kind of large effects suggested by Haidt. Our efforts have produced a mix of no, small and mixed associations. Most data are correlative. When associations over time are found, they suggest not that social-media use predicts or causes depression, but that young people who already have mental-health problems use such platforms more often or in different ways from their healthy peers
> These are not just our data or my opinion. Several meta-analyses and systematic reviews converge on the same message. An analysis done in 72 countries shows no consistent or measurable associations between well-being and the roll-out of social media globally. Moreover, findings from the Adolescent Brain Cognitive Development study, the largest long-term study of adolescent brain development in the United States, has found no evidence of drastic changes associated with digital-technology use. Haidt, a social psychologist at New York University, is a gifted storyteller, but his tale is currently one searching for evidence.
I actually do think that Dr. Haidt is a good source for getting a fair understanding of both sides of the issue. If you've read or listened to him you'll know that it's a huge part of his ethos.
I’m not sure highlighting studies that seem to agree with his thesis is a particularly strong defense against the charge that the totality of the evidence is mixed and inconclusive. He’s a good writer though.
Why did one study in Spain find an association with the rollout of high speed internet, but a much larger international study specifically looking at Facebook usage did not? Seems like that one should even more directly measure what’s alleged to be occurring.
Even the author of your link says "considerable reforms to these platforms are required, given how much time young people spend on them" whilst stopping short of a ban. The problem is these "considerable reforms" will always be half-arsed.
There are a lot of problems with the way these platforms treat adults too. I think an age gate is the wrong solution and in many ways it doesn't go far enough.
Evidence is often contradictory, especially in the social sciences--that is not a terribly damning charge in this case. Additionally, there is evidence that the relationship between social media use and anxiety/depression is not just an association; see Meta's own internal research from 2019: https://metasinternalresearch.org/#block-2e15def2e67a803a83e....
"Meta’s own researchers found — in an experiment they believed was better designed than any external study done thus far — that reducing time on their platforms improved mental health and well-being, specifically depression, anxiety, loneliness, and social comparison."
> evidence is low-quality and contradictory. For example, high social media use is associated with anxiety and depression, but which direction does that relationship run?
The evidence from device bans is pretty damn compelling.
I am less familiar with the social-media literature. But I believe we have decent efforts at disentangling causation, and to my knowledge all research not coming out of Meta and TikTok points one way.
> kids are just going to find places to hang out online that are less moderated and less regulated and less safe
If they do this isn’t great policy. If they don’t, it is. Let’s let this natural experiment play out.
People likely need a fairly large shared set of beliefs to operate without constant friction. Hence national identities. Either let people freely associate into these communities or force algorithms to be "shared" in a sense between couples or families.
I think couples' X could be interesting. But I'd prefer free association (possibly VR?)
> kids are just going to find places to hang out online that are less moderated and less regulated and less safe. How is that not worse?
Some will. But I bet a lot of kids "have to be" on Instagram/TikTok/etc because everyone else is. I don't think they're all gonna flock to 4chan because they got locked out of the big platforms.
I'd argue even the darkest corners of 4chan aren't as bad as the average daily dose of brain rot delivered to hundreds of millions of people through infinite-scroll algorithms on TikTok & co. And once you remove the sickening parts of 4chan, I'd say it's overall a much more pleasant place than most other social media sites; it's one of the last mainstream websites that still somewhat feels like the golden age of the internet.
>I'd argue even the darkest corners of 4chan aren't as bad as the average daily dose of brain rot delivered to hundreds of millions of people through infinite-scroll algorithms on TikTok
Then I'd argue you haven't actually been to the darkest corners of 4chan.
It isn't so black and white as people paint it to be. /g/ is probably the best place on the internet today to discuss technology even with occasional dumb jokes. The crassness of the site and reflexive reaction from you and others has turned out to be a great wall to prevent the corporate enshittification that affected the rest of the internet.
>Meanwhile there are documented benefits especially for youth who are members of marginalized groups (e.g. LGBTQ).
This is thinly veiled propaganda that the likes of Zuckerberg quote all the time, but it is misattributed. Those marginalized groups of people found benefits in finding like-minded people online, mostly through forums etc. (side point: the same benefit exists for marginalized groups such as white supremacists)
But that's a social NETWORK, not social MEDIA. Almost every benefit that people cite in defense of social media is simply a social NETWORK benefit. The only advantage social MEDIA has is personalized ads, for people who like that. Everything else you get by reimplementing old, boring social networks without "the algorithm".
> kids are just going to find places to hang out online that are less moderated and less regulated and less safe. How is that not worse?
I actually disagree with you. This was the internet when I was a kid, and part of the point was you had more agency. This may seem counter-intuitive, but I might prefer my kid hang out on 4chan than TikTok all day long, because at least the former feels like they’re making an intentional choice, and there’s not a multi-billion-dollar algorithm getting them addicted.
This is part of the point. Kids need more unregulated spaces. Your Youth™ brought to you by Mark Zuckerberg is dystopian.
> But OK let's assume social media is always bad for kids and also that someone invents a perfect age gate... kids are just going to find places to hang out online that are less moderated and less regulated and less safe.
Straw man argument, much? Might as well argue "We can't make any changes, ever, just in case something else happens!"
We'll address the next issue when/if it happens, same as always.
I’ve seen children be groomed to produce child pornography on Snapchat. The offender wouldn’t have access to these random children if he couldn’t simply look them up. And more importantly, his access is anonymous, so it’s much harder to stop.
> But setting that aside, the practical implications of age gate laws are terrible. The options are basically to have an LLM guess your age based on your face, or uploading sensitive identity documents to multiple sites and hope they are stored and processed securely and not reused for other purposes.
Those aren't the only options. See the comments on almost any of the many other discussions of age verification on HN for details of ways to do it that do not involve giving any sensitive information to sites (other than what you're explicitly trying to give them, like your age being above their threshold) and do not involve guessing your age via LLM or any other means.
They kind of are the only options. All of these issues are sitting on a slippery slope. If you accept a technical solution that works well, then eventually somebody is going to push that further.
If you need to use your ID to log into a website (even if the website doesn't get any of your information) then society is only a step away from the government monitoring everything you do online. And at that point it's up to them to decide whether they want to do it or not, because you're already used to the process. If they decide to violate your privacy there's nothing you can do about it other than vaguely point at privacy laws before promptly getting ignored.
I’m starting to think that those alternatives are deliberately ignored by the anti-verification crowd. It’s hard to explain otherwise why the most logical way to solve the problem is not in the spotlight.
No, I just don't want them. I don't want to constantly prove myself online. Screw that. If parents don't want kids to have social media then they have plenty of tools available to do that, including just not giving them a smartphone.
We should fix the actual problem (engagement-driven social media), which causes polarization among adults too. This is just window dressing and gives more personal data to governments and advertisers.
It is the problem: platforms that are not driving engagement (like here) are faring much better.
And no I really don't want it. Give parents the tools to manage better and make sure the worst toxic traits of social media are banned (the EU could do this under the DSA/DMA) and there is no need to ban it for all minors then.
There's a lot of age-restricted content on the Internet and plenty of use cases where digital ID improves security/UX. When you say "I don't want it" instead of talking about your specific concerns and how they could be addressed, that is the nonconstructive position I'm talking about.
I think the way this thread went is a perfect illustration of why people just "don't want it" - you started off talking about alternatives that were anonymous, and now you're talking about digital ID for "security" or even UX. Because that's how things have gone in practice.
Privacy-respecting, free, even technically superior options are regularly overlooked in favor of invasive and locked down options. See how often phone 2FA is forced "for security" when generic TOTP authenticators or passkeys could do better. Also see how specifications around passkeys are being set up to eventually squeeze out free implementations in favor of Google and Apple.
There are actual implementations that do not compromise privacy and anonymity. For example the EU is currently doing large scale field tests in several countries of such a system.
It involves your government issuing you a signed digital copy of your ID documents which gets cryptographically bound to the security hardware in your smart phone (support for other hardware security devices is planned for later).
To verify your age to a site your phone and the site use a protocol based on zero-knowledge proofs to demonstrate to the site that your phone has a bound ID document signed by your government that says your age is above the site's threshold, without disclosing anything else from your ID document to the site.
This demonstration requires the use of a key that was generated in the security hardware when the ID was bound, which shows that the site is talking to your phone and that the security hardware is unlocked, which is sufficient evidence that you have authorized this verification to satisfy the law.
Note that your government is not involved beyond the initial installation of the bound ID document on the phone. They get no information on what sites you later age verify for or when you do any age verifications.
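The flow described in this comment can be made concrete, though only as a toy: the real EU system uses zero-knowledge proofs and keys bound to hardware security modules, whereas this sketch substitutes plain HMACs and lets the verifier re-derive the credential (something the real protocol specifically avoids, since it would involve the issuer). All names here are illustrative, not from any actual specification.

```python
import hashlib
import hmac
import secrets

# Toy stand-in for the issuer's (government's) signing key.
GOV_KEY = secrets.token_bytes(32)

def issue_credential(device_key: bytes, over_18: bool) -> bytes:
    """Issuer binds an 'over 18' attestation to a device-held key."""
    msg = device_key + (b"over18" if over_18 else b"under18")
    return hmac.new(GOV_KEY, msg, hashlib.sha256).digest()

def prove_age(device_key: bytes, credential: bytes, nonce: bytes) -> bytes:
    """Device answers a site's fresh challenge using its credential,
    so a proof can't be replayed for a different session."""
    return hmac.new(credential, nonce + device_key, hashlib.sha256).digest()

def site_verify(device_key: bytes, nonce: bytes, proof: bytes) -> bool:
    """Site checks that the proof could only come from a device holding
    a valid 'over 18' credential. (In this toy the credential is
    re-derived for the check; a real ZK protocol never reveals it.)"""
    cred = issue_credential(device_key, over_18=True)
    expected = hmac.new(cred, nonce + device_key, hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)
```

The property the sketch illustrates is the shape of the exchange: challenge, hardware-bound response, yes/no answer, and nothing else about the ID document crossing the wire.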
Ok, a field test. Vs Australia's actual full-scale implementation, and the subsequent implementations by social media companies.
You can't honestly expect people to ignore the actual real-world implementation, right? It's not disingenuous to discuss what's actually been inflicted upon a full populace in favour of a test.
Not to forget that the UK was making lists of those it was providing digital licenses to. And that the UK has a history of leaking data like a sieve. The government making a list of known digital ID users can be coloured the same way.
Not to mention that not everyone will end up with a supported cryptographic device, will they? Are we expecting this to run on Linux without TPM 2.0? Lots of recent Linux migrants are there to avoid the TPM 2.0 requirement. You keep mentioning hardware security, so I suspect it's not going to be as easy as loading a certificate. Or even whether extra methods for edge cases will be supported at all.
But it's all still hypothetical anyway. We have an actual implementation to dissect. One that the Australian government is actively trying to sell to other countries.
What I'd hope people would do, when a country like Australia is working out some system of mandatory age verification, is point to the EU system or something similar and say: if you do go through with this, how about waiting a year until that is released and then requiring that, instead of some system that doesn't preserve privacy and anonymity?
They could point out that the EU system has been in development for years, with numerous expert reviews, all in the open with reference implementations of the protocols and apps for iOS and Android all on Github under open source licenses.
They could also point out it has been tested extensively in a series of field trials involving a large variety of sites and a large number of users, with the last two field trials scheduled to finish this year.
By simply waiting and making that the system they use they get a much more secure and privacy preserving system than what they would get otherwise, with others having already done the hard cryptographic parts and figured out usability issues and developed the apps. That's way better than going with some system that nobody was thinking about until they started working on legislation.
They could also point out that the sites they want to require age verification on will almost certainly be supporting the EU system when it comes out. That's because the EU is requiring that member states that implement age verification laws require that sites accept this system. A state can allow or require accepting other systems, but this one will be the one that works everywhere.
Countries that wait for the EU system and use it will then have an easier time getting companies to implement age verification in their country since those companies can simply use the same software they will be using in the EU.
As far as having a suitable device goes, in the EU somewhere in the 95-98% range of non-elderly adults have a suitable smart phone. It's higher the younger people are and is going up. Same in the US. In Australia it is around 97% of adults.
The EU is planning on later adding support for stand-alone hardware security devices which should cover those without a smart phone.
As far as government leaking lists of who has a digital ID, that's likely to be a list of most adult phone users. The overall system is not just a privacy and anonymity preserving age verification system. It's a digital wallet for storing a digital version of your physical ID card.
People will likely use it in most places they use their physical ID cards. People tend to love being able to use their phones in place of physical cards (all cards, not just ID cards), and will be getting it even if they never intend to use any sites that require age verification.
A leak that says "tzs has a digital ID on his phone" (if my country were to adopt such a system) would be about as concerning as a leak that says "tzs has his auto insurance card on his phone" or "tzs has a credit card on his phone". (This is also why car companies that let you install a digital key fob on your phone often make that a feature only on higher-end trims, even though it requires the exact same hardware as the lower trims. Enough people like the idea of not having to carry around the key fob that they will go up a trim level to get it.)
If people can't get their government to delay until such a system is available they should be trying to get the law to include a provision that when such a system is available the government will support it and sites will have to accept it. That way they eventually get a privacy preserving option. That's a more likely way to work to get eventual privacy than trying to pass separate legislation later to add it.
I think in the end European digital ID is going to be impressive enough for others to pick up (not as impressive as using biometric data for payments in China - we don't want to go that route, but still). The integration friction will go down, the security will be tested and verification process hardened over time etc. So it doesn't really matter what opposition says at the moment and whether they will be able to tank the legislation in UK, USA or Australia. There will always be the success story in front of them, that will be hard to ignore. And literally any global website is going to support EU technology, making it major vector of innovation.
Sure, though the government routinely searches the personal property of innocent people if they think that search will yield information about a suspect.
The issue here is the American tradition of a free press and the legitimate role of leaks in a free country. The PBS article is a bit better on context:
> The Justice Department over the years has developed, and revised, internal guidelines governing how it will respond to news media leaks.
> In April, Attorney General Pam Bondi issued new guidelines saying prosecutors would again have the authority to use subpoenas, court orders and search warrants to hunt for government officials who make "unauthorized disclosures" to journalists.
> The moves rescinded a Biden administration policy that protected journalists from having their phone records secretly seized during leak investigations — a practice long decried by news organizations and press freedom groups.
> the government routinely searches the personal property of innocent people if they think that search will yield information about a suspect.
If that's true, it's a direct violation of the fourth amendment. I'll paste it here for convenience:
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
Well, "routinely" should have been interpreted to mean "routinely, after showing probable cause and obtaining a warrant". Law enforcement obtains warrants for that routinely, that is, it's not an exceptional case for them to do so.
That includes an explicit carve-out for reasonable searches. And given "innocent until proven guilty" any search is technically targeting innocent people in hopes of yielding information about a suspect. Sometimes that's a reasonable thing to do.
I don't think those are all equivalent. It's not plausible to have an antivirus that protects against unknown viruses. It's necessarily reactive.
But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.
Or like how FIDO2 and passkeys make it so we don't really have to worry about users typing their password into a lookalike page on a phishing domain.
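The firewalled-sandbox idea above boils down to an egress allowlist: the tool inside the sandbox can talk to the official API and nothing else. A minimal sketch of the check such a filter would apply (the hostname is an assumption for illustration, not a claim about Anthropic's actual endpoints):

```python
from urllib.parse import urlparse

# Illustrative allowlist: only the official API host may be reached
# from inside the sandbox, and only over HTTPS.
ALLOWED_HOSTS = {"api.anthropic.com"}

def egress_allowed(url: str) -> bool:
    """Return True only for HTTPS requests to an allowlisted host."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS
```

In practice this check would live in a proxy or network policy rather than application code, so the sandboxed tool can't simply skip it.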
> But you could totally have a tool that lets you use Claude to interrogate and organize local documents but inside a firewalled sandbox that is only able to connect to the official API.
Any such document or folder structure, if its name or contents were under control of a third party, could still inject external instructions into sandboxed Claude - for example, to force renaming/reordering files in a way that will propagate the injection to the instance outside of the sandbox, which will be looking at the folder structure later.
You cannot secure against this completely, because the very same "vulnerability" is also a feature fundamental to the task - there's no way to distinguish between a file starting a chained prompt injection to e.g. maliciously exfiltrate sensitive information from documents by surfacing them + instructions in file names, vs. a file suggesting correct organization of data in the folder, which involves renaming files based on information they contain.
You can't have the useful feature without the potential vulnerability. Such is the case with most things where LLMs are most useful. We need to recognize the problem and then design around it, because there's no way to fully secure it other than giving up on the feature entirely.
Unless you've authored every single file in question yourself, their content is, by definition, controlled by a third party, albeit with some temporal separation. I'd argue this is the typical case: in any given situation, almost all interesting files for almost any user came from someone else.
Well, yeah, Apple's Maps.app wasn't good enough when it launched (it's solid now though). That feels like a separate thing from white labeling and lock-in. Obviously they would have to switch to something of similar or better quality or users will be upset.
But it's a whole lot easier to switch from Gemini to Claude, or from Gemini to a hypothetical good proprietary LLM, if it's white-label instead of "iOS with Gemini".
I prefer Apple Maps for turn-by-turn navigation and public transit. However, I still keep Google Maps around for business data and points of interest. This is where Apple Maps is still lacking significantly. The fact that Apple still prompts me to download Yelp to view images of a business is insane to me.
Depends on where you are. In my experience here in Sweden Google Maps is still better; Apple Maps sent us in a loop in Stockholm (literally {{{(>_<)}}} )
Doesn't seem like it has much to do with LLMs at all, just typical vendor lock-in nonsense like how Apple's own apps get entitlements not available to any other developer.
They've recently switched to opt-out instead. And even then, if you read the legalese, it only says they won't "train frontier models". That would (probably) still allow them to train a reward model or otherwise test/validate on your data and signals without breaking the agreement. There's a lot of signal in how you use something (e.g. accepted vs. rejected rate) that they can use without strictly including it in the dataset for training their LLMs.
New users now have to opt out of training on their data - it is enabled by default. For existing users, they updated their terms during the transition and notified you of the policy change, giving you the option to opt in or opt out; opt-in was the default selection. Just today they AGAIN updated the terms, presenting a click-through form on first load that looks like a permissions check (e.g. the standard dialog to grant file-system access that we're conditioned to click through). It was actually a terms-of-service update with opt-in selected by default, even if you had already explicitly opted out. So if you hit Enter to dismiss it as you're used to doing, you just switched your account over to opt-in.
I used to be less cynical, but I could see them not honoring that, legal or not. The real answer, regardless of how you feel about that conversation, is that Claude Code, not any model, is the product.
I couldn't. Aside from violating laws in various countries and opening them up to lawsuits, it would be extremely bad for their enterprise business if they were caught stealing user data.
Maybe. But the data is there, imagine financial troubles, someone buys in and uses the data for whatever they want. Much like 23andme. If you want something to stay a secret, you don't send it to that LLM, or you use a zero-retention contract.
They don't need to use your data for an external-facing product to get utility from it. Their ToS explicitly states that they don't train generative models on user data. That does not cover reward models, judges, or other internal tooling that otherwise allows them to improve.
You don't have to imagine; you can see it happening all the time. Even huge corps like FB have already been fined for ignoring user-consent laws around data tracking, and thousands of smaller ones are obviously ignoring the GDPR's explicit opt-in requirements, at the very least.