Hacker News | Terr_'s comments

That's a little bit apples to oranges, because I'm not monetizing this content, or paying to host it, or trying to make a personal brand, etc.

I'd rather have a system where there's a small investment cost to making an account, but you could always make another.

Imagine a system where there's a vending machine outside City Hall: you spend $X on a charity of your choice, and you get a one-time, anonymous token. You can "spend" it with a forum to indicate "this is probably a person, or close enough to it."

Misuse of the system could be curbed by making it so that the status of a token cannot be tested non-destructively.
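A toy sketch of the destructive-check idea, assuming a trusted issuer, with payment handled out of band (all names invented): the only query a forum can make is redemption itself, so probing a token's status burns it.

```python
import secrets

class TokenIssuer:
    """Hypothetical "vending machine": sells anonymous one-time tokens.
    There is deliberately no non-destructive "is this valid?" check."""

    def __init__(self):
        self._unspent = set()

    def issue(self):
        """Hand out one anonymous token (charity payment handled elsewhere)."""
        token = secrets.token_urlsafe(32)
        self._unspent.add(token)
        return token

    def redeem(self, token):
        """Destructive check: a valid token is consumed on first use."""
        if token in self._unspent:
            self._unspent.remove(token)
            return True
        return False
```

Because `redeem` is the only interface, anyone who "tests" a token they bought or stole destroys its value in the process.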


What does it matter? If there is enough incentive, people will just pay and let their bot act on their behalf.

Something Awful made you pay $10 for an account, directly to the forum. If you got banned you could pay another $10 to try again. Somehow this didn't create bad incentives, even though you'd think it would.

Ban reason and the moderator name were public on Something Awful, which allowed the community to respond (actively or passively), and for more senior moderators/admins to take public action against rogue moderators. The transparent audit trail countered the incentive to ban somewhat, but a lot of people also treated getting banned as a game.

Did they ban for this rule often?

"Am I making a post which is either funny, informative, or interesting on any level?"

I hate how Reddit mods ban any post they don't like as 'low effort / shit / spam' when the standard is completely vague.


Lemmy is even worse on the moderation front, even with public logs: https://a.imagem.app/G3R9xb.png

Lemmy isn't simply Lemmy since it's federated. A screenshot like this is somewhat meaningless without specifying on which instance this happened. There are instances with very lax or even no moderation at all.

For the majority of large, well-federated instances, I don't think it's meaningless, because deletions also propagate to other instances.

If a mod on one server doesn't like something I say, and they delete my comment, all the other (well-behaved) federated instances will also delete my comment.

Of course this also creates problems in the other direction, like servers that ignore deletion requests.

Combine that with the large number of blocked instances across the board, and you get into this "which direction would you like to piss into the wind" situation, where you have no idea how many people/instances will actually see your message, if any at all.
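For context, deletions in the Lemmy world federate over ActivityPub: the originating instance sends a Delete activity to its peers, which are supposed to replace their local copy with a Tombstone. A rough sketch of the shape (the IDs are invented), following the ActivityStreams vocabulary:

```python
# Rough shape of the ActivityPub "Delete" activity an instance sends to
# its federation peers when a mod removes a comment. Well-behaved peers
# tombstone their local copy; peers that ignore it simply don't.
delete_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Delete",
    "actor": "https://instance-a.test/u/moderator",
    "object": {
        "type": "Tombstone",
        "id": "https://instance-a.test/comment/12345",
    },
}

def apply_delete(store, activity):
    """A peer that honours the request replaces its copy with a Tombstone."""
    obj = activity["object"]
    if activity["type"] == "Delete" and obj["type"] == "Tombstone":
        store[obj["id"]] = {"type": "Tombstone"}
    return store
```

Nothing forces a peer to call the equivalent of `apply_delete`, which is exactly the "servers that ignore deletion requests" problem.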


I’d love something like this implemented for email.

Sending an unsolicited email to a random person X requires you to pay a small toll (something like 50p).

Subsequent emails can then be sent for free - however person X can “revoke” your access any time necessitating a further toll payment.

You would of course be able to pre-authorise friends/family/transactional emails from various services that you’ve signed up for.

This would nuke spam economics and be minimally disruptive for other use cases of email IMO…
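A minimal sketch of the toll logic described above (the 50p figure comes from the comment; all names are illustrative): strangers pay once, then send free until the recipient revokes them, which forces a fresh toll.

```python
TOLL_PENCE = 50  # illustrative toll from the comment above

class TollInbox:
    """Toy model of a toll-gated inbox."""

    def __init__(self):
        self.authorised = set()   # pre-authorised friends/family/services
        self.delivered = []

    def pre_authorise(self, sender):
        self.authorised.add(sender)

    def revoke(self, sender):
        """Kick a sender back to paying the toll."""
        self.authorised.discard(sender)

    def receive(self, sender, message, toll_paid=0):
        if sender in self.authorised:
            self.delivered.append((sender, message))
            return True
        if toll_paid >= TOLL_PENCE:
            self.authorised.add(sender)  # the toll buys ongoing access
            self.delivered.append((sender, message))
            return True
        return False  # bounced: not authorised and no toll attached
```

The economics live in `receive`: a spammer's first contact with every address costs real money, while legitimate correspondents pay at most once.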


>transactional emails from various services that you’ve signed up for

These are among the main culprits of unwanted emails... and a toll system would make them all the more valuable for even worse actors to take advantage of.


When Digg restarted, you had to pay $5 to create an account.

Do you think there is a price point that locks out spammers without locking out poor people?

Probably not. The problem is that spammers/scammers are looking for whales, and if you are talking about draining the retirement account of an American who's been saving all their life, that's quite a big payout, in the six or seven figures.

In the case of the 419 scams I used to ask, "who would expect $20M to fall out of the sky?" The obvious answer is "someone who already had $20M fall out of the sky."

> The real answer to the problem is for websites/appstores to publish tags that are legally binding assertions of age appropriateness, and then browsers/systems can be configured to use those tags to only show appropriate content to their intended user.

Agreed. Recycling a comment of mine on reasons for it to be that way:

___________

1. Most of the dollar costs of making it all happen will be paid by the people who actually need/use the feature.

2. No toxic Orwellian panopticon.

3. Key enforcement falls into a realm non-technical parents can actually observe and act upon: What device is little Timmy holding?

4. Every site in the world will not need a monthly update to handle Elbonia's rite of manhood on the 17th lunar year to make it permitted to see bare ankles. Instead, parents of that region/religion can download their own damn plugin.
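The tag scheme could be sketched like this: sites publish machine-readable ratings, and the parent-configured device decides what to render (the tag names, URLs, and policy fields here are all invented for illustration):

```python
# Hypothetical site-published age tags; a real scheme would fetch these
# from the site itself (e.g. response headers or a well-known file).
SITE_TAGS = {
    "https://example-news.test":  {"min_age": 0},
    "https://example-forum.test": {"min_age": 0, "social": True},
    "https://example-adult.test": {"min_age": 18},
}

def allowed(url, device_policy):
    """Enforcement happens on the device, per the parent's settings."""
    tags = SITE_TAGS.get(url, {"min_age": 18})  # unrated: assume adult
    if tags.get("min_age", 0) > device_policy["age"]:
        return False
    if tags.get("social") and not device_policy.get("allow_social", True):
        return False  # e.g. the "no social media on the phone" rule
    return True

# Different policies on different devices for the same child:
timmy_phone   = {"age": 10, "allow_social": False}
timmy_desktop = {"age": 10, "allow_social": True}
```

Note that the Elbonia-style regional rules from point 4 would live entirely in the device policy (a parent-installed plugin), not in every website's code.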


Good list of more reasons! I focused on what I consider the two most important.

To expand on your #3, it also gives parents a way to have different policies on different devices for the same child. Perhaps absolutely no social media on their phone (which is always drawing them in, and can be used in private when they're supposed to be doing something else), but allowing it on a desktop computer in an observable area (i.e. accountability).

The way the proposed legislation is written, once companies have cleared the hurdle of what the law requires, parents are left at the mercy of whatever the companies deem appropriate for their kids. Which isn't terribly surprising for regulatory-capture legislation! But since it's branded as protecting kids and helping parents, we need to be shouting about all the ways it actually undermines those goals.


Oh look, the Heritage Foundation, the ones who wrote up the "Project 2025" agenda for most of the corruption and authoritarianism that has plagued America in the last year.

The very last people you should trust when it comes to "protecting the children."


To me it feels that the age verification (adult de-anonymisation) push, at least in Europe, is coming more from the increasingly-authoritarian left as a reaction to the rise of the online right and Musk's Twitter.

(Maybe some unspoken element of concern over social media bots, too, as they evolve from spamming copy+pasted comments to being near-indistinguishable from actual human accounts?)


If you look at the people pushing these bills it's the anti-trans and anti-porn activists. Not the left.

In the UK we have many people on the left with these perspectives. It comes from the second-wave feminist tradition.

But generally speaking, online age verification is one of those issues where the left-right ideological divide doesn't map neatly. People support and oppose it for various different reasons. Much like the assisted suicide issue.


This issue looks partisan from the outset, but both sides push the same thing. They just use partisan justifications.

Age verification efforts in the US have been privacy-attacking (demanding government ID), whereas the system being proposed in Europe is privacy-preserving (zero-knowledge proof).

In Europe though? You have those?

It would be interesting to see a similar lobbying breakdown for the EU and UK. I bet it's still Meta with other right wing actors. The left rarely has the money for this kind of lobbying scale

Heritage has been laying waste to America my whole life. They basically planned all of Reagan's legislative agenda, too, just like Project 2025 is doing today. In very real ways, they and their vision are America (a system is what it does, not what it says it does).

"Hearsay" might even be the charitable interpretation.

In contrast, I can easily imagine whisper-campaigns designed to create a situation where Trump eventually gets asked about $THING on camera and makes a knee-jerk decision to affirm it because he thinks it'll make him look good in the moment.


I fear a future where we are all Archibald Buttle [0], at the whim of implacable systems that will eventually ruin our lives with no explanation or appeal, because nobody really cares about doing the right thing.

I suspect there's some overlap to how prior generations had internalized anxieties about nuclear armageddon.

[0] https://en.wikipedia.org/wiki/Brazil_(1985_film)


> It's a perfectly good company [...] I wonder why it's just not criminalized somehow.

Not-an-expert here, but I think part of the problem is that it's hard to draw a nice legally-enforceable line that would distinguish when it's a "perfectly good" company versus one crying out for intervention.

For example, suppose a company is floundering because of executive mismanagement, outrageous compensation to the C-suite, etc. In that case, someone could LBO in, fix things up, and then sell the revitalized thing later and make a modest profit while improving the world.

It's... less likely, but they could.


I imagine treating it all as untrusted means that you don't allow any direct content to enter the LLM-space, only something that's been filtered to an acceptable degree by deterministic code.

For example, the content of an article would be a no-go, since it might contain a "disregard all previous instructions and do evil" paragraph. However, you might run it through a system that picks the top 10 keywords and presents them in semi-randomized order...
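A minimal sketch of that keyword filter, assuming a simple frequency-based extraction (the stopword list is a toy): only a shuffled bag of words ever reaches the LLM, so no instruction-like word sequence survives.

```python
import random
import re
from collections import Counter

# Toy stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "it", "as", "all", "do"}

def keyword_summary(untrusted_text, k=10, seed=None):
    """Deterministically extract the top-k keywords from untrusted text,
    then shuffle their order so no phrase structure is preserved."""
    words = re.findall(r"[a-z']+", untrusted_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    top = [w for w, _ in counts.most_common(k)]
    random.Random(seed).shuffle(top)
    return top
```

An embedded "disregard all previous instructions..." paragraph comes out as, at worst, a few scattered single words, stripped of any imperative structure.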

I dimly recall some novel where spaceships are blockading rogue AI on Jupiter, and the human crew are all using deliberately low-resolution sensors and displays, with random noise added by design, because throwing away signal and adding noise is the best way to prevent being mind-hacked by deviously subtle patterns that require more bits/bandwidth to work.


More insult+injury:

> But Lipps said Fargo police did not pay for her trip home, leaving her stranded. Local defense attorneys helped cover a hotel room and food on Christmas Eve and Christmas Day, and a local non-profit, the F5 Project, was able to help her return to Tennessee, InForum reported.

How the hell are authorities not responsible for helping an innocent person get back home after forcing them to travel at the point of a gun?


I read she had no winter clothes, not even a jacket to go outside in the cold when they released her. She was arrested in TN during warm weather. Not all of the news sites reported the story in complete detail. Her treatment was truly appalling.

Right, I've seen a lot of facile comparisons to calculators.

It may be true that a cohort of teachers were wrong (on more than one level) when they chastised students with "you need to learn this because you won't always have a calculator"... However, calculators have some essential qualities which LLMs don't, and if calculators lacked those qualities, we wouldn't be using them the way we do.

In particular, being able to trust (and verify) that it'll do a well-defined, predictable, and repeatable task that can be wrapped into a strong abstraction.

