8organicbits's comments | Hacker News

S3 absolutely supports HTTPS. I think they set their bucket policy to forbid HTTPS. The whole thing is odd.

https://docs.aws.amazon.com/AmazonS3/latest/userguide/exampl...
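
For reference, HTTPS enforcement on S3 is normally done with a bucket policy keyed on aws:SecureTransport, and a policy could in principle condition on it the other way around too. A rough sketch of the usual deny-plain-HTTP variant ("example-bucket" is a placeholder; this assumes boto3 and working AWS credentials):

    # Deny any request made over plain HTTP by matching on aws:SecureTransport.
    # Flipping the condition value to "true" would instead forbid HTTPS, which
    # is the (odd) configuration speculated about above.
    import json
    import boto3

    bucket = "example-bucket"  # placeholder
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))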


So, isn't this using the S3 static website hosting feature (I'm assuming, due to the dir listing), which I don't believe supports HTTPS?

Bloggers don't always publish immediately after writing a post; the start of the prior post explains the timeline and the delay: https://ploum.net/2025-12-04-pixelfed-against-fediverse.html

Hmm, OK. I thought it might be that, but then I would have expected "I did" to have been "I published".

The lack of change after legalization of recreational use is interesting. How many deaths were related to medical use versus (previously illegal but decriminalized) recreational use?

I don't think the user population changes much whether it's illegal, "medical," or legal.

The referenced mailing list is this Google Group (https://groups.google.com/a/list.nist.gov/g/internet-time-se...) which has some other posts about this incident.

Where are you seeing the other posts?

How do you ensure that the LLM is creating accurate content? It would be a terrible experience if the LLM rewrote a website with bogus claims that confuse customers.

We get asked this a fair amount and the way we’re strategising on it is to build more opportunities for the site owners to define context as part of the broad site research that goes into creating the interpolations.

If I were to do this, I'd decide which audiences my site was targeting and ensure the landing page had pre-approved content for each of them. Then I'd only use the LLM to rearrange the pre-approved marketing content, putting whatever it thinks best targets the visitor above the fold. This way, the worst the LLM can do is order the content incorrectly, and the visitor would need to scroll to see the content that targets them.

Even better, the LLM can draft rules for matching traffic to targeted profiles (corp IPs show enterprise content, gov IPs show the gov offering, EU IPs show European hosting options, etc.). This way you don't use an LLM while rendering the page, reducing cost and speeding up page loads.
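
A rough sketch of that second approach, with hypothetical profiles, pre-approved blocks, and illustrative matching rules (none of this is any real product's code):

    # Pre-approved content blocks; the LLM never writes copy at request time.
    CONTENT_BLOCKS = {
        "enterprise": "Enterprise-grade SSO, audit logs, and SLAs.",
        "government": "Compliance-focused hosting for public sector teams.",
        "eu_hosting": "EU-based data residency options.",
        "general": "Fast, simple hosting for everyone.",
    }

    # Rules the LLM drafted offline (and a human reviewed): profile -> block order.
    PROFILE_ORDER = {
        "corp_ip": ["enterprise", "general", "eu_hosting", "government"],
        "gov_ip": ["government", "enterprise", "general", "eu_hosting"],
        "eu_ip": ["eu_hosting", "general", "enterprise", "government"],
        "default": ["general", "enterprise", "eu_hosting", "government"],
    }

    def render_landing_page(visitor_profile: str) -> str:
        """Reorder pre-approved blocks at request time; no LLM in the hot path."""
        order = PROFILE_ORDER.get(visitor_profile, PROFILE_ORDER["default"])
        return "\n\n".join(CONTENT_BLOCKS[name] for name in order)

    print(render_landing_page("gov_ip"))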


That's a lot of words to say you have no idea how to do it.

If the engineer doesn't understand the proof system then they cannot validate that the proof describes their problem. The golden rule of LLMs is that they make mistakes and you need to check their output.

> writing proof scripts is one of the best applications for LLMs. It doesn’t matter if they hallucinate nonsense, because the proof checker will reject any invalid proof

Nonsense. If the AI hallucinated the proof script, then it may have no connection to the actual problem statement.
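
A toy Lean sketch of the failure mode (not from the article): the checker accepts the proof, but the stated theorem is a triviality that never mentions the property its name promises, and only a human reading the statement would notice.

    -- Intended claim (informally): "sorting a list preserves its length".
    -- The statement below type-checks and `rfl` proves it, yet it says nothing
    -- about sorting at all; the checker cannot flag that mismatch.
    theorem sort_preserves_length (l : List Nat) : l.length = l.length := rfl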


Earlier this year I priced out AWS's on-demand m7i.large instances at $0.002/minute [1]. GitHub's two-core runner costs $0.008/minute today, so that was a nice savings. But it looks like this announcement doubles the self-hosted cost and reduces their two-core pricing to $0.006/min.

From this perspective it's a huge price jump, but self-hosting to save money can still work out.
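
Back-of-the-envelope using those per-minute figures; the 10,000 CI minutes per month is just an assumed workload, and the "new" self-hosted rate simply doubles the old one per the announcement:

    # Rates in $/minute, taken from the figures above; the workload is assumed.
    rates_per_minute = {
        "github_2core_old": 0.008,
        "github_2core_new": 0.006,
        "self_hosted_m7i_old": 0.002,
        "self_hosted_m7i_new": 0.004,  # "doubles the self-hosted cost"
    }
    minutes_per_month = 10_000  # assumed CI usage

    for name, rate in rates_per_minute.items():
        print(f"{name:>20}: ${rate * minutes_per_month:>7,.2f}/month")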

Honestly, GitHub Actions have been too flaky for me and I'm begrudgingly reaching for Jenkins again for new projects.

[1] https://instances.vantage.sh/aws/ec2/m7i.large?currency=USD&...


Have you considered other options, like Woodpecker?


I've been playing by just looking at the title of the puzzle and ignoring the clues. I can solve most of the puzzles that way, and it increases the challenge.


Yeah, I've heard from a few people that they play this way! I'd like to add an official setting for it in the future.


I've started fundraising efforts for a project aimed at accelerating adoption of authenticated encryption between mail servers (it's time to move past opportunistic TLS).
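
For anyone unfamiliar with the space: MTA-STS (RFC 8461) is one existing mechanism along these lines (not this project). A receiving domain publishes a small policy file over HTTPS at https://mta-sts.example.com/.well-known/mta-sts.txt (example.com being a placeholder), and senders that support it refuse delivery unless the MX presents a valid certificate:

    version: STSv1
    mode: enforce
    mx: mail.example.com
    max_age: 604800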

I also launched a web browser extension last week, Blog Quest, which has some great early adoption numbers that exceeded my expectations. When I can find some spare time I'll start fixing up some of the early feedback/feature requests.

https://github.com/robalexdev/blog-quest


Promotional content for LLMs is really poor. I was looking at Claude Code, and the example on their homepage implements a feature while ignoring a warning about a security issue, commits locally, doesn't open a PR, and then tries to close the GitHub issue. Whatever code it wrote, they clearly didn't use it, since the issue from the prompt is still open. Bizarre examples.

