Hacker News | Ellipsis753's comments

Old links to your site might still be http - HSTS prevents that request from being made in the clear. Also, if there's a man-in-the-middle attack, it doesn't matter whether you return a redirect or not: the attacker has already replaced your site with a phishing page instead of the redirect. HSTS prevents this too.


Your second example would also be prevented by just not serving on port 80 as the parent comment suggests, no?

A MITM can intercept the SYNs to port 80 and send their own SYN+ACK.

Not serving on port 80 means a passive viewer won't see any content, but if you were just serving a redirect, there's not much content to see.

IMHO, if you use HSTS preload and you prime HSTS by serving your favicon with https and HSTS, you can go ahead and serve your (unauthenticated) content with http. A modern browser will switch over to https; a MITM could fetch your https pages and return them over http; and you'll be accessible on ancient browsers that can't manage modern TLS.


No, not really. You can still be MITMed on port 80.

Right. Clients (web browsers) would have to stop using it too for it to work, I guess.

>no?

No.


It should be able to factor 15.


So can a 10-year-old. The breakthrough I’m waiting for is factoring something I can’t do in my head.



Apart from being a fun read, I learned that I should be skeptical of papers claiming to have factored certain numbers. Thanks.


How much money or time do they owe you, though?


But it can’t, because the error rate is still too high, even for the most trivial examples.


I was amused but unsurprised to see that this article was deleted when I looked it up...


Looks like you were rate-limited at the end. They don't rate-limit britishairways.com, which is an SNI that you can always access. Lolz.


It does sound like each server is its own process. I think you're correct that it would be a little faster if all games shared a single process. That said, if one game crashed it'd bring the rest down.

This is one of those things that might take weeks just to _test_. Personally I suspect the speedup from merging them would be pretty minor, so I think they've made the right choice in just keeping them separate.

I've found context switching to be surprisingly cheap when you only have a few hundred threads. But ultimately, there's no way to know for sure without testing it. A lot of optimization is just vibes and hypothesizing.


Great article. Not understanding the hate.

I think the gist of it is: you probably have a sufficiently low request rate (<1000 requests/second) that using Postgres as a cache is totally reasonable - which it is. If you're passing your load tests within your hardware budget, there's no need to optimise further.
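For what it's worth, here's a minimal sketch in C (libpq) of the kind of Postgres-backed cache being discussed; the connection string, table name and key are all made up for illustration:

    #include <libpq-fe.h>
    #include <stdio.h>

    /* Hypothetical sketch: a key/value cache in a plain Postgres table.
       An UNLOGGED table skips WAL, which is usually acceptable for cache
       data. Build with -lpq. */
    int main(void) {
        PGconn *conn = PQconnectdb("dbname=app");
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        PQclear(PQexec(conn,
            "CREATE UNLOGGED TABLE IF NOT EXISTS cache ("
            "  key text PRIMARY KEY, value text, expires_at timestamptz)"));

        /* upsert an entry with a 60-second TTL */
        PQclear(PQexec(conn,
            "INSERT INTO cache VALUES ('greeting', 'hello', now() + interval '60 seconds') "
            "ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value, "
            "  expires_at = EXCLUDED.expires_at"));

        /* read it back, ignoring expired rows */
        PGresult *res = PQexec(conn,
            "SELECT value FROM cache WHERE key = 'greeting' AND expires_at > now()");
        if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
            printf("cache hit: %s\n", PQgetvalue(res, 0, 0));
        PQclear(res);
        PQfinish(conn);
        return 0;
    }

Expired rows can then be cleaned up lazily with a periodic DELETE, which is one of the trade-offs versus a dedicated cache server.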


Is this saying that Intel will support _only_ 512-bit instructions? (And not 256-bit?)

Or that it'll support _both_ 256-bit and 512-bit instructions going forward (and stop doing the nonsense where some cores support 512-bit and others don't)?


AVX10 will continue to support 512-bit instructions, 256-bit instructions, 128-bit instructions and scalar instructions (FP32 & FP64), exactly like the current AMD and Intel CPUs with AVX-512 support.

So none of the current instructions will be removed.

Intel's previous plan was that the 512-bit instructions would be removed in consumer CPUs, keeping only 256-bit instructions, 128-bit instructions and scalar instructions (FP32 & FP64).

Nevertheless, the earliest versions of AVX-512 had only 512-bit instructions and scalar instructions.

The 256-bit and 128-bit instructions were added in Skylake Server as a workaround for Intel's poor power management at the time, which forced large clock-frequency drops for long periods whenever wide instructions were used.

On modern CPUs there is no need to use 256-bit or 128-bit instructions; you gain nothing with them. AVX10 instructions have masks, so you can process any arbitrary length with a 512-bit instruction, such as in loop prologues or epilogues (see the sketch below).

The use of 512-bit instructions simplifies many optimized programs, because one instruction processes one cache line.
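As a sketch of the masking point above (a hypothetical function in C using AVX-512F intrinsics; compile with something like -mavx512f on GCC or Clang):

    #include <immintrin.h>
    #include <stddef.h>

    /* Add 1.0 to every element of an arbitrary-length float array using only
       512-bit instructions: the tail is handled with a mask instead of a
       scalar epilogue. */
    static void add_one(float *x, size_t n) {
        const __m512 one = _mm512_set1_ps(1.0f);
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 v = _mm512_loadu_ps(x + i);
            _mm512_storeu_ps(x + i, _mm512_add_ps(v, one));
        }
        if (i < n) {
            /* mask with the low (n - i) bits set covers the remaining elements */
            __mmask16 m = (__mmask16)((1u << (n - i)) - 1);
            __m512 v = _mm512_maskz_loadu_ps(m, x + i);
            _mm512_mask_storeu_ps(x + i, m, _mm512_add_ps(v, one));
        }
    }

The masked load and store only touch the low n - i lanes, so the tail is processed without reading or writing past the end of the array.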


Zen 4 made it very difficult for Intel to make AVX-512 a premium feature.


Zen 5 beats Arrow Lake even without AVX-512. For workloads that use it, you then get another huge performance jump with Zen 5, but Arrow Lake doesn't even have the instructions.


One clear reason to use 128-bit instructions: naturally aligned 128-bit loads and stores are only atomic if encoded as EVEX.128 (or VEX.128 etc.).
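As a hedged sketch of what that looks like in C (the helper names are made up; this assumes a compiler targeting AVX, e.g. -mavx on GCC or Clang, so the load and store are emitted with a VEX.128 or EVEX.128 encoding):

    #include <immintrin.h>

    /* Naturally aligned 16-byte load/store. Compiled for AVX these become
       VEX-encoded vmovdqa, which Intel and AMD document as a single atomic
       access for aligned 16-byte data. Alignment and any ordering/fencing
       remain the caller's responsibility. */
    static inline __m128i load_16b(const void *p) {
        return _mm_load_si128((const __m128i *)p);
    }

    static inline void store_16b(void *p, __m128i v) {
        _mm_store_si128((__m128i *)p, v);
    }

This only gives single-copy atomicity for the 16-byte access itself; memory ordering still has to come from fences or the surrounding algorithm.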

The default auto-vectorization tuning for current Intel server CPUs uses 256-bit registers, which is perhaps another counterexample.


The default auto-vectorization setting (which I wouldn't rely on anyway) also sounds like a workaround for the SKX issue.

For atomic, I'm curious how you make use of that?


It will support both, but considering the previous experiences with AVX-512 on Intel, I wouldn't be that excited.


This is how most of the top of the leaderboard works.


Do you have any evidence to support that claim? Competitive programmers (especially those with their own libraries ready to go) can be incredibly fast at solving coding challenges.


Also, LLMs already fail on the two-star problems.


Weirder still, the discounts stack! So blind people can get an additional discount by buying a black-and-white TV.

I've given this a lot of thought in the past. The best I could come up with is that "legally blind" could still allow for someone with _very poor_ (colour) vision...


Ehhh. I feel like the CAP theorem is a bit overrated. It's correct from a theoretical purist's point of view, but you can still solve many of these kinds of issues in practice. (Similarly, people overstate the halting problem, which can be solved for computers with finite memory.)


If you think the CAP theorem is overrated, you don't understand the CAP theorem.

It is not just correct from a theoretical point of view. Any tradeoff you make in reality is in accordance with the CAP theorem, one way or another.

The only thing that is overrated is the idea that consistency, availability and partition tolerance are discrete things, when in reality they are continuous.


Availability is the one that’s overrated, in my experience. A lot of systems can handle some downtime.

