Not the OP, but AMS can be useful for loading and unloading filament, as well as for automatically continuing a print job when a spool runs out, by switching to another spool of the same filament. It's not just for multi-color prints, although that's obviously the primary use case.
Can you, though? The protocol is not very well documented, and it seems to change rapidly in step with the server version it aims to be compatible with.
This seems to be a theoretical discussion: I don't think I'd ever want to implement the client part of FoundationDB myself, and I don't really see a good reason to.
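The supported path is the official bindings, which wrap the C client and track protocol changes for you. A minimal sketch with the Python bindings, assuming a locally running cluster and that the pinned API version matches your installed client:

```python
# Minimal sketch using FoundationDB's official Python bindings, which wrap
# the C client library and handle wire-protocol compatibility for you.
# Assumes a locally running cluster; pin whatever API version your
# installed client supports.
import fdb

fdb.api_version(710)  # must not exceed the installed client's version
db = fdb.open()       # reads the default cluster file

# Database-level operations each run in their own auto-retried transaction.
db[b'hello'] = b'world'
print(db[b'hello'])   # b'world'
```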
> Multiple frame generation (required for 5070=4090) increases latency between user input and updated pixels
Not necessarily. Look at the reprojection trick that many VR runtimes use to double framerates, with the express purpose of decreasing latency between user movements and the updated perspective. Caveat: this only works for movements and wouldn't work for actions.
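The core of the trick is just a homography: a frame rendered at an old head pose can be warped to the latest pose (for pure rotation) without touching the scene. A toy numpy sketch; all names are mine, and no runtime actually does this per-pixel on the CPU:

```python
# Rough sketch of rotational reprojection ("timewarp"): instead of waiting
# for a fresh render, warp the last rendered frame by the head rotation
# that happened since it was rendered. Illustrative only.
import numpy as np

def reproject(frame, K, R_delta):
    """frame: HxW image rendered at the old pose.
    K: 3x3 camera intrinsics. R_delta: rotation from old to new pose.
    Under pure rotation, reprojection is the homography H = K R K^-1."""
    H_img, W_img = frame.shape[:2]
    H = K @ R_delta @ np.linalg.inv(K)
    Hinv = np.linalg.inv(H)
    # For each output pixel, find where it came from in the old frame.
    ys, xs = np.mgrid[0:H_img, 0:W_img]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ Hinv.T
    src_x = (pts[..., 0] / pts[..., 2]).round().astype(int).clip(0, W_img - 1)
    src_y = (pts[..., 1] / pts[..., 2]).round().astype(int).clip(0, H_img - 1)
    return frame[src_y, src_x]

# Sanity check: an identity rotation leaves the frame unchanged.
frame = np.zeros((8, 8), dtype=np.uint8)
K = np.array([[4.0, 0, 4], [0, 4.0, 4], [0, 0, 1]])
assert (reproject(frame, K, np.eye(3)) == frame).all()
```

This is also why it only handles movements: rotation changes where existing pixels land, but an action changes what the pixels should contain, which a warp can't synthesize.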
Tap to pay with spend controls sounds ideal for things like after-school snacks/activities and transit. It could also be an easy way to manage things like allowance digitally.
AuthZed is changing the way top-tier companies add rich permissions experiences to their applications. Modeled after Google's Zanzibar, AuthZed offers a permissions system that scales to millions of QPS with millisecond latency.
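For the unfamiliar, the Zanzibar model stores permissions as relation tuples and answers "check" queries by walking tuples plus schema rewrite rules. A toy sketch of the idea; this is not AuthZed's actual API, and all names are mine:

```python
# Toy sketch of the Zanzibar model: permissions are relation tuples like
# ("doc:readme", "editor", "user:alice"), and a check walks the tuples
# plus rewrite rules (e.g. editors are also viewers). Real systems add
# caching, consistency tokens ("zookies"), and indexing to reach the
# throughput numbers quoted above.
TUPLES = {
    ("doc:readme", "editor", "user:alice"),
    ("doc:readme", "viewer", "user:bob"),
}

# Rewrite rules: a permission is the union of the relations that imply it.
IMPLIED_BY = {"viewer": ["viewer", "editor"], "editor": ["editor"]}

def check(obj: str, permission: str, subject: str) -> bool:
    return any((obj, rel, subject) in TUPLES for rel in IMPLIED_BY[permission])

assert check("doc:readme", "viewer", "user:alice")  # editor implies viewer
assert not check("doc:readme", "editor", "user:bob")
```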
Pros: It will automatically orient to either a frontal or rear collision and has tons of contact area with the body. If it's made of even a slightly stretchy material, it would also spread the force out over a longer period of time, lowering the peak.
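The "spread the force out" part is just impulse-momentum: the momentum change Δp = F_avg·Δt is fixed by the crash, so a longer stopping time means a lower average force. A quick back-of-the-envelope, with made-up numbers:

```python
# Illustrative impulse-momentum arithmetic (numbers are made up): the same
# momentum change delivered over a longer time means less average force,
# which is why a slightly stretchy restraint helps.
mass_kg = 70.0
delta_v = 15.0                    # m/s, e.g. stopping from ~54 km/h
momentum = mass_kg * delta_v      # Δp, fixed by the crash

for stop_time_s in (0.05, 0.10, 0.20):
    avg_force_N = momentum / stop_time_s   # F_avg = Δp / Δt
    print(f"stop in {stop_time_s:.2f} s -> avg force {avg_force_N/1000:.1f} kN")
```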
Metros have them, and in my experience the rear-facing seats are more comfortable, as metros decelerate much faster than they accelerate (at least in Lille; I didn't notice it as much in three other metros I've ridden).
> I fear that fovea-tracking will remain scifi dreams in my lifetime, so the reality is we need to render full resolution throughout the field of view to be prepared for where the eye gaze might go in the next frame.
This is not at all true. The AVP, the Quest Pro, and the PSVR2 all do eye-tracking-based foveated rendering: they lower the clarity of the things you're not looking at. And reviewers say it works perfectly, like magic; they are unable to "catch" the screen adjusting to what they're looking at.
Are they actually doing some kind of optical shifting of a limited pixel display? Or do you just mean they do some kind of low-res rendering mode to populate most of the framebuffer outside a smaller zone where they do full-quality rendering?
In other words, are they allocating just rendering compute capacity, or actual pixel storage and emission, based on foveal tracking?
> Or do you just mean they do some kind of low-res rendering mode to populate most of the framebuffer outside a smaller zone where they do full-quality rendering?
This exactly. They don't reduce the resolution too much, but it's visible to an outside observer watching on a monitor.
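Conceptually it's something like the toy numpy sketch below; the names are mine, and real headsets do this on the GPU (e.g. with variable-rate shading) rather than compositing frames on the CPU:

```python
# Toy sketch of foveated rendering: render the whole view at low
# resolution, upscale it, then overwrite a window around the gaze point
# with a full-resolution render. Illustrative only.
import numpy as np

def composite(render, gaze_xy, full_hw=(1600, 1600), inset=400, scale=4):
    h, w = full_hw
    # Cheap periphery: render at 1/scale resolution, nearest-neighbour upscale.
    low = render(h // scale, w // scale)
    frame = low.repeat(scale, axis=0).repeat(scale, axis=1)[:h, :w]
    # Expensive fovea: full-res pixels in a small window around the gaze.
    gx, gy = gaze_xy
    x0 = int(np.clip(gx - inset // 2, 0, w - inset))
    y0 = int(np.clip(gy - inset // 2, 0, h - inset))
    full = render(h, w)  # in practice you'd render only the inset region
    frame[y0:y0 + inset, x0:x0 + inset] = full[y0:y0 + inset, x0:x0 + inset]
    return frame

# Example: a gradient "scene" renderable at any resolution.
scene = lambda h, w: np.linspace(0, 255, h * w).reshape(h, w).astype(np.uint8)
out = composite(scene, gaze_xy=(800, 600))
print(out.shape)  # (1600, 1600)
```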
That’s just to be able to have enough compute/bandwidth to drive the display. Other posters are correct that the DPI decreases away from the center and various optical aberrations increase; foveated rendering won’t help with that.