The difference I see between a company dealing with this and an open source community dealing with it is that the company can fire employees as a reactive punishment. Drive-by open source contributions cost very little to lob over and can come from a wide variety of people you have little leverage over, so maintainers end up writing these policies up front to avoid having to react to the thousandth person who uses "The AI did it" as an excuse.
When you shout "use AI or else!" from a megaphone, don't expect everyone to interpret it perfectly. Especially when you didn't actually understand what you were saying in the first place.
>I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.
I find this bit confusing. Do you provide enterprise contracts for AI tools? Or do you let employees use their personal accounts with company data? It seems all companies have to be managing this somehow at this point.
Shouldn't this go without saying though? At some point someone has to review the code and they see a human name as the sender of the PR. If that person sees the work is bad, isn't it just completely unambiguous that the person whose name is on the PR is responsible for that? If someone responded "but this is AI generated" I would feel justified just responding "it doesn't matter" and passing the review back again.
And the rest (what's in the LLVM policy) should also fall out pretty naturally from this? If someone sends me code for review and I get the feeling they haven't read it themselves, I'll say "I'm not reviewing this, and I won't review any more of your PRs unless you promise you reviewed them yourself first".
The fact that people seem to need to establish these things as an explicit policy is a little concerning to me. (Not that it's a bad idea at all; it's just worrying that there was a need.)
You would think it's common sense, but I've received PRs whose authors didn't understand them and, when questioned, told me that the AI knows more about X than they do, so they trust its judgement.
A terrifying number of people seem to think that the damn thing is magic and infallible.
One of the reasons I left a senior management position at my previous 500-person shop was that this was being done, but not even accurately. Copilot usage via the IDE wasn't being tracked; just the various other usage paths.
It doesn't take long for shitty small companies to copy the shitty policies and procedures of successful big companies. It seems even intelligent executives can't get correlation and causation right.
Some people who just want to polish their resume will feed any questions/feedback back into the AI that generated their slop. That goes back and forth a few times until the reviewing side learns that the code authors have no idea what they're doing. An LLM can easily pretend to "stand behind its work" if you tell it to.
A company can just fire someone who doesn't know what they're doing, or at least take some kind of measure against their efforts. On a public project, these people can be a death by a thousand cuts.
The best example of this is the automated "CVE" reports you find on bug bounty websites these days.
What good does it really do me if they "stand behind their work"? Does that save me any time drudging through the code? No, it just gives me a script for reprimanding. I don't want to reprimand. I want to review code that was given to me in good faith.
At work once I had to review some code that, in the same file, declared a "FooBar" struct and a "BarFoo" struct, both with identical field names/types, and complete with boilerplate to convert between them. This split served no purpose whatsoever, it was probably just the result of telling an agent to iterate until the code compiled then shipping it off without actually reading what it had done. Yelling at them that they should "stand behind their work" doesn't give me back the time I lost trying to figure out why on earth the code was written this way. It just makes me into an asshole.
It adds accountability, which is unfortunately something that ends up lacking in practice.
If you write bad code that creates a bug, I expect you to own it when possible. If you can't and the root cause is bad code, then we probably need to have a chat about that.
Of course the goal isn't to be a jerk. Lots of normal bugs make it through in reality. But if the root cause is true negligence, then there's a problem there.
If you asked Claude to review the code, it would probably have pointed out the duplication pretty quickly. And I think this is the thing: if we are going to manage programmers who use LLMs to write code, and have to review their code, reviewers aren't going to be able to keep up for much longer without resorting to LLM assistance themselves to get the job done.
It's not going to be enough to say "I don't use LLMs".
Yelling at incompetent or lazy co-workers isn't your responsibility, it's your manager's. Escalate the issue and let them be the asshole. And if they don't handle it, well it's time to look for a new job.
> At my company I just tell people “You have to stand behind your work”
Since when has that not been the bare minimum? Even before AI existed, and even if you don't work in programming at all, you sort of have to do that as a bare minimum. Even if you're using a toaster and your company guidelines say to toast every sandwich for 20 seconds, if following every step as trained results in a lump of charcoal for bread, you can't serve it up to the customer. At the end of the day, you make the sandwich, you're responsible for making it correctly.
Using AI as a scapegoat for sloppy and lazy work needs to be unacceptable.
Of course it’s the minimum standard, and it’s obvious if you view AI as a tool that a human uses.
But some people view it as a separate entity that writes code for you. And if you view AI like that, then "The AI did it" becomes an excuse that they use.
Bad example. If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.
Taking responsibility for outcomes is a powerful paradigm but I refuse to be held responsible for things that are genuinely beyond my power to change.
> If the toaster carbonized bread in 20 seconds it's defective, likely unsafe, possibly violates physics, certainly above the pay grade of a sandwich-pusher.
If the toaster is defective, then not using it, working out how to use it if it's still usable, or getting it replaced by reporting it as defective are all well within the pay grade of a sandwich pusher, as well as part of their responsibilities.
And you’re still responsible for the sandwich. You can’t throw up your arms and say “the toaster did it”. And that’s where it’s not tangential to the AI discussion.
The toaster malfunctioning is beyond your control, but whether you serve up the burnt sandwich is absolutely within your control, and that is what you will be, and should be, held responsible for.
That's exactly what I said and at odds with your last comment. You take responsibility for making the sandwich if possible. If not, you're not responsible for the sandwich, but for refunding the customer or offering them something else.
If I'm required to write code using AI without being given time to verify it, then it's also not fair for me to be held responsible for the quality. Agency matters. I will not take responsibility for things that I'm not given the power to address. Of course if I choose to write code with an AI and it comes out badly, that's within my responsibilities.
It's a bad example because typically "whether to toast the sandwich" depends on customer preference (imposed externally) but "whether to use AI" is still mostly up to the worker.
No it's not. If you burn a sandwich, you make a new sandwich. Sandwiches don't abide by the laws of physics. If you call a physicist and tell them you burnt your sandwich, they won't care.
I think it depends on the pay. You pay below the living wage? Better live with your sla.. ah, employees.. serving charcoal. You pay them well above the living wage? Now we start to get into they-should-care territory.
But "AI did it" is not immediate you are out thing? If you cannot explain why something is made the way you committed to git, we can just replace you with AI right?
I think it's not unlikely that we reach a point in a couple of decades where we are all developing Win32 apps while most people are running some form of Linux.
We already have an entire platform like that (the Steam Deck), and it's the best Linux development experience around, in my opinion.
Practically speaking, I’d argue that a compiler assuming uninitialized stack or heap memory is always equal to some arbitrary convenient constant is obviously incorrect, actively harmful, and benefits no one.
In this example, the human author clearly intended mutual exclusivity in the condition branches, and this optimization would in fact destroy that assumption. That said, (a) human intentions are not evidence of foolproof programming logic and often miscalculate state, and (b) the author could possibly catch most or all errors here when compiling without optimizations during the debugging phase.
The compiler is the arbiter of what's what (as long as it does not run afoul of the CPU itself).
The memory being uninitialised means reading it is illegal for the writer of the program. The compiler can write to it if that suits it, the program can’t see the difference without UB.
In fact the compiler can also read from it, because it knows that it has in fact initialised that memory. And the compiler is not writing a C program and is thus not bound by the strictures of the C abstract machine anyway.
As if treating uninitialized reads as opaque somehow precludes all optimizations?
There’s a million more sensible things that the compiler could do here besides the hilariously bad codegen you see in the grandparent and sibling comments.
All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that. I’m saying a spec that incentivizes this nonsense is poorly designed.
Why is the code gen bad? What result are you wanting? You specifically want whatever value happened to be on the stack as opposed to a value the compiler picked?
> As if treating uninitialized reads as opaque somehow precludes all optimizations?
That's not what these words mean.
> There’s a million more sensible things
Again, if you don't like compilers leveraging UBs use a non-optimizing compiler.
> All I’ve heard amounts to “but it’s allowed by the spec.” I’m not arguing against that.
You literally are though. Your statements so far have all been variations of or nonsensical assertions around "why can't I read from uninitialised memory when the spec says I can't do that".
> I’m saying a spec that incentivizes this nonsense is poorly designed.
Then... don't use languages that are specified that way? It's really not that hard.
> Undef values aren't exactly constants ... they can appear to have different bit patterns at each use.
My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.
The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true. Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?
And to be explicit: “if you don’t like it, don’t use it” is just refusing to engage, not a constructive response to this critique. These semantics aren't set in stone.
> My claim is simple and narrow: compilers should internally model such values as unspecified, not actively choose convenient constants.
An assertion you have provided no utility or justification for.
> The comment I replied to cited an example where an undef is constant folded into the value required for a conditional to be true.
The comment you replied to did in fact not do that, and it's incredible that you misread it as such.
> Can you point to any case where that produces a real optimization benefit, as opposed to being a degenerate interaction between UB and value propagation passes?
The original snippet literally folds a branch and two stores into a single store, saving CPU resources and generating tighter code.
> this critique
Critique is not what you have engaged in at any point.
Sorry, my earlier comments were somewhat vague and assumed we were on the same page about a few things. Let me be concrete.
The snippet is, after lowering:
if (x)
return { a = 13, b = undef }
else
return { a = undef, b = 37 }
LLVM represents this as a phi node of two aggregates:
a = phi [13, then], [undef, else]
b = phi [undef, then], [37, else]
Since undef isn't "unknown" but rather "pick any value you like, per use", InstCombine is allowed to instantiate each undef to whatever makes the expression simplest. This is the problem. Here it becomes:
a = 13
b = 37
The branch is eliminated, but only because LLVM assumes that those undefs will take specific arbitrary values chosen for convenience (fewer instructions).
Yes, the spec permits this. But at that point the program has already violated the language contract by executing undefined behavior. The read is accidental by definition: the program makes no claim about the value. Treating that absence of meaning as permission to invent specific values is a semantic choice, and precisely what I am criticizing. This "optimization" is not a win unless you willfully ignore everything about the program except instruction count.
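For concreteness, here is a minimal C sketch of the kind of source that lowers to the snippet above (my own reconstruction with made-up names, not the exact code from the earlier comment):

    struct pair { int a; int b; };

    struct pair f(int x) {
        struct pair r;    /* neither field is initialized here */
        if (x)
            r.a = 13;     /* r.b is left uninitialized on this path */
        else
            r.b = 37;     /* r.a is left uninitialized on this path */
        return r;
    }

Under the undef semantics above, the optimizer is free to compile this as if both fields were unconditionally assigned (a = 13, b = 37) and drop the branch entirely, which is exactly the transformation being criticized here.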
As for utility and justification: it’s all about user experience. A good language and compiler should preserve a clear mental model between what the programmer wrote and what runs. Silent non-local behavior changes (such as the one in the article) destroy that. Bugs should fail loudly and early, not be “optimized” away.
Imagine if the spec treated type mismatches the same way. Oops, assigned a float to an int, now it’s undef. Let’s just assume it’s always 42 since that lets us eliminate a branch. That’s obviously absurd, and this is the same category of mistake.
I recently compared performance per dollar on benchmarks for CPUs and GPUs, today vs 10 years ago, and surprisingly, CPUs had much bigger gains. Until I saw that for myself, I thought exactly the same thing as you.
It seems shocking given that all the hype is around GPUs.
This probably wouldn't be true for AI-specific workloads, because one of the other things that happened on the GPU side in the last 10 years was optimizing specifically for math with lower-precision floats.
It's because of use cases. Consumer-wise, if you're a gamer, the CPU just needs to be at the "not the bottleneck" level for the majority of games, as the GPU does most of the work once you start increasing resolution and detail.
And many pro-level tools (especially in media space) offload to GPU just because of so much higher raw compute power.
So, basically, for many users the gain in performance won't be as visible in their use cases.
> If a person abuses the shared kitchen, they get kicked out. This is a business.
Not just any business: it's a landlord-tenant relationship.
You can't simply kick out a tenant. You have to do a formal eviction process. In many cities this requires collecting evidence of contractual breach, proving that the tenant was notified they were being evicted (such as through a paid service to officially serve and record delivery of the notice), and then following the appropriate waiting period and other laws. It could be months and tens of thousands of dollars of legal fees before you can kick someone out of a house.
Contrast that with the $213 inflation-adjusted monthly rent that the article touts. How many months of rent would they have to collect just to cover the legal fees of a single eviction?
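Taking the article's $213/month and the "tens of thousands of dollars" of legal fees at face value: $20,000 / $213 ≈ 94 months, i.e. close to eight years of rent just to recoup a single eviction.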
I think the biggest tool is higher expectations. Most programmers really haven't come to grips with the idea that computers are fast.
If you see a database query that takes 1 hour to run, and only touches a few GB of data, you should be thinking "Well, NVMe bandwidth is multiple gigabytes per second, why can't it run in 1 second or less?"
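To put rough numbers on that (assuming, say, 4 GB of data and ~4 GB/s of sequential NVMe read bandwidth): 4 GB / 4 GB/s ≈ 1 second of raw I/O, so an hour-long query over that data is spending something like 99.97% of its time on work other than reading the bytes.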
The idea that anyone would accept a request to a website taking longer than 30ms (the time it takes for a game to render its entire world, including both the CPU and GPU parts, at 60fps) is insane, and nobody should really accept it, but we commonly do.
Pedantic nit: At 60 fps the per frame time is 16.66... ms, not 30 ms. Having said that a lot of games run at 30 fps, or run different parts of their logic at different frequencies, or do other tricks that mean there isn't exactly one FPS rate that the thing is running at.
The CPU part happens on one frame, the GPU part happens on the next frame. If you want to talk about the total time for a game to render a frame, it needs to count two frames.
Computers are fast. Why do you accept a frame of lag? The average game for a PC from the 1980s ran with less lag than that. Super Mario Bros had less than a frame between controller input and character movement on the screen. (Technically, it could be more than a frame, but only if there were enough objects in play that the processor couldn't handle all the physics updates in time and missed the v-blank interval.)
If Vsync is on (which was my assumption in my previous comment), then if your computer is fast enough you might be able to run the CPU and GPU work entirely within a single frame, e.g. by using Reflex to delay when simulation starts to lower latency. But regardless, you still have a total time budget of 1/30th of a second to do all your combined CPU and GPU work to get to 60fps.
Just as an example, round-trip delay from where I rent to the local backbone is about 14 ms alone, and the average to a webserver is 53 ms, just for a simple echo reply. (I picked it because I'd hoped that was in Redmond or some nearby datacenter, but it looks more likely to be in a cheaper labor area.)
However it's only the bloated ECMAScript (javascript) trash web of today that makes a website take longer than ~1 second to load on a modern PC. Plain old HTML, images on a reasonable diet, and some script elements only for interactive things can scream.
In the cloud era this gets a bit better, but at my last job I removed a single service that was adding 30ms to response time and replaced it with a Consul lookup with a watch on it. It wasn't even a big service: same DC, a very simple graph query with a very small response. You can burn through 30 ms without half trying.
This is again a problem understanding that computers are fast. A toaster can run an old 3D game like Quake at hundreds of FPS. A website primarily displaying text should be way faster. The reasons websites often aren’t have nothing to do with the user’s computer.
That’s per core assuming the 16ms is CPU bound activity (so 100 cores would serve 100 customers). If it’s I/O you can overlap a lot of customers since a single core could easily keep track of thousands of in flight requests.
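To make that concrete with illustrative numbers (the split is my assumption): if each request needs 1 ms of CPU and then waits 15 ms on a database, a single core running an event loop can keep hundreds of requests in flight and still push roughly 1,000 requests per second, versus only ~60 per second if the full 16 ms were CPU-bound.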
Uber could run the complete global rider/driver flow from a single server.
It doesn't, in part because all of those individual trips earn $1 or more each, so it's perfectly acceptable to the business to be more inefficient and use hundreds of servers for this task.
Similarly, a small website taking 150ms to render the page only matters if the lost productivity costs more than the engineering time to fix it, and even then, only makes sense if that engineering time isn't more productively used to add features or reliability.
Practically, you have to parcel out points of contention to a larger and larger team to stop them from spending 30 hours a week just coordinating for changes to the servers. So the servers divide to follow Conway’s Law, or the company goes bankrupt (why not both?).
Microservices try to fix that. But then you need bin packing so microservices beget kubernetes.
I'm saying you can keep track of all the riders and drivers, matchmake, start/progress/complete trips, with a single server, for the entire world.
Billing, serving assets like map tiles, etc. not included.
Some key things to understand:
* The scale of Uber is not that high. A big city surely has < 10,000 drivers simultaneously, probably less than 1,000.
* The driver and rider phones participate in the state keeping. They send updates every 4 seconds, but they only have to be online to start a trip. Both phones cache a trip log that gets uploaded when the network is available.
* Since driver/rider send updates every 4 seconds, and since you don't need to be online to continue or end a trip, you don't even need an active spare for the server. A hot spare can rebuild the world state in 4 seconds. State for a rider and driver is just a few bytes each for id, position and status.
* Since you'll have the rider and driver trip logs from their phones, you don't necessarily have to log the ride server side either. It's also OK to lose a little data on the server. You can use UDP.
Don't forget that in the olden times, all the taxis in a city like New York were dispatched by humans. All the police in the city were dispatched by humans. You can replace a building of dispatchers with a good server and mobile hardware working together.
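As a back-of-the-envelope sketch (my own field layout and numbers, not anything Uber actually uses), the per-participant state could be as small as:

    #include <stdint.h>

    struct participant {
        uint64_t id;       /* rider or driver id */
        int32_t  lat_e7;   /* latitude scaled by 1e7 */
        int32_t  lon_e7;   /* longitude scaled by 1e7 */
        uint32_t trip_id;  /* 0 when idle */
        uint8_t  status;   /* idle / matched / on trip */
    };                     /* about 24 bytes with padding */

Ten million simultaneous riders and drivers at ~24 bytes each is only about 240 MB of state, which fits comfortably in RAM on one box. As noted above, billing, map tiles and the rest are a separate problem.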
You could envision a system that used one server per county, and that's 3k servers. Combine rural counties to get that down to 1,000, and that's probably fewer servers than Uber runs.
What the internet will tell me is that uber has 4500 distinct services, which is more services than there are counties in the US.
The reality is that, no, that is not possible. If a single core can render and return a web page in 16ms, what do you do when you have a million requests/sec?
The reality is most of those requests (now) get mixed in with a firehose of traffic, and could be served much faster than 16ms if that is all that was going on. But it’s never all that is going on.
This is a terrible time to tell someone to find a movable object in another part of the org or elsewhere. :/
I always liked Shaw’s “The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”
The amount of drama about AI based upscaling seems disproportionate. I know framing it in terms of AI and hallucinated pixels makes it sound unnatural, but graphics rendering works with so many hacks and approximations.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.
> AI upscaling is equivalent to lowering bitrate of compressed video.
When I was a kid people had dozens of CDs with movies, while pretty much nobody had DVDs. DVD was simply too expensive, while Xvid let you compress an entire movie onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS lets you effectively quadruple FPS with minimal subjective quality impact. Of course a natively rendered image would've been better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120FPS on maximum settings. You can keep arguing as much as you want that a natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
> I am certainly not going to celebrate the reduction in image quality
What about perceived image quality? If you are just playing the game chances of you noticing anything (unless you crank up the upscaling to the maximum) are near zero.
The contentious part from what I get is the overhead for hallucinating these pixels, on cards that also cost a lot more than the previous generation for otherwise minimal gains outside of DLSS.
Some [0] are seeing a 20 to 30% drop in actual frames when activating DLSS, and that means correspondingly more latency as well.
There's still games where it should be a decent tradeoff (racing or flight simulators ? Infinite Nikki ?), but it's definitely not a no-brainer.
I also find them completely useless for any games I want to play. I hope that AMD would release a card that just drops both of these but that's probably not realistic.
They will never drop ray tracing, some new games require ray tracing. The only case where I think it's not needed is some kind of specialized office prebuilt desktops or mini PCs.
There are a lot of theoretical arguments I could give you about how almost all cases where hardware BVH can be used, there are better and smarter algorithms to be using instead. Being proud of your hardware BVH implementation is kind of like being proud of your ultra-optimised hardware bubblesort implementation.
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware went mainstream.
I think the most damning evidence of this is the just-released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it and doesn't intend to support it.
> But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
Pretty much this - even in games that have good ray tracing, I can't tell when it's off or on (except for the FPS hit) - I cared so little I bought a card not known to be good at it (7900XTX) because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
There always were and always will be people who swear they can't see the difference with anything above 25Hz, 30Hz, 60Hz, 120Hz, HD, Full HD, 2K, 4K. Now it's ray tracing, right.
I can see the difference in all of those. I can even see the difference between 120Hz and 240Hz, and now I play at 240Hz.
Ray tracing looks almost indistinguishable from really good rasterized lighting in MOST conditions. In scenes with high amounts of gloss and reflections, it's a little more pronounced. A little.
From my perspective, you're getting, like, a 5% improvement in only one specific aspect of graphics in exchange for a 200% cost.
CP2077 rasterization vs ray tracing vs path tracing is like night and day. Rasterization looks "gamey". Path tracing makes it look pre-rendered. Huge difference.
CP2077 purposefully has as many glossy surfaces as humanly possible just for this effect. It somewhat makes sense with the context. Everything is chrome in the future, I guess.
As soon as you remove the ridiculous amounts of gloss, the difference is almost imperceptible.
There’s an important distinction between being able to see the difference and caring about it. I can tell the difference between 30Hz and 60Hz but it makes no difference to my enjoyment of the game. (What can I say - I’m a 90s kid and 30fps was a luxury when I was growing up.) Similarly, I can tell the difference between ray traced reflections and screen space reflections because I know what to look for. But if I’m looking, that can only be because the game itself isn’t very engaging.
I think one of the challenges is that game designers have gotten so good at working within the non-RT constraints (and pushing those constraints back) that it's a tall order for the RT improvements to pay back the performance cost (and the new rendering quirks). There's also the fact that most companies don't want to cut off potential customers, whether because their hardware can't do RT at all or because it performs poorly when doing so. The other big one is whether they're trying to recreate a similar environment with RT, or taking advantage of what is only possible with the new technique, such as dynamic lighting, and whether that matters for the game they want to make.
To me, the appeal is that game environments can now be way more dynamic because we're not limited by prebaked lighting. The Finals does this, but doesn't require ray tracing, and it's pretty easy to tell when ray tracing is enabled: https://youtu.be/MxkRJ_7sg8Y
Because enabling raytracing means the game has to support non-raytracing too, which limits how the game's design can take advantage of raytracing being real-time.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs; the dynamic environment they shipped 2 years ago is on a whole other level even compared to Battlefield 6.
There's also Metro: Exodus, which the developers have re-made to only support RT lighting. DigitalFoundry made a nice video on it: https://www.youtube.com/watch?v=NbpZCSf4_Yk
naive q: could games detect when the user is "looking around" at breathtaking scenery and raytrace those? offer a button to "take picture" and let the user specify how long to raytrace? then for heavy action and motion, ditch the raytracing? even better, as the user passes through "scenic" areas, automatically take pictures in the background. Heck, this could be an upsell kind of like the RL pictures you get on the roller coaster... #donthate
Even without RT I think it'd be beneficial to tune graphics settings depending on context; if it's an action/combat scene there are likely aspects the player isn't paying attention to. I think the challenge is that it's more developer work, whether that's implementing some automatic detection or setting it manually scene by scene during development (which studios probably already do where they can, e.g. for specific arenas). I'd guess an additional task is making sure there's no glaring difference between tuning levels, and setting a baseline you can't go beneath.
It will never be fast enough to work in real time without compromising some aspect of the player's experience.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational complexity. The control flows are also very branchy when you start getting into the wild indirect lighting scenarios. GPUs prefer straight SIMD flows, not wild, hierarchical rabbit hole exploration. Disney still uses CPU based render farms. There's no way you are reasonably emulating that experience in <16ms.
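To put rough numbers on that growth (a generic estimate, not tied to any particular renderer): with k secondary rays per bounce and d bounces, a naive tracer does on the order of k^d rays per pixel. At ~2 million pixels, k = 8 and d = 3, that's already around a billion rays per frame, before any shading or denoising.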
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
The human brain has a built in TAA solution that excels as frame latencies drop into single digit milliseconds.
The problem is the demand for dynamic content in AAA games: large exterior and interior worlds with dynamic lights, a day and night cycle, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible. And everything should be easy for artists to set up.
I would say the closest we can get are workarounds like radiance cascades. But everything other than raytracing is just an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times and storing those results, leading to massive game sizes, are a huge negative.
Funnily enough raytracing is also just an approximation to the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
Manually placed lights and baking not only takes time away from iteration but also takes a lot of disk space for the shadow maps. RT makes development faster for the artists, I think DF even mentioned that doing Doom Eternal without RT would take so much disk space it wouldn’t be possible to ship it.
edit: not Doom Eternal, it's Doom: The Dark Ages, the latest one.
The quoted number was in the range of 70-100 GB if I recall correctly, which is not that significant for modern game sizes. I’m sure a lot of people would opt to use it as an option as a trade off for having 2-3x higher framerate. I don’t think anyone realistically complains about video game lighting looking too “gamey” when in a middle of an intense combat sequence. Why optimize a Doom game of all things for standing still and side by side comparisons? I’m guessing NVidia paid good money for making RT tech mandatory.
And as for the shortened development cycle, perhaps it's cynical, but I find it difficult to sympathize when the resulting product is still sold for €80.
It's fast enough today. Metro Exodus, an RT-only game runs just fine at around 60 fps for me on a 3060 Ti. Looks gorgeous.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
Most modern engines support (and encourage) use of a mixed lighting mode. You can have the best of both worlds. One directional RT light probably isn't going to ruin the pudding if the rest of the lights are baked.
Much higher resource demands, which then require tricks like upscaling to compensate. Also you get uneven competition between GPU vendors, because in practice it is not generic hardware ray tracing but Nvidia ray tracing.
On a more subjective note, you get less interesting art styles because studios somehow have to cram raytracing in there as a value proposition.
Not OP, but a lot of the current kvetching about hardware based ray tracing is that it’s basically an nvidia-exclusive party trick, similar to DLSS and physx. AMD has this inferiority complex where nvidia must not be allowed to innovate with a hardware+software solution, it must be pure hardware so AMD can compete on their terms.
1. People somehow think that just because today's hardware can't handle RT all that well it will never be able to. A laughable position of course.
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
The gimmicks aren't the product, and the customers of frontier technologies aren't the consumers. The gamers and redditors and smartphone fanatics, the fleets of people who dutifully buy, are the QA teams.
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
And they're achieving "acceptable" frame rates and resolutions by sacrificing image quality in ways that aren't as easily quantified, so those downsides can be swept under the rug. Nobody's graphics benchmark emits metrics for how much ghosting is caused by the temporal antialiasing, or how much blurring the RT denoiser causes (or how much noise makes it past the denoiser). But they make for great static screenshots.
I disagree. From what I’ve read if the game can leverage RT the artists save a considerable amount of time when iterating the level designs. Before RT they had to place lights manually and any change to the level involved a lot of rework. This also saves storage since there’s no need to bake shadow maps.
So what stops the developers from iterating on a raytraced version of the game during development, and then executing a shadow precalculation step once the game is ready to be shipped? Make it an optional download, like the high-resolution texture packs. Otherwise they are offloading the processing power and energy requirements onto consumer PCs, and doing so in a very inefficient manner.
> Someone who thinks COVID was a hoax isn't going to be one to dig deep.
This is kind of a side point, but people with fringe beliefs tend to dig a lot deeper to validate those opinions than those with a mainstream view.
You can bet that someone who thinks that the moon landing was a hoax to the point that they would tell someone about it will know more about the moon landing than a random person who believes it was real.
It often takes an expert in something to shoot down the arguments.
> but people with fringe beliefs tend to dig a lot deeper
Do they actually, though? Or do they just look for endless superficial surface claims?
I mean, if they actually dug deep they're going to encounter all kinds of information indicating that the moon landing was real. And if they still maintain that it was a hoax in light of that, then they have to believe that the deep information is also a hoax. So if someone really was digging deep into the personal details of your life, then what they read about you must also be a hoax, naturally.
Which, given the concern, one may as well solidify by putting fake information out there about themself. No sane person is going to be searching high and low for details about your personal life anyway. A moon landing hoax believer isn't going to buy into a published academic paper or whatever breadcrumb you accidentally left as a source of truth to prove that you have a PhD when a random website with a Geocities-style design says that you never went to college!
There is an infinite supply of people spouting bullshit and validation of that bullshit on the internet. You can spend a lifetime reading through that bullshit, and certainly feel like you're "doing research".
I am utterly fascinated by the flat earth movement, not because I believe in a flat earth, but because it's so plainly idiotic and yet people will claim they've done experiments and research and dug deep, primarily because they either don't know how to read a paper or how to interpret an experiment or simply don't know how lenses work. It's incredible.
> You can spend a lifetime reading through that bullshit, and certainly feel like you're "doing research".
I'm not sure broad and deep are the same thing, but maybe we're just getting caught up in semantics?
> It's incredible.
Does anyone truly believe in a flat earth, though, or is it just an entertaining ruse? I hate to say it, but it can actually be pretty funny watching people nonsensically fall over themselves to try and prove you wrong. I get why someone would pretend.
> I'm not sure broad and deep are the same thing, but maybe we're just getting caught up in semantics?
They’re not the same thing but I think they’re still going “deep” in that they will focus very heavily on one subject in their conspiracy rabbit hole.
> Does anyone truly believe in a flat earth, though, or is it just an entertaining ruse?
I think that a lot of people are faking, but I am pretty convinced that at least some people believe it. There was that dude a few years ago who was trying to build a rocket to “see if he could see the curve”, for example.
I have seen some fairly convincing vlogs where the people at least seem to really believe it.
> I think they’re still going “deep” in that they will focus very heavily on one subject in their conspiracy rabbit hole.
Which is totally fair, but may not be what I imagined when I said "deep".
> There was that dude a few years ago who was trying to build a rocket to “see if he could see the curve”, for example.
Building a rocket sounds like fun, to be honest. If you are also of the proclivity that you are entertained by claiming to believe in a flat earth, combining your hobbies seems like a pretty good idea.
> I have seen some fairly convincing vlogs where the people at least seem to really believe it.
At the same time people don't normally talk about the things they (feel they) truly understand. It is why we don't sit around talking about 1+1=2 all day, every day. Humans crave novelty. It is agonizing having to listen to what you already know. As such, one needs to be heavily skeptical of someone speaking about a subject they claim to understand well without financial incentive to overcome the boredom of having to talk about something they know well. And where there is financial incentive, you still need to be skeptical that someone isn't just making shit up for profit.
When someone is speaking casually about something, you can be certain of one of two things: 1) they recognize that they don't have a solid understanding and are looking to learn more through conversation, or 2) they are making shit up for attention.
There is no good way to know how many flat earthers never speak of it, I suppose, but as far as the vocal ones go I don't suppose they are really looking to learn more...
When I built my house I went full home automation. At the time I was telling my friends how important it was not to have a cloud dependency, and how I was doing everything locally.
I use KNX as the main backbone and Home Assistant for control.
And everything was local with the one exception of my Kevo door lock. At the time I built, there just wasn’t a perfect local only solution.
I hadn't planned properly for a way to integrate a wired-in solution into the joinery around the door, due to the particular circumstances of where it was, so I needed something wireless, and nothing wireless was local-only at the time.
What pisses me off is that it’s the one thing I compromised on, and it’s the one thing that bit me.
Now I have very little notice to find a replacement with the same features.
My house lock is probably the one place where I'm not prepared to compromise security with a DIY solution. Not talking about the software security (in fact open source solutions are probably more secure) but literally the hardware and build quality of any DIY work.
I think you'll find it not as compromising as you believe, and it might be a fun project.
Since you'll likely be scrapping it in some fashion, might want to try disassembling it first to see what would need to be done.
If you are not handy with electronics, there is also a chance there will be some workaround for the 3rd-party server at some point, as in the protocol and such being deciphered, or a custom firmware you can build and flash.
If you do get it working, it would make a great spare.
That's kind of funny, though, as any lock can be picked. If someone wants into your house, most of the time they will not enter through the locked front door; they'll find a window in the back that is easier to open with whatever they find in your back yard. They might exit through the front door on their way out, though. Also, most locks are easily picked by someone with practice.
If memory serves, something like 2% of break ins use "lock picking" which includes shimming a sliding door, a very low skill attack. Criminals just don't use high skill attacks to burgle homes. Probably a combination of most crimes being opportunistic, most criminals doing them being low skilled themselves, and people like us not being rich enough to move into the level of being targeted by the minuscule percent of high skill burglars.
One of their digital lock designs had a rather cough Pleasing vulnerability. But other than that it's vendor lock-in (heh), and lack of availability in the US.
With most so-called locksmiths in the US being drillsmiths, unable to clone DD and dimple keys.
Puck one. Or maybe the OP is just bitter they can't pick it for their next "belt" after getting chuffed with themselves picking average american garbage.
Digital locks aside, this is more applicable to any lock you buy and rely on (substitute US with your local region):
> lack of availability in the US
I wouldn't go out of my way to find something like Schlage here, when Abloy (Assa Abloy) locks are available in abundance with locksmiths able to duplicate usually all the key variants.
No, there was a vending machine smart lock that, if you hitachi'd it right, would unlock.
And, I phrased it wrong: most people expect to be able to walk into lowes and clone a key. And while it seems assa has been on a buying spree since I last looked at them, I do not associate them with anything you'd be able to find at big box store. When I think assa abloy I think "you better have the key card or you're SOL."
As a European, most of the products mentioned in the linked article and this discussion are from brands I've never associated with Assa Abloy in the first place.
I do agree with you, but I think there's a non-zero chance the situation might be different now.
We are not getting the same insane gains from node shrinks anymore.
Imagine the bubble pops tomorrow. You would have an excess of compute using current gen tech, and the insane investments required to get to the next node shrink using our current path might no longer be economically justifiable while such an excess of compute exists.
It might be that you need to have a much bigger gap than what we are currently seeing in order to actually get enough of a boost to make it worthwhile.
Not saying that is what would happen, I'm just saying it's not impossible either.
And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.
I neither tell people to use AI, nor tell them not to use it, and in practice people have not been using AI much for whatever that is worth.