> Any image transformation you can do on voxels you can straightforwardly transfer to nerfs
No.
> it might be cheaper to do it on the fly instead of redoing the training procedure to bake the change into the model.
I think it’s a bit more complex than you imagine; it’s not “cheaper/not cheaper”; it’s literally the only way of doing it.
If you have a transformation f(x) that takes a pixel array as an input and returns a pixel array as an output, that is a trivial transformation.
If you have a transformation f(x) that takes a vector input and returns a pixel output, it’s seriously bloody hard to convert the result back into a “good” vector again.
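To make the first case concrete, here’s a throwaway sketch (NumPy, with made-up array shapes and kernel size) of a pixel-array-in, pixel-array-out box blur; input and output live in the same representation, so there is nothing to convert back:

```python
import numpy as np

def box_blur(image: np.ndarray, radius: int = 2) -> np.ndarray:
    """Naive box blur: pixel array in, pixel array out.

    image is assumed to be (H, W, C); radius is illustrative.
    """
    h, w, _ = image.shape
    padded = np.pad(image.astype(np.float32),
                    ((radius, radius), (radius, radius), (0, 0)),
                    mode="edge")
    out = np.zeros_like(image, dtype=np.float32)
    # Average every pixel over its (2*radius + 1)^2 neighbourhood.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + h,
                          radius + dx : radius + dx + w]
    out /= (2 * radius + 1) ** 2
    return out.astype(image.dtype)
```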
Consider taking a layered svg and applying a box blur.
Now you want an svg again.
It’s not a trivial problem. Lines blur and merge, and you have to reconstruct an entirely new svg.
Now add the constraint in 3d: you can never have a full voxel representation in memory, even temporarily, because it simply doesn’t fit.
At best you’re looking at applying voxel-level transformations on the fly to render specific views, and then retraining those views into a new nerf model.
I think that counts as … not straightforward.
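Roughly, that workflow looks like the sketch below; every name in it (render_view, train_nerf, the wrapped field) is a hypothetical stand-in, not any real library’s API:

```python
# Made-up sketch of "apply the transform on the fly per view, then retrain".
def bake_transform(transformed_field, camera_poses, render_view, train_nerf):
    """transformed_field is the old model's query function wrapped so each
    sample is transformed at query time (blurred, recoloured, warped, ...).
    render_view and train_nerf are hypothetical stand-ins."""
    # Render a set of views through the transformed field on the fly...
    views = [(pose, render_view(transformed_field, pose)) for pose in camera_poses]
    # ...then use those renders as supervision to fit a brand-new nerf.
    return train_nerf(views)
```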
Doing all your transformations on the fly is a lovely idea, but you gotta understand the reason nerf exists is that the raw voxel data is too big to store in memory. It’s simply not possible to dynamically run an image processing pipeline over that volume data in real time. You have to bake it into nerfs to use it at all.
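For a sense of scale (the channel count and precision here are illustrative assumptions, not numbers from any particular setup):

```python
# Back-of-the-envelope memory for a dense voxel grid.
# Assumes RGB + density = 4 channels at float32 = 4 bytes per channel.
for res in (256, 512, 1024, 2048):
    num_bytes = res ** 3 * 4 * 4
    print(f"{res}^3 grid: {num_bytes / 2**30:.2f} GiB")
# -> 0.25 GiB, 2.00 GiB, 16.00 GiB, 128.00 GiB
```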
> Consider taking a layered svg and applying a box blur.
> Now you want an svg again.
> It’s not a trivial problem. Lines blur and merge, and you have to reconstruct an entirely new svg.
Nerfs are not svgs.
Consider taking a nerf and applying a box blur. Easy: a box blur on voxel data takes multiple samples within a box and averages them together, so to do the same thing to a nerf, just take multiple samples and average them together.
That does get slower the more samples you need, but you never have to materialize a full voxel representation.
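Here’s a rough sketch of what I mean. The query signature (3d points and view directions in, colour and density out) is an assumed interface, and the box size and sample count are made-up numbers:

```python
import numpy as np

def box_blurred_field(query_fn, box_size=0.05, n_samples=16, rng=None):
    """Wrap a radiance-field query with a box blur.

    Assumed interface: query_fn(points, dirs) -> (rgb, sigma), where points
    and dirs are (N, 3), rgb is (N, 3) and sigma is (N,). The blur averages
    extra samples jittered inside a box around each query point; no dense
    voxel grid is ever materialized.
    """
    rng = rng or np.random.default_rng(0)

    def blurred(points, dirs):
        n = points.shape[0]
        # Jitter each query point n_samples times inside the blur box.
        offsets = rng.uniform(-box_size / 2, box_size / 2,
                              size=(n_samples, n, 3))
        rgb_acc = np.zeros((n, 3))
        sigma_acc = np.zeros(n)
        for off in offsets:
            rgb, sigma = query_fn(points + off, dirs)
            rgb_acc += rgb
            sigma_acc += sigma
        return rgb_acc / n_samples, sigma_acc / n_samples

    return blurred
```

You hand the wrapped query to whatever volume renderer you were already using; the cost scales with n_samples, which is the slowdown mentioned above, but memory stays at one batch of sample points at a time.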
> Doing all your transformations on the fly is a lovely idea, but you gotta understand the reason nerf exists is that the raw voxel data is too big to store in memory.
> It’s simply not possible to dynamically run an image processing pipeline over that volume data in real time.
No.