
My prediction/hope is that NeRFs will totally revolutionize the film/TV industry. I can imagine:

- Shooting a movie from a few cameras, creating a movie version of a NeRF using those angles, and then dynamically adding in other shots in post

- Using lighting and depth information embedded in NeRFs to assist in lighting/integrating CG elements

- Using NeRFs to generate virtual sets on LED walls (like those on The Mandalorian) from just a couple of photos of a location or a couple of renders of a scene (currently, the sets have to be built in a game engine and optimized for real time performance).



This sort of stuff (generating 3D assets from photographs of real objects) has been common for quite a while via photogrammetry. NeRFs are interesting because (in some cases) they can create renders that look higher quality with fewer photos, and they hint at the potential of future learned rendering models.


I did it for a project… it was SLOW! Gave decent results for a large area, but poor ones for small details.

The real trick is textures.

When the photo is laid over and wrapped on the model, it looks great. But once you remove that, the raw mesh underneath is not as impressive.

I would really like to see the examples they had here without the texture laid on top.


There is no mesh here. NeRFs are 5D fields (colours are computed from a 3D position vector plus a 2D view direction) that are rendered volumetrically. So the “texture” is an integral part of the neural representation of the scene, not just an image applied to a mesh.

The cool part is that this also allows for capturing transparency, and any effects caused by lighting (including complex specular reflections) are embedded into the representation.
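
For anyone curious what "rendered volumetrically" means in practice, here's a minimal sketch of compositing a single ray through such a field. Note that `nerf_field` is a hypothetical stand-in for the trained network, not any particular library's API:

    import numpy as np

    def render_ray(nerf_field, origin, direction, near=0.0, far=1.0, n_samples=64):
        # Sample points along the ray between the near and far bounds.
        t = np.linspace(near, far, n_samples)
        points = origin + t[:, None] * direction          # (n_samples, 3)

        # Query the field: per-sample RGB in [0, 1] and density sigma >= 0.
        rgb, sigma = nerf_field(points, direction)

        # Emission-absorption compositing, as in the NeRF paper:
        # alpha_i = 1 - exp(-sigma_i * delta_i), weighted by the
        # transmittance accumulated in front of each sample.
        delta = np.full(n_samples, (far - near) / n_samples)
        alpha = 1.0 - np.exp(-sigma * delta)
        transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
        weights = transmittance * alpha

        # Semi-transparent samples contribute partially, which is how
        # transparency and view-dependent effects fall out for free.
        return (weights[:, None] * rgb).sum(axis=0)       # composited RGB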


Nitpicking, but for GP: NeRF is the internal representation, but the output doesn't have to be 2D (basically ray-traced).

There are examples of people outputting SDFs (and by extension geometry) with NeRF, and projecting the original texture onto that would give some nice effects (live volumetric capture works best this way). Though there would be some disparity where edges/occlusion aren't perfect, so you'd want to sample the NeRF's RGB anyway... although a lot of that is fuzzy at the edges too. A lot of incorrect transparency at edges looks great in the 2D renders (so much anti-aliasing and noise!) but less good for texturing.
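
If you do go the extracted-geometry route, one workaround is to bake the field's colour onto the mesh, e.g. one RGB sample per vertex. A rough sketch, where `nerf_rgb` is a hypothetical query into the trained field:

    import numpy as np

    def bake_vertex_colors(nerf_rgb, verts, vert_normals):
        # Looking at each vertex along its inward-facing normal is a crude
        # baking choice: baked-in speculars will look wrong from any other
        # angle, and the fuzzy semi-transparent shell at edges shows up as
        # noisy vertex colours.
        view_dirs = -vert_normals / np.linalg.norm(vert_normals, axis=1, keepdims=True)
        return nerf_rgb(verts, view_dirs)  # one RGB per vertex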


A NeRF is not the same as an SDF though. NGP (the paper by Nvidia linked here) can train NeRFs and SDFs, but I don't know of any straightforward way of extracting an SDF from a NeRF.

And while it's true that there are methods for extracting a surface from a NeRF, achieving a high quality result can be challenging because you have to figure out what to do with regions that have low occupancy (i.e. regions that are translucent). Should you consider those regions as contained within the surface, or outside of it? Especially when dealing with things like hair, it's not obvious how to construct a surface based on a NeRF.
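
To make the ambiguity concrete: the usual recipe is to sample the density field on a grid and run marching cubes at some iso-level, and choosing that level is exactly the problem. A sketch, assuming a hypothetical `density_fn` wrapper around the network's sigma output (marching_cubes is from scikit-image):

    import numpy as np
    from skimage import measure  # scikit-image

    def nerf_to_mesh(density_fn, resolution=128, threshold=25.0):
        # Sample sigma on a dense grid over the scene bounds, here [-1, 1]^3.
        xs = np.linspace(-1.0, 1.0, resolution)
        grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
        sigma = density_fn(grid.reshape(-1, 3)).reshape((resolution,) * 3)

        # The iso-level decides whether translucent regions (hair, fog, soft
        # edges) land inside or outside the surface; there is no principled
        # value, which is the ambiguity described above.
        verts, faces, normals, _ = measure.marching_cubes(sigma, level=threshold)
        return verts, faces, normals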


Given the same amount of compute as this used, photogrammetry would be about as fast.


Perhaps even making non-gimmicky live-action 3D films.

Having 3D renders of the entire film without needing green screens and a bunch of tracking balls seems like it would make some of the post-processing work easier. You can add or remove elements. Adjust the camera angles. More effectively de-age actors. Heck, even create scenes from whole cloth if an actor unexpectedly dies (since you still have their model).

Seems like you could also save some time having fewer takes. What you can fix in post would be dramatically expanded.

Best part for film makers, they are often using multiple cameras anyways. So this doesn't seem like it'd be too much of a stretch.


With this - and with all the footage of actors you have (their movies) - you'd basically have every actor's model.


Throw those into Copilot for filmmakers and away you go!


> - Using lighting and depth information embedded in NeRFs to assist in lighting/integrating CG elements

> - Using NeRFs to generate virtual sets on LED walls

Sounds like a powerful set of tools to defeat a number of image manipulation detection tricks, with limited effort once the process is set up as routine. State actor level information warfare will soon be a class of its own. Not just in terms of getting harder to detect, but more importantly in terms of becoming able to produce "quality" in high volume.


Computer games, VR, and AR could be pretty amazing uses for this technique too.


RIP photo-realistic modelers


Hmmm, well, I still think they will be in demand, for the same reason software developers will not be automated away. NeRF is really mind-bogglingly good, but there are still artifacts, and that's something modelers have a good eye for.

Having said that, it might be the end for junior-type roles. Same reason that GitHub Copilot really takes a bite out of the need to have a junior developer.

I'm very curious what will happen, because it will become a sort of trend across other industries, apart from the legal or medical professions (peace of mind from a human-in-the-loop).


Maybe we'll have people spend their time building IRL sculptures and spaces to get digitized.


People have made clay sculptures of CG characters as a modeling technique for a long time. It's still done, but digital sculpting tools are getting easier to use, so it's not as common as it was.


Not even a full 3D environment is required; just a bit of 6DOF and parallax would go a long way. I think VR videos (ok, porn) have gone as far as they can without head tracking.


I'm pretty sure the porn industry will find many uses for this technology.


Have you seen the first episode of Halo? There are multiple outdoor scenes where you feel sure it's a recording rather than a CGI render. The uncanny valley is almost crushed.


I can see this really taking off for football games. You'll be able to look at plays from all angles, zoom in/out, and get to _play_ with the game.


My (maybe too extreme) future fantasy version of this is turning existing movies into 3D movies you could watch in VR.


I'm thinking it would be something like: I want to be the baddy in Die Hard and want the protagonist to be Peter Griffin (cartoon version). The system feeds you the movie ... I'm imagining there could be an industry for writers to create the off-screen plots of other characters, and principally it would be rendered with the same scenes as the original movie.


- help with product placement advertisements


It will boost cut-scenes in games as well.



