
That's more interesting than I realized. In this example, I assumed that the model was generating some sort of 3D mesh representing the woman. Is that not at all the case? Would this technique be unable to generate a model or volumetric information despite being able to reasonably render her from many directions?


No, there is no mesh. A NeRF is a neural network trained to act as a function f(x, y, z, θ, φ). You put in your viewing position (x, y, z) in 3D space and the direction (θ, φ) you're looking in (where θ and φ are the angles for up/down and left/right, respectively), and the function outputs a tuple (r, g, b, σ): the colour (r, g, b) and the material density (σ) of whatever is visible at that pixel, in that direction and from that position.
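
To make the interface concrete, here's a minimal sketch in PyTorch. The layer sizes and the plain 5D input are simplifying assumptions on my part; the actual paper positionally encodes the inputs and feeds the viewing direction into a later layer, but the in/out signature is the point:

    import torch
    import torch.nn as nn

    class TinyNeRF(nn.Module):
        def __init__(self, hidden=256):
            super().__init__()
            # 5D input: a 3D position (x, y, z) plus a 2D direction (theta, phi)
            self.mlp = nn.Sequential(
                nn.Linear(5, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 4),  # -> (r, g, b, sigma)
            )

        def forward(self, xyz, view_dir):
            out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
            rgb = torch.sigmoid(out[..., :3])   # colour, squashed into [0, 1]
            sigma = torch.relu(out[..., 3])     # density, kept non-negative
            return rgb, sigma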

You can generate a mesh from the density information this function gives you, but for that you need to discretise the continuous densities you get out.
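The usual route is to sample σ on a regular grid and run marching cubes over it. A rough sketch with scikit-image; density_fn here is just a placeholder standing in for the trained network's σ output:

    import numpy as np
    from skimage import measure

    def density_fn(xyz):
        # placeholder density: a solid sphere of radius 0.5, not a real trained NeRF
        return np.clip(0.5 - np.linalg.norm(xyz, axis=-1), 0.0, None)

    n = 64
    axis = np.linspace(-1.0, 1.0, n)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    sigma = density_fn(grid.reshape(-1, 3)).reshape(n, n, n)

    # The mesh surface sits wherever the sampled density crosses the iso-level.
    verts, faces, normals, values = measure.marching_cubes(sigma, level=0.1)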


Minor correction: it's not your (i.e. the camera's) XYZ position that you input, but the position of the point whose RGBA you're trying to render.
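
To see why: the camera only defines the ray, and the network is queried at sample points along that ray, whose colours and densities get composited into one pixel. A rough sketch of that volume rendering step (render_ray and query_fn are made-up names for illustration, not the paper's code):

    import numpy as np

    def render_ray(query_fn, origin, direction, near=0.1, far=4.0, n_samples=64):
        """query_fn(xyz, view_dir) -> (rgb, sigma) stands in for the trained network."""
        t = np.linspace(near, far, n_samples)         # depths along the ray
        pts = origin + t[:, None] * direction          # the (x, y, z) actually fed in
        dirs = np.broadcast_to(direction, pts.shape)   # same view direction per sample
        rgb, sigma = query_fn(pts, dirs)
        delta = (far - near) / n_samples               # spacing between samples
        alpha = 1.0 - np.exp(-sigma * delta)           # opacity of each sample
        trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # light reaching each sample
        weights = trans * alpha
        return (weights[:, None] * rgb).sum(axis=0)    # composited pixel colour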



