
Five years ago, I used common software to do this. I had to take hundreds of pictures of a scene, covering as many angles and details as possible. Then you pass it all to the computer; stitching it together took well over 24 hours.

Once I had a 3D model of the scene, I had to spend countless hours cleaning it up to make sure it was usable. Maybe things have improved in the last 5 years.

But this demo used 4 pictures, and apparently it rendered the final image in seconds. That's what's new.



Did it really use only 4 pictures? Do you have a source? https://news.ycombinator.com/item?id=30810885


If I understand it correctly, it didn't make a 3D model, though. So you can't extract and reuse the result; you can only move around in it, and it renders an image for that viewpoint. No meshes or textures.
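To make the distinction concrete, here is a minimal sketch of how radiance-field-style view synthesis produces an image per viewpoint without any mesh. This is my own toy illustration, not the demo's actual code: the field() function below is a hard-coded stand-in for what would normally be a trained neural network, and the camera/rendering parameters are made up.

    import numpy as np

    def field(points):
        """Toy 'learned' radiance field: a soft red sphere at the origin.
        In a real system this would be a trained neural network."""
        dist = np.linalg.norm(points, axis=-1)
        density = 20.0 * np.clip(1.0 - dist, 0.0, None)          # dense inside the sphere
        color = np.broadcast_to([1.0, 0.2, 0.2], points.shape)   # constant red
        return color, density

    def render_view(cam_pos, look_at, h=64, w=64, n_samples=64, near=0.5, far=4.0):
        """Volume-render one image for the given camera position (no mesh involved)."""
        forward = look_at - cam_pos
        forward /= np.linalg.norm(forward)
        right = np.cross(forward, [0.0, 0.0, 1.0]); right /= np.linalg.norm(right)
        up = np.cross(right, forward)

        # One ray per pixel through a simple pinhole camera.
        v, u = np.meshgrid(np.linspace(-0.5, 0.5, h), np.linspace(-0.5, 0.5, w), indexing="ij")
        dirs = forward + u[..., None] * right + v[..., None] * up
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

        # Sample points along each ray and query the field at every sample.
        t = np.linspace(near, far, n_samples)
        pts = cam_pos + dirs[..., None, :] * t[None, None, :, None]   # (h, w, n_samples, 3)
        color, density = field(pts)

        # Standard volume-rendering quadrature: alpha compositing front to back.
        delta = (far - near) / n_samples
        alpha = 1.0 - np.exp(-density * delta)
        trans = np.cumprod(np.concatenate([np.ones_like(alpha[..., :1]),
                                           1.0 - alpha[..., :-1]], axis=-1), axis=-1)
        weights = alpha * trans
        return (weights[..., None] * color).sum(axis=-2)               # (h, w, 3) image

    img = render_view(cam_pos=np.array([2.0, 0.0, 0.5]), look_at=np.zeros(3))
    print(img.shape)  # (64, 64, 3) -- an image for this one viewpoint only

The point is that every new viewpoint means re-running render_view; there is no intermediate geometry you could drop into Blender or a game engine, which is exactly why you get no meshes or textures out of it.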



