Nvidia makes big advances in inverse rendering to transform still photos into 3D objects

Nvidia Corp. today showcased the impressive advances it has made in the area of “inverse rendering,” a technique that uses artificial intelligence to reconstruct a 3D model of an object or scene from a series of still photos.

Nvidia’s newest method of performing inverse rendering, known as Nvidia 3D MoMa, has potential applications for video game developers, architects, designers and concept artists. With it, they could quickly and easily import photos of an object into a graphics engine and modify the result in a multitude of ways, changing its material and texture, adding lighting effects or altering its scale.

3D MoMa works by taking a series of photographs of an object or scene and transforming them into a triangular mesh, complete with textured materials, that can be dropped into game engines, 3D modeling programs and film renderers. Nvidia explained that advances in neural radiance fields, combined with the processing power of its Tensor Core graphics processing units, allow it to generate these triangle mesh models within an hour or less.
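A triangle mesh of this kind is just a list of vertices plus triangles that index into it, typically serialized in a format such as Wavefront OBJ that game engines and 3D modeling programs can import. The sketch below is purely illustrative of that output format, not Nvidia 3D MoMa's actual code:

```python
# Minimal sketch: a triangle mesh as vertices + faces, written out as
# Wavefront OBJ text, a plain-text format most 3D tools can import.
# Illustrative only -- not Nvidia 3D MoMa's pipeline.

def export_obj(vertices, faces):
    """Serialize a triangle mesh to OBJ text.

    vertices: list of (x, y, z) floats
    faces:    list of (i, j, k) 0-based vertex indices
    """
    lines = [f"v {x} {y} {z}" for x, y, z in vertices]
    # OBJ face indices are 1-based, hence the + 1.
    lines += [f"f {i + 1} {j + 1} {k + 1}" for i, j, k in faces]
    return "\n".join(lines) + "\n"

# A single triangle: the smallest possible patch of such a mesh.
obj_text = export_obj(
    vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    faces=[(0, 1, 2)],
)
print(obj_text)
```

Real reconstructed meshes contain thousands of such triangles, but the file structure is the same.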

The reconstruction process recreates three key features from the still photos: a 3D mesh model of the object or scene, the materials and the lighting. The mesh can be thought of as a papier-mâché model of the object in question that’s built from triangles.

This model can then be altered in various ways by developers to adapt the object to their creative vision. Materials are integrated as 2D textures that can be overlaid onto the 3D mesh like a skin. Then, the model calculates how the recreated object is lit to maintain lighting accuracy.
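The "skin" analogy corresponds to standard UV texture mapping: every point on the mesh surface carries a 2D coordinate into a texture image, and shading looks up the color at that coordinate. A minimal sketch of that lookup, with illustrative names and data rather than anything from Nvidia's implementation:

```python
# Sketch of UV texture lookup: a 2D texture overlaid on a 3D mesh
# "like a skin". Each (u, v) in [0, 1] x [0, 1] addresses one texel.
# Nearest-texel sampling for simplicity; real renderers interpolate.

def sample_texture(texture, u, v):
    """Nearest-texel lookup. texture is a row-major grid of colors;
    u and v are in [0, 1], with v = 0 at the top row."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# A 2x2 checkerboard texture; each entry is an (r, g, b) tuple.
checker = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]
print(sample_texture(checker, 0.1, 0.1))  # top-left texel
print(sample_texture(checker, 0.9, 0.1))  # top-right texel
```

Swapping the texture image while keeping the mesh and its UV coordinates is exactly what makes the material edits described below cheap.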

Everything is automated, making it far easier than the traditional process of creating 3D objects from scratch using complex photogrammetry techniques, which involve manual effort and can take hours to complete, Nvidia said.

At this week’s Conference on Computer Vision and Pattern Recognition, Nvidia showcased the capabilities of Nvidia 3D MoMa by building 3D objects from a set of images of jazz band instruments, including a trumpet, trombone, saxophone, drum set and clarinet, taken from different angles.

Nvidia 3D MoMa studies these photos and reconstructs the 2D images as 3D representations of each instrument. In this way, it can lift them from their original scene and import them into the Nvidia Omniverse 3D simulator platform to edit them.

From there, it becomes possible to alter the shape or change the material of each instrument. Nvidia’s team replaced the original plastic material of the trumpet with various materials, including gold, marble, wood and cork.

The edited 3D objects can then be dropped into any virtual scene. Nvidia tested this by placing the instruments into a Cornell box, which is a classic graphics test of rendering quality. The instruments reacted to light just as they would in the physical world, with the shiny metallic ones reflecting brightly, while the matte drum skins absorbed most of the light.
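The shiny-versus-matte contrast comes down to the material's reflectance model: a matte surface scatters light diffusely regardless of viewing angle, while a shiny one concentrates it into a mirror-like highlight. A rough sketch using the classic Lambert-plus-Phong textbook model, with made-up parameter values (not Nvidia 3D MoMa's learned materials):

```python
# Lambert diffuse + Phong specular shading for unit vectors.
# Classic textbook model with invented parameters, shown only to
# illustrate why shiny and matte materials react to light differently.
import math

def shade(normal, light_dir, view_dir, diffuse_k, specular_k, shininess):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(normal, light_dir), 0.0)
    # Mirror the light direction about the normal for the specular lobe.
    reflect = tuple(2 * n_dot_l * n - l for n, l in zip(normal, light_dir))
    spec = max(dot(reflect, view_dir), 0.0) ** shininess
    return diffuse_k * n_dot_l + specular_k * spec

n = (0.0, 0.0, 1.0)                                  # surface normal
light = (0.0, 0.0, 1.0)                              # light overhead
v_mirror = (0.0, 0.0, 1.0)                           # viewer at mirror angle
v_off = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))    # viewer 45 degrees off

matte_on = shade(n, light, v_mirror, 0.8, 0.05, 4)
matte_off = shade(n, light, v_off, 0.8, 0.05, 4)
shiny_on = shade(n, light, v_mirror, 0.2, 0.9, 64)
shiny_off = shade(n, light, v_off, 0.2, 0.9, 64)
```

The matte material returns nearly the same brightness from both viewpoints, while the shiny one spikes at the mirror angle and falls to almost its diffuse floor off-axis, which is the behavior the Cornell box test checks.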

Finally, Nvidia used the instruments as building blocks to create a complex animated scene of a virtual jazz band.

Holger Mueller of Constellation Research Inc. told SiliconANGLE it was good to see Nvidia making progress in the creation of 3D objects, as it’s an area that involves a lot of tedious and resource-intensive work. This has held back the creation of rich virtual reality and augmented reality applications, he added.

“Innovations like transforming 2D material into 3D objects are key to building a richer software experience, be it for VR, AR or the metaverse itself,” Mueller said.

“The Nvidia 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of Nvidia GPUs to quickly produce 3D objects that creators can import, edit and extend without limitation in existing tools,” said David Luebke, Nvidia’s vice president of graphics research.

Images: Nvidia

