Rendered #5: NVIDIA Omniverse and a Volumetric Video Standards Association

May 28, 2021

Rendered is a monthly newsletter on 3D rendering technology, game engines, volumetric filmmaking, photogrammetry, and everything in between. It's your guide to emerging realities.

For this issue, we see creative production get some much-needed "modern" tooling, Google shows off a 3D telepresence demo, and a newly formed Volumetric Format Association aims to solidify volumetric video formats.

News

<iframe title="Promethean AI Keynote" src="//www.youtube.com/embed/hA0MsGWvmzs?enablejsapi=1&amp;origin=https%3A%2F%2Fwww.gamedeveloper.com" height="360px" width="100%" data-testid="iframe" loading="lazy" scrolling="auto" class="optanon-category-C0004 ot-vscat-C0004 " data-gtm-yt-inspected-91172384_163="true" id="636994219" data-gtm-yt-inspected-91172384_165="true" data-gtm-yt-inspected-113="true"></iframe>

Promethean AI Creation Engine

This is a bit of a follow-up to the concept of “Assisted Creation” I brought up in the last newsletter. From their website:

Promethean AI is world's first Artificial Intelligence that works together with Artists, assists them in the process of building virtual worlds, helps creative problem solving by suggesting ideas and takes on a lot of mundane and non-creative work, so You can focus on what's important. All while learning from and adapting to individual tastes of every single Artist.

Typical ridiculous startup claims aside, what Promethean AI aims to offer is something like auto-complete or Gmail’s “smart compose” for the process of environmental art design. It’s very much the case that every project can be its own special flower that needs specific tweaks, but it’s also true that many projects contain easily automatable, redundant work.

Tools like Houdini aim to solve a bit of this, but (mostly) only at a per-asset level. Houdini can help you generate millions of buildings that all look unique while remaining stylistically similar, but the act of actually arranging those buildings (or trees or chairs or whatever else) in a game world remains somewhat tedious work. Promethean AI aims to solve at least some of that, giving artists and creators a tool that ideally works with them to produce better environments, faster. It’s sort of like SpeedTree, but for whole environments, with a hand-wave of AI on top.
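To make the tedium concrete, here is a toy sketch of the kind of placement logic environment artists end up scripting by hand today. It is purely illustrative Python, not Promethean AI's actual API (which hasn't been published in detail); the function and asset names are hypothetical.

```python
# Illustrative only: a toy version of the "arranging assets" problem.
# Rejection-sampled scatter with a minimum-spacing rule; real placement
# also has to respect terrain, sightlines, gameplay, art direction, etc.
import random

def scatter(asset_ids, bounds, min_spacing, attempts=1000):
    """Place each asset at a random 2D point inside `bounds`,
    rejecting candidates that crowd an earlier placement."""
    (x0, y0), (x1, y1) = bounds
    placed = []
    for _ in range(attempts):
        if len(placed) == len(asset_ids):
            break
        cand = (random.uniform(x0, x1), random.uniform(y0, y1))
        if all((cand[0] - px) ** 2 + (cand[1] - py) ** 2 >= min_spacing ** 2
               for px, py in placed):
            placed.append(cand)
    return list(zip(asset_ids, placed))

for asset, (x, y) in scatter(["tree_01", "tree_02", "bench_01"],
                             ((0, 0), (50, 50)), min_spacing=4.0):
    print(f"{asset} -> ({x:.1f}, {y:.1f})")
```

Multiply hand-tuned rules like this across an entire level and you can see why an assistant that learns placement from an artist's past choices is appealing.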

The whole keynote is definitely worth a watch, and hats off to the team for really showing a lot here, not just talking about dreams or ambitions.

<iframe title="Embedded content" src="//www.youtube.com/embed/xC6cho2VL6c?enablejsapi=1&amp;origin=https%3A%2F%2Fwww.gamedeveloper.com" height="360px" width="100%" data-testid="iframe" loading="lazy" scrolling="auto" data-gtm-yt-inspected-91172384_163="true" id="160805527" class="optanon-category-C0004 ot-vscat-C0004 " data-gtm-yt-inspected-91172384_165="true" data-gtm-yt-inspected-113="true"></iframe>

NVIDIA Omniverse

Speaking of tools, Nvidia recently unveiled “Omniverse”. Nvidia’s press strategy is often “if you know, you know,” so at times it can be hard to parse whether something they are showing off is a piece of hardware, software, a plugin, etc.

From their own site:

Omniverse enables universal interoperability across different applications and 3D ecosystem vendors. It provides efficient real-time scene updates and is based on open-standards and protocols. The Omniverse Platform is designed to act as a hub, enabling new capabilities to be exposed as microservices to any connected clients and applications.

As far as I understand it, Omniverse leverages Pixar’s USD format (basically, a “scene description” file) in the cloud: the disparate applications used in creative production essentially “commit” changes to a cloud-hosted USD representation of the scene, and those changes then propagate to the other connected client machines. RTX comes in via “views” on the content, rendered by RTX machines in the cloud, giving you high-resolution renders without needing the hardware yourself.
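To ground the “scene description” idea, here is a minimal sketch using Pixar’s open-source USD Python bindings (published on PyPI as usd-core). To be clear, this shows only the shared scene file itself; Omniverse’s actual commit-and-propagate sync layer (Nucleus) is Nvidia’s own and isn’t reproduced here.

```python
# A minimal sketch of USD as a "scene description": one tool authors a
# prim and saves, and any other tool can reopen the same file and read
# or edit it. Requires Pixar's USD Python bindings (PyPI: usd-core).
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# One application "commits" a sphere into the shared description...
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)
stage.GetRootLayer().Save()

# ...and another application opens the same description and sees it.
other = Usd.Stage.Open("scene.usda")
print(other.GetPrimAtPath("/World/Sphere").GetAttribute("radius").Get())  # 2.0
```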

If it’s a bit confusing, it’s because it doesn’t seem like all parts of Omniverse are meant for all projects. VFX people can benefit from the RTX rendering, while game engine work is mostly concerned with asset synchronization.

Creative project collaboration is a notoriously sticky problem to solve, so it’s exciting to see someone tackle the issue in what feels like “the right way,” and with many vendors offering plugins and compatibility patches, it seems like it will take hold.

Google’s Project Starline

Google strapped “more than a dozen different depth sensors and cameras” to a TV and made a 3D telepresence demo. I think people care? But reading the Wired article that covered the project, it’s clear they have the same sort of ¯\_(ツ)_/¯ reaction I did:

Google’s Project Starline seems especially overengineered, an amalgamation of accessible tech (Google Meet), nerd tech (computer vision! compression algorithms!), and an intricately constructed, unmovable mini studio, all for the sake of … more video meetings.

3D telepresence demos are nothing new. We even did one with Depthkit a few years ago, and Or Fleisher of this very newsletter did one before that. Here’s the thing about them: the novelty wears off quickly for the user, and no matter how high-res your screen is, it will still feel like looking at a screen. Yes, yes, they are doing sensor fusion, streaming the data over the internet, and reconstructing it at a different location, but what is never talked about is why.
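The reconstruction half, at least, is well understood. Google hasn’t published Starline’s pipeline, so this is just a generic sketch of the core step such systems share: back-projecting a depth image into a 3D point cloud using pinhole camera intrinsics.

```python
# Generic depth-to-point-cloud back-projection, not Starline's actual
# pipeline. Each pixel (u, v) with depth z maps to a 3D point via the
# pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Convert an HxW depth map (meters) into an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

# A fake 2x2 depth frame; Starline fuses "more than a dozen" such streams.
frame = np.array([[1.0, 1.2],
                  [0.0, 1.1]])
print(depth_to_points(frame, fx=500.0, fy=500.0, cx=1.0, cy=1.0))
```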

It’s presupposed that people want this, that higher-fidelity remote conversations improve the human experience or something. I recognize this is one step of many toward something that may actually achieve those aims, but for now it’s hard to see this and not feel like I’m being asked to sit at attention and talk to someone in the most unnatural way possible.

That aside, the fact Google was cagey about the actual depth sensors used is interesting — is Google making a standalone depth sensor? We’ll see!

Volumetric Format Association Founded

From the press release:

The first industry association dedicated to ensuring interoperability across the volumetric video ecosystem has launched. Seven companies have joined forces on the association, including Verizon, ZEISS, RED Digital Cinema, Unity, Intel, NVIDIA, and Canon. The aim of the Volumetric Format Association is to establish a collection of specifications driving adoption of volumetric capture, processing, encoding, delivery, and playback.

This is a Big Deal. Some of the largest players in camera tech, game engines, and realtime technology have collectively decided to form an association dedicated to standardizing volumetric video, across the whole pipeline from capture to playback.
