A new Machinima age is nigh, with NVIDIA, AI, game effects and faces and poses

When artists can get their hands on the latest NVIDIA GPUs, expect a bumper crop of AI-assisted poses and faces and music videos. The toys keep coming – NVIDIA Machinima just hit public beta.

I’m interested. Go on. (Images courtesy NVIDIA.)

Okay, so NVIDIA have been busy, which means there is a repeating cadence to their news. It’ll involve Omniverse somehow (their platform for collaboration), they’ll want you to buy one of their fancy new GPUs (think GeForce RTX 3060+ or Quadro RTX 4000+), and then… well, some kind of “AI” machine learning is usually a safe bet, too, in combination with the graphics eye candy features.

But this Mad Libs game is fun. This time, what you get is Omniverse Machinima, a dedicated Omniverse app for making your own narrative (read: recorded, linear) animations using game engine tools.


“Machinima” is a word I haven’t heard in a long time. It’s been long enough that I could – gasp – tell some kids to “ask your parents.” There was the genius of This Spartan Life [2005], a talk show hosted in Halo on Xbox. The technique even saw some action on South Park.


It makes sense that a new generation of tools will spawn a new generation of machinima. Unreal, Unity, Blender, Notch, and whatnot don’t quite count – part of the “punk” appeal of machinima is somehow working with hacked-together elements of games.

But rapid visualization with today’s bleeding-edge, photorealistic engines is just an obvious new world to explore. The Omniverse angle is that you’ve got a platform set up for collaboration – meaning you don’t have to do this alone.

There are some other interesting ingredients here – and this is free?! (at least for the moment):

  • Photorealistic rendering (with NVIDIA’s MDL material library)
  • Animated faces – NVIDIA Audio2Face (hey, I’m going to try that on a music video once I can get a GPU, won’t you?)
  • Visual effects and physics (which of course use PhysX 5, Blast, and Flow)
  • The ability to capture motion from a phone – wrnch CaptureStream
  • Pose estimation that actually works (now using AI, instead of the ever-so-glitchy stuff in tools like the early Kinect or – eep – the horrible stuff we used to do with webcams) – see wrnch AI Pose Estimator
  • Sequencer for sticking stuff together – with real-time rendering, so you don’t have to render out and edit in (eww) Premiere or Final Cut; you can piece together a story right there
  • ReShade for different rendering styles (so you can even make this cartoon-y if you want)
  • Camera Animation Tool for camera movements
  • Animation Recorder

It seems this is either going to be a total mess of way too many parts wishing they were a real tool, or utter genius.

These tutorials look tasty, indeed:

Omniverse Machinima – Audio2Face in Machinima

Omniverse Machinima – Wrnch & Recording

But the specific tool aside, I’d also look at the big picture here – which is definitely about more than just NVIDIA and/or Omniverse, even if they’re a great case study in these trends:

  • AI assistance for animating from sound and motion capture (making this almost as much like puppetry as filmmaking)
  • Real-time photorealistic rendering
  • Networked toolchain with collaboration options

It’s likely NVIDIA aren’t the only ones looking at this. I’ll be curious to see whether someone does something similar on Apple Silicon, though it doesn’t have the same horsepower yet. (Notably, the person evangelizing this for NVIDIA used to do something similar for developers at Arm.)

But this is all a new realm, because if you think about what we had before, it’s more like:

  • Manual animation
  • Rendering
  • Sending renders to your collaborators, who render again, so you have to manually re-adjust th…

Gah. Game engines offered some real-time tools around that, but often the only “collaboration” involved hopping into a game together – fun, and still relevant here, but a far cry from creative teams animating together inside the backend. This also points to network/collaboration setups you might not have thought of – the “team” might be your phone and a couple of friends with laptops, with the phone sending data over your Wi-Fi to your main machine.

Fun to watch. Now we just need those GPUs. And then – it could be the floodgates opening.

NVIDIA Omniverse Machinima Now Available [NVIDIA Developer Blog]

https://www.nvidia.com/en-us/omniverse/apps/machinima/

They keep enhancing NVIDIA Broadcast, too….