I’ve learned a good bit this week about tools and techniques for VR animation. The first technical hurdle I solved was the large image size/high frame rate issue with the rendered sequences that I mentioned in my previous post. A simple (and free!) solution was to use Blender to comp the image sequences and render out MP4s. Not being a regular Blender user, I was surprised at how capable and flexible it is for this, and it’s really simplified things for me. There’s no more need to render uncompressed AVIs from AE and compress them in Handbrake; Blender handles exporting to MP4 beautifully.
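For the curious, the whole Blender comp can even be rendered headless from the command line once the MP4 output settings are saved in the .blend. A minimal sketch, with hypothetical file names:

```shell
# Render the saved comp in the background (-b), writing the whole animation
# (-a) to the output path (-o; the // prefix is relative to the .blend file).
# Assumes the FFmpeg video / H.264 output settings are already saved in the
# .blend's output properties; file names here are hypothetical.
blender -b island_comp.blend -o //renders/island_comp_ -a
```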
The other thing I’ve been exploring this week is stereoscopic VR. A mono 360 video is nice... but once you add depth with a stereo version it’s a whole lot more visually interesting and immersive. That raises one issue though – finding a time-effective way of rendering in stereo. At the beginning of the week I started testing the demo version of Redshift for Maya, which has mono and stereo spherical cameras, and was blown away by how fast it is on my machine, as it uses the graphics card (currently a GTX 1080) rather than the CPU to do the heavy lifting in render calculations. After a few days of testing with watermarked images I took the plunge and bought the plugin. Last night I rendered a 400-frame animation test overnight at 2880×2880, and it finished in just over 8 hours with very acceptable levels of noise. The attached image shows one side of the stereo pair, rendered in spherical format, which is compatible with Facebook 360 photos.
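For planning these overnight renders it helps to turn the numbers above into a per-frame budget. A quick sketch (the helper names are mine, the figures are from the test above):

```python
# Back-of-envelope render budgeting: 400 frames in roughly 8 hours
# works out to about 72 seconds per frame at 2880x2880.

def seconds_per_frame(total_hours, frames):
    """Average render time per frame, in seconds."""
    return total_hours * 3600 / frames

def frames_per_night(hours_available, sec_per_frame):
    """How many frames fit in an overnight render window."""
    return int(hours_available * 3600 // sec_per_frame)

per_frame = seconds_per_frame(8, 400)
print(per_frame)                        # 72.0 seconds per frame
print(frames_per_night(10, per_frame))  # 500 frames in a 10-hour window
```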
The great thing with Redshift is that you can turn the samples right down and get a quick-and-dirty preview to check animation before committing to the overnight render. I got frames down to about 3 seconds each for these tests, which is even faster than what I was getting with Playblast VR. I check a lot of these in the Gear VR to see how scale, distance and other factors are working in the animations.
For those interested in sharing their 360 renders on Facebook, there are a couple of tricks to getting Facebook to understand that it’s a 360 photo: one is to make sure the image has a 2:1 aspect ratio, and the other is to embed camera metadata in the image file that makes Facebook think it’s a photo taken by a 360 camera. For that I used ExifTool. Once you’ve got this working, anyone on Facebook can look around the image with their smartphone acting as a virtual window into it. The Facebook app on Android also directly supports the Gear VR, with a ‘view in VR’ option appearing on the 360 image; tap it and it asks you to insert the phone into the Gear, and the image is right there to look around in. Very cool. I’ve yet to see a Facebook friend try this option, but with Gear VR sales over a million by now I believe it’s only a matter of time.
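The two tricks can be wrapped in a small script. A sketch, assuming ExifTool is on the PATH – the helper names are hypothetical, and spoofing the Make/Model of a known 360 camera (the Ricoh Theta is one commonly used for this) is one way to do the metadata part; it may not be exactly the tag set I used:

```python
import subprocess

def is_2to1(width, height):
    """Facebook expects an equirectangular image with a 2:1 aspect ratio."""
    return width == 2 * height

def facebook360_cmd(image_path):
    """Build an ExifTool command that tags the render as a 360-camera photo.
    Spoofing a known 360 camera's Make/Model is one common trick (assumption:
    this particular tag set, not necessarily the exact one from the post)."""
    return [
        "exiftool",
        "-Make=RICOH",
        "-Model=RICOH THETA S",
        "-overwrite_original",
        image_path,
    ]

def tag_image(image_path):
    """Run ExifTool on the rendered image (requires ExifTool installed)."""
    subprocess.run(facebook360_cmd(image_path), check=True)

print(is_2to1(2880, 1440))               # True - ready for Facebook
print(facebook360_cmd("island_render.jpg"))
```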
So now I’m happy that I have a practical one-man pipeline set up for animation testing. Currently I’m working on the look of the environment and populating it with interesting things to keep the viewer looking around in this virtual world.
I tried a friend’s Gear VR a couple of months back and was immediately excited by the possibilities for animation. Being ‘inside’ an animated world is something very unique to the VR experience. I don’t want to just enjoy this as a passive thing though; I want to create my own content. I’ve wondered, though: is this even possible without a large team of people? Only one way to find out, I guess...
I’m going to document my tests here and talk about the tools, successes, failures and what I learn at each stage. To begin with, I tried to figure out which tools would be accessible and comfortable for me to work with. After an early test of rendering for VR with a plugin for C4D using Cinema’s native render engine, it became obvious that this approach was not going to fly for animation: with the amount of iteration required to get good results, it was far too slow. So I started looking around and found the Playblast VR plugin for Maya from Andrew Hazelden. I bought it and found that it sped things up considerably, as it allows you to get decent-looking results using Maya’s Viewport 2.0 hardware renderer. Essentially the plugin renders the scene from 6 views and then stitches them together in a variety of formats. The one I’ve mainly been using for the moment is the latlong type, as this is readily compatible with YouTube’s and Facebook’s 360 video. Once I started testing, I found that some of my go-to features in VP2.0, like screen-space ambient occlusion, don’t work in a 360 render. By nature the effect works in screen space only, which is fine for a 2D playblast but not for a 360-degree stitched image.
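The 6-view stitch is essentially a cubemap-to-latlong projection: every pixel of the latlong image corresponds to a direction on the sphere, and that direction falls on exactly one of the six rendered views. A minimal sketch of the core mapping (my own illustration, not Playblast VR’s actual code):

```python
import math

def latlong_to_direction(u, v):
    """Map u, v in [0,1] across the latlong image to a unit direction."""
    lon = (u - 0.5) * 2.0 * math.pi   # -pi..pi, left to right
    lat = (0.5 - v) * math.pi         # +pi/2 at the top, -pi/2 at the bottom
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return x, y, z

def dominant_face(x, y, z):
    """Pick which of the 6 cube-face renders this direction samples from."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "right" if x > 0 else "left"
    if ay >= ax and ay >= az:
        return "top" if y > 0 else "bottom"
    return "front" if z > 0 else "back"

# The centre of the latlong image looks straight ahead:
print(dominant_face(*latlong_to_direction(0.5, 0.5)))  # front
```

This is also why screen-space effects break: each of the six renders only knows its own 90-degree slice of "screen", so anything computed per-view is inconsistent at the seams.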
Viewport 2.0 in Maya is pretty good in general though, certainly good enough for previs and testing of 360 setups, and my next little test was to set up a simple scene to use for animation testing. To that end I modeled a low-poly island and palm tree in ZBrush, set them up in Maya at the center of the world, and used Maya’s ocean shader to create some simple water. That gave me the look in the attached screenshot. It certainly doesn’t look realistic, but my tastes in animation lean towards the cartoony, so that’s what I’m more interested in achieving in VR in any case. I’m also trying to keep things as simple as possible so that renders are fast and I can iterate, as mentioned above.
To give some life to the scene I animated a deformer on the palm tree leaves to get them moving and added a couple of looping character animations with the characters surfing across the ocean. Lastly I created a background sound effect using some of my favorite electric guitar effects. I put that alongside a 400-frame animation in After Effects, rendered out an MP4 I could load on my phone (a Samsung S7), and tested it in the Gear VR.
One of the big things you run into is dealing with scale and distance from the camera, as with the wide-angle view of the VR camera things can look very strange. Even figuring out how high the camera should be from the ground took a good bit of testing. What can look fine in a still image on a PC monitor can feel very different when viewed through the Gear VR strapped to your head. To help figure this out I did a lot of still-frame renders with Playblast VR and checked them on the Gear.
The other thing I noticed was that the frame rate I usually animate at (24 fps) felt too blocky in VR. I’m not sure exactly why, but once you can move your head and look around the scene, the frame rate really seems to affect how substantial it feels. So I went back to Maya and set the project to 48 fps instead, leaving Maya to interpolate the existing animation, and the results looked a lot better, if a little over-smoothed. I’ll probably have a new learning curve in dealing with high-frame-rate animation.
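What "leaving Maya to interpolate" amounts to, for a single animation channel, can be sketched in a few lines: the original keys keep their time in seconds, and the new in-between frames are filled by interpolation. This sketch uses simple linear interpolation; Maya evaluates its actual animation curves with tangents, which is where the over-smoothed feel comes from. The function name is my own:

```python
def resample_linear(keys, old_fps, new_fps):
    """keys: {frame: value} at old_fps -> {frame: value} at new_fps,
    with in-between frames filled by linear interpolation."""
    frames = sorted(keys)
    scale = new_fps / old_fps
    out = {}
    for f in range(int(frames[0] * scale), int(frames[-1] * scale) + 1):
        t = f / scale                              # time in old-fps frames
        lo = max(k for k in frames if k <= t)      # surrounding original keys
        hi = min(k for k in frames if k >= t)
        if lo == hi:
            out[f] = keys[lo]
        else:
            w = (t - lo) / (hi - lo)
            out[f] = keys[lo] * (1 - w) + keys[hi] * w
    return out

# A pair of 24 fps keys becomes twice as many frames at 48 fps:
curve = resample_linear({0: 0.0, 12: 10.0}, 24, 48)
print(curve[12])   # 5.0  - halfway in time between the original keys
print(curve[24])   # 10.0 - the original end key, now at frame 24
```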
I ran into a few technical issues with the 48 fps change; most notably, After Effects didn’t want to export to MP4 at that rate, so I had to export an uncompressed AVI (huge file!) and then convert it to MP4 with the free tool Handbrake. So hard drive space is clearly a consideration for this, especially when rendering at the Gear’s native resolution of 2880×1440. I remember when rendering at 1080p seemed extreme!
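It’s easy to see why the uncompressed intermediate gets huge. A back-of-envelope calculation, assuming 24-bit RGB frames and ignoring container overhead:

```python
def uncompressed_gb(width, height, fps, seconds, bytes_per_pixel=3):
    """Approximate size of raw, uncompressed video in gigabytes."""
    frames = fps * seconds
    return width * height * bytes_per_pixel * frames / 1e9

# A 400-frame animation at 48 fps runs 400/48, about 8.3 seconds:
size = uncompressed_gb(2880, 1440, 48, 400 / 48)
print(round(size, 1))  # roughly 5 GB for just a few seconds of footage
```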
Since this first test I’ve also started looking at more tools and options... but that’s for the next post.