Jurassic mocap, and more

By Ian Failes

With Jurassic Park, ILM pioneered a wave of fleshy, photoreal digital creatures on screen. By the time 2015’s Jurassic World was released, the visual effects studio had mastered this kind of creature work, and the film continued to advance areas such as flesh simulation and texturing. But it also brought with it other tools to help the filmmakers and VFX artists imagine their CGI dinosaurs.

The first was Cineview, an iPad app that essentially enabled an on-set augmented reality view of the dinos. And the second was dinosaur motion capture. Although it had been tried on previous giant-lizard films, this time ILM used mocap to help imagine the performances of the raptors and the villainous Indominus Rex. vfxblog spoke to Jurassic World’s visual effects supervisor Tim Alexander and animation supervisor Glen McIntosh about those tech advancements.

vfxblog: What led to the development of the Cineview app for Jurassic World? What did it enable you to do on set?

Tim Alexander (visual effects supervisor): We always have the problem on set of visualising large objects that will be added in post. I first realised this issue when I worked on Lost World as a compositor and we were extending plates so that we could, say, tilt up to fit a dinosaur into a shot. Often we use such ‘advanced’ technologies as large cardboard cutouts and poles with tennis balls on them to indicate where these large characters will be.

With ILM Cineview, we wanted to create a fast, small-footprint way to more accurately visualise these large characters or even set extensions. Cineview gave us a way to discuss framing and shot choices on set in a more collaborative way, often between the director, DP and visual effects team. Typically when we had a dinosaur shot, I would stand near camera with the same lensing set in Cineview and the dinosaur in place as a quick check that our framing was going to work. Most often there was no discussion, as our previs or storyboarding was accurate to what we were planning on shooting, but there were definite moments where the Cineview preview helped us to redesign or rethink a shot that would have otherwise been difficult or very costly to change in post.

vfxblog: What are some of the developments made with Cineview since Jurassic World?

Tim Alexander: Since its inception we have upgraded Cineview in numerous ways. Probably our biggest addition originally was support for translational tracking, which we did via a hardware add-on called the Structure Sensor; now similar tracking can be done with Apple’s ARKit. By adding positional tracking we can now walk around a character, test a camera move, or even walk around a virtual set with Cineview acting as a virtual camera.
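To make the virtual camera idea concrete, here is a minimal sketch (not ILM’s code; Cineview’s internals aren’t public) of how a positionally tracked device pose can drive the framing of a fixed virtual asset. It assumes the tracker reports a camera-to-world rotation and position each frame, as AR frameworks typically do.

```python
import numpy as np

def view_matrix(cam_rotation, cam_position):
    """Build a 4x4 world-to-camera (view) matrix from a tracked device pose.

    cam_rotation: 3x3 camera-to-world rotation matrix from the tracker
    cam_position: camera position in world space (3-vector)
    """
    view = np.eye(4)
    view[:3, :3] = cam_rotation.T                 # orthonormal inverse
    view[:3, 3] = -cam_rotation.T @ cam_position  # inverse translation
    return view

# A dinosaur asset anchored five metres in front of the set origin.
asset_world = np.array([0.0, 0.0, -5.0, 1.0])

# Each frame the tracker reports a new pose; the asset stays fixed in
# world space, so walking around it changes only its camera-space
# position (and hence its on-screen framing).
pose_r = np.eye(3)                    # placeholder orientation
pose_t = np.array([1.0, 1.6, 0.0])    # operator 1m to the side, at eye height
asset_in_camera = view_matrix(pose_r, pose_t) @ asset_world
print(asset_in_camera[:3])            # asset position relative to the camera
```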

Inside Cineview. Photograph taken at the 2015 VIEW Conference.

We also added an animation null that can be attached to an asset in Cineview and whose position is driven by a transmitter on the set. With this addition, we can have multiple Cineview sessions all seeing the same object in the same place in the set (kind of like a multi-player setup), and we can also do basic animation of an asset by having the transmitter move around the set in the physical world. One other significant addition was a matting mode where you can quickly draw a holdout on the screen for objects that should be in front of the rendered content.
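The multi-session null is essentially a tiny state-sync problem. As an illustration only (the wire format, names and port below are invented for this sketch, not Cineview’s actual protocol), a transmitter could broadcast the null’s position on the local network while every viewing session applies it to the same shared asset:

```python
import json
import socket

NULL_PORT = 47000  # arbitrary port chosen for this sketch

def broadcast_null_position(position):
    """Transmitter side: send the tracked position to all sessions on the LAN."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({"null": "shared_dino", "pos": list(position)})
    sock.sendto(payload.encode(), ("<broadcast>", NULL_PORT))
    sock.close()

def listen_for_null():
    """Viewer side: receive one update and return the null's new position."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", NULL_PORT))
    data, _addr = sock.recvfrom(1024)
    sock.close()
    msg = json.loads(data)
    return msg["null"], msg["pos"]  # attach this position to the local asset
```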

vfxblog: For the motion capture done for the raptors and even the Indominus Rex, what was involved in just being able to re-target a human performance to a dinosaur?

Glen McIntosh (animation supervisor): The re-target from the human to the dinosaur was essential for the movements to feel ‘non-human’. Since the body of a dinosaur (with the legs underneath, the head forward and the tail stretched back) operates like a teeter-totter, a default dinosaur pose was created for each specific biped (two-legged) dinosaur. Our dino performers therefore had the freedom to perform as athletically and lithely as possible.
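The interview doesn’t detail ILM’s solver, but a common way to express this kind of default-pose retargeting is to measure each human joint’s rotation as a delta from the performer’s own rest pose and layer that delta onto the dinosaur’s default pose. A hedged sketch using rotation matrices:

```python
import numpy as np

def retarget_joint(dino_default, human_rest, human_current):
    """Map one human joint rotation onto the matching dinosaur joint.

    All arguments are 3x3 rotation matrices. The performer's motion is
    measured as a delta from their own rest pose, then layered on top of
    the dinosaur's default pose, so a performer standing upright still
    yields a dinosaur with bent legs, arched back and forward-held head.
    """
    delta = human_current @ human_rest.T  # performer's offset from rest
    return delta @ dino_default           # apply the offset to the dino pose

def retarget_frame(dino_defaults, human_rests, human_frame):
    """Retarget every mapped joint for one frame of capture (dicts of matrices)."""
    return {joint: retarget_joint(dino_defaults[joint],
                                  human_rests[joint],
                                  human_frame[joint])
            for joint in dino_defaults}
```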

The dinosaurs themselves were based on the movements of hawks, herons, ravens, crocodiles and the stalking movements of large predators such as lions and tigers. Ostriches are the fastest living bipeds, so they worked perfectly as reference for our raptors. Motion capture works best for ambient movement and for finding moments when the dinosaurs had to look like predators that were thinking. To that end, hundreds of hours were spent studying the head movements and timing of hunting birds such as owls and falcons.

The fidelity of data from the 44 infrared cameras was such that the slightest movements were registered and mapped onto the dinosaur models in real time. Since animals such as birds, dogs and cats walk on their toes, it was impossible for the actors to do this AND create an athletic performance that felt natural. Initial attempts resulted in raptors with shaking legs, since the performers were trying to act while on their toes. So the decision was made to let the actors perform and get the basic choreography for each dinosaur, with the animators then finessing the ankle and claw animation later.

Mocap rigs for the raptors. Photograph taken at the 2015 VIEW Conference.

But this way the animators were always working from a foundation of a performance that could be as broad or nuanced as the scene called for. More importantly, it allowed a degree of improvisation and experimentation with creatures that would usually demand carefully planned and choreographed key-frame animation. This allowed for moments of spontaneity that the animators could either build upon or take inspiration from.

Once a default or ‘classic’ pose was created for the raptors, the Indominus Rex or any of the biped dinosaurs, performers would act based on how smart that dinosaur was, how predatory, but also how large. For the larger dinosaurs like the T-Rex, the data was ultimately slowed down to emphasise the bulk of the animal; it always felt like acting if the performer tried to ‘move big and slow’. Typically, the data could be slowed down by 15-20% to feel like an animal weighing several tons.

Anything slower and the movements tended to feel like slow motion, or like someone acting underwater. This was only necessary for the larger animals. As the raptors are very large but roughly the same size as a Bengal tiger, the processed movement would typically work right away. Some shots that involved complex choreography required weeks of additional key-frame embellishment, but often shots would finish in one day because of the successful translation of performance to the model.
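That 15-20% figure amounts to a simple retime of the captured curves. As a sketch only (assuming evenly sampled channels; a production pipeline would retime full animation curves per channel):

```python
import numpy as np

def slow_down(samples, rate_hz, slowdown=0.15):
    """Stretch a sampled mocap channel in time to add apparent weight.

    samples:  1D array of values captured at rate_hz
    slowdown: 0.15-0.20 stretches the take by 15-20%, the range cited
              in the interview for multi-ton animals like the T-Rex.
    """
    n = len(samples)
    duration = (n - 1) / rate_hz
    # Resample at the original rate over the stretched duration; output
    # time t reads the input curve at t / (1 + slowdown).
    old_t = np.linspace(0.0, duration, n)
    new_t = np.arange(0.0, duration * (1.0 + slowdown), 1.0 / rate_hz)
    return np.interp(new_t / (1.0 + slowdown), old_t, samples)
```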

Much has been made of how a human could act like a dinosaur, but ILM’s re-targeting technology specifically addressed that. The default pose would take into account the bend in the legs, the arch in the back and the position of the arms. What was required were performers extremely skilled not only in mimicry, but in actual acting that brought the dinosaurs to life. The other half of the equation was a skilled team that knew how to process the data. A great deal of trial and error and experimentation took place at the beginning of Jurassic World. The re-targeting of the data arose from the complications of trying to record a human as a dinosaur, as well as recording for extended takes. The ultimate goal is for the director to have the opportunity to direct their digital creations the exact same way they would direct their actors.

The motion capture performers calibrate themselves. Photograph taken at the 2015 VIEW Conference.

vfxblog: What was the benefit of doing some mocap this way in terms of getting ideas down or shots done?

Tim Alexander: The real benefit we saw from using mocap was time savings and consistency in character performance. Glen McIntosh could block in a whole sequence of shots in a matter of hours (the inclusion of background plates and matchmoved cameras in these realtime sessions made the decisions being made on the mocap stage even more significant to the final shots). Obviously the final touches and perfecting of the performance fall to the animators and take a significant amount of time, but being able to block in a whole sequence in a very short amount of time meant that we could explore multiple options for action without infringing on the animation time needed to really plus out the performances of the raptors. We were making better decisions early in the animation process because we had the luxury of exploring performances.

Raptors calibrate in style. Photograph taken at the 2015 VIEW Conference.

The second benefit was consistency in character. Since we had multiple facilities doing raptor shots, it was important for us to find a way to make sure a character like Blue always felt like Blue. Getting the intention of the performance from the motion capture actor who was cast as Blue, for example, and then distributing that performance on a per-shot basis to the animators working on the shots, we believe, made the performance of each dinosaur more unique and cohesive throughout the film.

More from vfxblog’s ‘Jurassic Week’:
Viewpaint: ILM’s secret weapon on Jurassic Park
The oral history of the Dinosaur Input Device or: how to survive the near death of stop-motion
The surprising game-changing VFX of Jurassic Park III
