I’m pretty sure you can’t pack much more into this crazy weekend of visual effects in Vancouver


Did you see Spider-Man: Into the Spider-Verse and want to know how it was made?

Did you see Mary Poppins Returns and wonder how they combined 2D animation and live action?

Did you see Roma and not realise there were any visual effects in it at all?

All those projects – and a lot more – are being presented at SPARK FX in Vancouver at Emily Carr University on 8, 9 and 10 February. It’s an insane event aimed at people at every stage of their effects careers, and also at those who are simply VFX or film enthusiasts.

Visual effects supervisor Rob Legato.

The big ticket items include: a keynote from The Jungle Book and The Lion King visual effects supervisor Rob Legato, a presentation by Imageworks on Into the Spider-Verse, a look behind the scenes from Framestore at Mary Poppins Returns, insights into the VFX work by MPC for Alfonso Cuarón’s black-and-white Netflix film, Roma, and a preview from Weta Digital of their photoreal work for Alita: Battle Angel – which they’re talking about before the film has even been released.

Plus, SPARK FX has more than just presentations on the latest films. There’s also a Diversity & Inclusion Summit where real tools of change for the industry will be discussed. And there’s a major Career Fair with the top studios wanting to talk to you about your next VFX or animation job.

Let’s break it down.

Career Fair

The Career Fair will be packed with employers.

If you’re looking to get a job in visual effects, or see what else is around, here’s who will be there to talk to at SPARK FX:

Industrial Light & Magic (ILM)
The Focus (MPC Film)
Sony Pictures Imageworks
DNEG
Method Studios
Scanline VFX
Framestore
Animal Logic
Cinesite
Image Engine
Electronic Arts
FuseFX
Rodeo FX
Zoic Studios
CVD VFX
DHX Media
Mainframe Studios
Yeti Farm Creative

An insane list of companies. The Career Fair is on Saturday, February 9th, starting at 9.30am in The Hangar at the Centre for Digital Media, 577 Great Northern Way – right next to where SPARK FX is being held. More info here.

Diversity & Inclusion Summit

The Diversity & Inclusion Summit is all day on Sunday 10 February.

If you’ve been to any previous SPARK events, you’ll know that they’ve been leading the charge with sessions on getting more women into the VFX and animation industries, and improving diversity at studios as well.

This year at SPARK, VES Vancouver and Spark CG Society are co-presenting the Diversity & Inclusion Summit, and the sessions planned are aimed not just at talking about the problem, but at actually delivering solutions. Leaders from inside and outside the industry will be on panels, and audience participation and discussion will be a major feature.

The Summit is free to attend, and is happening on Sunday, February 10th. More info here.

The Conference

Into the Spider-Verse is just one of many talks you can see.

The big thing about the VFX presentations at SPARK FX this year is that so many of them are the first chance you’ll have to see anything about the VFX anywhere. For example:

  • Alita: Battle Angel – Weta Digital is giving a talk about the creation of their CG main character, before the film is even out at cinemas!
  • Spider-Man: Into the Spider-Verse – this Oscar-nominated film has been stunning audiences with its original take on 3D and 2D animation aesthetics, and Imageworks will be going behind the scenes on the new tools and techniques they developed for it.
  • Mary Poppins Returns – not many people have seen how the 2D and live-action scenes were combined, but they will here thanks to Framestore.
  • Aquaman – ILM is going to break down their underwater work for this crazy comic book film.
  • Roma – Yep, there’s a whole bunch of VFX work in Alfonso Cuarón’s black-and-white Netflix film, and MPC will be revealing how they helped tell this important story.

In addition to these ‘firsts’, there are a number of other major moments happening at SPARK, including:

  • A keynote by The Jungle Book and The Lion King VFX supervisor Rob Legato. He’s calling the keynote ‘The Analog Art of Emotional Storytelling’, and it’s about his experience of bringing traditional filmmaking and storytelling tools into the digital world.
  • A whole lot more VFX breakdown talks: Welcome to Marwen with Method Studios, First Man with DNEG, Mortal Engines with Weta Digital, Fantastic Beasts: The Crimes of Grindelwald with Image Engine and Venom with DNEG.
  • Retro 25th anniversary sessions on Forrest Gump and Stargate with visual effects supervisors who worked on those films (full disclosure: I’m moderating these!)
I’m hosting some old-school chats on Forrest Gump, and also Stargate, with actual VFX supervisors from the films.

In amongst these sessions are a whole bunch of free workshops aimed at going deeper into some of the key VFX tools around, with companies including The Foundry and Ziva Dynamics.

These VFX sessions and workshops start on Friday night, February 8 with Into the Spider-Verse and then run on the Saturday and Sunday all day. More info, pricing (which includes full weekend and session passes) and the full programme here.

Come on by

I’ve tried to give you a full round-up of what’s happening at SPARKFX, but it’s hard to capture in a single post how cool it is to just be at an event with a whole lot of like-minded people. You never know who you might meet and what you might learn. Hope to see you there, and please come say hi to me and the SPARKFX team.

Go to http://sparkfx.ca/ to start checking out all the SPARKFX events.


New stunning ‘First Man’ VFX featurette


Just in time for the VFX bake-off, Universal has released a First Man VFX reel featuring the practical, miniature and digital effects work for the film. Check it out below:

If you’d like to go further into the work, I talked earlier to visual effects supervisor Paul Lambert from DNEG and miniatures effects supervisor Ian Hunter for both VFX Voice and Thrillist:

First Man: an effects odyssey

How ‘First Man’ Used Miniature Model Rockets to Recreate the Vastness of Space

5 things that rocked at SIGGRAPH Asia Tokyo


SIGGRAPH Asia Tokyo 2018 has just wrapped up, and it was a fantastic week. The attendance was up near 10,000 and you could feel the buzz at the conference centre. Here’s my run-down of 5 of the coolest things I was able to see there.

1. Behind the scenes of Pixar’s Bao

Bao director Domee Shi.

If you haven’t seen this Pixar short film yet, make sure you do. What was magical about this presentation at SIGGRAPH Asia Tokyo, led by director Domee Shi and several of her crew, was that it revealed a lot about the inspiration and the artistry and technology behind the short. It can just be so hard to capture the amount of work that goes into any animated project, and this presentation had everything – story points, design, cinematography, lighting, effects. The room was jam-packed, too.

2. Mixed reality Pac-man

The mixed reality view for Pac-in-Time.

There’s always something a little bit whacky at SIGGRAPH Asia. Bandai Namco Studios fitted participants out with mixed reality HoloLens headsets and sat them on Honda Uni-Cubs to produce a real-life version of Pac-Man (it was called Pac-in-town, I think). It was a lot of fun.

3. From Gollum to Thanos – Weta Digital’s CG characters

Weta Digital visual effects supervisor Matt Aitken.

Over the years I’ve been able to cover so much of Weta Digital’s work in crafting digital characters. VFX supe Matt Aitken distilled all this work down into a fun history of the studio’s achievements in this area, all the way from The Frighteners, through Gollum in Lord of the Rings, Kong, the Apes films and Furious 7, and most recently Thanos. It was a fantastic talk and one that made you think about how important these characters are in film history.

4. Robots and love

Kaname Hayashi during our Q&A.

On the last day of the conference, I helped emcee keynote speaker Kaname Hayashi’s talk about GROOVE X’s forthcoming robot, Lovot. While he couldn’t show any pics of the robot, it was particularly interesting to hear about the idea of companionship and emotion that might be able to come from a machine. Audience questions were also fascinating – there was so much comparison to pets (seems obvious now, but I hadn’t thought of it that way).

5. Real-time Live!

VTubers during their Real-Time Live! presentation.

It’s brilliant that Real-Time Live! is now part of SIGGRAPH Asia. The truth is, there was something a little chaotic about this year’s presentations, but they were all still very watchable. I enjoyed, in particular, Pinscreen’s app, BanaCAST’s anime-like mocap character, and the VTubers and Mimic Productions virtual humans (both of these made use of IKINEMA’s tech for helping to realise CG characters live on screen).

I’m excited to say I’m part of the committee for SIGGRAPH Asia 2019 in Brisbane, Australia, and I would obviously encourage any reader to come down to Oz for the event!

‘The Prince of Egypt’: Henry LaBounta reflects on parting the Red Sea


These days, Henry LaBounta is Studio Art Director at EA Ghost Games. But before his career in games, LaBounta was at the forefront of effects simulation at Industrial Light & Magic, where he helped generate the tornadoes in Twister. Later he worked on The Prince of Egypt at DreamWorks Animation, in particular, on the parting of the Red Sea sequence. After DreamWorks, LaBounta moved to PDI as a VFX supe on films such as A.I. and Minority Report, before segueing into the games industry.

With the 20th anniversary of The Prince of Egypt approaching, vfxblog decided to ask LaBounta at the recent VIEW Conference in Turin what working on that Red Sea sequence was like back then. Hope you enjoy this new retro Q&A.

vfxblog: How did you come to be working on Prince of Egypt?

Henry LaBounta: Before I left ILM, I’d actually gone up to Skywalker Ranch. I met with George Lucas about working on the Star Wars movies that were about to start up. And then this came up as well – working at DreamWorks, on the first movie they were going to do, where I could part the Red Sea. I was like, ‘Oh my gosh.’ Those are two interesting opportunities, right? But I had never done any 2D animation work before. So I was really excited about the opportunity to work with DreamWorks on something completely different from what I had been doing at ILM. And some of my friends were like, ‘Are you crazy? You want to work on a Bible movie when you could’ve been working on Star Wars?’ I’ve done a lot of crazy things in my career, and I’ve never regretted a single one.

vfxblog: For the Red Sea sequence, since this was a (mostly) 2D animated film, how did you think that you were even going to do that in CG so that it still had a 2D look?

Henry LaBounta: It was tricky because I know back then the whole idea of using anything computer graphics generated in an animated film was something not really done on a big scale. For characters, for example, it’d only be a crowd character that was CG that was only ‘so’ big on the screen.

The challenge is, in general, it’s easy to get in there and start making something that looks like some big visual effects kind of thing, which suddenly looks nothing like the rest of the film. So we had to develop techniques to incorporate an animation style within the effect of parting the Red Sea. We had a lot of really talented people on the team. Doug Cooper was one of the people I was working with. He was a huge help, because he had been working on animated films for quite a while. And one of the tricks we used was just taking a 2D animation of a splash, and using that in a sprite, and instancing that. So every splash looked like an artist could’ve drawn them, and they had that little bit more of a 2D feel to them.
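
To make that sprite-instancing idea concrete, here’s a minimal sketch in Python/NumPy of stamping a hand-drawn splash cycle down as instanced sprites. It’s purely illustrative – the frame format, particle data and compositing below are assumptions for the example, not DreamWorks’ actual setup.

```python
# Minimal sketch (not the actual DreamWorks pipeline): instancing a hand-drawn
# 2D splash cycle as sprites, so every CG-driven splash reads like artist-drawn
# animation. Assumes splash_cycle is a list of RGBA float arrays in [0, 1].
import numpy as np

def over(dst, src_rgba, x, y):
    """Alpha-composite one sprite frame onto dst at (x, y), top-left anchored."""
    h, w = src_rgba.shape[:2]
    H, W = dst.shape[:2]
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + w, W), min(y + h, H)
    if x0 >= x1 or y0 >= y1:
        return
    src = src_rgba[y0 - y:y1 - y, x0 - x:x1 - x]
    a = src[..., 3:4]
    dst[y0:y1, x0:x1, :3] = src[..., :3] * a + dst[y0:y1, x0:x1, :3] * (1 - a)

def render_splashes(background, splash_cycle, particles, frame):
    """particles: list of dicts with 'x', 'y', 'birth' (frame the splash started)."""
    out = background.copy()
    for p in particles:
        age = frame - p['birth']
        if 0 <= age < len(splash_cycle):   # play the drawn cycle once per instance
            over(out, splash_cycle[age], p['x'], p['y'])
    return out
```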

vfxblog: Were you also looking to use Prisms, or even an early version of Houdini, to do the water simulations?

Henry LaBounta: When I got there, one thing that was interesting was DreamWorks was brand new. I mean, literally it was plugging in computers and setting up desks and stuff like that. Unlike ILM, which was completely set up with pipelines, workflows and equipment and staff. We were kind of building the team while we were making the film. And we didn’t know straight away what we were going to use, but as we looked at the task at hand, we looked at some different software. Some of the effects artists in DreamWorks at the time were using Alias, and they were doing a whole bunch of really nice things with Alias.

And I had been using Softimage primarily, and RenderMan at ILM. But we knew there would be some complex effects animation. And I wanted to try some procedural techniques. And Prisms was kind of the go-to thing at the time, but Houdini was brand new. So we were just on that cusp as Houdini was coming out. It may have even been Houdini 1.0 – it was just barely ready for production. SideFX was so fantastic in giving us support. Like, I could in the morning send them a note and say, ‘This thing isn’t working.’ And by the afternoon I had a patch that fixed that. They were just an extended part of the team in a way, they were absolutely committed to making it work, and getting Houdini to actually generate the RIBs and everything that we used to render in RenderMan.

vfxblog: Had you used Prisms at all or Houdini, before this?

Henry LaBounta: I had not. Not at all.

vfxblog: So what was it like learning that new software?

Henry LaBounta: For me, it was mind-blowing because it was like, this is software the way my brain thinks. I want to do this and I want to do that next, and I want to connect it to this. And I want to be able to change anything in the entire chain at any point. Houdini allowed me to take any kind of data and use it any way we wanted. We did some things that weren’t typical computer graphics ways to use that data, but it was really easy to make that work and plug it into a shader we’d written. It was so different than any other software. The closest thing to it was Dynamation, I think.

Henry LaBounta (centre) in Japan during a trip to promote The Prince of Egypt. He is with Michael Carter (left) and Richard Hamel (right) from SideFX. Image courtesy Kim Davidson.

vfxblog: This is Dynamation that was part of Wavefront?

Henry LaBounta: Yes. I’d done Twister at ILM, where we made that tornado with Dynamation. And it was something you could script and kind of procedurally control. And then Prisms and Houdini were like that on steroids. Like an entire package that’s based on those kinds of principles. Which are really common today, but back then it was pretty strict. You know, here’s a menu, here’s a drop down. This is the thing you want to do, you commit to that and it’s done.

vfxblog: Where did you get started on the parting sequence?

Henry LaBounta: DreamWorks had a great background painting department that would also do concepts for the film. They had already made some backgrounds for the parting of the Red Sea, and were working on some ideas of what this moment might look like. So our challenge was, how can we really bring that to life and animate it in an interesting way? We tried a lot of different things to get to this point. There were three directors on the film, all of whom were fantastic to work with. And we took them along the process and we showed them work in progress.

Normally the challenge would be, how can we make a fluid system work in a really physically correct way? That wasn’t the challenge here. The challenge here was more, how can we bring something of this scale to an animated film and not make it feel out of place? This wouldn’t have really been possible for the effects artists to draw at that scale and get that across. So shaders were a really big part of it for sure. And the work that Kathy Altieri, one of the art directors, had done, was super-inspiring. So by sticking with that colour palette and being inspired by the paintings that were done, and always comparing our work to that, we tried to stay true to the format of the film that way.

Worth checking out: ‘The Prince Of Egypt – A New Vision In Animation’. This art-of book includes many behind-the-scenes stills from the Exodus sequence.

vfxblog: When you were making it, what could the directors review on? Were you able to do playblasts? Or did you get a pretty final result pretty quickly?

Henry LaBounta: What we would do is we would pick a few hero shots. Looking back at it now, some of the hero shots we picked were some of the most difficult shots to start out with, maybe not the best idea. And we would try and get those working. This is months and months of reviews and iteration to get it to the point where everybody was happy with it. And then once we got that done, it was like, okay that’s a foundation for all other shots. And a lot of other shots in a way kind of fell out of that very quickly. And those didn’t require as many reviews.

vfxblog: What do you remember was the reaction when this film got released?

Henry LaBounta: Well, I think the team was really proud of what we had created. And we had a great team we put together for this. Over the years as I’ve talked to people and said, ‘Oh yeah, I worked on this movie,’ I’ve been surprised how many people have told me, ‘I love that movie. I’ve watched that so many times. It’s our go-to movie at the holidays.’ And it’s just heartwarming to hear that it had that impact on people all these years, that they got something out of it and really enjoyed the work we did.

Find out more about the VIEW Conference at http://viewconference.it

Compulsory viewing: the Computer Animation Festival at SIGGRAPH Asia Tokyo 2018

‘Reverie’ – SIGGRAPH Asia 2018 CAF Best Student Film Award

Here at vfxblog we’ve already previewed some of the VFX-related talks and dived into the Technical Papers process for SIGGRAPH Asia Tokyo this year. Now we look at the other conference highlight: the Computer Animation Festival.

The ‘CAF’ takes in the Animation and Electronic Theater, and the VR Theater, plus a selection of panels and talks about the latest in computer animation and visual effects. It’s definitely one of the best places to catch up with films from around the world.

vfxblog asked Computer Animation Festival Chair Shuzo John Shiota, who is also the President and CEO of Polygon Pictures Inc., to tell us more about how the CAF works and what to look forward to this year.

vfxblog: People are always dying to know: what is the difference between the Animation Theater and the Electronic Theater?

Shuzo: The Electronic Theater is a 100+ minute show comprised of the very best 2018 has to offer in terms of computer graphics storytelling. The types of story are quite varied, ranging from short animations, visual effects, game cinematics and music videos to scientific visualization.

The Animation Theater is comprised of works that didn’t quite make the Electronic Theater selection but are nevertheless worthy of merit, or works that have a longer running time which makes them hard to program within the ET. An interesting note is that SIGGRAPH North America no longer has the Animation Theater in its program, so it’s sort of like a lost dialect that only exists in Asia.

Also, don’t forget the VR Theater, debuting at SIGGRAPH Asia for the first time, which showcases the best VR storytelling pieces of the past year.

vfxblog: Can you talk about the submission and judging process for the Computer Animation Festival – how did you arrive at the participants and the winners?

Shuzo: We had about 400 submissions from all over the world. They were first reviewed by our online reviewers, comprised of industry veterans, who nominated selections to be sent to the final jury. On formulating the final jury, in addition to the deep knowledge of the art and industry that is expected of any CAF juror, my aim was to 1) bring in an Asian perspective (5 of the 7 jury members are Asian, and another is currently working in China), 2) create a female majority (4 out of 7 are female), and 3) create generational diversity (the jury ranges from members with decades of experience to a young artist in his 20’s who is also a multiple CAF Asia awardee).

The jury made its selection based on the following criteria: 1) craftsmanship, 2) relevance, 3) originality, and, most importantly, 4) does it move you?

‘L’oiseau qui danse’ – SIGGRAPH Asia 2018 CAF Best In Show Award

vfxblog: Do you feel like there were any particular trends in the submissions this year?

Shuzo: On watching the CAF trailer, I think you will find that the look and feel of the selected titles are truly diverse and eclectic. This underscores the fact that computer graphics as a medium of storytelling has truly matured, and is now capable of creating images in a myriad of styles.

vfxblog: What kinds of panels and talks related to the CAF are planned?

Shuzo: We have around 10 production sessions that will no doubt give the audience valuable insights on a wide range of digital production: from Hollywood blockbusters by the likes of Pixar, to distinct digital anime productions by local Japanese studios, to VR productions and 64K intro productions.

We are also planning to hear from the director of this year’s “Best in Show”, “L’oiseau qui danse”.

‘Vermin’ – SIGGRAPH Asia 2018 CAF Jury Special Award

vfxblog: Now that the winners have been announced, are you able to say which of the submissions also stood out for you?

Shuzo: I am very happy about the selections. I think we have a very good Electronic Theater, Animation Theater, and VR Theater. As Chair, I am not able to personally vote, but ultimately, all the pieces I was rooting for got chosen as the top picks!

You can register to attend SIGGRAPH Asia Tokyo 2018 at http://sa2018.siggraph.org/registration.

Ten things we learned about Framestore’s CG stuffed animals for ‘Christopher Robin’


Marc Forster’s Christopher Robin is easily one of the most delightful films of 2018, and also contains some of the finest fully-CG animated characters you’ll see this year. That work was led by Framestore, which had, of course, sharpened its expertise in integrating CG animated ‘stuffed toy’ characters with live action in the Paddington movies.

Christopher Robin’s production VFX supervisor was Chris Lawrence, and its production animation director was Michael Eames (both hail from Framestore). In addition to Framestore’s 727 shots for the film, Method Studios also came on board to deliver several scenes.

To find out more about how Christopher Robin’s characters came to life, vfxblog sat down with Framestore animation supervisor Arslan Elver in London. Elver shared details on early animation tests, the on-set stuffies used during filming, and some of the specific details infused into characters such as Pooh, Piglet and Tigger.

1. Framestore started animation tests before seeing any concept art

Arslan Elver: We started with Pooh, but at that point we hadn’t seen the designs, the concept art, or anything else yet. The very first test we did was a yellow Pooh Bear in a more classic Disney style, with our animation director Michael Eames also getting involved. I did animated tests of him trying to climb stairs, but he fails and tumbles down, and he looks at his tummy and then looks around at an empty honey pot.

Pooh turntable.

2. The director didn’t want elbows or knees

One thing our director, Marc Forster, immediately reacted to was that the character had elbows and knees, and he didn’t want them. I didn’t understand at first, but then he showed us some concept art from Michael Kutsche which he was very happy with. It was a teddy bear, it was Pooh Bear walking along holding the hand of Christopher Robin and looking around, but you could tell there was nothing bending or anthropomorphic about it. We went back to the drawing board, and we did new tests to reflect that.

Pooh model.

3. Some inspiration for the animation came from a philosophy book

Marc Forster talked to us about a book called The Tao of Pooh. It’s about Taoist philosophy, told using the Winnie-the-Pooh characters. In the book, it talks about the idea of an uncarved block. The book says Pooh is an uncarved block. He’s not carved as a shape or sculpture. He’s empty. He’s a clean sheet. He doesn’t have any prejudice. He doesn’t have any expectations. He’s just who he is. So we started to dig into those ideas and think about the teddy bear aspect of it.

Character line-up.

4. Stuffed toys changed the way the characters would be animated

What happened is, on the set, they made these characters as stuffed toys. They had fully furred ones and then just grey ones with no fur. The stuffies were moved around by actors on set, and then the camera person shot the scene again via muscle memory for a clean plate. The stuffed toys were so interesting to see, and Disney and the filmmakers fell in love with them, so they asked us to match our assets to that. The stuffies were so cute, you could put them on a chair and just by rotating the head a little bit you could immediately get some emotion out of it. So that was the kind of behaviour we were trying to find. We’d think, during animation, ‘What kind of head tilt will give that same feeling?’

5. Some interesting animation moments came from those on-set stuffies

With Piglet, say, I immediately picked up on the ears from the stuffies. The fabric around the ears is looser, so you get these very nice ear movements on the head turns. And then with Tigger, who is so long, I was holding the stuffie by his head, and because he’s heavy, the rest of his body was hanging down with these very floppy arms. Mike Eames saw it and said, ‘That’s interesting. I wanna play with that idea a little bit.’

Piglet, voiced by Nick Mohammed.

6. Framestore’s animators played with the stuffed toys, too

When we got all these toys into Framestore, I called all the animators in and I said to them, ‘Just play with them,’ and we were recording. It’s some of the most stupid video footage ever. If you see it – these 35-year-old men playing with plush toys – it’s ridiculous.

Director Marc Forster on set.

7. Pooh doesn’t really blink

Because of this Zen thing about Pooh, the director didn’t want him to blink. Even the eyebrow movements he wanted to be very minimal. The mouth movements as well – he didn’t want them to be very complex. It was quite tricky, because to be able to sell his talking there’s a bit of jaw movement for sure, but if it’s just that, it looks very weird. If you start to put in a lot of movement, it looks very stretchy very quickly, so we had to think, how can we keep it alive? How could we move the corners of the mouth and make some shapes that at least suggest that sound is generated, but have enough fall-off on the corners of the mouth so it doesn’t feel like it’s stretching? It’s a very difficult thing to do without stretching.

Framestore concept art.

8. Tigger’s tail: should he jump on it a lot, or not?

I looked at Milt Kahl’s beautiful animation for Tigger when I was doing animation tests, and actually I did something based on that where he was running on four legs and I was thinking, ‘No, Marc’s not gonna go for this.’ But he responded really well. He liked that. He liked the energy of Tigger, but when I made the test with him on his tail, jumping on his tail, and then hopping down and clapping, Marc said, ‘Yeah, it’s very nice, but maybe we’ll only make him do it once or twice in the film.’ But it grew on him. During the production, I found myself getting notes like, ‘Let’s put him on the tail again.’

Tigger, voiced by Jim Cummings.

9. Eeyore went from sad to…still pretty sad

Eeyore was interesting because in the very first test he was walking and then just sitting on his bum, depressed. We started to do that but people didn’t really respond very well because he was a bit dead, so the note we got back was that we needed to keep him alive but still make him feel very sad. So we kept that same posture, but we raised the head up and rotated it up more. I think Marc wanted to see his mouth more because he has such a long muzzle. The other thing was his eyes – because of the fur, the toy is so sweet and cute, but as soon as you do a little bit of this pose, the fur covers the eyes so much that in the render they look almost like a thin black line, so there was a bit of back and forth with that.

Eeyore, voiced by Brad Garrett.

10. Getting honey and food onto Pooh was tough

They didn’t do anything on the set but, later on, they did carry out some shoots for the honey, for how it looks on the face with the fur. So if Pooh chucks his face into the pot, we had to work out what kind of lining of honey comes out and how much of it stays on him. They used more of the grey stuffies without any fur for all the dirty stuff. There was one beautiful scene where there’s this big cake and they all jump on it, but they didn’t shoot anything for it. What we did was get one of our animators to put his head into it and chomp on it and see how much remained on his face and how the cake breaks up, to help the effects guys. So we sacrificed one of our animators to do that for the effects guys, but at least they got to eat cake.

Pooh digs into some honey.

Christopher Robin is now available on Digital, DVD and Blu-ray.

Tech papers: the secret to SIGGRAPH Asia success

Image from ‘3D Hair Synthesis Using Volumetric Variational Autoencoders’, ACM Transactions on Graphics (Proc. SIGGRAPH Asia), December 2018, by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo and Hao Li.

The Technical Papers section of SIGGRAPH Asia 2018 in Tokyo is shaping up, as always, to be a key part of the conference. But how do authors get their tech papers into a SIGGRAPH or SIGGRAPH Asia conference? And what happens once they do?

To find out, vfxblog asked Hao Li, who is a co-author on two papers accepted at SIGGRAPH Asia this year, how it all works.

Regular vfxblog readers will certainly have heard of Li and his research into digital humans. He is CEO & Co-Founder of Pinscreen, Inc. (which is developing ‘instant’ 3D avatars), Assistant Professor of Computer Science at University of Southern California, and Director of the Vision and Graphics Lab at USC Institute for Creative Technologies.

More about Pinscreen later, but first, the technical papers. This year, Hao is a co-author on two papers accepted to SIGGRAPH Asia:

1. PAGAN: REAL-TIME AVATARS USING DYNAMIC TEXTURES
Koki Nagano, Jaewoo Seo, Jun Xing, Lingyu Wei, Zimo Li, Shunsuke Saito, Aviral Agarwal, Jens Fursund, Hao Li (some more information and links regarding the paper on Koki Nagano’s website)

2. 3D HAIR SYNTHESIS USING VOLUMETRIC VARIATIONAL AUTOENCODERS 
Shunsuke Saito, Liwen Hu, Chongyang Ma, Hikaru Ibayashi, Linjie Luo, Hao Li (information on the paper at Linjie Luo’s website)

These papers are the end results of countless hours (and in fact, years) of research. So where does that process start, in terms of submitting a technical paper?

“The bar for SIGGRAPH and SIGGRAPH Asia technical papers is high and the approach for submitting a paper can be very different depending on the type of projects,” says Li. “They can be theoretical/applied and either solve a known problem or something entirely new.”

What to consider before submitting

Before submitting a SIGGRAPH or SIGGRAPH Asia paper, Li notes that, as a general rule, he considers the following things first:

1. Will the reviewer be impressed/excited by the results – not necessarily high-quality renderings, but will the results have a ‘wow’ effect? What is the first impression?

2. Are the technical contributions and novelties significant enough or is it too incremental?

3. Can I position/differentiate my proposed method with existing papers and show convincing advantages?

4. Is the problem interesting? Am I solving a long-standing problem that couldn’t be solved before? Is my work achieving the state of the art on a well-known problem and making a significant impact? Have I introduced a new field that can inspire more work?

Getting accepted

You can find more on how papers are submitted and reviewed here, but Li of course has some inside knowledge about how to get a paper accepted from several years of working in the field.

He says successful papers usually satisfy those questions above, in that:

1. The reviewers must be impressed by the results.
2. The method is new and there are significant contributions made by the paper.
3. The proposed solution is really different or better than existing ones.
4. The problem is exciting, useful, and/or impactful.

“The reviewers should always be convinced why something cannot be achieved yet with existing solutions, and how/why the presented method can solve it,” says Li. “A comprehensive discussion and clear differentiation with related work is always needed.”

Successful papers, Li adds, are generally very well written with very clear contributions. They are also “presented with polished illustrations, figures, and accompanying videos. The evaluations of the method also need to be very thorough and rigorous.”

Each year, too, there are often industry trends and issues that are timely. Li says this can be “favorable for getting reviewers excited, for example, deep learning, VR/AR, and 3D printing.”

You got accepted – now what?

It’s a lot of work just to be accepted, but there’s more to the Technical Papers section than just the paper itself. Presenting the paper at the conference is a major part of spreading the knowledge and generating discussion. This actually begins in the exciting Technical Papers Fast Forward, where authors have less than a minute to entice conference attendees to come and view their full presentation. The Fast Forward at SIGGRAPH Asia Tokyo takes place on Tuesday 4th December from 6pm to 8pm.

For the full presentation, Li suggests the following flow that has been a basis for him and colleagues for some time:

1. Start with some slides to motivate the audience: why should they care?
2. Get straight to what problem we are trying to solve.
3. Explain why it cannot be solved previously while presenting prior work, and why it’s challenging.
4. Either give an overview of the method (top-down) or explain the technique from a simple example (bottom-up).
5. Show insanely cool results!
6. Mention some limitations if any.
7. Discuss what’s next and show some future directions.

“The key,” concludes Li, “is to connect to the audience and speak as if you are explaining the work to a friend or colleague, and not sounding like you are reading from a paper. The audience has to be convinced that you know what you are talking about.”

Where it might all lead

Technical papers unveiled at SIGGRAPH and SIGGRAPH Asia are diverse, and often lead to continued research and sometimes even real products. Pinscreen is an example of where Li and his colleagues’ initial research into digital humans has been taken further.

The company has released an app that generates digital avatars from a single photograph, with photorealistic hair and clothing options. Pinscreen has also launched a facial tracking SDK along with a demo app.

You can find out more at pinscreen.com, and see Pinscreen’s presentation at SIGGRAPH Asia 2018 Real-Time Live! (Pinscreen Avatars in your Pocket: Mobile paGAN engine and Personalized Gaming) on Friday December 7th at 4pm-6pm. Also, check out fxguide’s in-depth coverage of Pinscreen here.

Good luck in submitting your technical papers in the future, and hope to see you at SIGGRAPH Asia Tokyo!

You can register to attend SIGGRAPH Asia Tokyo 2018 at http://sa2018.siggraph.org/registration.

Going back to ‘Pleasantville’: when doing a DI wasn’t so easy


In 1998, Gary Ross’ Pleasantville became the first major Hollywood feature to go through what’s now known as the digital intermediate, or DI, process. The film needed that process because its characters, stuck in a 1950s black and white existence, would slowly start to escape their repressive world as they begin experiencing colour.

Getting there involved shooting on colour negative, scanning the film, carrying out significant roto, and then doing colour correction and selective desaturation of the imagery – then getting that all back onto film again. These were understood principles in the burgeoning era of digital filmmaking, but they hadn’t really been contemplated for so many VFX shots (around 1700 in total).
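
As a rough illustration of what ‘selective desaturation’ means in practice – not the actual Pleasantville pipeline – here’s a minimal Python/NumPy sketch: wherever the roto matte is white, the scanned colour survives; everywhere else, the frame collapses to its luma.

```python
# Minimal sketch of selective desaturation (illustrative, not the actual
# Pleasantville pipeline): keep colour where the roto matte is white,
# desaturate to black and white everywhere else.
import numpy as np

def selective_desaturate(rgb, matte):
    """rgb: float array (H, W, 3) in [0, 1]; matte: float array (H, W) in [0, 1]."""
    # Rec. 601 luma weights, the usual approximation for "black and white".
    luma = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    grey = np.stack([luma] * 3, axis=-1)
    m = matte[..., None]
    return rgb * m + grey * (1.0 - m)   # colour inside the matte, grey outside
```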

To get a handle on just what was involved in making the film two decades ago, vfxblog asked Pleasantville visual effects supervisor Chris Watts to break down the process. In this Q&A, you’ll read about the early Pleasantville colour manipulation tests, the need to convince the studio that the immense amount of scanning/colour correction could be done, the late 1990s tools of the trade – including an early version of Shake – and why painting out unwanted arm bands might have been the toughest work on the film.

vfxblog: What were the new things that had to be solved to make Pleasantville?

Chris Watts: Everything that we did in there, I knew how to do already. I had just never done that big of a job of doing it. Really, the harder part was the sort of disbelief encountered among vendors that you were actually going to try some new things. It was just crazy-talk to most people. Even people who you would think would know better. The people at certain facilities now, known exclusively for their DI abilities, they were in disbelief that certain things were possible, or that certain things, that we discovered were necessary, were in fact necessary. Like doing entire reels at once, rather than doing things as shots individually.

I’m not sure people are aware of how the job travelled from facility to facility in search of somebody who would do it our way, which was to do it a reel at a time. That was one of the big deals on that film – getting somebody who would actually say, ‘Okay, we trust that you’ve done the research necessary to make us do this, because otherwise we’re just gonna do it our own way.’

[We] made a deal with Bob Fernley to get the movie done for the amount of money we had, and the amount of time we had left to do it. Fernley looked at me, thought about it for a second, said, ‘Okay, I’m gonna trust this guy. It sounds crazy, but we’ll do the movie a reel at a time and see what happens.’ Back then, nobody did that. Now it’s totally commonplace.

So that was probably the biggest hurdle, just getting people to say, ‘Okay, these guys, even though we’ve never heard of them, these guys know what they’re doing, and we’re gonna trust them.’

We made the movie essentially twice. We did it once in video-res, or low-res. We animated all the roto in video-res. And then we filmed it out at EFILM, to see how the movie played. Because, hell if we really knew how this was gonna work in a whole movie sense! The fact that it went through a computer and came out again didn’t really faze me, but people were worried. They’d say, ‘How’s it gonna play if it goes through a computer and back?’ People just didn’t get that it was still gonna look like a movie, it was not gonna be necessarily something different. In fact it was going to be more evenly and better controlled, and it was gonna look better.

But it was one of those things where there was a lot of doubt whether the processes were going to work. So the director wanted to have something at all times that was screenable, but not too good, because then they might make us release it! It was screenable for people who were not really on board with the process. Because, well, it was a brand new thing for people.

Now, when I came on, the plan for the movie was to do it a certain way. And they had this chapter in the process built in. We ended up going a totally different way, but we still kept the major milestones on the studio side, just because it was fairly late in the game, and we didn’t want to be changing things up with this giant expense that they were not really accustomed to paying.

So, we did it twice. We did it at the video-res, essentially, and then we did it again at film-res. And obviously there was a lot of transference of assets in terms of what worked. We did the roto, to the degree that it was actually able to be done, we were able to transmit the roto to the high-res medium.

But even then, we did a lot of hacks on roto. I think we were probably the first people ever to do roto on jpgs instead of the actual files. You know, we did roto on jpgs because who needs to do the roto on the Cineon files? Everybody was doing that, and there was this fear of doing it on the jpgs – that for some reason, even though that image was gonna be thrown away and we were gonna keep just the shapes, it was gonna be somehow different, and it wasn’t gonna line up as well or something if people did roto on jpgs. But now, of course, everybody does it that way. Another crazy thing that came out of that.

vfxblog: Let’s go back to the beginning a little bit – was there any kind of proof of concept or test done to prove that a film like this could be done?

Chris Watts: When I came on, they didn’t have an effects supervisor, and they didn’t have anybody to look after the whole process. They also had nobody as a DI wrangler. But they had done this test, they’d shot this little sequence, and they’d shot it on colour film, turned it all completely black and white, and then re-colourized the whole thing from scratch. I thought, ‘That was kind of a weird way to do it, but okay.’ They had this ‘mustard girl fill,’ which is what I always called it. Because she looked like she was kind of mustard coloured.

I watched it, and everybody was like, ‘Oh, isn’t this great? Isn’t this awesome?’ And I was, like, it was cool and everything, and the work was done to a better standard than any colourized movie I’d ever seen, but it was still basically colourized from scratch. And I thought, ‘Well, why not take the movie and shoot it on colour stock, and then do some selective desaturation to keep all the nice colours of colour film, and all that technology of the last 80 years?’ So, we tried that a little bit, after watching the ‘mustard test’, and we didn’t keep testing the selective desaturation. Everybody was, like, ‘Oh duh, this is much better.’ So that’s what got me hired on the job, was the fact that I’d come up with that little idea.

And the guy who was in charge of the colour, Michael Southard, who I’m still great friends with today – luckily, he immediately saw the benefits of doing it this way. And he had the technical experience with the software, which they had kind of borrowed from this company that went out of business. He had the technical experience with that software to actually do it and then manage a little team that was able to duplicate the effect I was after. He was on board, and he was a great ally through the whole movie.

Michael just did a fantastic job. He was basically the colourist of the film, even though we didn’t have anything quite so real-time as a telecine console for the final colour – it was still done essentially in the same manner. We essentially filmed out the whole movie, piece by piece, in our office. We had a couple of Solitaires clicking away 24/7 for months. And then once we got that done, we output the whole film again at Cinesite a reel at a time. And that was the deal that Bob Fernley and his crew were able to hold up their end of, which was to basically record the whole movie reasonably quickly, and then supe it all a reel at a time, all at once.

Even with our ambitious thinking, we didn’t think, ‘Well, we should just do a whole movie at once.’ Because that was too much even for us at the time. So we did a reel at a time, and then we did this elaborate print matching thing, where we matched up prints that came out of Deluxe, so that we didn’t have a big bump between reels when one went from one reel to the other. And that worked okay. The bumps that we got between reels were not great, I don’t think, but that was a time when the audience came to expect bumps, because you can always tell when a reel’s gonna change, the movie, it gets a little dirtier at the end. People were okay with that. Nobody ran screaming from the theatre when we encountered these little discontinuities in the colour that we got between reels.

vfxblog: How did the eventual approach to what would be done in post-production influence anything in terms of on-set filming?

Chris Watts: I pretty much planned things to just let production shoot whatever they wanted to, and then we would deal with it later. It wasn’t so much of a cop out as a kind of daring, white-knuckle experiment in what would become the style of visual effects supervising for the next 20 years. Let them shoot whatever they want, we’ll fix it later. For the most part, we didn’t key anything, we didn’t shoot much with multiple elements or multiple bluescreens. We knew we had a crew that was going to roto 160,000 frames of film. We knew they were really good and really fast. So basically once we swallowed that somewhat bitter pill, which turned out to be not bitter at all, we were able to free up production to do all the things we wanted to do.

There were a couple things, like the drive through the cherry tree leaves that were falling down, the little pink leaves. They were gonna shoot that with the leaves that were the right colour, and then have this slowly come on. So I changed the colour of those to be a little bit different, basically I made them magenta instead of pink, just so they could be the opposite of green, and then we could key those pretty easily as they came down, because it would have been a real mess to have to roto that stuff as they were driving through that black and white forest. And that worked great, it worked fine. There wasn’t any problem. We did a test of that, one little shot, and it worked great.
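
As a toy illustration of why magenta leaves are easy to key – magenta is the complement of green, so red and blue dominate wherever a leaf is – here’s a minimal colour-difference matte in Python/NumPy. It’s an assumption-laden sketch, not the key actually used on the film.

```python
# Toy colour-difference key (illustrative): pull a matte for magenta leaves,
# which sit opposite green on the colour wheel, so red and blue dominate green.
import numpy as np

def magenta_key(rgb, gain=2.0):
    """rgb: float array (H, W, 3) in [0, 1]; returns a soft matte in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    alpha = np.minimum(r, b) - g        # positive only where magenta dominates
    return np.clip(alpha * gain, 0.0, 1.0)
```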

vfxblog: There’s a very famous scene where Tobey Maguire is with Joan Allen, and he’s applying the makeup. How was that actually filmed, and what then happened in the back-end?

Chris Watts: For that scene we actually did shoot a different way than what they were gonna do. We did use keys in that. We got some green bespoke make-up. We wanted a colour green that was essentially the same colour as a flesh tone would be in black and white. Which wasn’t that hard to do, because green is essentially the major component of luminance. So it was pretty easy to come up with. I can’t remember the exact formula, but I remember doing a few tests, and picking one, and that was the way we went.
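
A quick worked example of the idea: green carries most of the luma in the standard Rec. 601 weighting, so you can pick a green that photographs the same as a flesh tone in black and white. The numbers below are hypothetical, not the production makeup formula.

```python
# Illustrative only (not the actual makeup recipe): find a green whose Rec. 601
# luma matches a target flesh tone, so it photographs the same in black and white.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

flesh = (0.85, 0.65, 0.55)                 # hypothetical flesh tone, 0-1 range
target = luma(*flesh)                      # about 0.70
# Keep a little red/blue in the green so it isn't neon; solve for g.
r, b = 0.30, 0.25
g = (target - 0.299 * r - 0.114 * b) / 0.587
print(round(target, 3), round(g, 3))       # same luma, very different hue
```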

We really had to do the test and then get the output quickly, because people were hugely concerned about putting green makeup on Joan Allen. That was gonna be our master footage – ‘What are we gonna do if it didn’t work?’ they would say – so we did a test, and it worked, and it was all fine.

These days, again, it doesn’t seem like something that was terribly risky, putting some green makeup on an actor for some effect later. But at the time, people were nervous about things like that. And people were even nervous about the way that Joan would be able to act with green makeup on. She might be self-conscious or something. But she was totally fine. If she was self-conscious, I didn’t know about it. She just did it, and the movie looked great.

Dealing with little challenges like that, little sort of petty fears of the studio, that was a big part of my job. Just convincing people things were gonna go okay. This is back in the day when there were – a lot of companies which now do great work were just coming up, and were maybe not doing work that was quite so great, and people who were maybe not so experienced in the filming side, more experienced with the digital side, were showing up on sets and making recommendations about things that turned out to not be the right thing. So I was real careful not to do that.

Anyway, with the green makeup – that made the edges of the make-up that much easier to deal with. They were gonna do it in roto before I showed up, and then I came up with the idea of the green makeup. So we just made the switch. Other than that, it wasn’t really that big of a deal.

vfxblog: These days, the big films have 12 or 13 or more VFX vendors. How did you approach it on Pleasantville? I remember there was essentially an in-house unit called Pleasantville Effects.

Chris Watts: Well, it’s kind of a weird story. Essentially the movie was gonna be done by this company, Cerulean. They were essentially a colourizing company in LA. The day after New Line gave them a million dollar deposit or something like that, they basically packed up their offices and disappeared. And then these other people – Dynacs – appeared who were, who’d been on the board of directors of Cerulean, who basically said, ‘Well, we’ll do the movie.’ And it was all a little bit shady, it seemed to me.

This was all at the beginning of the trend of sending work to India, and they had this great idea that they were gonna get work, and they were gonna have this whole schedule of when they were gonna get frames sent out and sent back. I just looked at it and laughed, because I knew instantly that it wasn’t gonna happen, just based on the schedule and the company I was dealing with. It ended up in a lawsuit, and the company that was gonna do the work, they had no movie experience, and they had none of the traditional trappings of companies that are accustomed to working on a feature film.

I pointed out that these guys, even if you ignored the fact that one company had disappeared and sprung up from the ashes into another company, minus the $900,000 we gave them, they’d never had any experience doing any movies. Well, they had experience doing movies, because they had some of the same people, they still never had any experience doing movies where there was a living director, or a living editor. Pretty much any movie that’s been colourized is a movie that’s fallen into the public domain. Which is generally a movie for which the director and the editor are no longer around, if they are, they’re not concerned with the movie anymore. So that was a big deal.

I floated the idea that they were gonna get a movie that was gonna have changes, and they were gonna have to go back into shots, and redo things, and they pretty much freaked out, and said, ‘Well that just can’t happen.’ They ‘told on’ me to the higher ups at the production, thinking they were gonna get rid of me or something. They basically got rid of themselves, they ended up getting the job taken away from them. And we ended up setting something up ourselves in Gary Ross’ office in Toluca Lake.

We’d been fooling around with the idea of doing it ourselves, because I saw the writing on the wall. I was sort of hoping for the best but preparing for the worst while we were filming the film. And everybody was pretty concerned with just filming the movie at that point. So I was told to basically keep my mouth shut, not say anything to anybody. But I slowly prepared, quietly, for any of the various eventualities. One of which would be we’d end up doing the movie ourselves.

Eventually we started gathering up a little crew, and making plans for equipment, and budgeting things. There were some good people there at Dynacs, for sure, but they didn’t get some things that we take for granted on features. Like the ability to turn around work quickly, and the ability to iterate on shots, and things like that. That was not part of their game plan when they signed up for the movie, and they had no idea they were gonna have to do any of that stuff. So basically we took it away from them, and there was a big lawsuit.

vfxblog: Let’s talk a little about how you managed the post-production. How were you handling the data for this show back then?

Chris Watts: Here’s a fun fact for you: for the entire film, the disc capacity of the entire facility where we did it, which was essentially our office in Toluca Lake, we had two, count them, two terabytes of disc space. That’s what the whole film was done on. Which now fits in my laptop. But back then it was a huge rack and stuff. And we were so proud of it. But you know, they came out with these drives – the four gigabyte drive was the biggest one you could get. That was big then. Then they came out with the nine gig drives, and those were really expensive, but we still got them.

It was silly how efficient we got to be, or we had to be, with disc space. Now you look at what people do, and that’s like, that’s 10 minutes’ output of MPC or something. It’s astounding how much more data we deal with these days, and how many more elements we generate. I mean, luckily, these shots had, they had a background element, maybe one other element, and then a bunch of roto elements, which were essentially of insignificant size. So it was really only just multiple copies of outputs and some intermediate elements.

Every time we rendered, we went right back almost to the original footage. Because we didn’t render things over and over again, we just went back like anybody else would. But it was still – it was not a huge data show by today’s standards, but back then, it was like, ‘Whoa, that’s the most data we’ve ever seen.’ It still cracks me up that we were so proud of two terabytes.

vfxblog: You mentioned the crazy amount of roto in the film. What were the main tools you were using for roto, and image manipulation?

Chris Watts: There was nothing there really to play with. We pretty much had to build a lot of stuff ourselves. Luckily, there were some people around who also saw the writing on the wall, and we were able to use the ‘baby’ versions of certain bits of software that really helped us out a lot. Nobody understood – with very few exceptions, like a few people at Cinesite – how logarithmic colourspace worked. And nobody was using scene-linear colour space or anything else yet. But log was the way to go if you wanted it to come out of the computer looking the same way that it looked going in.
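
For context on ‘logarithmic colourspace’: the widely documented Cineon-style conversion from 10-bit log code values back to scene linear looks like the sketch below. The interview doesn’t specify the exact curves used on Pleasantville, so treat the constants as textbook defaults rather than the show’s settings.

```python
# Standard Cineon-style 10-bit log to linear conversion, as commonly documented.
# (Context only; not necessarily the LUTs used on Pleasantville.)
def cineon_to_linear(code, white=685.0, black=95.0, neg_gamma=0.6):
    """code: 10-bit printing-density code value (0-1023); returns scene linear."""
    gain = 1.0 / (1.0 - 10 ** ((black - white) * 0.002 / neg_gamma))
    offset = gain - 1.0
    return gain * 10 ** ((code - white) * 0.002 / neg_gamma) - offset

print(cineon_to_linear(95.0))    # ~0.0  (black point)
print(cineon_to_linear(685.0))   # 1.0   (white point)
```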

I had Raymond Yeung write us this bit of software that was able to manipulate the monitor access of CRT screens on SGI computers. And that was a huge help to us, because then we were able to go back and forth between LUTs. Which again, remember, there was nothing there. You turn on your computer, and you got a terminal prompt. There was nothing else. There was not much of any kind of desktop – there was just nothing. It was SGI O2s.

For roto, we did have Matador, that was out. But Matador at that point was being made and sold by Avid, and they wanted like 20 grand for a barely functioning, ass-backwards, really difficult mistress of a paint programme. Yeah, there were a few people who knew how to use it really well, but man, those people were expensive too. We also used Commotion for dust busting, and some paint work, too.

One thing that was really crucial was, Shake had just come out. Arnaud Hervas and the guys over in Venice had been working on it, we’d heard about this thing, Shake, and I went over to talk to him. I was like, ‘Oh my God, this is exactly what we need.’ And it’s got lots of handles to be controlled by external stuff. It’s all command line accessible.

But the version of Shake that had the interface was years away – it hadn’t come out yet. The interface for Shake basically just generated a text file, which was then rendered by the Shake engine. It was very simple as a software matter to associate those two, and just have the engine. And then later on, when they tacked the interface on there, that was the thing that generated the text file that the engine was able to read, and use as the basis for doing what I wanted to do.

So with a little bit of work, and getting our heads around a kind of object-oriented filmmaking, we were able to write a lot of software that essentially coughed out render scripts, or at least the beginnings of a render script, for every shot in the movie. We had this really awesome sort of mass production. Essentially, it was like a slightly slow-motion version of the DI process we do now. You don’t have the ability to whiz back and forth in the film, and see one reel of cut negative, and see what you’re doing to it like you did before, but we did have the ability to go through shots and time them. We had also written this colour correction tool called Coco that dealt with essentially still images. And it was able to, quite quickly afterwards, assemble very small but colour-accurate motion film, and you could cut it into little tiny thumbnails.

The guys at Shake were hugely helpful in just being where they were at that point in development. It was exactly the right thing we needed at exactly the right time. We had to twist their arms a little bit to get them to sell us a few of them, because they were really still developing it. But we showed them we weren’t gonna be complaining about the lack of an interface. I was used to no interface software, because working at CFC, that was how they did it. The Shake guys, they were awesome. Every frame of that movie was rendered in Shake.
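
To make that render-script idea concrete, here is a hypothetical sketch of the kind of generator Watts is describing: a tool that reads a shot list and writes a text script per shot for the interface-less Shake engine to chew through. The shot names, file paths, node calls and command-line flags below are illustrative stand-ins, not the actual Pleasantville pipeline or the exact Shake syntax of the day.

```python
import subprocess
from pathlib import Path

# Hypothetical shot records; on a real show these would come from editorial data.
SHOTS = [
    {"name": "PV_010_0050", "first": 1, "last": 112},
    {"name": "PV_010_0060", "first": 1, "last": 87},
]

# Illustrative script body: pull in the plate and a roto matte, desaturate the
# plate, and key the original colour back in only where the matte says so.
SCRIPT_TEMPLATE = """\
// Auto-generated render script for shot {name} (illustrative node syntax)
plate = FileIn("scans/{name}/{name}.#.cin");
matte = FileIn("roto/{name}/{name}_roto.#.tif");
grey  = Monochrome(plate);
comp  = KeyMix(grey, plate, matte);
FileOut(comp, "out/{name}/{name}.#.cin");
"""

def write_script(shot, out_dir=Path("scripts")):
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{shot['name']}.shk"
    path.write_text(SCRIPT_TEMPLATE.format(**shot))
    return path

def render(shot):
    """Hand the generated script to the command-line engine for the shot's range."""
    script = write_script(shot)
    # Invocation shown for illustration only; exact flags varied by Shake version.
    subprocess.run(["shake", "-exec", str(script),
                    "-t", f"{shot['first']}-{shot['last']}"], check=True)

for s in SHOTS:
    render(s)
```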

vfxblog: Can you talk a little about your actual workflow for completing the shots?

Chris Watts: To do the meat and potatoes of the work, we’d get the film in, go through all the editorial stuff, and telecine the film in a very structured way. In fact, we had to deal with things like pull-downs – remember those? – before machines could detect pull-downs automatically. So I wrote some software that essentially lined up all the selected dailies in reels to be worked with, so the cadence of the pull-down would be unchanged for the entire reel. That way, when we wanted to take a pull-down out, it could all be put in or out at once, without messing with sound, or causing jitter frames, or anything like that.

And that was kind of a pain in the butt, because nobody was doing that yet. Avid didn’t know how to do any of that stuff. And we were actually editing in Lightworks, which was probably the superior platform then if you were an editor. But it wasn’t really as easy to get the data in and out of it. We figured it out.

So, we’d get the cut, we’d get the takes from which the cut was constructed, and we’d scan at 1920×1440 resolution. We’d telecine the whole takes, obviously, but when we got to scanning, we scanned less than the whole takes. I wrote a new bit of software to put just the takes together in the proper cadence. But either way, we had these big reels of footage where the pull-down could be taken out in one fell swoop on a Henry. And we’d spend lots of time in a Henry room. My son is named Henry – Thomas Henry Christopher Watts – because I used the Henry so much that I grew to love the Henry, and it’s now my kid’s middle name! Wayne Shepherd was our Henry guy. And there were other people too, like Mark Robben at Editel, back when Editel was still there.
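
For anyone unfamiliar with pull-down: telecine turns 24 fps film into 30 fps video by repeating fields in a fixed 3:2 cadence, so one video frame in every five is redundant. The toy sketch below shows why the cadence alignment Watts describes mattered – if every shot on a reel sits at the same phase of that cycle, the duplicates always land in the same slot and the pull-down can be stripped out in a single reel-wide pass. The frame labels are just letters standing in for film frames.

```python
# Toy model of 3:2 pull-down. Four film frames (A, B, C, D) become five video
# frames built from repeated fields, in the classic AA BB BC CD DD cadence.
# With the cadence phase fixed for a whole reel, removal is one mechanical pass.

def apply_pulldown(film_frames):
    """Expand 24fps frames into a 30fps sequence (each string = top+bottom field)."""
    video = []
    for i in range(0, len(film_frames), 4):
        a, b, c, d = film_frames[i:i + 4]
        video += [a + a, b + b, b + c, c + d, d + d]
    return video

def remove_pulldown(video_frames):
    """Recover the original film frames when the cadence phase never changes."""
    film = []
    for i in range(0, len(video_frames), 5):
        aa, bb, _bc, cd, dd = video_frames[i:i + 5]
        film += [aa[0], bb[0], cd[0], dd[0]]   # the mixed BC frame is entirely redundant
    return film

frames = ["A", "B", "C", "D", "E", "F", "G", "H"]
assert remove_pulldown(apply_pulldown(frames)) == frames
```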

Then once we had the files in our various file sizes at our little production facility, after we’d done the dailies, they did the roto. We did this crazy process to save time which was roto at 1K. Which probably sounds like anathema, but you can’t really see the difference if you look at it on film. Compression was one of those things where, oh my God, we can’t compress anything. The Academy was not willing to consider any cameras that did any kind of compression till the RED came out, basically, and then they finally had to say, ‘Okay, fine. We’ll let you compress things.’

So we did what essentially amounted to JPEG-style compression. We would split the movie into luminance and then the colour part of it, because we knew we’d be throwing most of the colour part away – or at least not having to deal with it for a while. So we split it into these weird little daughter files. There were the luminance files, which look fine if you look at them, and then the colour files, which were essentially this weird thing – if you strip the colour out of something and you’re just looking at the colour information, it’s this kind of weird, out-of-focus, blobby looking stuff. Colour on its own, when you separate it out, isn’t really very sharp or anything.

I think Paddy Eason and I came up with this idea when I was over in London at one point. Paddy’s been a long-time friend from back in the CFC days. And we thought, let’s explore this idea of doing the movie at 1K – because again, this was when we were using O2s and things like that – and then up-res’ing the colour from 1K to 2K, and using the luminance from the original 2K file. Or from whatever the output of the effects work was.

So we tried that. I did some tests at Editel where we took the files and we did full-res chroma, half-res chroma, and quarter-res chroma. And the quarter-res chroma looked totally fine, but I was like, ‘Well, let’s not push it.’ So we went to half-res. Half-res actually made these really nice small files that were sort of a quarter the size. And then we had these finished 1K files, and we took the colour from the finished 1K files, applied it to the 2K luminance, and we had these beautiful, pristine-looking 2K output files. It worked really well.

That was the kernel of the image processing pipeline. And that was all managed by Lauralee Wiseman and her crew. She was great. She was able to manage that whole process, just keeping all that stuff straight. And then it ended up going to Cinesite to get filmed out with Jackson Yu over there, who did amazingly Herculean amounts of work to get everything in order, and dust busted, and looking good. Half the dust busting we did, and half the dust busting Cinesite did.
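
In modern terms, the half-res chroma trick Watts describes is essentially chroma subsampling: keep luminance at full resolution, carry the colour-difference channels at a lower resolution, and recombine at the end. Here is a rough numpy sketch of that idea – it uses generic BT.601-style luma/chroma weights and nearest-neighbour resampling purely for illustration, not the exact maths or filters used on the film.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split RGB into luminance (Y) and colour-difference channels (Cb, Cr)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Recombine full-res luminance with (upsampled) colour channels."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def half_res(c):
    """Average 2x2 blocks: 'half-res chroma' carries a quarter of the pixels."""
    h, w = c.shape
    return c[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def full_res(c):
    """Nearest-neighbour upsample of the stored chroma back to full resolution."""
    return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

img = np.random.rand(1556, 2048, 3)                 # stand-in for a 2K film frame
y, cb, cr = rgb_to_ycbcr(img)
cb_small, cr_small = half_res(cb), half_res(cr)     # the only colour data you keep
recon = ycbcr_to_rgb(y, full_res(cb_small), full_res(cr_small))
print(np.abs(recon - img).mean())                   # error lives only in fine colour detail
```

Because the eye is far less sensitive to colour resolution than to luminance resolution, the reconstructed frames are hard to tell apart from the originals on film, which is why the half-res (and even quarter-res) tests held up.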

vfxblog: Was there anything else that you remember specifically about Pleasantville that you wanted to share?

Chris Watts: A couple of universal truths still hold. The one that I always end up coming back to is a joke we had on Pleasantville: ‘Don’t underestimate the difficulty of scanning an entire film into a computer and then getting all the frames back out in the right order.’ And that kind of still applies today.

DIs are generally done the same way we did them, mostly because Raymond Yeung, who was our programmer guy, was behind that. He wrote so much software that he ended up getting hired by all the labs to go make their DI pipelines for them. So it was kind of nice – a lot of the stuff that we had sort of fought and struggled through, and come to conclusions on the best way to do it, is still the way that a lot of things get done, because it’s the way that Raymond ported it over to whatever facility he was working for at the time. That was kind of an interesting side effect of Pleasantville: that stuff persists, and some of the difficulties that we had are still the same difficulties that people have today.

Then also, the other crazy thing about the movie that people might not believe is that we had to find somebody to scan all this stuff really quickly. We tested all different kinds of ways of scanning it, and we decided we wanted something that was kind of quick, and that we’d be able to evaluate as it rolled by, and that was not the way the film scanners worked at that point. So we heard about this new machine called the Spirit DataCine, which sounded very complicated and new and exciting, and we heard that Kodak had one. I went to London to do some tests at VTR, because we were thinking about using theirs. And they were awesome. We got back, and then we found out that Kodak did have one, but they couldn’t find it. It was in some other building somewhere – they had this other office in Culver City, which I remember had an elevator that squeaked like the Titanic was sinking. The machine was basically this box in a basement: a big crate that said Spirit DataCine on the side. So we cracked the thing open with a crowbar, plugged it in, and basically started playing with it. We were definitely the first people in LA to mess with one of those things.

We got it open, and we realised quite quickly that they’d made some errors in it – it was still in development, really – errors in things like the log curves that we all take for granted. The people at Philips, who’d built the thing, and the people at Kodak, who had purchased the thing, had different ideas about what log colour space meant. So there were some issues that we had to deal with, and some of those things came up in the middle of production. They were kind of hard to swallow. But essentially, in a nutshell, that machine was this brand new piece of kit that later on became ubiquitous, before they were all pushed out into alleys, right behind the Ranks that they replaced.

But it was these machines that basically enabled us to do the movie, and they came along just in time for us to use them. If it had been a week later, we would have had to do it some other way. But luckily, this machine was there, sitting at Kodak gathering dust – it had only been there for a week or two – and nobody even knew what it was.

Here’s another thing: a lot of the effects work we did ourselves, and then there were a couple of shots we farmed out to CFC, and various other places. One thing that came up was that we had a big clump of work that was completely not something we were expecting to do, or budgeted to do. It had to do with the black-and-white Gestapo guys. They had these armbands on, and if you look carefully, you can still see them in a couple of frames of the movie, but Gary decided this was too much, so we needed to get rid of those. And what a pain in the ass that was! We probably had 60 shots or something where we had to paint those things out. So that was all done in Commotion, a little bit was done in Avid Media Illusion, and some of it was done in Matador, too. We had various difficulty levels of shot, based on how big the armband was in the frame. That was one of those things where it was an armband over this puffy, billowy white shirt, and these people are running around doing stuff. It was pretty hard to do. But you know, we got it done. Marc Nanjo really cut his teeth as an artist on that.

I actually also worked on the last shot delivered of the movie, where William H. Macy says, ‘Honey, I’m home.’ And then it’s a pan around to various things in the house, and there’s a shot of his hat on the hat rack. And I guess they forgot to shoot it or something, or they decided they wanted it later. And so I had to come up with a shot of a hat on a hat rack, and I ended up having to assemble it from a couple other pieces, just the very tail ends of dailies, and frames that we had laying around from other shots on that set. And so I was madly painting that thing in.

It was literally the last thing. That was the one thing that was holding up the movie. And as soon as I was done with that, I handed it to Lauralee and said, ‘Okay, I’m done.’ I turned out the lights and went home. It’s the only movie where I’ve ever felt, ‘Okay, I’m done. There’s nothing more I could do on this movie.’ Usually you get dragged away kicking and screaming. Bob Degus, the producer, was there. He was like, ‘Oh, thank God.’ We shut off the lights and walked out together, because it was such a moment. And I just imagine that that hat’s probably still sitting there somewhere on that post. Waiting for me to come home.

ILM’s Hal Hickel on the symbiotic relationship between actor and animator

Warcraft3_final

At the recent Trojan Horse was a Unicorn event in Malta, I had the opportunity to sit down with ILM animation supervisor Hal Hickel for a THU TV interview.

We talked about the wealth of CG characters Hickel has overseen which began with live action and motion captured performances, including Davy Jones from the Pirates of the Caribbean films, the Orcs in Warcraft, and Tarkin and K-2SO in Rogue One (in which the original actor playing Tarkin, Peter Cushing, had in fact passed away).

DSC_4815
Hickel (centre) gears up for the THU TV interview. Photo by John Crowcroft.

With before and after images from those films, here are some of Hickel’s main takes on how he and his team tend to tackle a character where actor and animator need to combine to craft the final result.

DavyJones_plates
When we were gearing up to do Pirates 2, we had a bunch of problems to solve. One of them was, we knew we needed to do body motion capture on location, which is something we at ILM had not done before. We needed to do it in jungles and on ships at sea and on sets, because we didn’t want to capture Bill Nighy’s performance separately on a motion capture stage. We wanted him there with the other actors. And then we had Davy Jones’ beard, which was a massive problem. It’s probably the single most difficult thing we had to do on the show. So, we decided not to tackle facial motion capture, but we opted instead to shoot Bill on-set in a motion capture suit – what we called iMocap, our version of on-set motion capture.
DavyJones_final
So, we filmed him and then the animators would just study the footage of his face and keyframe animate Davy’s face. The thing is, it wasn’t just a mechanical process of saying, ‘Oh well, you know, the mouth corner moved this much, so we’ll move our mouth corner that much.’ You really had to look at it and try and figure out what his intention was as an actor. Sometimes that’s a bit like tasting a stew and trying to figure out what they put in it. When an actor is doing something really subtle and there’s no obvious subtext, you’re really teasing that out and getting it right as you transform it. Because that’s the other thing – it wasn’t a one-to-one transfer. I mean, if Bill got angry and flared his nostrils, well, Davy doesn’t have a nose. So we had to find other ways to communicate certain things. So there was a translation that had to happen, but the intent was always to preserve exactly what Bill had done and communicate that faithfully.
Warcraft2_capture
On Warcraft, it was definitely our impression that at least some of the actors – who had done shows before where they were creating characters using motion capture – seemed to have the impression that that was all good and everything, but that ultimately the visual effects crew was going to just bulldoze over it with animation, obliterate it, and kind of do their own thing. So we did a test pretty quickly, just a few weeks into principal photography, where we took some early phase capture of Robert Kazinsky and transferred it onto Orgrim.
Warcraft2_final
Even though our Orgrim asset wasn’t quite finished yet, we got a nice-looking render with some nice lighting, and we took that back to set on a laptop and just went around and showed it to the actors to say, ‘Look, what you’re doing on set is gold and we are going to treat it with kid gloves, because the whole idea is to get that from A to B – you will see yourselves in these characters at the end of the process.’ And I think it was a great comfort to them. I think they felt that was great, like, ‘It actually matters what I do on camera.’
RogueOne_Tarkin_capture
With Rogue One and Tarkin, the actor having passed away introduces a very difficult thing that I don’t think we have all the answers for in terms of our technology and our processes. Because the very hardest thing from my point of view on it was, well, we had a terrific actor – Guy Henry. But Guy doesn’t use his face the way Peter Cushing uses his face. We all use our face differently. He doesn’t smile like him. He doesn’t form the phonemes the same. So while we could get a great performance from Guy and we could apply that to Tarkin and get a realistic looking movement, it lacked Tarkin’s likeness. We had high realism, but we had problems with likeness. It looked like Peter Cushing’s cousin or something. So we’d have to then adjust the motion to the face. The animation team would have to adjust it – if he did a smile, say, to get it to look like a Tarkin smile or a Peter Cushing smile.
RogueOne_Tarkin_final
The problem was, if you messed with it too much, of course it would start to feel like you’d messed with it. It’s very easy to break capture. Even with body capture, people who’ve worked with it know that it’s sort of an interconnected web of motions, and if you just tweak the hips a little or move this a little, you can break stuff pretty quickly and it starts to look weird and Frankenstein’d together. So we had to find a line. We were trying to chase realism, but we were also trying to chase likeness. And sometimes we had to sacrifice likeness a little bit to keep it feeling real, and it would be a little less Cushing, because we just didn’t want to push the motion around that much.
We didn’t do facial capture with K-2SO on Rogue One, but Alan Tudyk’s performance, his comic timing, every little choice of how he moved his head and the delivery of his lines – we never messed with his timing. We had to fit the body capture to K-2SO and his posture and everything, but, again, the whole job there was to preserve what Alan had done, not to change it, and especially not his timing. It was perfect comedy gold.
RogueOne_K2SO
Actors are still at the heart of the process. They’re the foundation on which we build everything else. To me that’s kind of exciting. It’s funny, because when motion capture was first coming onto the scene in visual effects, a lot of animators were afraid of it because it took away some of their creative authorship over the work, and I think they assumed that pretty soon everything would just be done with motion capture. But in fact it’s provided us with some really creative, interesting tasks to build characters where we’re partnering with an actor.

SIGGRAPH Asia in Tokyo: a sneak peek at the VFX talks

siggraphasia

It’s not long now until SIGGRAPH Asia hits Tokyo. The computer graphics event begins December 4th, and runs at the Tokyo International Forum until December 7th.

SIGGRAPH Asia will have some fantastic talks, papers, displays and experiences for attendees. vfxblog will be there, and I highly recommend signing up now. You can find out more about how to do that right here.

Meanwhile, vfxblog has secured a special sneak preview of some of the VFX and animation-related talks that will be held at SIGGRAPH Asia. These feature some of the biggest studios around, including ILM, Pixar and Weta Digital. Check out the details below.

From Gollum to Thanos: Characters at Weta Digital

ata_1500_pubStill_raw4k_v222.1267

Of course, Weta Digital is another major player in the visual effects world, and they continue to stun audiences with photoreal and emotional performances. VFX supervisor Matt Aitken, who helped bring Thanos to life for Avengers: Infinity War, will run down how Weta Digital continues to innovate on CG characters.

Production Session by Pixar

INCREDIBLES 2

There’s more to CG animated film than just animation, and that’s something that will be explored by Pixar Director of Photography Erik Smitt, who recently worked on Incredibles 2. He’ll break down the lighting and camera work in the film and give an overview of how Pixar managed to make this incredible sequel to the original film.

The History of VFX at ILM from Jurassic Park to Ready Player One

the-lost-world-jurassic-park-san-diego

ILM is a name synonymous with visual effects – the studio has been in existence for more than 40 years. That’s why this presentation from Nigel Sumner, Creative Director, ILM Singapore, and Nico Delbecq, Effects Supervisor, ILM Singapore, is a must-see: it will show all sorts of imagery and give all kinds of details from ILM projects over the years.

Behind the scenes of Solo: A Star Wars Story

Solo3

Nigel Sumner, Creative Director, ILM Singapore, and Atsushi Kojima, Lead Animator, ILM Singapore, will talk through ILM’s visual effects for the latest Star Wars film, including how a lot of the work combined old-school FX techniques with the latest in computer graphics and compositing.

Beyond the Uncanny Valley: Creating Realistic Virtual Humans in the 21st Century

Digital humans are currently one of the holy grails of VFX, and they’re also a big part of recent VR/AR and gaming developments. Several experts will weigh in on where we are in the crafting of virtual beings:
– Christophe Hery, Senior Scientist, Pixar Animation Studios
– Hiroshi Ishiguro, Professor, Director of the Intelligent Robotics Laboratory, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University
– Matt Aitken, Visual Effects Supervisor, Weta Digital Ltd
– Prasert “Sun” Prasertvithyakarn, Senior Game Designer, Luminous Production
– Erik Smitt, Director of Photography, Pixar Animation Studios

Hope to see you in Tokyo, and keep an eye out on vfxblog for more SIGGRAPH Asia previews!