‘The Prince of Egypt’: Henry LaBounta reflects on parting the Red Sea


These days, Henry LaBounta is Studio Art Director at EA Ghost Games. But before his career in games, LaBounta was at the forefront of effects simulation at Industrial Light & Magic, where he helped generate the tornadoes in Twister. Later he worked on The Prince of Egypt at DreamWorks Animation, in particular, on the parting of the Red Sea sequence. After DreamWorks, LaBounta moved to PDI as a VFX supe on films such as A.I. and Minority Report, before segueing into the games industry.

With the 20th anniversary of The Prince of Egypt approaching, vfxblog decided to ask LaBounta at the recent VIEW Conference in Turin what working on that Red Sea sequence was like back then. Hope you enjoy this new retro Q&A.

vfxblog: How did you come to be working on Prince of Egypt?

Henry LaBounta: Before I left ILM, I’d actually gone up to Skywalker Ranch. I met with George Lucas about working on the Star Wars movies that were about to start up. And then this came up as well, working at DreamWorks, it was the first movie they were going to do, where I could part the Red Sea: I was like, ‘Oh my gosh.’ Those are two interesting opportunities, right? But I had never done any 2D animation work before. So I was really excited about the opportunity to work with DreamWorks on something completely different from what I had been doing at ILM. And some of my friends were like, ‘Are you crazy? You want to work on a Bible movie and you could’ve been working on Star Wars?’ I’ve done a lot of crazy things in my career, and I’ve never regretted a single one.

vfxblog: For the Red Sea sequence, since this was a (mostly) 2D animated film, how did you think that you were even going to do that in CG so that it still had a 2D look?

Henry LaBounta: It was tricky because back then the whole idea of using anything computer-generated in an animated film was something not really done on a big scale. For characters, for example, it’d only be a crowd character in CG that was only ‘so’ big on the screen.

The challenge is, in general, it’s easy to get in there and start making something that looks like some big visual effects kind of thing, which suddenly looks nothing like the rest of the film. So we had to develop techniques to incorporate an animation style within the effect of parting the Red Sea. We had a lot of really talented people on the team. Doug Cooper was one of the people I was working with. He was a huge help, because he had been working on animated films for quite a while. And one of the tricks we used was just taking a 2D animation of a splash, using that as a sprite, and instancing that. So every splash looked like an artist could’ve drawn it, and they had that little bit more of a 2D feel to them.
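The splash trick LaBounta describes, a single hand-drawn sprite cycle instanced many times with per-instance variation, can be sketched roughly like this (a hypothetical illustration, not DreamWorks’ actual pipeline code; the frame count and attribute names are invented):

```python
import random

# Hypothetical sketch of the sprite-instancing idea: one hand-drawn splash
# cycle (frame indices standing in for the artwork) is stamped onto many
# particle positions, each instance offset in time, mirrored, and scaled
# at random so no two splashes read as identical copies.

SPLASH_FRAMES = 12  # assumed length of the hand-drawn splash cycle

def instance_splashes(particles, seed=0):
    """Return one sprite instance per particle position."""
    rng = random.Random(seed)
    instances = []
    for (x, y) in particles:
        instances.append({
            "pos": (x, y),
            "frame_offset": rng.randrange(SPLASH_FRAMES),  # desynchronise cycles
            "flip_x": rng.random() < 0.5,                  # mirror half of them
            "scale": 0.8 + 0.4 * rng.random(),             # slight size variety
        })
    return instances

def frame_for(instance, t):
    """Which drawing of the cycle an instance shows at global frame t."""
    return (t + instance["frame_offset"]) % SPLASH_FRAMES
```

Because each instance just indexes into the same short cycle with its own offset, every splash on screen still looks drawn by an artist, but the crowd of them never visibly loops in sync.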

vfxblog: Were you also looking to use Prisms, or even an early version of Houdini, to do the water simulations?

Henry LaBounta: When I got there, one thing that was interesting was DreamWorks was brand new. I mean, literally it was plugging in computers and setting up desks and stuff like that. Unlike ILM, which was completely set up with pipelines, workflows and equipment and staff. We were kind of building the team while we were making the film. And we didn’t know straight away what we were going to use, but as we looked at the task at hand, we looked at some different software. Some of the effects artists at DreamWorks at the time were using Alias, and they were doing a whole bunch of really nice things with it.

And I had been using Softimage primarily, and RenderMan at ILM. But we knew there would be some complex effects animation. And I wanted to try some procedural techniques. And Prisms was kind of the go-to thing at the time, but Houdini was brand new. So we were just on that cusp as Houdini was coming out. It may have even been Houdini 1.0; it was just barely ready for production. SideFX was so fantastic in giving us support. Like, I could send them a note in the morning and say, ‘This thing isn’t working,’ and by the afternoon I had a patch that fixed it. They were just an extended part of the team in a way, they were absolutely committed to making it work, and getting Houdini to actually generate the RIBs and everything that we used to render in RenderMan.

vfxblog: Had you used Prisms at all or Houdini, before this?

Henry LaBounta: I had not. Not at all.

vfxblog: So what was it like learning that new software?

Henry LaBounta: For me, it was mind-blowing because it was like, this is software the way my brain thinks. I want to do this and I want to do that next, and I want to connect it to this. And I want to be able to change anything in the entire chain at any point. Houdini allowed me to take any kind of data and use it any way we wanted. We did some things that weren’t typical computer graphics ways to use that data, but it was really easy to make that work and plug it into a shader we’d written. It was so different than any other software. The closest thing to it was Dynamation, I think.
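The procedural, node-based workflow LaBounta is describing can be illustrated with a toy dataflow graph (only a sketch of the idea, not Houdini’s actual API): operators are chained together, and changing a parameter anywhere upstream changes everything cooked downstream of it.

```python
# Toy illustration of a procedural node graph: each node wraps a function,
# pulls from its input nodes, and re-cooks whenever asked, so a tweak at
# any point in the chain flows through to the final result.

class Node:
    def __init__(self, fn, *inputs, **params):
        self.fn, self.inputs, self.params = fn, inputs, params

    def cook(self):
        upstream = [n.cook() for n in self.inputs]
        return self.fn(*upstream, **self.params)

# Example chain: emit points -> scale them -> sum the result.
points = Node(lambda n=10: list(range(n)))
scaled = Node(lambda pts, factor=2: [p * factor for p in pts], points)
result = Node(lambda pts: sum(pts), scaled)

print(result.cook())  # sum of 0..9, doubled: 90

# Tweak a parameter anywhere upstream and re-cook the whole chain:
points.params["n"] = 5
print(result.cook())  # now sum of 0..4, doubled: 20
```

This is the contrast with the rigid menu-driven tools of the era that LaBounta mentions next: nothing is ever committed, so any stage of the chain stays editable.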

Henry LaBounta (centre) in Japan during a trip to promote The Prince of Egypt. He is with Michael Carter (left) and Richard Hamel (right) from SideFX. Image courtesy Kim Davidson.

vfxblog: This is Dynamation that was part of Wavefront?

Henry LaBounta: Yes. I’d done Twister at ILM, where we made that tornado with Dynamation. And it was something you could script and kind of procedurally control. And then Prisms and Houdini were like that on steroids. Like an entire package that’s based on those kinds of principles. Which are really common today, but back then most software was pretty strict. You know, here’s a menu, here’s a drop-down. This is the thing you want to do, you commit to that and it’s done.

vfxblog: Where did you get started on the parting sequence?

Henry LaBounta: DreamWorks had a great background painting department that would also do concepts for the film. They had already made some backgrounds for the parting of the Red Sea, and were working on some ideas of what this moment might look like. So our challenge was, how can we really bring that to life and animate it in an interesting way? We tried a lot of different things to get to this point. There were three directors on the film, all of whom were fantastic to work with. And we took them along through the process and showed them work in progress.

Normally the challenge would be, how can we make a fluid system work in a really physically correct way? That wasn’t the challenge here. The challenge here was more, how can we bring something of this scale to an animated film and not make it feel out of place? It wouldn’t really have been possible for the effects artists to draw at that scale and get that across. So shaders were a really big part of it for sure. And the work that Kathy Altieri, one of the art directors, had done was super-inspiring. So by sticking with that colour palette, being inspired by the paintings that were done, and always comparing our work to that, we tried to stay true to the format of the film that way.

Worth checking out: ‘The Prince Of Egypt – A New Vision In Animation’. This art book includes many behind-the-scenes stills from the Exodus sequence.

vfxblog: When you were making it, what could the directors review on? Were you able to do playblasts? Or did you get a fairly final result pretty quickly?

Henry LaBounta: What we would do is we would pick a few hero shots. Looking back at it now, some of the hero shots we picked were some of the most difficult shots to start out with, maybe not the best idea. And we would try and get those working. This is months and months of reviews and iteration to get it to the point where everybody was happy with it. And then once we got that done, it was like, okay that’s a foundation for all other shots. And a lot of other shots in a way kind of fell out of that very quickly. And those didn’t require as many reviews.

vfxblog: What do you remember was the reaction when this film got released?

Henry LaBounta: Well, I think the team was really proud of what we had created. And we had a great team we put together for this. Over the years as I’ve talked to people and said, ‘Oh yeah, I worked on this movie,’ I’ve been surprised how many people have told me, ‘I love that movie. I’ve watched that so many times. It’s our go-to movie at the holidays.’ And it’s just heartwarming to hear that it had that impact on people all these years, that they got something out of it and really enjoyed the work we did.

Find out more about the VIEW Conference at http://viewconference.it

Compulsory viewing: the Computer Animation Festival at SIGGRAPH Asia Tokyo 2018

‘Reverie’ – SIGGRAPH Asia 2018 CAF Best Student Film Award

Here at vfxblog we’ve already previewed some of the VFX-related talks and dived into the Technical Papers process for SIGGRAPH Asia Tokyo this year. Now we look at the other conference highlight: the Computer Animation Festival.

The ‘CAF’ takes in the Animation and Electronic Theater, and the VR Theater, plus a selection of panels and talks about the latest in computer animation and visual effects. It’s definitely one of the best places to catch up with films from around the world.

vfxblog asked Computer Animation Festival Chair Shuzo John Shiota, who is also the President and CEO of Polygon Pictures Inc., to tell us more about how the CAF works and what to look forward to this year.

vfxblog: People are always dying to know: what is the difference between the Animation Theater and the Electronic Theater?

Shuzo: The Electronic Theater is a 100+ minute show comprised of the very best 2018 has to offer in terms of computer graphics storytelling. The types of story are quite varied, ranging from short animations, visual effects, game cinematics and music videos to scientific visualization.

The Animation Theater is comprised of works that didn’t quite make the Electronic Theater selection but are nevertheless worthy of merit, or works with a longer running time that makes them hard to program within the ET. An interesting note is that SIGGRAPH North America no longer has the Animation Theater in its program, so it’s sort of like a lost dialect that only exists in Asia.

Also, don’t forget the VR Theater, debuting at SIGGRAPH Asia for the first time, which showcases the best VR storytelling pieces of the past year.

vfxblog: Can you talk about the submission and judging process for the Computer Animation Festival – how did you arrive at the participants and the winners?

Shuzo: We had about 400 submissions from all over the world. They were first reviewed by our online reviewers, comprised of industry veterans, who nominated selections to be sent to the final jury. On formulating the final jury, in addition to the deep knowledge of the art and industry that is expected of any CAF juror, my aim was to 1) bring in an Asian perspective (5 of the 7 jury members are Asian, and another is currently working in China), 2) create a female majority (4 out of 7 are female), and 3) create generational diversity (the jury ranges from members with decades of experience to a young artist in his 20s who is also a multiple CAF Asia awardee).

The jury made its selection based on the following criteria: 1) craftsmanship, 2) relevance, 3) originality, and most importantly, 4) does it move you?

‘L’oiseau qui danse’ – SIGGRAPH Asia 2018 CAF Best In Show Award

vfxblog: Do you feel like there were any particular trends in the submissions this year?

Shuzo: On watching the CAF trailer, I think you will find that the look and feel of the selected titles are truly diverse and eclectic. This underscores the fact that computer graphics as a medium of storytelling has truly matured, and is now capable of creating images in a myriad of styles.

vfxblog: What kinds of panels and talks related to the CAF are planned?

Shuzo: We have around 10 production sessions that will no doubt give the audience valuable insights on a wide range of digital production: from Hollywood blockbusters by the likes of Pixar, to distinct digital anime productions by local Japanese studios, VR productions, and 64k intro productions.

We are also planning to hear from the director of this year’s “Best in Show”, “L’oiseau qui danse”.

‘Vermin’ – SIGGRAPH Asia 2018 CAF Jury Special Award

vfxblog: Now that the winners have been announced, are you able to say which of the submissions also stood out for you?

Shuzo: I am very happy about the selections. I think we have a very good Electronic Theater, Animation Theater, and VR Theater. As a Chair, I am not able to personally vote, but ultimately, all the pieces I was rooting for got chosen as the top picks!

You can register to attend SIGGRAPH Asia Tokyo 2018 at http://sa2018.siggraph.org/registration.

Ten things we learned about Framestore’s CG stuffed animals for ‘Christopher Robin’


Marc Forster’s Christopher Robin is easily one of the most delightful films of 2018, and also contains some of the finest fully-CG animated characters you’ll see this year. That work was led by Framestore, which had, of course, sharpened its expertise in integrating CG animated ‘stuffed toy’ characters with live action in the Paddington movies.

Christopher Robin’s production VFX supervisor was Chris Lawrence, and its production animation director was Michael Eames (both hail from Framestore). In addition to Framestore’s 727 shots for the film, Method Studios also came on board to deliver several scenes.

To find out more about how Christopher Robin’s characters came to life, vfxblog sat down with Framestore animation supervisor Arslan Elver in London. Elver shared details on early animation tests, the on-set stuffies used during filming, and some of the specific details infused into characters such as Pooh, Piglet and Tigger.

1. Framestore started animation tests before seeing any concept art

Arslan Elver: We started with Pooh, but at that point we hadn’t seen the designs, the concept art, or anything else yet. The very first test we did was a yellow Pooh Bear in a more classic Disney style. I did the very first tests, with our animation director Michael Eames also getting involved: animated tests of him trying to climb stairs, but he fails and tumbles down, and he looks at his tummy and then looks around at an empty honey pot.

Pooh turntable.

2. The director didn’t want elbows or knees

One thing our director, Marc Forster, immediately reacted to was that the character had elbows and knees, and he didn’t want them. I didn’t understand at first but then he showed us some concept art from Michael Kutsche which he was very happy with. It was a teddy bear, it was Pooh Bear walking by holding the hand of Christopher Robin, and looking around, but you could tell there was nothing bending or anthropomorphic about it. We went back to the drawing board, and we did new tests to reflect that.

Pooh model.

3. Some inspiration for the animation came from a philosophy book

Marc Forster talked to us about a book called The Tao of Pooh. It’s about Taoism philosophy using the Winnie-the-Pooh characters. In the book, it talks about an uncarved block as an idea. The book says Pooh is an uncarved block. He’s not carved as a shape or sculpture. He’s empty. He’s a clean sheet. He doesn’t have any prejudice. He doesn’t have any expectations. He’s just who he is. So we started to dig into those ideas and think about the teddy bear aspect of it.

Character line-up.

4. Stuffed toys changed the way the characters would be animated

What happened is, on the set, they made these characters as stuffed toys. They had fully furred ones and then just grey ones with no fur. The stuffies were moved around by actors on set, and then the camera person shot the scene again via muscle memory for a clean plate. The stuffed toys were so interesting to see, and Disney and the filmmakers fell in love with them, so they asked us to match our assets to that. The stuffies were so cute, you could put them on a chair and just by rotating the head a little bit you could immediately get some emotion out of it. So that was the kind of behaviour we were trying to find. We’d think, during animation, ‘What kind of head tilt will give that same feeling?’

5. Some interesting animation moments came from those on-set stuffies

With Piglet, say, I immediately picked up on the ears from the stuffies. The fabric around the ears is looser, it has these very nice ear movements on the head turns. And then with Tigger, who is so long, I was holding the stuffie from his head, and because he’s heavy, the rest of his body was hanging down with these very floppy arms. Mike Eames saw it and said, ‘That’s interesting. I wanna play with that idea a little bit.’

Piglet, voiced by Nick Mohammed.

6. Framestore’s animators played with the stuffed toys, too

When we got all these toys into Framestore, I called all the animators in and I said to them, ‘Just play with them,’ and we were recording. It’s some of the most stupid video footage ever. If you see these, I mean, just these 35-year-old men playing with plush toys, it’s ridiculous.

Director Marc Forster on set.

7. Pooh doesn’t really blink

Because of this Zen thing about Pooh, the director didn’t want him to blink. Even the eyebrow movements, he wanted to be very minimal. The mouth movements as well. He didn’t want it to be very complex. It was quite tricky because to be able to sell his talking, there’s a bit of jaw movement for sure, but if it’s just that it looks very weird. If you start to put in a lot of movement, it looks very stretchy very quickly, so we had to think, how can we keep it alive? How could we move the corners of the mouth and make some shapes that at least suggest that sound is generated, but have enough fall-off on the corners of the mouth so it doesn’t feel like it’s stretching? It’s a very difficult thing to do without stretching.
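One way to picture the fall-off Elver mentions (a hypothetical sketch, not Framestore’s actual rig; the function and parameter names are invented): weight the jaw’s influence along the mouth curve so the centre follows the jaw fully while the corners barely move.

```python
def falloff(d, radius):
    """Smoothstep-style weight: 1.0 at d=0, fading to 0.0 at d >= radius."""
    t = min(max(d / radius, 0.0), 1.0)
    return 1.0 - t * t * (3.0 - 2.0 * t)

def mouth_shape(jaw_open, num_points=7, radius=1.0):
    """Vertical offsets for control points along the mouth.

    Points are parameterised from -1 (left corner) to +1 (right corner);
    the centre inherits the full jaw movement, the corners almost none,
    so the mouth reads as talking without visibly stretching.
    """
    offsets = []
    for i in range(num_points):
        u = -1.0 + 2.0 * i / (num_points - 1)  # position along the mouth
        offsets.append(jaw_open * falloff(abs(u), radius))
    return offsets
```

Shrinking the assumed `radius` pins the corners harder, which is roughly the dial between "alive" and "stretchy" described above.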

Framestore concept art.

8. Tigger’s tail: should he jump on it a lot, or not?

I looked at Milt Kahl’s beautiful animation for Tigger when I was doing animation tests, and actually I did something based on that where he was running on four legs, and I was thinking, ‘No, Marc’s not gonna go for this.’ But he responded really well. He liked that. He liked the energy of Tigger, but when I did the test of him jumping on his tail, and then hopping down and clapping, Marc said, ‘Yeah, it’s very nice, but maybe we’ll only make him do it once or twice in the film.’ But it grew on him. During the production, I found myself getting notes like, ‘Let’s put him on the tail again.’

Tigger, voiced by Jim Cummings.

9. Eeyore went from sad to…still pretty sad

Eeyore was interesting because in the very first test he was walking and then just sitting on his bum, depressed. We started to do that but people didn’t really respond very well because he was a bit dead, so the note we got back was that we needed to keep him alive but still make him feel very sad. So we kept that same posture, but we raised the head up and rotated it up more. I think Marc wanted to see his mouth more because he had such a long muzzle. The other thing was his eyes. Because of the fur, the toy is so sweet and cute, but as soon as you do a little bit of this pose, the fur covers the eyes so much that in the render they look almost like a thin black line, so there was a bit of back and forth with that.

Eeyore, voiced by Brad Garrett.

10. Getting honey and food onto Pooh was tough

They didn’t do anything on the set but, later on, they did carry out some shoots for the honey, for how it looks on the face with the fur. So if Pooh chucks his face into the pot, we had to work out, what kind of layer of honey comes out and how much of it stays on? They used more of the grey stuffies without any fur for all the dirty stuff. There was one beautiful scene where there’s this big cake and they all jump on it, but they didn’t shoot anything for it. What we did was get one of our animators to put his head into it and chomp on it and see how much remained on his face and how the cake breaks up, to help the effects guys. So we sacrificed one of our animators for the effects guys, but at least they got to eat cake.

Pooh digs into some honey.

Christopher Robin is now available on Digital, DVD and Blu-ray.

Tech papers: the secret to SIGGRAPH Asia success

Image from ‘3D Hair Synthesis Using Volumetric Variational Autoencoders’, ACM Transactions on Graphics (Proc. SIGGRAPH Asia), December 2018, by Shunsuke Saito, Liwen Hu, Chongyang Ma, Linjie Luo and Hao Li.

The Technical Papers section of SIGGRAPH Asia 2018 in Tokyo is shaping up, as always, to be a key part of the conference. But how do authors get their tech papers into a SIGGRAPH or SIGGRAPH Asia conference? And what happens once they do?

To find out, vfxblog asked Hao Li, who is a co-author on two papers accepted at SIGGRAPH Asia this year, how it all works.

Regular vfxblog readers will certainly have heard of Li and his research into digital humans. He is CEO & Co-Founder of Pinscreen, Inc. (which is developing ‘instant’ 3D avatars), Assistant Professor of Computer Science at University of Southern California, and Director of the Vision and Graphics Lab at USC Institute for Creative Technologies.

More about Pinscreen later, but first, the technical papers. This year, Hao is a co-author on two papers accepted to SIGGRAPH Asia:

Koki Nagano, Jaewoo Seo, Jun Xing, Lingyu Wei, Zimo Li, Shunsuke Saito, Aviral Agarwal, Jens Fursund, Hao Li (some more information and links regarding the paper on Koki Nagano’s website)

Shunsuke Saito, Liwen Hu, Chongyang Ma, Hikaru Ibayashi, Linjie Luo, Hao Li (information on the paper at Linjie Luo’s website)

These papers are the end results of countless hours (and in fact, years) of research. So where does that process start, in terms of submitting a technical paper?

“The bar for SIGGRAPH and SIGGRAPH Asia technical papers is high and the approach for submitting a paper can be very different depending on the type of projects,” says Li. “They can be theoretical/applied and either solve a known problem or something entirely new.”

What to consider before submitting

Before submitting a SIGGRAPH or SIGGRAPH Asia paper, Li notes that, as a general rule, he considers the following things first:

1. Will the reviewer be impressed/excited by the results – not necessarily high-quality renderings, but will the results have a ‘wow’ effect? What is the first impression?

2. Are the technical contributions and novelties significant enough or is it too incremental?

3. Can I position/differentiate my proposed method with existing papers and show convincing advantages?

4. Is the problem interesting? Am I solving a long-standing problem that couldn’t be solved yet? Is my work achieving the state of the art on a well-known problem and making a significant impact? Have I introduced a new field that can inspire more work?

Getting accepted

You can find more on how papers are submitted and reviewed here, but Li of course has some inside knowledge about how to get a paper accepted from several years of working in the field.

He says successful papers usually satisfy those questions above, in that:

1. The reviewers must be impressed by the results.
2. The method is new and there are significant contributions made by the paper.
3. The proposed solution is really different or better than existing ones.
4. The problem is exciting, useful, and/or impactful.

“The reviewers should always be convinced why something cannot be achieved yet with existing solutions, and how/why the presented method can solve it,” says Li. “A comprehensive discussion and clear differentiation with related work is always needed.”

Successful papers, Li adds, are generally very well written with very clear contributions. They are also “presented with polished illustrations, figures, and accompanying videos. The evaluations of the method also need to be very thorough and rigorous.”

Each year, too, there are often industry trends and issues that are timely. Li says this can be “favorable for getting reviewers excited, for example, deep learning, VR/AR, and 3D printing.”

You got accepted – now what?

It’s a lot of work just to be accepted, but there’s more to the Technical Papers section than just the paper itself. Presenting the paper at the conference is a major part of spreading the knowledge and generating discussion. This actually begins in the exciting Technical Papers Fast Forward, where authors have less than a minute to entice conference attendees to come and view their full presentation. The Fast Forward at SIGGRAPH Asia Tokyo takes place on Tuesday 4th December from 6pm to 8pm.

For the full presentation, Li suggests the following flow that has been a basis for him and colleagues for some time:

1. Start with some slides to motivate the audience: why should they care?
2. Get straight to what problem we are trying to solve.
3. Explain why it cannot be solved previously while presenting prior work, and why it’s challenging.
4. Either give an overview of the method (top-down) or explain the technique from a simple example (bottom-up).
5. Show insanely cool results!
6. Mention some limitations if any.
7. Discuss what’s next and show some future directions.

“The key,” concludes Li, “is to connect to the audience and speak as if you are explaining the work to a friend or colleague, and not sounding like you are reading from a paper. The audience has to be convinced that you know what you are talking about.”

Where it might all lead

Technical papers unveiled at SIGGRAPH and SIGGRAPH Asia are diverse, and often lead to continued research and sometimes even real products. Pinscreen is an example of where Li and his colleagues’ initial research into digital humans has been taken further.

The company has released an app that generates digital avatars from a single photograph, with photorealistic hair and clothing options. Pinscreen has also launched a facial tracking SDK along with a demo app.

You can find out more at pinscreen.com, and see Pinscreen’s presentation at SIGGRAPH Asia 2018 Real-Time Live! (Pinscreen Avatars in your Pocket: Mobile paGAN engine and Personalized Gaming) on Friday December 7th at 4pm-6pm. Also, check out fxguide’s in-depth coverage of Pinscreen here.

Good luck in submitting your technical papers in the future, and hope to see you at SIGGRAPH Asia Tokyo!

You can register to attend SIGGRAPH Asia Tokyo 2018 at http://sa2018.siggraph.org/registration.