Full liquid metal, now in 3D: re-visiting the freakin’ T-1000 walking out of the fiery truck crash

Illustration by Aidan Roberts.

If you’ve never seen James Cameron’s Terminator 2: Judgment Day – either on the big or small screen – now’s the time to embrace this wonder of filmmaking and effects. The movie has been digitally re-mastered and received the ‘full liquid metal 3D’ stereo conversion treatment by Stereo D. The new release just premiered at the Berlin International Film Festival and will have public release dates around the world in August.

Twenty-six years ago, T2 helped usher in a new wave of digital visual effects artistry thanks to the pioneering computer graphics work by ILM, which capitalised on the studio's earlier work for The Abyss and which ILM would take even further on Jurassic Park.


It was the liquid metal T-1000 played by Robert Patrick that represented the majority of this CGI work in the film. Indeed, a hero reveal of the ‘cybernetic organism’ emerging from the flames of a burning truck wreck became one of ILM’s signature shots for years to come.

Two of the principal artists behind that work were animation director Steve ‘Spaz’ Williams and associate visual effects supervisor Mark Dippé. In this special vfxblog interview conducted at SIGGRAPH Asia 2016 to celebrate the 25th anniversary of Terminator 2, Williams and Dippé recount their efforts to create that memorable shot, known as CC-1.

We’re gonna f#@king build it!

Steve Williams: We had five separate categories of shots for Terminator 2. Now, we had what was called the pseudopod team, so we could repurpose the data from The Abyss. But as opposed to refracting, the T-1000 was reflecting. Then we had the morph team, you know, which was the more two-dimensional transformations. Then we had the death team, that was the whole death sequence at the end. And then we had the human team, which I was part of in terms of animating. This hero shot – CC1 – fell into that group.

Mark Dippé: The pseudopod from The Abyss was an abstract alien creature that had no relationship to humanness or even livingness. But for the T-1000, the big question was, how can you make it move and behave as if it’s a human inside, whatever you wanna call it, even though Robert Patrick in this case is not a human, he’s a T-1000, he’s a machine, but that was the big concern.

Mark Dippé (left) and Steve ‘Spaz’ Williams speaking at SIGGRAPH Asia 2016. They were in conversation with Scott Ross.

Steve Williams: We said, fuck it, we’re gonna fuckin build it, you watch. That was the MO.

Mark Dippé: Yeah, we’re gonna match the real Robert Patrick, that was the whole principle.

Steve Williams: We were so cocky.

Mark Dippé: And then Cameron just goes, go for it man. You’re on it. Do it.

The shot, and the story

Steve Williams: In the script, the T-1000 is going to walk out of the fire and he’s going to, the term people used was ‘morph,’ but in fact it was model interpolation. He’s going to interpolate into the fully clothed version of Robert Patrick.

Mark Dippé: This was the first time you were gonna see the liquid metal man, so in the sort of rule of thumb of the visual effects world it had to knock your socks off. The first time you see your creature or your effect, people have to buy it a hundred percent or you’ve lost them.

Steve Williams: It was three hundred and seventy five frames and it took five months to do the animation.

An excerpt from the script for Terminator 2: Judgment Day.

Mark Dippé: So we worked our ass off on this sucker.

Steve Williams: This was our big chance, so we knew, I knew, Mark and I both knew that I had to build this guy in data. I had to build the whole creature in four-sided b-spline patches.

Mark Dippé: What’s amazing about this shot too is, the first time you see it, it reveals what it is, but it sort of tells you the whole story. There’s this terrible crash, you think this creature in the future is dead, and out comes this liquid metal blob out of the fire that transforms into a fully formed realization of the character you’ve come to know, the T-1000 Robert Patrick. And you get it right there.

Where to start?

Steve Williams: So, we had what we called RP1 through to RP5. Robert Patrick – RP – that was the actual naming convention.

Mark Dippé: RP1 is the blob, an amorphous blob. RP2 is a humanoid smooth shape kinda like Silver Surfer. RP3 is a soft, sandblasted guy in a police uniform made out of metal, and RP4 is the sharp detail of the metallic liquid metal police guy, and then RP5 is live action.

Steve Williams: So this shot here was CC1 where he migrates from RP2, which is what we call the ‘Oscar’ version, a smoothed-down T-1000, but he shares the exact same dataset or control vertices as RP4. And RP4, again, is the fully clothed version with the wrinkles and buttons. What I did is I hid all the buttons and the badge and the gun, I hid it inside his body cavity, and grew it out in time. The press called it morph. In fact, it was called model interpolation.

The various T-1000 maquettes on display at ILM.

Steve Williams: Now, to get to all those RP versions, we had to break it all down. In the script it said he migrates from the blob version into a fully clothed version. That’s Cameron’s idea – so we had to translate that. So we thought, okay, we’ll break it into four stages. Let’s just do that in data, but the control vertices have to actually share the exact same properties. But they migrate in time. That’s essentially what the MO was at that point.

Mark Dippé: We chose those ones because we felt, first of all it was hard to do any of this, but we felt those five stages were sufficient for us to achieve all the story ideas that were required. You know, he’s a formless blob, oh, he’s kind of a soft humanoid form. Oh, he looks kinda like a policeman. He is the policeman, Robert Patrick.

Steve Williams: If you look at Robert Patrick and what we call the RP4, which is just before it becomes the real guy, all that data of his head we collected using a cyber scanner. Then what we had to do is write an equation to actually smooth it all down and make it stupid, make it essentially like ice cream for RP2. So the data all had to be the same. You were not changing the amount of control vertices in the actual data. You had to run a smoothing algorithm over it.
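The two constraints Williams describes – every RP stage sharing the exact same control-vertex layout, and a smoothing pass that washes out detail without changing the data – can be sketched in a few lines. This is only an illustration of the idea; the function names and the naive neighbour-averaging smoother are assumptions, not ILM’s actual code:

```python
import numpy as np

def model_interpolate(verts_a, verts_b, t):
    """The 'morph' as model interpolation: a straight blend between two
    meshes that share the exact same control-vertex layout.
    t=0.0 gives mesh A, t=1.0 gives mesh B."""
    assert verts_a.shape == verts_b.shape, "stages must share one dataset"
    return (1.0 - t) * verts_a + t * verts_b

def smooth_down(verts, neighbors, iterations=10):
    """Naive smoothing pass in the spirit of 'make it like ice cream':
    each vertex repeatedly moves toward the average of its neighbours.
    The vertex count never changes; only the detail is melted away."""
    v = verts.copy()
    for _ in range(iterations):
        v = np.array([v[list(n)].mean(axis=0) for n in neighbors])
    return v
```

Because both stages share one dataset, interpolating in time is trivially well-defined – which is exactly why hiding the buttons and badge inside the body cavity and “growing them out” worked.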

Mark Dippé: We also had little chrome maquettes that were made for each of the stages. The ILM Model Shop made them and they were used as concept pieces.

Sketches of the five RP stages.

Steve Williams: This was all done in a forward kinematic system called Alias, version 2.4.1. So I had to counter-animate it to make sure the feet didn’t go through the ground. Obviously with an inverse kinematic system we handle it differently, where you’re taking one effector and modifying it, as opposed to: rotate, rotate, rotate, set three channels of animation. And this is all built in b-spline patches – the rule with a b-spline patch is four sides; you have to have four sides. Now, of course, it’s all sub-divs.
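To see why the forward-kinematics setup forced so much counter-animation, here is a toy 2D two-joint leg (the names and proportions are illustrative, not from the production system). Pose flows from the root down, each angle is its own channel, and nothing pins the foot to the ground:

```python
import math

def fk_leg(hip, hip_angle, knee_angle, thigh=1.0, shin=1.0):
    """Forward kinematics for a 2D leg: each joint angle is a separate
    animation channel, and the foot position is simply whatever falls
    out at the end of the chain."""
    hx, hy = hip
    kx = hx + thigh * math.sin(hip_angle)
    ky = hy - thigh * math.cos(hip_angle)
    fx = kx + shin * math.sin(hip_angle + knee_angle)
    fy = ky - shin * math.cos(hip_angle + knee_angle)
    return (kx, ky), (fx, fy)

# Straight leg, hip at the origin: the foot lands two units below the hip.
_, foot = fk_leg((0.0, 0.0), 0.0, 0.0)

# Translate the hip without touching the angles and the foot slides through
# world space too -- in FK the animator must counter-animate the joint
# angles on every frame to keep the foot planted. An IK effector would
# instead pin the foot and solve the angles automatically.
_, foot_after = fk_leg((0.5, 0.0), 0.0, 0.0)
```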

Mark Dippé: I mean, we had so many technical difficulties that have just been solved for today. Like we had to build these separate pieces that then had to be blended together with an external piece of software that was written by this consultant named Angus Poon who Spaz knew.

Steve Williams: Yeah, a buddy of mine from Alias.

Mark Dippé: The fundamental thing to realize is the modelers, the animators, everybody had to be aware of these technical constraints, and no one could screw up or it just all fell apart.

Robert Patrick reference

Steve Williams: We had Robert Patrick come up to ILM and we painted a grid on him, a four inch by four inch grid all over his body, and he was like in a crucifix pose.

Mark Dippé: And then Robert volunteered to be there in his little Speedo underwear.

Steve Williams: He knew he was a part of something big. It was new.

Mark Dippé: Yeah, Robert was fantastic. He was totally game. He was cold, standing there half-naked in front of all these people, you know, with us having him walk a certain way.

Robert Patrick with the grid painted on him.

Steve Williams: We had him run, and he ran so much on the rubber mat that we had that he ended up blistering his feet, to the point where we had to cover his feet up.

So, there was no real motion capture at that time, at all, so we shot him with two VistaVision cameras exposing simultaneously. One from the front on an 85mm lens, and one from the side on a 50mm lens, and they’re firing simultaneously. So I can look at frame one from the front, and that would match frame one from the side. From there I basically rotoscoped Robert’s walk.
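The two synchronized VistaVision views give you, per frame, a front (x, y) and a side (z, y) reading of each painted grid point, which is enough to hand-reconstruct a rough 3D pose. A minimal sketch of that idea – ignoring lens perspective entirely, which real rotoscoping from an 85mm and a 50mm lens would have to account for; the function is hypothetical:

```python
def roto_point(front_xy, side_zy):
    """Combine a front-view (x, y) and a side-view (z, y) measurement of
    the same body grid point into one rough 3D position. The shared y
    axis is averaged as a sanity check between the two cameras."""
    fx, fy = front_xy
    sz, sy = side_zy
    return (fx, sz, 0.5 * (fy + sy))

# Frame one from the front matches frame one from the side:
p = roto_point(front_xy=(0.2, 1.5), side_zy=(-0.1, 1.5))
```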

Mark Dippé: It was really through hand digitization not only of his body data but of his movement data that we created a database with a virtual character. It was all hand-built.

Steve Williams: We even originally included a limp Robert had from a football injury. I noticed it in the initial test that we shot with him. So I had to try and correct that in the bone walk. So when I went and I reanimated CC1 for real when we got the plate photography I made a lot of corrections to that, because he was supposed to walk like a machine.

Mark Dippé: It is one of those things where it’s a little subtle, but you can see it, and it just came out of the rotoscoping.

“His shirt was smoking”

Steve Williams: The shot was filmed on October 30th or 31st, 1990. It was Halloween day that we were literally on that set. So we were shooting with an A and a B camera, and that B camera we had to paint out later because it was there capturing all of Robert’s motion. It was so we could study his motion from the right side as well, because we had to literally try to replicate his walk.

Mark Dippé: What we were using was a motion control head so we could do multiple takes, and we had these markers used for tracking the camera. Everything we did seems so trivial today in some sense, but back then everything was kinda risky and we’d never really done it.

Steve Williams: Now, in The Abyss, for the most part the cameras were locked. But here the camera was moving. This is really the first shot that was ever done where there was a moving plate where you had to have exact motion in the computer that mimicked the exact motion in reality. This was a Tondreau camera, and the Tondreau camera was a PC-based system, and we took the actual curve of the motion of the camera and plugged it into Alias version 2.4.1, and it was kind of accurate.

Mark Dippé: Being the hero shot, the most important shot in the movie in many ways, it had all the difficulties you can imagine. Multiple elements we had to now shift in the plate with the flames to the live action Robert Patrick. We had to have our guy, his feet had to stick to the ground, he had to have reflections in the plate, we had to have the flames that were there…

Steve Williams: Well, the reflections actually are Robert Patrick duplicated and flipped upside down.

Mark Dippé: But we had to match him into this, and it had, basically everything was in this shot.

Steve Williams: And we really got such great real flames – they were crazy hot. It was so hot after twenty takes, remember that, his shirt was smoking.

Mark Dippé: This also reflects Cameron’s aesthetic. ‘I want some flames,’ he’d say. And so the effects guys kept putting in more rubber cement, turning up the flame bars, and basically in the last take, I think it’s on the hero take, it’s like it was getting so hot they put that flame blanket for one more take and it literally, there’s smoke coming off Robert.

A final frame from the sequence.

Steve Williams: And that kind of background – moving flames – was pretty challenging. There’s actually a cross dissolve of two elements going on here. Element one is Robert Patrick: the camera’s moving, Robert Patrick runs in half way and he metronomically tries to match. That’s element one. Element two, a clean pass of the entire plate. Then we cross dissolve over thirty frames, because the flames were different and you just don’t notice it, but the camera remembers the move.
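Because the motion-control head repeats the exact same camera move, the two passes line up frame for frame and the dissolve can be a plain per-frame linear blend. A minimal sketch of that blend, with pixel values as flat lists and all names hypothetical:

```python
def cross_dissolve(plate_a, plate_b, frame, start, length=30):
    """Linear blend from plate A to plate B over `length` frames
    starting at `start`. Outside the window you see one plate or the
    other; inside it the mix ramps linearly -- and because both passes
    share the same motion-control move, nothing slides."""
    t = min(max((frame - start) / float(length), 0.0), 1.0)
    return [(1.0 - t) * a + t * b for a, b in zip(plate_a, plate_b)]
```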

Mark Dippé: And then we would have all the reflections in there.

Steve Williams: As a matter of fact, with our chrome reflection we actually, because it’s a six-sided cubic reflection map, on occasion we would reflect things that only we know are in there.

Mark Dippé: The bottom line, though, is that the environment maps are not in sync with the actual photography, so they’re always, but just the character, your eye is very forgiving, your mind puts it all together.
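A ‘six-sided cubic reflection map’ stores the environment on the six faces of a cube; at render time the reflection vector picks a face by its largest-magnitude component. A minimal sketch of just that face selection – this is standard cube-mapping logic, not ILM’s shader:

```python
def cube_map_face(direction):
    """Pick which of the six cube-map faces a reflection vector hits:
    the face on the axis with the largest absolute component."""
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'
```

Since the lookup depends only on the vector’s direction, a pre-rendered environment that is slightly out of sync with the plate still reads as plausible chrome – which is the forgiveness of the eye Dippé is describing.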

Animating a liquid metal man

Steve Williams: The actual data that’s in the T-1000 was exactly what I was looking at. It was wireframe, but it was the actual real data. In the case of Jurassic Park later on it was substitutional data that we subbed in high res later on. So the actual configuration hierarchy of the way that I built the T-1000 dealt with the pivot points of the knees and the hips and stuff like that, so the actual physical data was the chain itself. Unlike Softimage, where you pre-built a chain and hung data off it hierarchically.

So this was a very primitive system. It was a pivot point here, a pivot point there, and a pivot point at the hip, so when I animated it I had three separate channels just for the leg. So when he took a step, you had to counter-animate the data on the foot as he’s moving through, because essentially he’d go right through the floor the whole time.

A body parts sketch and Cyberware scan used by the ILM team.

Mark Dippé: Basically when he animated the hip he had to go and reanimate all of the other elements.

Steve Williams: You had to animate everything all the way up. And so when IK came along you had one channel of animation and an effector that would modify it in real-world space.

Mark Dippé: Then for the morph into the real Robert Patrick, the morph is developed on a 3D body, and you can look at it front-on and wait until it looks good and then it gets put into 3D space onto him. And of course it never lines up a hundred percent so it has to be cleaned up by hand to really line up.

Steve Williams: Stefan Fangmeier ended up rendering this shot, he ended up being the head TD on this, and when he showed up we were having real problems with the chrome shader. And when Fangmeier showed up everything was perfect.

Mark Dippé: And remember, all the match moving was done by hand. There were no automatic match moving tools in those days. And we did compositing with a script based approach. It was based on the early days of the Pixar code – it was all script-based and it was very tedious. For example, in The Abyss, when it was even more primitive, everything was rendered in layers because rendering took so long you did not want to take the chance of having your highlights baked in at too high a level, so every layer was rendered separately and then they were all composited together. We had similar control here but it was definitely much, much more sophisticated.

Chrome man render.

Steve Williams: This is the first example, in Terminator, where the compositing was all digital compositing. The Abyss was optical compositing.

Mark Dippé: In terms of the element, into the live action, yeah.

Steve Williams: We actually literally scanned out the computer generated elements for The Abyss, and that was optically composited for eighteen shots. This is the first example in T2 where all the compositing was done digitally.

A five month odyssey

Steve Williams: I lived with this for so long. Sometimes I would go into our C Theater and just look at it projected. We’d scan out the film, then look on a big huge screen in the theater because: this is the shot that’s gonna be in the movie. And I just remember just sitting there for what seemed like hours at the time just watching it, and then going to the back of the theater and watching it, going to the side of the theater and watching it and thinking, fuck it’s still not right man, it’s still not right.

Mark Dippé: Yeah we’d do stuff like flop it and watch him going left to right.

Steve Williams: We’d even literally put a mirror up to the screen and watch it in reverse that way.

Reference stills of Robert Patrick in police uniform.

Mark Dippé: It’s funny, because now we kind of love and accept it, but back then – you’ve seen it so much, you’re just thinking about what you think is not right. And the truth is, you’re making it all up. You’ve never seen it before, so this whole thing is in your mind anyway. It’s your imagination whether it’s right or wrong.

Steve Williams: And it’s one of the problems with the animation process. When you’re living with a shot for so long you don’t know if it’s good anymore.

Judgment day

Steve Williams: People flipped out when they saw the shot.

Mark Dippé: It was huge. T2 caused a huge explosion. The fan audience is a little bit of a specialized one but it was all over the world, because I went to some festivals and you’d see like the T2 skeleton there, it was a massive thing.

Steve Williams: We went to SIGGRAPH that year for T2 and we were swamped.

Mark Dippé: It was massive.

Illustration by Aidan Roberts.

Steve Williams: And I always appreciated that Cameron would be saying in interviews, ‘It was Dippé and Spaz that figured that out.’ He was very, very open about that.

Mark Dippé: Yeah he’s a very generous dude. I mean, on set he can be a handful, you know, he has a reputation, but I would say once he respects what you do he’s a very generous dude. It was really great working with him.

Terminator 2, which was also notable for its wide use of practical, miniature and make-up effects, would go on to win the Academy Award for Best Visual Effects at the 64th Academy Awards. The recipients were Dennis Muren, Stan Winston, Gene Warren Jr. and Robert Skotak.

Thanks to the team behind SIGGRAPH Asia 2016 for making this interview possible. You can find out what’s happening for SIGGRAPH Asia 2017 in Bangkok here.