How James Cameron Created an Otherworldly Reality in Avatar: The Way of Water

“I see you.” That is the line spoken in both James Cameron’s original Avatar (2009) and its first sequel, the incredible Avatar: The Way of Water, released this past December, when the native Na’vi characters look into each other’s eyes and see the person within. And with the Oscar®-winning visual effects (VFX) he and his team have developed and utilized, the phrase takes on new meaning. The CG animated characters are not cartoons – they are people. And we see them, from the inside out.

For the production of the original film, the team developed and used a unique “Performance Capture” system, put together by a company called Giant Studios, which had provided its Performance Capture system to Peter Jackson and his Wētā Digital (now Wētā FX) studio for the Lord of the Rings films and others. Rooted in a Motion Capture system, the technique captures the performance of an actor’s body by recording and deciphering markers placed at key points on the body, essentially creating a skeleton that drives those same points as the basis of an animated character. Performance Capture extends the system by adding a facial camera rig, which captures the actor’s true facial expressions as they perform a scene and, again, can be used as the basis of the facial animation of their character. The resultant animation from Wētā is so lifelike, it can be easy to forget one is watching alien characters, and not human actors.

Not long after the release of the first film, Cameron, his longtime producing partner, Jon Landau, and the production team held a “post-mortem” retreat at a hotel in Santa Barbara, to talk and think about the kinds of improvements they would like to make for the film’s sequels. “First and foremost,” Landau states, “was making the actors’ performances come through stronger in the characters. But we also knew we would have much more integration between human actors and CG characters. So it was a matter of how do we give Jim more tools, when he’s on the set, to see, in real time, things that aren’t there.”

Returning to the fold was Virtual Production Supervisor Ryan Champney. An electrical engineering and computer engineering major from Georgia Tech, Champney eventually fell into working with Performance Capture. “I was originally writing software to determine when cell towers would become saturated,” he recalls. “I realized, ‘I don’t really want to be doing this,’” he laughs. He went to work for Giant, and served as the Simulcam (see below) technical director on Avatar.

As things began to gear up for the first sequel, The Way of Water, Cameron asked him to return, as Virtual Production Supervisor and Virtual 2nd Unit Director. “He said, ‘We’re going to do a bunch of these,’” Champney notes. Cameron’s production company, Lightstorm Entertainment, also acquired Giant, which had fallen on hard times, to secure the technology for future use.


Tuk (Trinity Bliss) swims in the ocean

“In many ways,” says Landau, “Ryan is the father of the system. And he never says, ‘This is the system, I’m just gonna keep it going.’ He’s always approaching things as ‘How do we improve it? What else do we need? What does Jim need to film? How do we adapt it to include new technologies?’ He’s always pushing to do more.” Says Champney, “We have to think, how do we make all the systems faster, more efficient, and work in real time? I was brought in in 2012, but was always thinking, how do we build something that we know will still be cutting edge 10 years on? The system couldn’t be something where we would be locked in and can’t change. It had to be able to evolve, without affecting the production.”

Design
Cameron is a truly visual storyteller, so as he is crafting his story, he makes use of concept art from his Art Department to help him visualize what he’s seeing in his head. “He uses those to bash out story ideas, while they are writing, just to get a visual of it,” explains Richie Baneham, Lightstorm’s Executive Producer, Visual Effects Supervisor and Virtual 2nd Unit Director. That art was developed by Production Designers Dylan Cole and Ben Procter. “They will walk through the script with Jim and map out scenes,” says Champney. Cole’s work focused on everything present in the natural world of Pandora, where the story takes place, and its inhabitants, the Na’vi. Procter’s designs were focused on the elements of the human world – the human environments, ships, gear and vehicles. “The images that they produce give us a place and a sensibility about that world,” Baneham says.

Cole’s and Procter’s concept art is then turned over to Lightstorm’s teams of visual effects artists, working out of offices attached to the company’s soundstages in Manhattan Beach, CA. Each stage has three stories attached, housing an average of 200 artists (though ranging, over time, between 150 and 300). These include departments such as Character Animation, Environments, Motion Edit, Sequence, Post Visualization, Kabuki (see below) and others. The Environments artists then take that art and begin modeling each of the assets – molding and sculpting everything from the physical environments in which the scenes take place, to the Na’vi characters, creatures and animals, to every piece of costume, plant, vehicle, weapon and more – a total of more than 38,000 individual assets, all of which have to be created before Performance Capture can take place.

“When the production designers turn over an environment, these artists are the ones building that environment,” Champney explains. “The ones digitally modeling it, doing sequence layout, preparing things for the stage, preparing things for final rendering in New Zealand. They’re the unsung heroes, behind the scenes, building all of these worlds – and creating the breathtaking environments of a scene.”

Interestingly, all of these items were built by the artists ahead of time, not only for Cameron’s virtual production (see below), but for use by Wētā FX when completing the film. “All that work has to be done eventually,” Champney explains. “With virtual production, you’re front-loading that work. It needs to be ready by the time you’re shooting,” for use in the Simulcam system (also see below), “as opposed to ‘We’re just gonna shoot something, and then dump it over to a VFX company, and have them put it together.’ Jim doesn’t want to rely on what it will look like later. He wants to decide what it will look like in the moment.”

The artists not only develop the 3-dimensional models to be used in the scenes, but fully flesh out the environments, using Wētā’s Gazebo software, a proprietary rendering engine, creating “template level” (a sort of game engine level) animation in 3D space of each scene. Cameron was then able to essentially “scout” the environments with the artists – as any director would scout a location – using his virtual camera, and ask for changes in the worlds they have built. “Jim will see the world, through his virtual camera with real-time lighting, before any actors get there,” says Champney, “and say, ‘Okay, let’s put some trees over here, make that hilltop taller, etc.,’” crafting it into the space in which he will place his actors and characters for blocked scenes.

The process is not an unusual one for virtual productions, many of which Champney works on. “Some filmmakers like to use VR. They’ll put on a pair of goggles and actually walk around and look at the virtual world, accompanied by the VFX Supervisor and Production Designer. They’ll have a list of assets they can bring up on a menu and use their joystick – ‘I want to grab that rock, put it there. Let’s make that rock bigger.’ They’re essentially dressing the set, virtually, before they shoot it.” Cameron prefers to use a virtual camera, instead. “He feels, ‘I want to see it through a lens, not a set of goggles.’” But the process is the same. “He’ll ask, ‘Can we add more bushes here?’ ‘Remove that bush.’ ‘Chop that tree – make that tree bigger.’ ‘We need more fire light, we need more fill.’ It’s a very powerful tool, but unless you know how to manage it all, it can get very complex.”

It is here, too, that Cameron does his initial blocking for the scene, deciding how and where the characters will be placed to deliver the scene. “At that point, we’re able to very quickly understand what the blocking will look like, and then evolve the environment, to manipulate the performances in a manner that will compose properly, and inform our actors, to be able to have a real world experience,” Baneham notes.

Performance Capture
The way virtual production works, those actors whose characters will later be fully realized by Wētā, but whose movements will drive the final product, perform the scenes in a Performance Capture stage. “We shoot everything virtually first,” says Champney. “Even live action actors, who will eventually be shot on a live action stage with a live action camera, will do their performance on the Performance Capture stage, so that Jim can work out how he wants them in the scene with the other, later-animated characters, played by their fellow actors on the stage.”


Director James Cameron and crew behind the scenes. Photo courtesy of Mark Fellman

Markers are placed at key points on the actor’s body, infrared light is flashed at them at 60 cycles per second, and the reflections from those markers are captured and recorded with special cameras located around the three-dimensional “volume.” The volume is, physically, the three-dimensional space in which all of this is taking place, within the Performance Capture stage. Special software keeps track of the movement of each marker on the actor’s body – and even of more than one actor’s motion at a time – and creates, essentially, a moving skeleton of the actor’s movement throughout the scene, keeping precise track of where the actor is within the volume itself.
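The production’s tracking software is proprietary, but the core geometry behind recovering a marker’s position is standard multi-camera triangulation. Purely as an illustration (a minimal Python/NumPy sketch, not Lightstorm’s or Giant’s actual code), two calibrated cameras that both see the same reflective marker are enough to solve for its 3D position in the volume:

```python
import numpy as np

def triangulate_marker(P1, P2, uv1, uv2):
    """Estimate a marker's 3D position in the volume from two camera views
    using linear (DLT) triangulation. P1, P2 are 3x4 projection matrices of
    two calibrated capture cameras; uv1, uv2 are the 2D image positions of
    the same reflective marker as seen by each camera."""
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)       # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]               # homogeneous -> 3D point in the volume
```

In practice, of course, dozens of cameras see each marker, and the solver must also label which marker belongs to which joint on which actor, frame after frame.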

While keeping track of the body is part of what’s called “Motion Capture,” filmmakers like Cameron want to capture the actor’s facial expression through the scene, as well, bringing it up another level to what is known as “Performance Capture.” To accomplish this, the actor wears a “facial rig” – a special headgear rig which supports a pair of HD cameras in front of the actor’s face, to record not only every bit of expression on their face, as they deliver the scene, but also, again, the movement of sets of dots painted onto their face, data which is extracted and used in the final facial animation later.

“Motion Capture is about ‘How is your body moving?’” says Champney. “Performance Capture includes all the facial tics, facial movement and audio of the actor’s delivery.” It is key, he says, to use the actor delivering the character in the scene to do both the physical movement and the facial movement, as opposed to, say, having a stunt performer do the motion and have the cast actor simply stand in front of a camera and say their lines to a 3D camera. “The way your face moves is a direct representation of how your body’s moving. So you don’t want to have just a face that gets glued onto something else. As much as possible, you want to capture a complete performance.”

While during the making of the first film the facial rig had a single wide angle lens to capture the image, for Way of Water a pair of such cameras was used (still wide angle, in order to avoid having the lenses’ small booms extend too close to the actor’s face, as longer lenses would require). “You want as much information as you can about the face,” informs Wētā FX Senior Visual Effects Supervisor Joe Letteri, “and a single camera doesn’t capture a whole lot. Your face actually covers a lot of real estate. But when we did the first film, those cameras were so heavy, we did not want to ask the actors to wear a second one, because it’s prohibitive to the performance – it changes the behavior. There’s a tradeoff between getting good data and getting a good performance.” By the time the new film was being made, the newer generation of cameras was so light that a pair of them equaled the weight of the original single camera.

That system provides stereo, allowing the team to build a true “depth map” of the actor’s face from real data. “Now, you’re getting a depth mesh that you can reconstruct and look at with a longer lens, so you can see what’s going on. You’re now producing a topography that can give you that mesh you can reference,” and apply to facial animation to represent that actor’s face on the CG model of their face. “It gives you stereo disparity,” notes Champney. “Instead of just getting the 2D position of the markers on the face and trying to reproject that on the face, you’re now getting a 3D representation. You’re getting the Z-space of a marker, not just the X and Y-space.”
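The exact facial pipeline is Wētā’s own, but the stereo principle Champney describes is textbook geometry: the shift (disparity) of a marker between the two cameras yields its depth. A minimal sketch, with purely illustrative numbers:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic stereo relation: Z = f * B / d.
    focal_px     -- focal length of the facial-rig cameras, in pixels
    baseline_mm  -- distance between the two cameras (the stereo baseline)
    disparity_px -- how far the same facial marker shifts between the two images"""
    return focal_px * baseline_mm / disparity_px   # depth Z, in millimetres

# Illustrative numbers only (not production values): a marker that shifts
# 192 px between cameras 60 mm apart, with an 800 px focal length, sits
# about 250 mm from the rig.
z_mm = depth_from_disparity(800.0, 60.0, 192.0)   # -> 250.0
```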

Interestingly, he says, at some point in the near future, Wētā may not even need the facial markers on future shoots, thanks to the introduction of A.I. into their mapping/tracking software. “They now have an A.I. element, to teach the system an actor’s type of motion, training it with muscle simulations for a particular actor in a particular response. I would imagine, if Movies 4 and 5 are greenlit, we might not even have the dots on the face anymore. It might just be a stereo camera.”

It is the marker-infrared light-camera system (as well as the facial system) that records the actor’s performance and action during Performance Capture – not a true motion picture camera. But Cameron, while directing the cast, is also using something called a “Virtual Camera.” The Virtual Camera is not a true camera, but a rig with ergonomics similar to a real camera’s, which allows him to get a very real sense of how completed shots will look. The system takes the data about the position of the actor within the volume, gathered from their markers, generates a game engine version of the character in Gazebo, and places it within the CG environment.

“He uses it as a blocking tool,” explains Landau. “It allows him to see the environment, and see how the actor is moving within it. So he’s thinking, ‘Where should people be stacked up? [depthwise, in front of/behind each other]’ ‘Okay, you gotta turn your bodies a bit more,’ ‘I want the sunset to be behind them.’ Because we don’t want to be manipulating their performances after the fact.” Notes Champney, “Jim is able to work out here, in the virtual world, how he wants the scene to look. ‘The camera’s gonna be on this side, the lighting’s gonna be from here. Here’s your blueprint.’ It’s pretty much 80-90% there when he’s finished, so things like this aren’t being decided on a live action set, a year later, with an expensive crew hanging around.”


Sigourney Weaver as Dr Grace Augustine, director James Cameron, and Joel David Moore as Norm Spellman. Photo courtesy of Mark Fellman

In order for the system to know where Cameron’s camera is located within the volume, it, too, has a set of markers – a “halo” or “crown” – which allows the system to keep track of how he is maneuvering through space, as well as the rotation and translation of the camera. “Our optical sensors see where it is in space, within about 1/4 of a millimeter of where it actually is,” Baneham explains. “We then assess what lens he would be using, to create what he’s seeing, as well as the focal length and depth of the sensor from the ‘glass’ of this virtual camera,” so that when Cameron is filming live action elements that will go in the very same scene, those lensing specs can be applied on the set, to produce an image with identical optical qualities.
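The data formats involved are Lightstorm’s own; as a simple sketch of the kind of information recorded per tracked sample – a pose solved from the marker “crown,” plus the lensing specs that can later be matched on the live action set – one might imagine something like the following (field names and the Super 35 sensor size are assumptions for illustration):

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualCameraSample:
    """One tracked sample of the Virtual Camera (illustrative structure only;
    not Lightstorm's actual data format)."""
    position_m: tuple        # solved from the marker "crown" by the optical system
    rotation_quat: tuple     # camera orientation as a quaternion (w, x, y, z)
    focal_length_mm: float   # the lens chosen for the shot
    sensor_width_mm: float   # sensor size assumed for the virtual "glass"

    def horizontal_fov_deg(self) -> float:
        # Field of view implied by the lens choice -- the same spec that can
        # later be applied on the live-action set to match the optics.
        return math.degrees(2 * math.atan((self.sensor_width_mm / 2) / self.focal_length_mm))

# e.g. a 35 mm lens on a Super 35-sized sensor (~24.9 mm wide) gives roughly a 39-degree view
sample = VirtualCameraSample((1.2, 1.6, 3.0), (1, 0, 0, 0), 35.0, 24.9)
fov = sample.horizontal_fov_deg()
```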

The Virtual Camera system’s camera rig was originally built, for Movie 1, by that film’s Virtual Production Supervisor, Glenn Derry, over several iterations. “It had to feel like a real camera, the kind Jim or any operator would be used to handling. It had to be shoulder-mounted, and it had to be heavy, to start with. But then, if you free yourself from 100 years of filmmaking, you can say, ‘Well, what, actually, do you need?’ such as buttons and controls for what we’re actually doing on a Performance Capture stage. By the time we hit the end of Movie 1, we had a conformed, ergonomically correct, behaviorally correct version of the Virtual Camera.”

For the new film, Derry’s brother, Robbie, built the latest iteration, whose control buttons were made to work perfectly for whoever is operating it. “Jim is left-handed, and has a very specific arrangement for what controls he wants where,” explains Baneham, who also operates the Virtual Camera. “And mine are different. But they’re just a preference setting. So we can literally switch, just by saying, ‘Use Jim’s settings’ or ‘Use my settings,’” for such things as zoom, focus and iris, among other things. “Jim has such a keen eye and a long-ingrained understanding of how cameras work. So he wants it to behave the right way.”

The Performance Capture stages are located in Lightstorm’s facility in Manhattan Beach, the largest space being Stage 27. “We had four stages, at our peak,” says Champney. Large sliding “elephant doors” connect the stages to each other, to allow the creation of bigger spaces, when needed for big scenes.

Physical set pieces are built to represent topographical surfaces (like hills, trees or structures) which the actor will be interacting with while performing the scenes. “They’re 2 meter x 2 meter tilted pieces that click together, to create a massive topography, like a giant Lego set,” Baneham explains. “If we need somebody to go over an object, we’ll never say, ‘Pantomime as if there was a plant there.’ We’ll put a plant there, and make our actors go through it. It needs to be informed behavior. Because we live and die by this. If you don’t have an earnest performance – if you don’t believe in the characters – we don’t have anything.”

One important change that came since the production of Movie 1 is an increase in scale – the number of people that can perform on the Performance Capture stage at a time and still have the system identify one actor from another. “We were very limited in the number of people we could capture at one time,” says Landau. “If you look at the first movie, it was a much more intimate film. Many of the scenes were only between two characters. Now, Ryan and his team have gotten us up to between 22 and 24 people. Now you could do any scene you want, capturing everybody you need.”

The system, developed by Champney and lead programmers Vaughn Cato and Bill Lorton – still rooted in what they call “the Giant system” – can process much more data. “And, with all of those people, where markers might be occluded on one actor, it does a ‘biomechanical’ solution and figures out who’s who and what body part is where.”

Before the principal actors are brought in to do the Performance Capture of their scenes, the company’s stock “troupe” of stunt people/live action stand-ins, such as Kevin Dorman and Alicia Vela-Bailey, will be used to help Cameron do his initial blocking out of the action. “We’ll have an idea, then, ‘Is this the way Jim envisions the scene?’” explains Champney. “Then, when we bring in the principal actors, like Zoe, Sam, everybody, we have an idea of what the blocking will be. In a way, it’s the same as if this was a live action movie. You’re building the set – the CG environment – you’re lighting it, you’re building the atmospherics, and then the actors come in. It’s the same thing with the virtual production stage.” While the actors do get a look at the Gazebo animation while shooting Performance Capture of their scenes, to understand the environment they will be performing in, there is still nothing present on the stage but the “Lego pieces” Baneham described. So Cameron does something else that is equally important, to help prepare them.

“Jim always tries to give them a visual or sense memory of the kinds of experiences they’ll be having in scenes, that they can tap into while performing on the stage,” Champney explains. “So in the first movie, they went to Hawaii, and had to be in loincloths and go around the jungle.” This time, they also did dive training, as well as a shoot in The Bahamas, riding vehicles through the water the way their characters do when riding animal characters.

Onstage, the actors will have props – anything that they will be interacting with in the scene – again, to steer clear of unbelievable miming. “It’s the same prep they would be doing in Black Box Theater,” says Landau, referring to a type of theater where actors perform in a simple black space with no settings. And do they get to see their lines ahead of time? “Sometimes, we don’t have scenes ready until the day of, because Jim’s still making changes to it. But they’ll have seen the artwork,” Champney notes.

While people often think of Cameron as a director focused mostly on technology, those who work closely with him know him to be quite the opposite, when it comes to working with actors. “Jim is an actor’s director,” says Baneham. “He’s a master at getting a performance out of an actor, without specifically ever telling them what to do. He’ll direct them, tell them what he wants, and they’ll try to hit it for him. But he’ll never tell somebody to do something specific or ‘Move your face like this.’ He values their craft far too much.” Champney agrees. “He does indeed look at Performance Capture almost like Black Box Theater. He’s really just focusing on the performances, getting into their head. And he often isn’t even using the Virtual Camera. He’s really interacting with them, as a director, directing actors.”

So one would wonder if acting in a Performance Capture space is more difficult for actors than working on a normal live action set. It turns out the opposite is true. “Sigourney Weaver was asked in interviews if it was hard, and she noted that it is, in fact, easier for her,” Champney details. “While on a normal film set, she might have a camera three inches from her face, with a big, hot light, right there, and a bunch of people tweaking things around her. But here, she can look directly at the person she’s interacting with, run up and touch them, because they’re not worried about hitting their ‘marks.’ Because there’s no camera there, requiring them to be in a specific place. They’re simply performing in the environment, and the camera will be placed in the environment with them later. She noted that it’s really much more natural, like a theatrical performance on a stage, as opposed to the more mechanical acting that has to happen in filmmaking. It’s almost like a two-man Broadway play on a very blank stage.”

Humans aren’t the only ones required to do Performance Capture. In both the previous film and the new one, “Direhorse” creatures have to be created, based on the behavior of real horses. In Way of Water, they are featured in a major scene involving the destruction of a MagLev train. “So we had the huge elephant door at the end of Stage 27 open to the next stage, to create one massive stage,” Champney explains, “so that we can get horses fully running up to speed, and have enough room for them to decelerate.”


From left to right: Ronal (Kate Winslet), Tonowari (Cliff Curtis), and the Metkayina clan.

And do horses have to have markers glued onto their fur, like the human actors do? “They do. You simply have to find an adhesive that won’t pull on their hair or irritate them – but also keeps the markers on. It’s almost like a thick Vaseline that doesn’t create any kind of irritation.”

Getting Performance Capture wasn’t limited to nice, dry stages. As its name suggests, in The Way of Water, there’s plenty of water, and plenty of scenes taking place in water, requiring Performance Capture in a water-filled tank, in order to capture true swimming motion of the actors.

Champney and team began early, testing their Performance Capture system in Jon Landau’s pool. But they soon discovered that infrared light gets absorbed very quickly in water. “We began noting that our markers were super dim,” Champney recalls. “It wasn’t until we got very close to the camera that we could see them.” After studying the situation, and learning of the limitations of infrared in water, they realized they needed to find a different frequency of invisible light, but one which the motion capture cameras could still see. “We had to find a happy marriage between cameras with stock sensors we could buy off the shelf and LEDs that were also available commercially, without having to pay a manufacturer to make custom fixtures.” The resultant light used underwater, then, was somewhat close to the ultraviolet range.

Much action takes place with characters diving from air into water – both of which had to be shot with Performance Capture. “So we had to come up with a system that could marry the two systems of dry and wet – infrared above and ultraviolet below, but all part of the same system,” to allow a continuous capture of an actor passing from air into water. “You really need an actor to be diving into real water,” he explains. “We actually did dry tests for Wētā, using wire rigs, without a water tank. But you just don’t get that realistic movement. There’s a very quick deceleration that you don’t see when doing it dry.”

The solution lay in placing a set of white balls on the surface of the tank’s water, to separate and control the two lighting systems. “We didn’t want the infrared light contaminating the ultraviolet, and vice versa. And the floating balls created that separation we needed.”

Besides the Motion Capture infrared cameras and Cameron’s Virtual Camera, there were also sets of reference cameras, with operators filming the actors during their Performance Capture takes. Typically, two cameras were used, grabbing a medium and a closeup of each actor.

The reference footage was used by Cameron and his editors to make their “selects” – to choose which takes they preferred for any given scene. While Cameron can indeed see the animated imagery of the actor’s character generated for the Virtual Camera, based on the Performance Capture data, Champney notes, “The virtual world, at this point, is doing its best approximation of everything the actor is doing. But the faces are like video game faces. He doesn’t want to make editorial selects of what he wants to use based on video game representation. He wants to make his performance selects based on the reference footage of what they were doing. He’ll look at Zoe Saldaña’s face on the reference footage and say, ‘That’s the one where she’s crying. That’s the one I really like.’”

It is these Performance Capture selects, along with the detail seen in the reference camera footage, on which the Wētā animators will base the animation of the characters’ movement and expression, indeed bringing about that incredibly realistic appearance on each character’s face – something which truly represents the actor’s performance.

In fact, the use of reference cameras came about early on production of the first film. “Jim didn’t believe in the system, at that point. He knew that we wouldn’t get completed faces back from Wētā until, say, a year later. So he said, ‘I want to make sure we have reference cameras on every single thing. I don’t want an artist’s interpretation of their performance. I want what their performance is. Can we do that?’ And Joe Letteri said, ‘I think we can.’”

Cameron, the team and the actors will review both the Virtual Camera playback and all of the reference images after each take. The step is also beneficial to the cast members, who can get a true visual sense of how their performance will appear in the finished scene, see whether there are any adjustments they would like to make, and better understand any notes or direction from their director.

Oftentimes, Cameron will decide to combine performances from one take with those from another take – or even create selects by combining characters from different takes into a single take. “Sometimes, Jim even picks the facial performance from one take, to be connected with the body performance from another take,” Champney notes.


Director James Cameron behind the scenes operating a camera. Photo courtesy of Mark Fellman

Once Cameron’s selects have been made, those choices are edited together. “It’s a moment in time – a perfect play,” says Baneham. “It’s the night every actor was on.”

That edit is then turned over to “The Lab” – the collection of Lightstorm visual effects artists, who then create what are referred to as “Camera Loads.” Camera Loads will be used by Cameron in the following production step, known as “Cameras” (see below), when he will actually compose the shots, holding his Virtual Camera, as if he is standing within the (virtual) scene. “The Camera Loads will have all of the motion edited by the Lab, so that it is accurate and smooth,” Champney explains. The process can take anywhere from several weeks to a month for any one scene. The Sequence Team, specifically, is responsible, then, for making sure that all aspects of the Camera Load come together in a cohesive way. This includes refining the lighting and atmosphere of the scene, the tiling of all of the background capture (which was not part of the initial Performance Capture), and adding in any animation, such as creatures, vehicles, explosions, etc., which are required for the scene.

And besides those representations of the actors as their characters in Gazebo “Template level” animation, the actors who will actually be filmed later in live action on the live action set will also have performed alongside their colleagues, doing a Performance Capture version of what they will eventually do onstage. A Template level Gazebo version of that performance also appears in the Camera Load – even though it will be replaced later with their on camera performance – simply to allow Cameron to compose his shots of the scene properly and include those characters where they will appear in the scene.

Another essential part of Camera Loads is something referred to as “Kabuki.” While the Gazebo animation can create a face, to some degree, out of the data it gets from the Performance Capture facial camera rig, it still isn’t 100% representative of the actor’s true performance. So the image from one of the two facial rig cameras is used to create a video texture, which is then projected onto the Gazebo model of the face. The system also removes the marker dots from the image of the actor’s face, using an automated process.
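Lightstorm’s dot-removal step is automated and proprietary; purely as an illustration of the general idea, a generic version of it could be sketched with off-the-shelf OpenCV inpainting (the marker positions and parameters here are assumptions, not the production tool):

```python
import cv2
import numpy as np

def remove_facial_markers(frame_bgr, marker_centers_px, radius=4):
    """Paint out the tracked marker dots before a facial-rig frame is used as a
    video texture on the face model. A generic sketch only -- the production
    process is proprietary and fully automated."""
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    for (x, y) in marker_centers_px:                 # 2D dot positions from the facial solve
        cv2.circle(mask, (int(x), int(y)), radius, 255, -1)
    # Fill the masked dots from the surrounding skin texture
    return cv2.inpaint(frame_bgr, mask, inpaintRadius=radius, flags=cv2.INPAINT_TELEA)
```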

“When Jim is doing the Cameras step, he really wants to see the performance of the eyes and the mouth on the characters,” Champney explains. “He wants to see the eye movements, the little twitches, the blinks, lip quivers, and the motion of the face.” The process is temporary – it’s only used for Cameron while doing Cameras, with Wētā creating fully fleshed out facial animation much further down the road, when it comes time to animate the characters in post production. Notes Landau, “You make decisions in editorial based on something as simple as a blink. We’re seeing their mouths, their eyes, and we can see it from any angle.”

Finally, just prior to live action, Cameron and/or Baneham will use the Virtual Camera to shoot Rough Camera Passes (RCPs) – taking the Virtual Camera out on the empty Performance Capture stage and getting shots which will give the Lab a rough idea of the kinds of Camera Loads that will be needed for use in the Simulcam system (see below), when Cameron is shooting on the live action stage.

After Performance Capture is completed, Cameron does what is known as the “Cameras” step – the creation of the actual shots which will be used to create the final film images. On a regular live action film stage, the director of photography (DP) and his camera operators shoot the action with studio film cameras, with the framing and coverage the director desires, and then the director and editor select which takes from which angles to use to create the final cut of the film (onto which visual effects are added later, in post production).

In a virtual production, the settings and acting, at this point, exist in a Template level 3D animated version, as if existing in a complete 3D world of their own. The director and/or DP works with a virtual camera, placing it and moving it within that world, as if it is on a set, to create shots/coverage. They can select a virtual representation of a real lens, zoom, dolly, make a move as if the camera is on a camera crane and more. They can place the camera anywhere they like, getting the coverage that a pair of cameras (or even a single camera) would have normally had to do with multiple takes, to get a wide shot, a closeup, an over-the-shoulder, etc.


Lo'ak and a Tulkun.

“Now, we have a moment in time that might span, say, two minutes,” explains Baneham. “How do we tell that story, narratively, to the camera? We accomplish this with Camera Coverage. And it’s so interesting to see Jim capturing it, a year after the Performance Capture was done. Jim’s Cameras are so wonderful, because they tell you what he’s thinking. You look back at that a year later, and go, ‘All right, I see what he was doing.’” And, notes Landau, “That’s where we get our dailies, to begin our edit process, complete with the CG characters and CG worlds right in them. Without that, you’re editing with actors shot on a blue screen, waiting for VFX to come back much later, with a completed shot with those things finished.”

Cameron will literally stand on the empty Performance Capture stage with the Virtual Camera – whose position is identified and included, just like before, via the markers atop the rig, picked up by the Motion Capture system – viewing what the Virtual Camera sees in a monitor/eyepiece. “He’s using the Virtual Camera to see the scene and performances, represented by that scene’s Camera Loads,” Champney explains. “He sees the digital characters, as if they were standing right in front of him. And as he moves the camera around, he doesn’t see the barren stage he’s on – he sees the world of the film.”

And, as mentioned, he’s getting perfect coverage. “He can look at these characters and worlds from any angle. On a live action set, often you’ll do multiple takes of a scene. Maybe there’s a really emotional take, where the actor gets that one cry, where it’s perfect. But to set things up, say, to do a reverse, everyone goes back to their trailers, and the lighting has to be moved, the camera setup changed. A couple hours later, the actors come back out, and now we’re doing the over-the-shoulder from the other person’s perspective. And now, they have to hit that same performance again, with that same emotional beat,” he notes.

“With Performance Capture, if we’ve got that really evocative performance, Jim can do whatever angle and coverage he wants – he can do an over-the-shoulder, a closeup, a medium, a wide establishing shot of that moment. And change all the lighting then, too. And in this method, the actors always hit their marks, because the system is playing back the exact same performance, over and over again. He is simply moving the camera around within the space.”

And, in the moment, he can make any changes he realizes he would like to make to the environment or lighting, simply by requesting on-set assistance from the “Brain Bar” – a group of animators and artists ready to move things around within the virtual world however he desires. “Just as a director might adjust lighting, set dressing or performances on a live set, Jim, thanks to the Brain Bar, has the ability to make those adjustments, simply by requesting it from different departments,” Champney states. “If he’s got a shot where he finds he walks through a wall, he can call out, ‘Hide that wall’ or ‘I need more in the background. This is a negative space over here, so I want something to fill this.’ He can craft that in the moment.” Notes Landau, “He can move the sun, a piece of set dressing. And instead of talking to a gaffer or a greens person, he’s talking to the Brain Bar, right there on set, and they do it for him.”

There are also a number of “virtual gags” they can help him accomplish. The camera can be “platformed” or “parented” to an object or character within the virtual world. “He can say, ‘Attach me to that ship’ or ‘Share the position of these two birds, because I want to move on the average of how they’re moving, so I can stay with them,’” Champney states. “If we’re on the water, I want to be riding on the movement of the waves, virtually, so that I’m not going in and out of the water.”

“He can say, ‘This Banshee is flying – parent me to the Banshee,’ and then he’s going with the Banshee,” adds Landau. “He’s not running around the stage. The camera is parented on top of it, and he can even move on top of that, to create dynamic moves. And he can also say, ‘Make me 5-to-1 in vertical.’ And all of a sudden, he’s doing a big crane move.”


Jake Sully (Sam Worthington) riding a Skimwing.

The process is an additive one. “We call the position where he is on the stage the ‘base transform’ – the ground shot of how Jim is moving. Then think of it like a magic carpet. ‘Okay, but now let’s put the stage under something else that’s moving.’ There are all sorts of tricks and gags we can do with the virtual camera that we can’t really do with a live camera.”
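The actual Virtual Camera software is proprietary, but the “magic carpet” idea is, at heart, a composition of transforms: the operator’s motion on the stage rides on top of whatever the camera is parented to. A minimal sketch (the matrix conventions and the Y-up assumption are illustrative only):

```python
import numpy as np

def parented_camera_pose(parent_world, base_transform, vertical_gain=1.0):
    """Compose a 'parented' Virtual Camera pose (sketch only).
    parent_world   -- 4x4 world matrix of whatever the camera is platformed to
                      (a ship, a banshee, the averaged motion of two birds...)
    base_transform -- 4x4 matrix of the operator's own motion on the stage
                      (the 'base transform' layer)
    vertical_gain  -- e.g. 5.0 for a '5-to-1 in vertical' crane-style move
    """
    base = base_transform.copy()
    base[1, 3] *= vertical_gain            # amplify only the operator's vertical travel (Y-up assumed)
    return parent_world @ base             # additive: stage motion rides on the parent's motion
```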

One thing that is critical to Cameron about his camera moves is that they remain truthful. “He doesn’t want to have some fantastic camera move that will take the audience out of the story,” Champney notes. For example, if there’s a chase through the forest, he doesn’t want the audience to say, “That looks too fantastic – no helicopter or drone could do that.” “And, remember, it’s additive. Even though we create the shot virtually first, we then have to go and shoot live action parts of it, too, later. So we don’t want to do anything, virtually, that we wouldn’t be able to achieve on the physical stage.”

He also is careful to be... not too careful. As Landau likes to say, “CG can be a perfect artform – film is an imprecise artform.” A little bit of natural, normal camera shake, as would occur if a camera operator was moving the camera on a set, is important to simply leave in, avoiding refining or touching up. “This is a testament to Jim as a filmmaker,” the producer says. “Jim had a mandate very early on, in the first movie, which is that he wanted this to feel as natural as a filmmaking process could, in a virtual space. Physics have to be applied – and Jim knows those physics.” Notes Champney, “If you’re tracking something, when it stops, the camera, typically, will overshoot it. If you’re just doing a perfect camera, and it just stops immediately, there’s something almost Pixar about that. It should look organic. So we don’t put too much smoothing on it or augment it too much. He wants that ‘lived in’ feel.”
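None of the production’s actual filtering is public, but the overshoot Champney describes is what any lightly underdamped follow produces naturally. A tiny, purely illustrative sketch:

```python
def spring_follow(target, pos, vel, dt, stiffness=40.0, damping=8.0):
    """A lightly underdamped follow (illustrative values, not production ones):
    because damping < 2 * sqrt(stiffness), the camera slightly overshoots when
    the subject stops -- which reads as organic rather than 'Pixar perfect'."""
    accel = stiffness * (target - pos) - damping * vel
    vel += accel * dt
    pos += vel * dt
    return pos, vel
```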

Cinematographer Russell Carpenter was not directly involved in the lighting of the virtual world, which Cameron would typically light with the visual artists during the creation of the environment in Gazebo, knowing where he would likely be getting camera coverage and what he would want lit and how. Carpenter’s focus was mainly centered on lighting the live action sets, but he did offer some input to Cameron during Cameras.

“Russell came in about a year before we started live action, to get versed in the look and feel of everything,” Champney explains. “He learned that visual vocabulary, of how Jim was lighting all the virtual world, to help inform how he would be lighting his practical sets in New Zealand later.” The seasoned cinematographer, who won an Oscar in 1998 for his work with Cameron on Titanic, knew that anything that Cameron was using in Gazebo had to be something he could actually create practically, on a movie set. “It’s easy to cheat things in the Gazebo version, which Russell knew couldn’t be done practically. He might say, ‘Okay, how are you getting that fill light right there? Because I would actually have to have a fill light in the lens. We can’t have an invisible light in Gazebo, shooting back into the camera like that.’”

During Cameras, Carpenter would watch Cameron’s work, taking notes and making recordings, and make suggestions or changes, in the moment. “He really started contributing to the lighting in the virtual world, so that, when he got to the practical set, he could make sure that the bounce, key light, everything, matches the lighting that was pre-set in the virtual world.” The DP, Champney, Baneham and others, then, also made trips to New Zealand, to scout the under-construction sets, to study how scenes would need to be lit, practically.
James Cameron on the set of 20th Century Studios’ Avatar: The Way of Water

Once the Cameras step was completed, the Lab artists once again would take a pass, cleaning up the data and doing any additional work on visuals for each of the shots, before the nearly 2,500 shots of material could be turned over to Wētā FX to begin doing the real animation work.

Live Action
While The Way of Water is predominantly a virtual production, there are many live action elements that had to be filmed, with actors/human characters on sets. The scenes have to be carefully filmed, in a manner which will allow them to be inserted into the virtual scenes – and vice versa. Needless to say, it’s not simple.

The scenes – and Cameron’s shots – were designed to have those live action elements placed within the overall world already constructed in Gazebo, and Cameron, during the Cameras step, has already decided where the camera is to be placed within the physical/practical sets. So the camera must be positioned in precisely those same spots on those sets, essentially capturing the very same shots.

“We have a very tight template of what needs to be shot, how it needs to be lined up with the previously-shot virtual material,” says Champney, “where it needs to be lit, where the cameras go, what type of camera it is, what the focal length is. Because we had already predetermined all of that virtually. When we prep the stage for live action, 90% of that work has already been done. Now, it’s just a matter of how to translate that into the stage space, how to make sure that stage space matches where the camera was, how it lines up in this world, so that Jim is getting a similar view to what he got with the Virtual Camera during Cameras. All of these things have to line up. So there is a lot of math and coordination that goes into making sure the physical space lines up with the virtual space, so that you’re getting the correct composition.” Casey Schatz, the Techvis/Simulcam Supervisor, is responsible for orchestrating all of these variables in the marriage between virtual and physical spaces.
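The actual alignment tooling belongs to Schatz and the production; as a bare-bones illustration of the kind of bookkeeping involved, mapping a camera position chosen in the virtual world into physical stage coordinates amounts to applying a calibration transform (everything here, including the transform itself, is an assumption for illustration):

```python
import numpy as np

def virtual_to_stage(T_stage_from_virtual, camera_pos_virtual):
    """Map a camera position chosen during Cameras (virtual-world coordinates)
    into physical stage coordinates, so the live-action camera can be placed in
    the matching spot. T_stage_from_virtual is a hypothetical 4x4 calibration
    transform; coordinate conventions are assumptions for illustration."""
    p = np.append(np.asarray(camera_pos_virtual, dtype=float), 1.0)  # homogeneous point
    return (T_stage_from_virtual @ p)[:3]                            # position in stage space
```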

In addition, the sets themselves have to be built into the exact same locations as they appear in the virtual Gazebo version, so that they and the live actors’ movements will line up precisely with the previous design – and with the movement of the Performance Capture performances of actors whose work will be represented in the completed scenes by animated characters.

The sets also feature anything the actors will be physically interacting with, with anything beyond that being created in CG later by Wētā FX as a set extension. “We don’t want people pantomiming or pretending,” says Champney. “It just takes the audience out of it. It has to be grounded in reality. The more digital we have, the less believable it is.”


Quaritch (Stephen Lang)

Like before, Cameron will act as the only camera operator, with exceptions made for Steadicam or crane shots – and even then, in the latter case, he will operate the controls of the camera on the crane.

The camera, in this case, is no longer a virtual one, but a 3D camera rig featuring a pair of smaller Sony Venice Fusion cameras, set up using a “beam splitter.” The image from the lens passes through a half-silvered mirror (the beam splitter), set at a 45 degree angle, allowing the light to pass directly into one camera behind it while also being reflected into the second camera, which is set in the rig with its axis 90 degrees from that of the other camera. The “interocular” setting – the distance between the cameras, as if they were a pair of eyes – and the “convergence angle” – the angle between the way the two “eyes” are pointing at the subject – are controlled by a team of skilled stereo technicians, all based on the very specific desires of Cameron, who has much experience in such photography. “Jim has very strong concepts on 3D,” Champney notes.
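The stereo team’s tools and Cameron’s preferred settings are not public, but the basic relationship between interocular distance, subject distance and convergence is simple geometry. An illustrative sketch, not production values:

```python
import math

def convergence_angle_deg(interocular_mm, convergence_distance_mm):
    """Total toe-in angle between the two 'eyes' of a beam-splitter rig so that
    their optical axes cross at the subject (simple geometry; the numbers below
    are illustrative, not the production's settings)."""
    half = math.atan((interocular_mm / 2.0) / convergence_distance_mm)
    return math.degrees(2.0 * half)

# e.g. a 30 mm interocular converged on a subject 2 m away -> about 0.86 degrees
angle = convergence_angle_deg(30.0, 2000.0)
```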

The rig, like the Virtual Camera, has a pair of marker balls atop it, to enable its position, rotation, etc., to be tracked accurately within the space of the virtual world. Now on a set with plenty of activity, as well as physical set pieces, the camera is tracked not only with the Motion Capture system of infrared lights and cameras, but also makes use of onboard IMUs – Inertial Measurement Units – similar to the sensors our smartphones use alongside GPS.

“GPS is really only accurate to within a couple of meters,” says Champney. “So it’s obviously not going to be accurate enough for camera tracking. But on a live action set, it’s chaos. We put our Motion Capture cameras where we think we’re always going to be able to see them, but for instances where things change, there’s occlusion, and we can’t see those markers. So the IMU provides additional stabilization. So if we lose the global reference from the tracking system, the IMU will know that, ‘Okay, I’m still panned and tilted this orientation. I just might not know, physically, where I am, in relation to the stage.’ So that gives the best guess of where the camera is right now.”
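The production’s sensor fusion is far more sophisticated, but the fallback behavior Champney describes can be sketched at its simplest like this (the structure and field names are assumptions):

```python
def fused_camera_pose(optical_pose, imu_orientation, last_good_pose):
    """Sketch of the fallback behaviour described above (not the actual tracking
    code): trust the optical solve while the camera's markers are visible; when
    they are occluded, hold the last known position and take orientation from
    the onboard IMU so pan/tilt stays continuous."""
    if optical_pose is not None:                       # markers visible this frame
        return optical_pose
    return {
        "position": last_good_pose["position"],        # best guess until markers return
        "orientation": imu_orientation,                # IMU still knows how we're panned/tilted
    }
```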

The most important part of the system, though, is what’s known as “Simulcam.” That system allows Cameron to see not only what he’s viewing directly on set, via a typical director’s monitor, but also sends that camera image – and the camera’s position – to the Gazebo version, where it can combine the live on-set image with whatever elements from the pre-shot virtual world that will complete the scene. This allows Cameron to see what the completed scene will look like, via a second monitor, located just above the other, while shooting, enabling him to frame for the complete shot, even for items which aren’t present on the practical set.

“On the first movie,” Champney recalls, “Jim was saying, ‘If I’m tracking my Virtual Camera, why can’t I track a live action camera – and why can’t I see the Gazebo version of it? Why can’t I see that on the live action set?’” Prior to the invention of Simulcam, a director would have to simply shoot using a very conservative camera movement. “It would be, like, ‘Okay, the dinosaur’s gonna go across this blue screen here, but don’t do anything too crazy, because that’s gonna make the shot more expensive later.’ Or maybe it would break your composition, because we don’t know where exactly it will be.”

He adds, “On Movie 1, most everything was shot on green screen, and the rest would be a digital set extension. But we decided, ‘Let’s make that so Jim can see the virtual world, so he can change his composition.’” While a director might otherwise tend to focus the camera on the part of the set that’s built, “There may be something more interesting going on in another part of the frame. So it allows you to have a better sense of composition, without having to wait for the animation to be done later in post.”


Lo'ak (Britain Dalton) and a Tulkun swim together

While the first film had perhaps 10 or 15 Simulcam shots, nearly every shot in Movie 2 was Simulcam. “In the first film, there was not a lot of Na’vi / human interaction, just a handful of scenes where they were in the same environment. It was usually either human OR Na’vi scenes, with just a couple where they interact. It was something we purposely avoided. But in Movie 2, everything is in the same world together.”

So Simulcam was key. “We have scenes where we had live action sets that were missing very key CG components,” Landau explains. “How could you ask a director to compose for something that’s not there? With Simulcam, Jim could hold up his handheld camera, and he’s seeing in his eyepiece animation driven by the Performance Capture shot earlier, so he can frame and compose a shot including it, as if it’s right there.” The combined live/virtual image arrives with a slight delay – just 4 to 6 frames – due to the propagation of the camera tracking, drawing of the game engine imagery and compositing.

Since the production of Movie 1, there have been improvements to the Simulcam system, especially in the lighting system. The MotionBuilder-driven framework comes with only eight basic lighting schemes with which the user can light the virtual scene. “But Wētā FX wrote some additional software, Gazebo, that allowed us to light with a complete lighting package, the same that they will be using to create the lighting in the final scenes, down the line,” says Landau. “So when Jim holds up the Virtual Camera, he can move a light or he can add a filter. And it’s really representative of what he would want it to look like. It’s much higher fidelity. We had double the number of VFX shots that we had on Movie 1. So by doing the lighting in the scene from the beginning, within the game engine software, using lighting that can be imported into Wētā FX’s own lighting system, they’re not having to start from scratch. They don’t have to try to interpret, ‘Jim, what did you mean, when you put this shadow here or this lighting here?’ It’s much more efficient and much easier when it arrives there.”


Kiri (Sigourney Weaver)

An even more important change, though, offers the ability to place CG characters within the Simulcam image at the correct depth within the image. “In the first movie,” says Landau, “it was like a weatherman, standing in front of a green screen. The character could only be placed in front of an object. It couldn’t be placed behind a live action actor or a live action set piece. So we challenged Wētā FX and the teams, to create what we call ‘Real Time Depth Compositing.’”

“It’s kind of analogous to the layers one sees in Photoshop,” Champney explains, where one layer is a background layer, one is the foreground, and one is in the middle. “With Depth Compositing, every single pixel has a Z-value – including the live action itself,” thanks to a pair of stereo witness cameras built into the 3D camera rig filming the action. “The bespoke system uses a trained deep learning AI model to process the stereo camera images and generate an initial depth map,” adds Letteri. “This is further processed into a dense (per pixel) depth map that overlays with the image from the picture camera. In our in-house real time compositing system, LiveComp, this depth map is combined with the live action images and composited together with CG elements to give a convincing, correctly occluded image – all in real time, in-camera, with an unnoticeable time offset.”
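LiveComp itself is Wētā FX’s in-house system; the core idea of depth compositing, though, can be shown in a few lines. At every pixel, whichever element is closer to camera wins (a minimal sketch, not the production implementation):

```python
import numpy as np

def depth_composite(live_rgb, live_depth, cg_rgb, cg_depth):
    """Per-pixel depth compositing (a minimal sketch of the idea): at each pixel,
    whichever element is nearer the camera wins, so a CG character can correctly
    pass behind a live actor or set piece.
    live_rgb/cg_rgb are HxWx3 images; live_depth/cg_depth are HxW depth maps."""
    cg_in_front = cg_depth < live_depth                 # boolean mask per pixel
    return np.where(cg_in_front[..., None], cg_rgb, live_rgb)
```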

Another new addition was something called the Eyeline system. Typically, in visual effects films, when a live action actor is doing a scene with a CG character to be added later, to give him/her something to help identify where that character will be, an assistant director or other staffer will hold a tennis ball, on a stick and string, in the approximate position of that character’s head. But, from an acting standpoint, that leaves a lot to be desired, Cameron felt. “Jim can see the actor and the virtual character, and the whole world, but the actor can’t,” Champney explains. “They can’t wear goggles, they can’t have an AR system, so that they can see where this virtual character is.”

The solution lay in using a cablecam system, much like the ones used in sports broadcasts, which can move the suspended camera in any direction, to any location above the field, as well as rotate or tilt the camera, as needed. In this case, a small director’s monitor and speaker system is suspended, instead of a camera. Its movement, position and attitude are then programmed, with incredible accuracy, to follow the position of the CG character’s head, using that character’s actor’s Performance Capture data. The character moves through the set, as it does in the virtual version of that set (and, of course, the practical set is identical in size, location and design), as if it were physically present.

Most importantly, the actor playing the scene can see the character’s head and face, and hear their voice and delivery of the scene’s lines. “The actor isn’t guessing where they think the character is,” says Landau. “The CG character is not static. They’re moving left, forward, back, perhaps looking away momentarily in thought. Just little movements that you could never simulate with somebody moving a tennis ball.” And, by seeing the CG character’s facial response and hearing its speech tone, the actor can interact and respond as if they were acting with an in-person scene partner, performing the scene as Cameron intended.
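The cablecam controller and its interface are production-specific, but conceptually the Eyeline system is driven by the same head-position data captured earlier. A very simplified sketch of turning that data into targets for the suspended monitor (the names and units here are assumptions):

```python
def eyeline_targets(head_track, monitor_offset=(0.0, 0.0, 0.0)):
    """Turn the CG character's per-frame head positions (from the earlier
    Performance Capture) into position targets for the suspended monitor.
    A simplified sketch only; the real cablecam controller is not public."""
    targets = []
    for frame, head_pos in head_track:                       # e.g. (frame_number, (x, y, z))
        pos = tuple(h + o for h, o in zip(head_pos, monitor_offset))
        targets.append({"frame": frame, "position": pos})    # where the 'face' should hang
    return targets
```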

Also on set, a live action actor who had acted their human character’s performance during Performance Capture (as an aid to blocking, etc.) will now repeat the scene, this time in costume, doing the exact same motion as they did a year or more earlier on the Performance Capture stage. “And because they had played the part in a Performance Capture suit,” informs Champney, “they’ve already got the muscle memory from having done it before, and just have to reproduce that performance.”

Because of the complex nature of integrating CG elements and live action, at the end of each shooting day, the Lab artists do a “Post Viz” on the day’s work. Much of this work involves refining CG elements to match perfectly what was just shot on set, such as adjusting the animation of a hand touching an actor’s shoulder, or adding movement of a plant, as the actor passes through a forest, not having brushed against it originally in the CG version.

The film now has all-CG shots created by Cameron during the Cameras step, as well as new live action scenes, so a new, complete, final edit of the film is made. The edit is then turned over to Wētā FX to begin their work, bringing the CG elements to life.

Completing The Picture
It is up to the hundreds of skilled artists at Wētā FX in New Zealand to take the imagery seen on set, in James Cameron’s 3D camera and in the world of the CG game engine animation, and create a fully fleshed-out, believable realm where humans, aliens and otherworldly creatures coexist. If it looks like a cartoon, the audience won’t buy it and won’t be able to go along for the ride with Cameron’s and his cast’s characters. It needs to feel like real life. This starts with the environments the characters appear in, both the Na’vi’s beautiful forest life, with its remarkable flora and fauna, and the human world of machines and buildings. It also includes the characters’ animated costumes and props.

All of these items are considered, in VFX language, “assets” – an entire collection of everything that makes up the film’s universe, about 38,000 pieces. As described earlier, those assets begin life in design by Production Designers Dylan Cole and Ben Procter, and are given the beginnings of their digital creation by Lightstorm’s VFX team, before being fully fleshed out by Wētā FX, under the supervision of Senior Head of Assets Department Marco Revelant. Revelant has countless VFX film titles to his credit – including the first Avatar film. He works extensively with Cole and Procter, along with costume designer Deborah Scott and the props team.

The first step is to turn the lo-res versions of each of these assets into hi-res designs which can be placed within each scene, in place of those earlier versions. “We go back and look at the artwork again, to try to figure out what had to be skipped to make the item fast enough for use on the Performance Capture stage,” Joe Letteri explains. “We look at the references the designers had originally looked at, when they were doing their artwork, because they’re always looking at real world reference, and also any other reference that we can find, to bring to it.”

Motion tests are performed, photographing real world examples of plants – and, importantly, Scott’s costumes. “Deb and her team will build human-sized versions of the costumes, that we can test in the air or on the water, which we use to give us an idea of what we should be building, digitally,” trying to mimic Scott’s precise construction techniques. The motion tests of those physical models help the animators study, and then reproduce, the behavior of pieces of costume, say, in the way they might blow in the wind or float atop or within water – so that they look real to us. “We reference those motion tests, to make sure that, even though we’re now on a 9 ft tall character and the proportions are different, the costumes behave the same way.”

Ben Procter’s super cool technology of the future also goes through motion studies, again so that, when seen onscreen, things like Construction Bots, the six-legged Sectapeds (which look like insects crawling around, performing tasks), or the Crab Suits (miniature submarines that look crab-like) don’t look like cartoons and take us out of the scene. Part of what makes those items believable, too, says Baneham, is that each is based on things we know, or at least can foresee, coming from our own reality. “That is true science fiction. The difference between science fiction and science fantasy is that one of them has permission to break rules. But science fiction just takes what we know today and extrapolates out from there. For instance, the Sectaped (HEXBOT/Swarm Assemblers) walks like an insect and can negotiate surfaces that way – we’re damn close to being able to produce those types of autonomous bots. I don’t think we’re that far away.”

Besides the aforementioned Direhorses, there is a fascinating collection of other beasts, including two marine animals, the Skimwing and the Ilu, both of which can be ridden by the Metkayina clan, with whom Jake and his family have taken up to escape their enemies. “We treat them like working animals, almost like on a farm,” Letteri explains. “The Ilu are almost like ponies. Out in the country, if you were on a farm 150 years ago, you would ride your horse out to go hunting. Here, you might ride your Ilu out to go hunting in a reef. These animals are a part of the life of these people.”

And, says Baneham, just as with real horses, “People have incredible bonds with their horses. And the horses have personalities. But it’s not overt. There’s a playfulness, the way sea lions and seals have. So we tried to fuse those two ideas together, so they have different personalities among them.” Without those qualities, the Ilu would simply be interesting creatures. Here, we recognize a familiar relationship, without it being over the top.

Even more central to the film are the whale-like Tulkun, one of which, Payakan, befriends Lo’ak, the teenage son of Jake Sully and Neytiri. “Like Lo’ak, Payakan is also a teenager, and wants to hang out and play,” says Baneham. “And that is a very endearing moment,” as Payakan playfully tosses Lo’ak up in the air. “They go do stupid stunt stuff that your teenage kids would do,” he laughs.

It is the Tulkun’s facial expression that is key to connecting the two. “We were so limited with facial expression, because of his size. So we had to carry a lot of the weight of the expression in the eye and the soft tissue around the eye. They use body language to communicate. And, as with so many of our great cast, Britain Dalton, as Lo’ak, gave a great performance, to make you believe Payakan was present. That’s key to making these scenes work as well as they do.”

But nowhere does the performance in a CG character’s face have more importance than in those of the Na’vi. “And that starts with an empathetic performance” by the actor, in Performance Capture, says Baneham. “And animators who understand how a human face behaves the way it does, and why we respond emotionally to those things that we see,” Letteri adds.

An important change was made to the facial animation system for this film, to enable animators to make the characters’ faces appear even more natural and real. A new tool was created that helps the animators get beyond simply seeing the dots on the face from the Performance Capture recording and trying to reproduce them in their character model, the way animators have worked for years. “But the dots only give you what’s happening on the surface,” explains Baneham. “They don’t tell us anything about the muscles beneath. And there’s a massive difference between the two.”

The new system takes into account the differences people have in the levels of fats and collagen in their skin. “We talked about the muscles that we thought were moving underneath, because that’s the vocabulary you need to use,” says Letteri. “But we were always just guessing. So this time, we built a neural network. So we’re much closer to understanding the balance between those muscles. But more importantly, that network gives the animators instant control over the entire face. You’re not just moving one bit at a time – everything moves together. And in each performer, it moves together in different ways. We built this tool to help animators understand the actor’s performance.”
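Wētā has not published the internals of that network, but the underlying idea – learning a mapping from the tracked surface dots to the muscle activations beneath, so the whole face moves together – can be sketched in a few lines. The sizes, names and placeholder data below are purely illustrative assumptions, not Wētā’s actual system.

```python
# Minimal sketch (not Wētā's actual tool): a small network that learns to map
# captured facial-marker positions to underlying muscle activations, so an
# animator can drive the whole face with muscle controls. All names, sizes
# and data here are hypothetical.
import torch
import torch.nn as nn

NUM_MARKERS = 60      # tracked dots on the actor's face (assumed)
NUM_MUSCLES = 40      # facial muscle channels in the rig (assumed)

class MarkerToMuscleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_MARKERS * 3, 256),  # x, y, z per marker
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_MUSCLES),
            nn.Sigmoid(),                     # activations kept in [0, 1]
        )

    def forward(self, markers):
        return self.net(markers)

# Train against solved example frames (marker cloud -> muscle values).
model = MarkerToMuscleNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

markers = torch.randn(512, NUM_MARKERS * 3)   # placeholder capture frames
muscles = torch.rand(512, NUM_MUSCLES)        # placeholder ground-truth solves

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(markers), muscles)
    loss.backward()
    optimizer.step()
```

Once trained, such a model gives the animator a consistent muscle-level handle on the performance rather than dozens of independent surface points, which is the balance Letteri describes.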

It's important, though, they discovered, not to simply put the face through the software and accept the results as the finished product. “We look at what the actors are doing, frame by frame, and go through a pass of trying to lock all that down. But then we sit back and watch it in motion. Because no matter how accurate you think you are, if it doesn’t feel right, it ain’t right. So you go back and ask yourself, ‘Okay, what are we missing?’ Cause you may get exactly what an actor’s doing, but it may not be right for that character.”

The animators will also make use of the original Performance Capture reference footage, to fully understand what the actor is feeling and communicating. Notes Baneham, “It’s critical to be able to capture the subtext of what a person’s saying – which is how somebody feels as they’re responding. If you can get that through, if you get it to read clearly to an audience, then you have a quite powerful performance coming through.”

Younger faces, such as those of the child actors portraying Na’vi characters, are slightly different. “They have high amounts of collagen – they have elasticity in their skin,” Baneham explains. “And that helps, in some ways, because you get less separation, so you get a little more direct relationship with the substructures. So it’s clearer – although younger kids can do really weird things with their faces,” he laughs. “And they can bounce those facial rig cameras around a lot, too!”

An interesting hybrid is the case of Sigourney Weaver, who plays a 14-year-old Na’vi, Kiri, the daughter of her human character, Grace. “You might just have to move things a little bit differently,” Letteri notes. “Again, you don’t want it to just be technically correct. You want the emotional beat.”

Another fun challenge for the animators came in the form of creating animation of Na’vi babies. When Jake announces the birth of a new baby, holding the child up, it is actually the image and movement of the newborn who had just arrived in the life of actor Joel David Moore, who plays Norm in the film. “We just said, ‘You know what? Bring him in,’” Baneham explains. “We didn’t put any markers on him, but we put plenty of reference cameras around. And we captured his performance.” Similarly, in New Zealand 8 months later, a baby was needed to reference a particular behavior. “One of our animation supervisors at Wētā, Stephen Clee, had a new baby, and he said, ‘Oh, my kid does that.’ He put multiple cameras on him, as well, and extracted a 3D image of him.”

Having a real baby to reference truly makes a difference. “It’s so powerful to have. They don’t behave like anything else. They twitch their muscles in unusual ways, because they don’t have control of their muscles. There’s a lot of involuntary stuff going on, that is not the same as an adult. And, in the animation, it’s real to us. You recognize it without knowing you’re recognizing it.”

The one facial feature that is at the center of every animated performance in any Avatar film is the eyes. There is complete, real emotion being passed through them, to both the other characters and to the audience. It is the difference between a scene with real characters and real relationships, and one with simply animated characters.

Among the technical advancements seen in The Way of Water is an innovation that actually builds the structure of the eyes geometrically, Letteri explains. “This allows us to shoot rays into the eyes, bounce light around properly, and scatter and come back out. And that’s what really gives it the quality of photographing a real eye.”
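The kind of calculation this makes possible can be illustrated with Snell’s law: once the cornea exists as real geometry, a renderer can bend each camera ray as it enters the eye before the light scatters inside and comes back out. The sketch below is a generic refraction routine, not Wētā’s renderer; the refractive index used for the cornea (roughly 1.376) and the function name are assumptions made for illustration.

```python
# Illustrative only: refracting a camera ray at the cornea with Snell's law,
# the kind of step a renderer can take once the eye is built as real geometry.
import numpy as np

def refract(direction, normal, n1=1.0, n2=1.376):
    """Refract a ray `direction` hitting a surface with `normal` (pointing
    back toward the ray), passing from medium n1 (air) into n2 (cornea)."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    eta = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection; no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A ray entering the cornea at a slight angle bends toward the surface normal:
ray_in = np.array([0.2, 0.0, -1.0])
surface_normal = np.array([0.0, 0.0, 1.0])
print(refract(ray_in, surface_normal))
```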

The movement of the eyes, too, is completely natural, including natural blinking. It is often unnatural blinking animation that is the dead giveaway of an animated face. Often, the blinks are unnaturally slow. “Don’t forget,” says Baneham, “the reason you blink is because your brain doesn’t like blur. It’s not only the reason you blink, but it’s the reason you blink quickly.” The eyes also dart naturally when a character momentarily glances, say, at another person while the character he is listening to is speaking. “And that’s usually staccato. You don’t move in one clean movement. You’ll do it in multiple short strides, in order to eliminate motion blur. And that’s one of the reasons why our characters’ eye motion appears so natural.”
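As a rough illustration of that “multiple short strides” idea, a gaze shift can be keyframed as a handful of fast saccades separated by brief fixations, rather than one slow, smooth move. The step counts and frame timings below are invented for the example, not taken from the production.

```python
# A rough sketch of the idea Baneham describes: shift the gaze in a few quick
# saccadic "strides" with brief holds in between, instead of one slow move.
# Timing values here are invented for illustration.
def saccadic_gaze_keys(start_deg, end_deg, num_saccades=3,
                       saccade_frames=2, hold_frames=5):
    """Return (frame, angle) keyframes for a gaze shift broken into saccades."""
    keys = []
    frame = 0
    step = (end_deg - start_deg) / num_saccades
    angle = start_deg
    for i in range(num_saccades):
        keys.append((frame, angle))    # start of this saccade
        frame += saccade_frames        # the jump itself is only a few frames
        angle += step
        keys.append((frame, angle))    # eye lands on the intermediate target
        if i < num_saccades - 1:
            frame += hold_frames       # brief fixation before the next jump
    return keys

for frame, angle in saccadic_gaze_keys(0.0, 30.0):
    print(f"frame {frame:3d}: {angle:5.1f} deg")
```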

Like in any great movie, at the center of every character is still a great acting performance. “It all starts with an empathetic performance” by the actor, in Performance Capture, concludes Baneham. “We learned that in the first movie, because we understood that the way you get a really consistent performance is to ground it in your actors. And it’s our responsibility to protect that performance and shepherd it to the screen. That’s where the gold is.”
