In L.A. Noire Asking Questions Trumps Firing Bullets


Rockstar and Team Bondi's L.A. Noire wants to upend your expectations about what's possible in a video game. Not their plans to replicate Los Angeles circa 1947, the game's stylish film noir aesthetic, or that it stars Mad Men's Aaron Staton and Alexa Alemanni or Fringe's John Noble. No, I'm talking about L.A. Noire's remarkable digital performers.

Their digital faces, I mean. They don't look like any you've seen in a game. In fact, they don't look like any you've seen in most movies. The young Jeff Bridges character in Disney's CGI-saturated Tron: Legacy looks like a Botox zombie. The detective Aaron Staton plays in L.A. Noire looks like, well, Aaron Staton.

How'd they do that? With a little ingenuity, sure--and, you know, 32 cameras wrapped around each actor to capture every forehead wrinkle and eyebrow lift. We asked Team Bondi lead Brendan McNamara to talk about what the new technology entails for the industry.

Game On: You've somehow managed to leapfrog not just one industry [gaming] but really kind of two [film]. Why'd it take us this long to get here?

Brendan McNamara: I think that the leap you're describing with [L.A. Noire] is thanks largely to the interactivity necessary for our game, and for the industry at large. We have tried to develop a new type of gameplay based on asking questions. In this game, asking a question is more important than firing a bullet. When that's the case, you need to be able to make decisions based on how a character responds.

We're seeing a lot of films these days push some really advanced tech, but ultimately they only have to deliver to a passive audience. There's nothing the movie viewer can do to change the actions or movements happening on-screen. Conversely, our experience completely relies on the audience's interpretation of characters and scenes, and then ultimately, their decision-making. Without MotionScan this would not be possible.

So in the end you have to create the tools and processes that will make the entertainment you want to deliver possible. For Avatar that was retargeting human performances to alien characters, while for us it's a new way of capturing the essential humanity of our actors and deciding whether we believe them or not.

GO: How much of an actor's performance has to be retouched or tweaked in post-performance editing?

BM: Thanks to MotionScan, the experience is seamless and very little post-processing work is required. For the first time, we don't need animators or artists to touch up footage after filming is done. MotionScan allows us to directly capture and transfer the performance into the game, so post-processing becomes a much more efficient and streamlined effort.

Unlike traditional voice acting in games, we couldn't get away with using someone just for the quality of their voice. Players will see and interact with our actors' complete performances, so it was necessary to work with talented individuals that add unique dimension and nuance to characters both physically and emotionally.

GO: How much of L.A. Noire's gameplay hinges on simply "reading" people's intentions through their facial expressions?

BM: Interrogations play a key role throughout the course of L.A. Noire, and reading people's faces is a large part of that process. As a player, you'll use keen detective skills in identifying clues, but will also need to analyze a suspect's overall disposition. Whether suspects seem nervous, agitated, scared, or withholding, players must decide how best to react and respond. Their actions will then determine the course of the interrogation--in L.A. Noire, we want players to use their words and their emotions to connect and interact with their surroundings instead of sheer violence. That being said, you'll have to use your gun and take down criminals when necessary.

GO: What are some of the things MotionScan can't do?

BM: Right now with MotionScan, we're limited to capturing an actor from the neck up. The next step, naturally, will be to make the technology capable of scanning an entire body. That would eliminate the use of traditional motion capture and increase fidelity and realism in performances while minimizing the time and resources needed during development. MotionScan is definitely an exciting technology, so we are looking forward to the future and seeing what else it can do.

GO: How long before others start copying what you've accomplished with MotionScan?

BM: We've noticed a few similar technologies. Tron: Legacy, for example, used a similar setup to capture Jeff Bridges' face. We showed off MotionScan at E3 in the past and have received an incredibly positive reception from people throughout the industry. We've spoken to most of the major developers and publishers, and they seem genuinely excited by the potential. Having said all that, L.A. Noire will be the first game to utilize the technology when it comes out on May 17. We are excited to have players get their hands on it and see how they react to a new type of gameplay.

GO: Thanks Brendan!

