Exclusive: Sony Answers 12 Questions about PlayStation 3 Motion Control
By Matt Peckham
“Take that, Microsoft,” may not be what PlayStation 3 special projects manager Dr. Richard “EyeToy” Marks actually said when he took the stage back in early June at E3 2009. Still, you could practically hear it whispered in the rare downbeats as Marks divulged Sony’s own two-fisted take on precision motion control.
Imagine a microphone with a translucent bulb in lieu of the mic’s metal mesh, capable of lighting up and changing color–a wand of sorts. “Just a prototype,” said Marks then, and the final look will probably change, but you hold it like you’re gripping the hilt of a sword. Now imagine that device (or devices–Sony eventually rolled out two) working in tandem with the PlayStation Eye camera/microphone to offer stunningly precise 1-to-1 tracking, and you have what Sony informally dubbed “PlayStation Motion Control.”
I recently posed a series of questions to Marks about the technology by email. These are his responses.
Game On: At what point did you settle on your current two-peripheral approach to motion-control? When did you say “this is it”?
Richard Marks: I’ll give you two very different answers to this.
The first answer is that we have been moving toward this solution for several years now. We learned a lot from our experience creating EyeToy, and also from other research we have done, and from the experiences we have observed for other products. We learned that while people definitely enjoy physical interaction and movement, they also want precise control and a simple, fast, reliable way to trigger actions. We designed our new control system to accomplish all of this. We believe the path we have chosen is an ideal combination of both spatial and action/button input, and of course we can combine that with voice and video data from the PlayStation Eye mic array and camera.
The second answer is much less complicated. The first time I pressed a button and saw a virtual light sword extend up out of the controller, and watched it move just as it should when I swung it, I thought “this is it”. Then, when I saw the reaction of my kids when they tried the same, I knew we had it right.
GO: Just to clarify, the total setup will consist of the currently available PlayStation Eye, the two “wands” you demonstrated at E3, and the games themselves? Will the technology be backward compatible with any existing PS3 games?
RM: The new controller is designed to provide new and innovative gameplay. At E3, we showed both one and two-handed experiences. We are currently looking into the possibility of incorporating many familiar characters and franchises with these new experiences. More details will be provided when we make the official announcement of the product.
GO: Will it be called PlayStation Motion Control, or is that just a temporary concept name?
RM: No, that is just a temporary mouthful. We’ve yet to announce an official name and will provide more product details at a later time.
GO: After E3, Nintendo delivered a kind of backhanded compliment in welcoming Sony Computer Entertainment and Microsoft to the motion-control stable and saying it was “flattered” by your announcements. But SCE’s controller-free motion-sensing EyeToy–which, since its introduction in 2003, has sold over 10 million units worldwide–was out years before the Wii, wasn’t it?
RM: Of course EyeToy came out before Wii, but that does not diminish the contribution Nintendo made to game interfaces. I’m a gamer first, so the way I see it both EyeToy and the Wii controller represent advancements that broadened the gaming market and enabled new experiences. Our new controller takes this even further by combining the strengths of previous interface approaches with responsive new high-precision tracking.
GO: The EyeToy delivered controller-free motion capture, then the Wii introduced two-handed controls, and now Microsoft’s put together what for all intents and purposes resembles a high-resolution EyeToy. Tying into the first question, what triggered your decision to reintroduce controllers, i.e. the “wands,” after the EyeToy and PlayStation Eye’s controller-free approach?
RM: EyeToy was created to allow players to physically interact with games using their body. The unencumbered feeling of no wires and feeling free (instead of connected to your television) was very important, as was the simplicity of the controls. Everyone, even non-gamers, felt like they could just jump in and play, which was great.
We still believe that is the best interface for some experiences, but for other experiences, additional capabilities are important. We discovered during our research that some experiences demand precise control and a simple, fast, reliable way to trigger actions. We also found that some experiences just feel more natural when holding a tool, or a “prop”. Our new controller adds these new capabilities to those we already have from PlayStation Eye.
GO: What about the functional advantages of controller vs. no-controller? Do peripherals allow for more precise tracking? Is it a matter of “amplification through simplification”?
RM: Having a hand-held controller greatly increases the precision that is possible, since we have designed it specifically for that purpose. The new controller’s high-precision embedded sensors detect the sensitive movements of the hands, and the PlayStation Eye tracks the sphere on the controller to precisely detect the position in real-life 3D space.
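Marks doesn’t spell out how a single camera recovers a 3D position from the glowing sphere, but the standard approach is the pinhole-camera model: a sphere of known physical diameter that appears smaller on the sensor must be farther away, and its depth fixes its horizontal and vertical offsets by similar triangles. Here is a minimal sketch of that idea; the sphere diameter, focal length, and image resolution are illustrative assumptions, not Sony’s actual specifications:

```python
def sphere_position_3d(cx_px, cy_px, diameter_px,
                       sphere_diameter_m=0.046,  # assumed bulb diameter (~4.6 cm)
                       focal_length_px=540.0,    # assumed camera focal length, in pixels
                       image_w=640, image_h=480):
    """Estimate a tracked sphere's 3D position from one camera frame.

    Pinhole model: a sphere of real diameter D appearing d pixels wide
    lies at depth Z = f * D / d. X and Y follow by similar triangles
    from the sphere's offset relative to the image center.
    """
    z = focal_length_px * sphere_diameter_m / diameter_px
    x = (cx_px - image_w / 2) * z / focal_length_px
    y = (cy_px - image_h / 2) * z / focal_length_px
    return x, y, z  # meters, in the camera's coordinate frame
```

A sphere centered in the frame and appearing about 25 pixels wide would sit roughly a meter from the camera; as it drifts toward an edge of the image, the x/y estimates grow in proportion to that depth. The inertial sensors Marks mentions would then supply the orientation the camera alone cannot see.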
As I mentioned before, another huge benefit of having a controller comes from being able to trigger abstract actions with a simple button press. This is very important, because this event triggering capability is complementary to the spatial input provided by the tracking, and some experiences need both.
GO: Is the PlayStation Eye capable of delivering what Project Natal’s claiming, i.e. peripheral-free, highly detailed three-dimensional body tracking and advanced voice recognition? Would SCE ever use the Eye in that capacity like Natal, controller-free?
RM: Peripheral-free, marker-free, highly detailed three-dimensional body tracking is a challenge, even with a 3D camera, and it is even more difficult with a 2D camera. Partial solutions are possible, and these are often more appropriate for creating a compelling play experience. Again, the PlayStation Eye taught us that while people definitely enjoy physical interaction and movement, buttons are needed for some experiences.
Regarding voice recognition, the controller itself does not have any such capabilities. However, the PlayStation Eye has a four-microphone array, which we designed primarily to enable far-field voice input, so voice recognition is a possibility.
GO: For all its advantages, the Wii is notoriously imprecise with a relatively restrictive motion “box.” Assuming SCE’s “wands” operate line-of-sight, how free will we be to move around? What’s the virtual “box” size, roughly speaking in “real” space, that we’ll be able to move about in?
RM: We specifically designed PlayStation Eye with a wide field of view (75 degrees). This means when you are 10 feet away from the camera, the range of motion is 12 feet across by 9 feet high.
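Marks’s 12-by-9-foot figure is consistent with reading the 75 degrees as a diagonal field of view on a 4:3 sensor–both assumptions on my part, not stated Sony specs. A quick sketch of the geometry:

```python
import math

def view_extent(fov_diag_deg, distance_ft, aspect=(4, 3)):
    """Width and height of the camera's view at a given distance,
    assuming the quoted angle is a diagonal field of view and the
    sensor has the given aspect ratio (4:3 assumed here)."""
    # Visible diagonal at this distance, from the half-angle tangent
    diag = 2 * distance_ft * math.tan(math.radians(fov_diag_deg) / 2)
    w_ratio, h_ratio = aspect
    hyp = math.hypot(w_ratio, h_ratio)  # 5 for a 4:3 aspect
    return diag * w_ratio / hyp, diag * h_ratio / hyp

w, h = view_extent(75, 10)  # roughly 12.3 ft wide by 9.2 ft high
```

At 10 feet that works out to about 12 feet across by 9 feet high, matching the numbers Marks cites.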
GO: When we spoke a few years ago, I asked you about brain interfaces, and you were skeptical about the absent biofeedback mechanism. One of my issues with the original Eye Toy and now Project Natal (at least in theory) is that both approaches have you interacting with nothing, or nothing tactile anyway. The Wii Remote, at least, becomes the hilt of your sword, the stock of your rifle, the wood of your bow, and so forth. The physicality of the interface buoys the illusion. Your thoughts?
RM: As mentioned before, I completely agree. While it is okay for some experiences, we learned from EyeToy the limitations of a camera-only interface. While there are definitely some benefits to improving on the camera—like adding 3D, for example—we didn’t feel a camera-only interface was the best solution for games. We looked very carefully down that path, and we chose to follow a different one.
GO: Riffing on the last question, what about…not precision tracking, but precision simulating? A finger pulling against air, even with visual aids, can’t easily detect the tension point of a gun trigger. That tension point is absolutely crucial toward letting you know at what point a tiny cordite-laced missile is going to pop out the end of the barrel. A pair of wands with buttons (or analog triggers, like on a gamepad) can at least compensate by leaving the fine motor mechanics–thumbing hammers, pulling triggers, plucking strings, and so on–to the buttons and triggers themselves. Your thoughts?
RM: You’ve hit on it exactly. No matter how good our visual tracking might become, the feeling you get from actually squeezing something physical is a better simulation than just positioning your finger. This relates to an interface phenomenon I call “somatic gratification”. The feeling of the interaction can be just as important as the effectiveness.
GO: You’re aiming for a Spring 2010 release. Will you ship with one or more “demonstration” games along the lines of what Nintendo did with Wii Sports? A bundle that includes the PlayStation Eye? I’m assuming you’ll be able to buy the wands by themselves… Do you have a ballpark price range, say under $100, for the pair?
RM: The new controller will be available to consumers Spring 2010. Further details will be provided when we make the official announcement.