When I got one of Tobii’s eye-tracking devices a few years ago, I didn’t know what to do with it. Tobii bills itself as “the world leader in eye tracking,” and as far as I could tell the claim rang true. The device certainly worked, and I could see it being very useful for people who (for any number of reasons) can’t use a keyboard and mouse.
But Tobii clearly wanted to broaden its ambitions and turn eye-tracking into a more mass-market solution, not an alternate means of interfacing with PCs but a primary one. I tried it. Ubisoft builds Tobii support into many of its games, and I played Assassin’s Creed: Rogue with Tobii’s eye-tracker swinging the camera wherever I looked. “Cool tech,” I thought, and then promptly went back to using a mouse and keyboard.
It felt like a solution in need of a problem. Lucky for Tobii, that problem arrived: Virtual reality.
Point and click
Integrated into a VR headset, Tobii’s eye-tracking technology is a godsend. I got the chance to demo a unit last week at GDC—essentially a retrofitted HTC Vive. The only visible change was a series of infrared emitters embedded around the Vive’s lenses. The rest of the eye-tracking tech was contained inside the headset itself.

Setup was pretty similar to using Tobii’s desktop unit. I put the headset on, then followed a series of dots around with my eyes so it could calibrate correctly. From there, we went straight into demos.
It speaks to Tobii’s confidence in this space that the demos were simple and concise, yet demonstrated a wide range of uses for the tech. My favorite came toward the end. I was placed into a living room environment, similar to any number of VR video apps. With eye-tracking off, I was asked to pick out a movie to watch, dim the lights, and so on.
And I did. VR is already way more intuitive than using a mouse and keyboard. You point the controller at an object, you click, you adjust. It replicates just enough of our day-to-day interactions, abstracted just enough to make the controller viable. Great.
Eye-tracking makes VR feel like magic, though. We turned Tobii’s tech on, and then interacting with an object was as simple as glancing at it. Imagine you could sit at your desk, glance at the light switch on the wall, and the lights would turn on—that’s VR with eye-tracking. Dimming the lights just entailed looking at the light, then swiping the trackpad on the controller. No need to lift my hand, point at the lamp first, none of that. The same applied to scrolling through a list of films. Look, scroll, look, scroll. No body movement needed.

This is essentially what the living room demo looked like, albeit…in VR.
It may sound simple, and it is. But that’s exactly the point: It removes another layer of abstraction from the environment. It’s intuitive. For Netflix-style applications where realism isn’t important, Tobii seems like a natural fit. Doubly so on more limited interfaces like the Oculus Go or Google Daydream, where in the absence of position-tracked controllers, pointing to select an object is usually done with the forehead. The eyes are certainly easier.
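Out of curiosity, here’s roughly what that look-then-swipe interaction could boil down to in code. This is a toy sketch of my own, not Tobii’s SDK; the tracker, controller, and scene objects are all made up for illustration.

```python
# Toy sketch of gaze-driven selection (not Tobii's API; all names are invented).
# The idea: whatever object sits closest to the gaze ray becomes the target,
# and a trackpad swipe adjusts it; no pointing required.
import numpy as np

def pick_gazed_object(eye_origin, gaze_dir, objects, max_angle_deg=3.0):
    """Return the object nearest the gaze ray, if any lies within a small cone."""
    best, best_angle = None, max_angle_deg
    for obj in objects:
        to_obj = obj.position - eye_origin
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best

def on_frame(tracker, controller, scene):
    target = pick_gazed_object(tracker.eye_origin, tracker.gaze_direction, scene.interactables)
    if target is not None and controller.trackpad_delta != 0.0:
        target.adjust(controller.trackpad_delta)  # e.g. dim a lamp, scroll a film list
```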
Games? Tobii seemed less important to me there, and I said as much. Virtual reality is meant to emulate reality, after all, and in games there’s a certain amount of realism engendered by reaching out and grabbing an object, not adjusting it telepathically.
Tobii’s response was that eye-tracking isn’t necessarily across-the-room telepathy. For some developers, it’s just clarification. One demo, for instance, put me in front of a large panel full of knobs to turn. With eye-tracking enabled I could sit back and adjust these dials without ever reaching toward them—probably not ideal. But I could also lean in and adjust the dials “manually,” and in that case eye-tracking simply helped confirm which dial I was trying to interact with. That’s a pretty common problem in games like Star Trek: Bridge Crew and Interkosmos, and I came away believing eye-tracking could make a difference.
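To put that “clarification” role in concrete terms, the disambiguation might look something like the sketch below. It’s my own guess at the logic, not anything Tobii or those games actually ship; the knob objects, the distance helper, and the reach radius are all invented.

```python
# Sketch of gaze-assisted disambiguation: when the hand is near several knobs,
# let the eyes break the tie. All helpers and objects here are hypothetical.
import math

def distance(a, b):
    return math.dist(a, b)  # positions as (x, y, z) tuples

def resolve_grab_target(hand_pos, gazed_object, knobs, reach_radius=0.12):
    """Decide which knob a grab gesture should affect."""
    in_reach = [k for k in knobs if distance(hand_pos, k.position) <= reach_radius]
    if not in_reach:
        return None
    # If the player is looking at one of the reachable knobs, trust their eyes...
    if gazed_object in in_reach:
        return gazed_object
    # ...otherwise fall back to plain proximity, as a non-eye-tracked game would.
    return min(in_reach, key=lambda k: distance(hand_pos, k.position))
```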

You can wink!
Even more impressive was a demo where I was asked to throw rocks at some bottles. I have a pretty good handle on VR physics, especially well-implemented VR physics, but eye-tracking gives developers one more vector of information to work with. The demo could tell which bottle I was aiming at based on where my eyes were pointed, and would then slightly adjust the rock’s velocity to ensure I was successful. Again, that’s not a feature every developer would want. You probably wouldn’t put that level of help into your skill-based carnival minigame collection or whatever. It’s a nifty tool, though.
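For a sense of how that might work under the hood, here’s a minimal sketch of gaze-based aim assist. It’s purely illustrative: the blend factor, the helpers, and the scene objects are my own assumptions, and a real implementation would also account for the arc of the throw rather than just its direction.

```python
# Sketch of gaze-based aim assist: nudge a thrown rock's direction toward the
# bottle the player is looking at. Values and object names are illustrative only.
import numpy as np

def assisted_throw_velocity(release_velocity, hand_pos, gazed_target, assist=0.25):
    """Blend the raw throw toward the gazed target; assist=0.0 means no help at all."""
    if gazed_target is None:
        return release_velocity
    speed = np.linalg.norm(release_velocity)
    to_target = gazed_target.position - hand_pos
    to_target = to_target / np.linalg.norm(to_target) * speed  # same speed, corrected direction
    return (1.0 - assist) * release_velocity + assist * to_target
```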
Performance-ready
The real coup, though, is foveated rendering. The technology isn’t new by any means—Nvidia’s shown off demos in the past, and both the Oculus Go and the Oculus Santa Cruz prototype use “Fixed Foveated Rendering.” But eye-tracking allows VR headsets to implement foveated rendering in a much smarter way.
So what is it? Basically, our eyes only pick up fine detail in a small circle directly in front of where we’re looking. Our peripheral vision is crap. Because of this, it’s a waste to render a full-quality scene anywhere we’re not looking—you’re basically just burning GPU power on detail your eyes can’t pick up.
Fixed Foveated Rendering thus cuts down on rendering by basically assuming you don’t need the outer edges of the screen. (Other techniques, like Nvidia’s multi-res shading, work off similar principles.) I don’t know exactly how Oculus’s final Santa Cruz version will function, but typically the screen is divided into a number of regions: maybe the outer ring renders at half quality, the next ring in at 75 percent, and so on, up to the largest area (the central section), which renders at full quality.

This is Oculus’s current Fixed Foveated Rendering example. White areas are rendered at native quality, red at 1/2, green at 1/4, blue at 1/8, and magenta at 1/16 quality.
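As a rough illustration, here’s how a fixed foveation mask along those lines might be assigned. This is a toy per-pixel sketch with made-up region boundaries (real implementations work on GPU tiles or shading rates, not in Python), but the quality tiers mirror the example above.

```python
# Toy sketch of a fixed foveated rendering mask. Region boundaries are invented;
# only the quality tiers (1, 1/2, 1/4, 1/8, 1/16) come from Oculus's example.
def fixed_foveation_quality(x, y, width, height):
    """Return a render-quality factor based on distance from the center of the eye buffer."""
    dx = (x - width / 2) / (width / 2)
    dy = (y - height / 2) / (height / 2)
    r = (dx * dx + dy * dy) ** 0.5   # 0 at the center, ~1.4 in the corners
    if r < 0.6:
        return 1.0       # large central region: full quality
    if r < 0.8:
        return 1 / 2
    if r < 1.0:
        return 1 / 4
    if r < 1.2:
        return 1 / 8
    return 1 / 16        # far corners
```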
The problem: If you do look toward those outer edges, you’ll notice they’re blurry. This isn’t as problematic as it sounds—I used the Oculus Santa Cruz prototype last year and didn’t even notice the effect. The vast majority of the display is still full quality, and it’s just the edges that are degraded, a.k.a. places you’ll rarely look. The possibility exists, though. Sometimes you’ll notice a line isn’t as crisp as it should be, for instance. Worse, because the system has no idea where you’re actually looking, the full-quality area has to stay so large that you don’t save much in rendering cost.
Enter eye-tracking. Eye-tracking allows for Dynamic Foveated Rendering. Basically, wherever your eyes are looking is rendered at full quality, while everything in the periphery is degraded. Your eyes move? The full-quality region follows. And because the game knows exactly where your eyes are looking, quality can start to drop off immediately outside that area.
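The dynamic version is the same idea with two changes: the full-quality window follows the tracked gaze point each frame, and because your eyes never catch the periphery, quality can fall off far more aggressively. A sketch, again with invented numbers:

```python
# Sketch of a dynamic foveated rendering mask. gaze_x/gaze_y are the tracked
# gaze point in normalized (0..1) screen coordinates; thresholds are invented.
def dynamic_foveation_quality(x, y, width, height, gaze_x, gaze_y):
    dx = x / width - gaze_x
    dy = y / height - gaze_y
    r = (dx * dx + dy * dy) ** 0.5
    if r < 0.10:
        return 1.0       # small foveal window at full quality, wherever you look
    if r < 0.20:
        return 1 / 4     # quality drops off almost immediately outside it...
    return 1 / 16        # ...because your eyes never see this region sharply
```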
Tobii claimed you’d see as much as a 50-percent performance increase. For VR, where you need to hit a 2160×1200 resolution at 90 frames per second (or 2880×1600 in the case of the upcoming Vive Pro), that’s huge. Tobii said it could save you an entire generation of hardware, meaning for instance that an Nvidia GTX 960 with Dynamic Foveated Rendering would perform as well as a GTX 1060 without. It’s also a huge deal for any standalone headsets, which are working with both lower-powered hardware and limited battery life.

This incredibly exaggerated render is silly-looking, but it does give you an idea of how extreme the peripheral quality drop-off can be with Dynamic Foveated Rendering.
In other words: Eye-tracking is essential for VR. I’m sold. Tobii’s demo was seamless, too. Dynamic Foveated Rendering was actually enabled while I was playing, without anyone telling me, and I didn’t even notice. Even after I was told it was on, I still struggled to see any difference—a bit more blurring of text in the periphery maybe, but nothing that impacted my playing.
I then took off the headset and saw the tech from the other side, as one of Tobii’s team wore the headset while its output was mirrored on a laptop. There, I could see the environment blur and sharpen again depending on where he was looking. If it were a ruse, it’d have to be a really elaborate one.
And that’s pretty amazing, because last I heard Dynamic Foveated Rendering was still a few years off. It doesn’t seem that way after my demo—I came away a believer. There are a few issues to sort out, like, “what happens when you move around a lot and the headset shifts,” and “what happens if the lenses fog up?” But the underlying technology seems sound.
Bottom line
It’s exciting. Quite the opposite of my last Tobii demo. As I said, I think Tobii’s desktop technology is pretty intriguing. It’s been built into Alienware laptops for the last few years, and I still have the sensor hanging off my monitor. I just don’t find much use for it day-to-day. I’m too conditioned by the mouse and keyboard.
Virtual reality still seems ripe for new control interfaces, though, and eye-tracking is a natural fit. It helps that it’s cheap and unobtrusive, too. As demonstrated by Tobii, it fits inside the current-gen Vive headset, and I was told it doesn’t add much to the manufacturing cost. That’s more than can be said for full-body haptics and all the other wild ideas people are trying nowadays.
Combine that with the performance benefits? It’s easy to imagine second-generation VR headsets will want to include some form of eye-tracking. Tobii told me the company already has five partners lined up, four of which are still unannounced. We’ll see what happens next.