Moore’s law may keep us supplied with octa-core smartphone processors and PCs packed with billions of transistors, but not every corner of technology keeps the pedal to the proverbial metal as enthusiastically as chip technology does. Specifically, desktop displays—the portals through which we glimpse the output of those hulking CPUs—are stuck in neutral while the technology in the rest of your PC tears ahead at breakneck speed.
Sure, Retina-level displays look mighty fine, but c’mon. This is the 21st century, not 1999. Fortunately, several forward-thinking ventures are ditching traditional PC flat screens in favor of innovative designs that could one day redefine the way we look at our computers. These, folks, are the PC displays of the future—or at least they aim to be.
Any discussion about PC displays of the future would be incomplete if it didn’t mention virtual reality, and the virtual reality kit that has generated the most buzz lately is the Kickstarter-backed Oculus Rift. This headset has captured the attention of gaming enthusiasts en masse. Powered by a sensor package that includes a gyroscope, an accelerometer, and a magnetometer, the Oculus Rift uses the data generated by those components to track your head movements and translate them into 3D gaming worlds with virtually no latency, giving you a truly immersive VR experience.
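Oculus hasn’t published its sensor-fusion code, but the general trick behind low-latency head tracking is well known: blend the gyroscope’s fast-but-drifting rotation data with the accelerometer’s noisy-but-stable sense of which way gravity points. A minimal sketch of that idea (a so-called complementary filter; all names and values here are illustrative, not the Rift’s actual code) might look like:

```python
import math

def fuse_orientation(pitch, gyro_rate, accel_y, accel_z, dt, alpha=0.98):
    """Estimate head pitch by blending gyro integration (fast, drifts)
    with the accelerometer's gravity vector (slow, stable).
    Illustrative complementary filter, not Oculus's implementation."""
    gyro_pitch = pitch + gyro_rate * dt          # integrate angular velocity
    accel_pitch = math.atan2(accel_y, accel_z)   # tilt from gravity vector
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# One 1 ms sensor tick: head rotating at 0.5 rad/s, gravity straight down.
pitch = fuse_orientation(0.0, 0.5, 0.0, 1.0, 0.001)
```

The magnetometer plays the same stabilizing role for yaw, where gravity offers no reference.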
It’s seriously awesome, and a software development kit is slated to make its way to developers soon. Check out our own Alex Wawro giving the Oculus Rift a whirl in the video above.
Canon Mixed Reality
Similar to the Oculus Rift in that it’s a big, black headset that transports you to virtual environs, Canon’s recently announced Mixed Reality device targets industrial designers rather than gamers.
The headset connects to a beefy workstation and sports two forward-facing cameras. Working in concert, all this hardware is designed to present an augmented-reality blend of the real and the imaginary. The system can transform simple real-world props into fully fleshed-out representations of a designer’s creation, as demonstrated in the image below. Once immersed, you can manipulate items in real time in the augmented equivalent of real space, complete with an accurate sense of scale.
Cool, huh? Now for the downside: Canon’s Mixed Reality currently costs $125,000 up front and another $25,000 per year in maintenance fees. The $300 Oculus Rift dev kit, on the other hand, is made from off-the-shelf parts. But remember that today’s high-priced novelty is tomorrow’s consumer-priced commodity.
Beyond straight-up virtual reality, most of the other attempts at pushing displays forward involve creating 3D imagery of some kind—but no one wants to wear those super-dorky glasses that most current-day 3D technology depends on. Enter autostereoscopic technology, a catch-all term for glasses-free 3D.
There are many ways to implement this technology, but the most interesting autostereoscopic displays use a technique called motion parallax, which alters your view of the 3D object depending on your head position, creating a true (albeit simulated) 3D experience.
Microsoft Research is working on just such a system, which you can see in action at the 1:55 mark of this video by The Verge. (Warning: Since the technology relies on beaming signals directly to the eyes of individuals, it doesn’t film well.) Microsoft’s display tracks your eyes with a Kinect camera, then uses the information to beam two separate yet simultaneous images at you from behind an LCD screen—one to your left eye and the other to your right. The dual pics trick your brain into seeing a 3D image on screen, the depth and location of which adjust according to your position (as also tracked by the Kinect camera).
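The geometry behind steering one image to each eye is straightforward once the eyes are tracked. Here’s a toy sketch (purely illustrative; not Microsoft’s code) that computes the two angles, measured from the screen’s normal, at which the backlight would need to aim the left- and right-eye images:

```python
import math

def steer_angles(eye_x_left, eye_x_right, eye_z):
    """Given tracked horizontal eye positions and viewing distance
    (metres, relative to the screen centre), return the angles in
    degrees at which to steer each eye's image. Illustrative only."""
    return (math.degrees(math.atan2(eye_x_left, eye_z)),
            math.degrees(math.atan2(eye_x_right, eye_z)))

# Eyes roughly 6.4 cm apart, viewer 60 cm from the screen:
left, right = steer_angles(-0.032, 0.032, 0.60)
```

With a separation of only a few degrees between the two beams, you can see why precise eye tracking is essential, and why the effect resists being captured on video.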
Ready for something really creepy/awesome? Two people staring at the same screen could be staring at two completely different images when Microsoft’s technology matures. Alas, today it’s still in its infant stage.
SpaceTop, another project born of Microsoft Research, mixes traditional 2D desktop computing with an innovative interface that lets you manipulate objects in three dimensions.
To accomplish this, SpaceTop relies on a transparent screen sitting between you and the system’s keyboard and touchpad. A camera built into the rear of the screen keeps track of your hands for motion-control purposes, while a user-facing camera tracks your head’s position to display 3D images on screen in the correct scale and perspective. Rather than trying to explain more, I’ll just point you to the video above. This display needs to be seen to be understood.
If that sounds a bit too esoteric, check out Leonar3do, which its builders call “the world’s first desktop VR kit.” The software, paired with a crucial set of 3D glasses and a unique 3D mouse called The Go Bird, allows you to view and manipulate objects in 3D.
The video above provides a demonstration of what it’s like to use Leonar3do, which targets a wide range of markets including 3D modeling, gaming, and even education. We spent some time with the technology at CES and came away impressed. Said our on-the-spot editors: “From our time with the virtual work engine, it seems like a stunning way to create, demonstrate, and visualize virtual 3D objects in real space.”
One oft-cited imagining of the future entails a preponderance of utterly massive displays: wall-size beasts that dwarf the monitor sitting on your desk right now. But screens that large invite unique interface quandaries—especially if they’re touchscreen enabled. How does a mammoth monitor respond to multiple users? What if you can’t reach the top of the display? Will the slingshot in Angry Birds even be manageable? And so on.
Microsoft Research is already working hard to tackle these issues before they become widespread problems. The Towards Large-Display Experiences project, as outlined in the video below, has come up with one such possible solution that involves using a stylus for fine work and touchscreen gestures for basic controls, paired with user recognition tied to now-ubiquitous smartphones. It’s an interesting response to a potentially huge (pun intended) problem.
Pushing all those pixels is another problem altogether, but fear not: Microsoft Research is hard at work on that, too. A project called Foveated Rendering aims to reduce processing requirements by taking advantage of the fact that the human eye doesn’t see as much detail at its periphery. Basically, the display tracks your peepers and makes sure the area you’re staring at is rendered in full glory, while dropping the resolution at the farther reaches of the screen.
During tests, users couldn’t tell the difference between a full-fledged image and one using foveated rendering—yet the less-detailed image required only one-sixth as much computing power to create.
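Microsoft’s researchers render several nested layers of the scene at progressively coarser resolution as angular distance from the gaze point grows. A toy sketch of that core idea (the three-tier scheme and the cutoff angles below are made up for illustration; the real system tunes them per user) could look like:

```python
def foveated_scale(eccentricity_deg):
    """Choose a render-resolution scale for a screen region based on
    its angular distance from the tracked gaze point.
    Tier boundaries here are illustrative, not Microsoft's values."""
    if eccentricity_deg < 5:      # fovea: full detail
        return 1.0
    elif eccentricity_deg < 20:   # mid-periphery: half resolution
        return 0.5
    else:                         # far periphery: coarse
        return 0.25

# A region rendered at scale s touches roughly s**2 as many pixels,
# so the far periphery costs about 1/16th as much per unit area.
savings = foveated_scale(45) ** 2
```

Since the periphery covers far more screen area than the fovea, those per-region savings compound into the large overall reduction the researchers report.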
Thinking even bigger
Wall-size displays? Pfffft. That’s small potatoes for the team behind LightSpace, yet another Microsoft Research project that wants to kill monitors as we know them, and turn every object in your office into a potential PC display.
LightSpace relies on a series of cameras and projectors to “create a highly interactive space where any surface, and even the space between surfaces, is fully interactive.” In a nutshell, the cameras track your movement around the room and observe your interactions with the images projected by the setup, which can be cast on any surface—walls, chairs, desks, you name it. At a basic level, the camera tracking allows you to manipulate the images using familiar multi-touch gestures, but it also supports more-exotic commands such as dragging the image from one object to another, or “picking up” the image and handing it to another person.
It’s nifty stuff. You’ll definitely want to watch the video above, even if you only skim the interesting bits. It’s also complicated stuff: LightSpace must be calibrated to the room it’s being used in. At the moment, that knocks it from the realm of everyday use, but as cheap sensors like the Kinect grow more powerful and perhaps capable of mapping a room in 3D on the fly, a system like LightSpace could take off.
Talking heads in 3D
While others are busy trying to revolutionize displays entirely, USC’s Institute for Creative Technologies is trying to make those tedious videoconferences just a tad less so with its 3D Video Teleconferencing System.
If you’ve ever had the (dis)pleasure of sitting through a videoconference, you’re well aware that those calling in spend most of their meeting time staring into their laptop screens rather than the webcam itself, making even the most extroverted alpha males seem like disconnected introverts.
No more! (Or, more accurately, maybe no more one day!) USCICT’s technology turns videoconferencers into 3D talking heads by projecting a real-time, high-speed video of the speaker onto a spit-polished aluminum mirror rotating more than 15 times a second. “Effectively, the mirror reflects 144 unique views of the scene across a 180-degree field of view with an angular view separation of 1.25 degrees,” the research group’s website explains.
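The quoted figures check out with simple arithmetic: a 180-degree field of view carved into views every 1.25 degrees yields exactly 144 of them.

```python
# Sanity-check the numbers from USCICT's description:
field_of_view = 180     # degrees swept by the spinning mirror
view_separation = 1.25  # degrees between adjacent views

views = field_of_view / view_separation
print(views)  # -> 144.0
```

It also hints at why the video feed must be so high-speed: every one of those views has to be redrawn on each revolution of the mirror.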
On the videoconferencer’s side, the 3D Video Teleconferencing System displays a feed of the people at the receiving end gaping at the holographic floating head. The system tracks the remote videoconferencer’s head position and gaze, allowing the virtual 3D head to establish eye contact and turn from speaker to speaker during a meeting. (And you thought the autofocus feature in Google+ Hangouts was cool!)
Don’t care about real-time chatting? The USCICT team has used similar technology to create 3D images that can be walked around and observed in a full 360 degrees. Bonus points: This 3D is autostereoscopic!
The short term
Finally, here’s a futuristic display goal that, while not quite as ambitious as the rest of the entries, might be a bit more realistic. A sweet multi-monitor setup might not change the way you look at data, but, hey, pixel densities have only increased while display prices have dropped (albeit slowly) over time. Now that’s value. And if you pooh-pooh a six-panel, 5760-by-2400-pixel rig as the pinnacle of excess, you’ve obviously never used one.