The new rules of the interface — virtual reality and motion tracking at CES 2014
CES 2014 was jam-packed with all sorts of crazy stuff, but one of the many distinct themes at the show was the evolution of interfaces by way of virtual reality and touchless sensors. VR has been an unfulfilled pipe dream for too long, but when we start seeing concrete products that just might actually work, it's hard for even the jaded among us not to get a little giddy.
Granted, neither VR nor motion tracking is especially new in the world of computing, but a post-Kinect generation of consumer-friendly motion-sensing products is inching ever closer to the mainstream. These trends stand to shift (or at least supplement) the finger-friendly direction gadgets have taken over the last couple of years, and may interrupt the entrenched mouse-and-keyboard habits of a whole generation.
Go for the eyes
By far the most influential and impressive player in this sphere at CES 2014 was Oculus. They put together a new hardware prototype of their virtual reality headset called Crystal Cove. Crystal Cove pairs on-headset sensors with an external camera to track head movement in new dimensions; with it, you can lean and change the origin of your perspective, rather than just rotating your view around a single fixed point and relying solely on a keyboard or gamepad to change position.
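To make that distinction concrete, here's a minimal sketch (Python with NumPy, using hypothetical pose inputs rather than anything from the Oculus SDK) of how a tracked head pose becomes a camera view matrix. Orientation-only tracking is the same math with the position term pinned at zero:

```python
import numpy as np

def view_matrix(head_rotation, head_position):
    """Build a 4x4 view matrix from a tracked head pose.

    head_rotation: 3x3 rotation matrix (e.g. from the headset's sensors)
    head_position: 3-vector (e.g. from the external tracking camera)
    Both inputs are hypothetical; a real SDK exposes its own pose types.
    """
    view = np.eye(4)
    view[:3, :3] = head_rotation.T                  # inverse of a rotation is its transpose
    view[:3, 3] = -head_rotation.T @ head_position  # move the world opposite to the head
    return view

# With orientation-only tracking, head_position stays at the origin:
# you can look around, but leaning can't shift the eye point itself.
```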
There are also refinements to reduce motion blur when whipping your head around. While playing EVE: Valkyrie, a sci-fi flying game built for Oculus, head bobs translated into subtle view changes in-game, adding to the overall immersion. If you can’t tell from the video, I’m legitimately blown away by what these guys are doing. It really is the future of gaming.
It's important to note that VR involves a lot more than simply filling your entire field of view with pixels; it's about interaction. In EVE: Valkyrie, you can lock missile systems onto a target by pointing your head in the right direction. As cool as things are now, Oculus is still a work in progress more than 18 months after it was first revealed, so we may be a ways out from retail availability. If you just can't wait for the real deal, Dive is an accessory that uses your smartphone's existing app ecosystem and sensors to deliver an accessible (if lower-fidelity) alternative. On the higher end, Gameface is having a go at an Android-powered VR headset. It's hard not to feel a little silly strapping a screen to your face, but as soon as you're immersed, you quickly stop caring about anything on the outside. Yes, with Oculus, it's that good.
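That head-aim missile lock is a simple mechanic to reason about: check whether a target falls inside a cone around the direction the player's head is pointing. A rough sketch of the idea (hypothetical game logic, not anything from Valkyrie's actual code):

```python
import numpy as np

def acquire_lock(head_forward, head_pos, targets, cone_cos=0.98):
    """Return the target most tightly aligned with the head's aim, or None.

    head_forward: unit vector along the player's head direction
    targets: list of 3D target positions
    cone_cos: cosine of the lock-on cone's half-angle (~11.5 degrees here)
    """
    best, best_dot = None, cone_cos
    for target in targets:
        to_target = target - head_pos
        to_target = to_target / np.linalg.norm(to_target)
        alignment = float(np.dot(head_forward, to_target))
        if alignment > best_dot:        # inside the cone, and the best match so far
            best, best_dot = target, alignment
    return best
```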
Not the only glass in town
Though you're not especially likely to be walking around outside with an Oculus on (nor would doing so be safe), many Google Glass-style competitors were at CES. Vuzix has been involved in digital eyewear since long before Google unveiled Glass, and they're enjoying plenty of popularity thanks to the surge in publicity around wearable computers that Glass has brought. Vuzix's applications are more commercial and industrial in nature, which many might consider more fitting use cases for these kinds of products. Warehouse workers checking inventory while handling machinery, or doctors pulling up detailed information hands-free, seem a lot more important than getting a text message right away. Plus, it's easier to get away with looking like a knob if it's all for business. Still, Vuzix announced a consumer-focused product at CES which we should be seeing next year.
Some ambitious product developers at Innovega were showing off a contact lens that expands the field of view so you can see an otherwise-hidden display mounted inside a pair of glasses. This is something that's been in the works for a while, though, and won't hit prime time for at least a year. More traditional digital eyewear was all over CES, offering cheaper options than Google Glass. For one, GlassUp had their projection-based system available for preorder. Lumus had a pair of shades that also included a camera for augmented reality applications. Pivothead, meanwhile, was focusing solely on a center-mounted camera with no heads-up display, but included interchangeable modules to increase battery life or run specific applications. Needless to say, this space is getting crowded, and fast, though the perfect product has yet to show itself.
Augmented reality is growing up very quickly with these hardware devices. In particular, Occipital's Structure accessory for iPad allows users to create 3D models on the fly with a quick tour around the subject. Checking this out in person was hugely impressive, and I'm really pumped to see what we can do in a world that can be easily virtualized. Smart software can fill in any gaps that the sensor doesn't pick up, and you can do all sorts of stuff with the final product: send off the model for 3D printing, build a dynamic and accurate blueprint of a room for renovation or decoration, or create stages for augmented reality games. Then there's FLIR, which introduced a new iPhone case at CES with an embedded thermal camera, so users can see sources of heat in the dark, through smoke, or otherwise hidden. Without a doubt, the tech we saw at CES doesn't just replicate human vision, it enhances it.
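For a sense of the raw material a scanner like Structure works with: a depth camera hands you a grid of distances, which gets unprojected into a 3D point cloud before successive frames are fused into a model. A minimal sketch using the standard pinhole-camera math (the intrinsics are made-up example values, and this is not Occipital's SDK):

```python
import numpy as np

def depth_to_points(depth, fx=570.0, fy=570.0, cx=320.0, cy=240.0):
    """Unproject a depth frame (meters) into camera-space 3D points.

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    Hypothetical example intrinsics for a 640x480 depth sensor.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx                           # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)             # (h, w, 3) point cloud

# A scanning pipeline then registers successive clouds against one another
# (e.g. with ICP) and fuses them into a single surface model.
```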
Digital manipulation
The next step to the current generation of augmented and virtual reality is being able to reach out into your data and interact with it. The leaders on this front, Leap Motion, weren’t showing off much at CES. They had a preview of their next firmware update which should refine their sensor’s fidelity, especially when it comes to tracking fingers that might be hidden from the sensor by the rest of your hand. The main business play here is to get the sensors integrated into devices, which they’ve already managed to do with one HP PC.
A bigger issue than business models is usability. I'd really hoped that Leap would let me use Windows 8's touch-focused UI in a standard mouse-and-keyboard environment, but it's still a bit too much of a hassle, since the interfaces of the apps I rely on daily are built for a high-precision pointer. That puts a lot of the onus on developers to adapt their desktop apps to gesture-friendly layouts, which can be tricky, even with the additional screen real estate of a PC.
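The precision gap comes down to mapping a jittery 3D fingertip position onto a pixel grid designed for a mouse. A rough sketch of the usual approach (normalize within the sensor's tracking volume, then smooth), with every name here hypothetical rather than Leap's actual API:

```python
def finger_to_screen(tip, box_min, box_max, screen_w, screen_h,
                     prev, alpha=0.3):
    """Map a fingertip (x, y) in sensor space to screen pixels.

    box_min/box_max: corners of the sensor's usable tracking volume.
    prev: previous smoothed position.
    alpha: smoothing factor; lower = steadier cursor, but more lag.
    """
    # Normalize into [0, 1] within the tracking volume
    nx = (tip[0] - box_min[0]) / (box_max[0] - box_min[0])
    ny = (tip[1] - box_min[1]) / (box_max[1] - box_min[1])
    raw = (nx * screen_w, (1.0 - ny) * screen_h)  # sensor y points up; screens point down
    # Exponential smoothing to tame hand tremor and sensor noise
    return tuple(alpha * r + (1.0 - alpha) * p for r, p in zip(raw, prev))
```

The trade-off in that last line is the whole problem in miniature: enough smoothing to hit a close button reliably makes the cursor feel laggy.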
Luckily, mobile has paved the way a fair bit in terms of the right design sensibilities, but software developers will have some catching up to do if they want to take advantage of these new interface types. High-speed infrared cameras like Leap's are the most common way to track motion, but the folks behind the Myo armband were showing off a different approach: detecting muscle movement directly and translating it into device input. Even if these alternative methods don't take off on their own, they could easily increase the accuracy of other sensing methods.
Looking where you’re looking
Hands aren't the only things capable of making gestures. Two exhibitors were showing some impressive eye-tracking technology, so you don't even have to wave anything in the air to interact with your devices. The Eye Tribe has had their PC-based eye-tracking sensor bar available for some time, and they've been working on an Android version that, among other things, allows for finger-free Fruit Ninja matches. I also needled these guys about working with TVs, and it's apparently possible if there's enough power pumped into the sensor bar. The only real challenge is recognizing multiple faces at once and figuring out which one to track. The more unsettling use case for eye sensing is being able to track which ads users are looking at and for how long. For many folks, that will be a little too close for comfort.
Right across from The Eye Tribe's booth was Tobii, who have made good progress with hardware partnerships like Leap's. They announced at the show that their eye-tracking sensor bar would be built into a gaming peripheral made by the well-respected SteelSeries, though they weren't showing it off just yet. What they did have was World of Warcraft running with the ability to glance at the side of the screen where you wanted the camera to go. They were also making a concerted effort to offer a solution for those with disabilities, enabling precision input for people who can't use a mouse. Their desktop system still incorporated keyboard presses, which is important: your eyes are constantly active, and if gaze control were always on, your cursor would be erratic to say the least.
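The key press effectively acts as a clutch between your eyes and the cursor. A minimal sketch of that interaction as I understood it from the demo (the stub functions and event names are mine, not Tobii's API):

```python
def move_cursor(point):
    print(f"cursor -> {point}")      # stub: stand-in for moving the OS cursor

def click_at(point):
    print(f"click at {point}")       # stub: stand-in for an OS-level click

def on_event(event, state, gaze_point):
    """Gaze moves the cursor only while the clutch key is held;
    releasing the key clicks wherever you were looking."""
    if event == "KEY_DOWN":
        state["engaged"] = True
    elif event == "KEY_UP" and state.get("engaged"):
        state["engaged"] = False
        click_at(gaze_point)
    elif event == "GAZE_MOVE" and state.get("engaged"):
        move_cursor(gaze_point)

# Without the clutch, every saccade -- and your eyes make several per
# second -- would drag the cursor along with it.
```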
I do worry that Tobii's gaze tracking won't be as natural or efficient as existing mouse and touch input; if I have to press a key, look at my target, then release the key to activate it, that's three steps. With a mouse, you move the cursor and click. With touch, you just tap. Three steps versus two or one will only be worthwhile if the eye tracking is incredibly fast. At the very least, gaze detection should work well for broad motions, like scrolling through pages. Leap mentioned that in the long term they aim to replace mouse input. It's a lofty goal, and I'm rooting for them, but I think it will take a while before it becomes a reality, and longer still before eye tracking gets there.
Flailing into the future
Full-body motion tracking has been making advances too. Virtuix was at the show with their Omni system, which combines Kinect with Oculus and a custom-built platform with low-friction footwear to let players run and duck through virtual settings. Gaming remains the primary use case for this kind of thing, but it looks altogether exhausting for longer sessions.
Fatigue is certainly a big issue with gesture recognition, and one I’ve already encountered even in using the Leap Motion for anything longer than 15 minutes. With any luck, gaze tracking will be able to sidestep the issue altogether. The Omni is already available for preorder at $500, but the bigger demand is likely to be on your living space. Can you justify using up a significant amount of floor space for something you’re going to be playing relatively infrequently? There are certainly some non-consumer training scenarios that Omni could be used for, in any case.
For those that want to go all-out, PrioVR was relaunching their failed Kickstarter campaign at CES. These guys have a whole harness system full of sensors to enable complete motion capture for the upper and lower body. For all of its accuracy compared to Kinect, it's an awkward setup that will only appeal to really hardcore gamers.
So close you can almost touch it
Despite all the promise in virtual reality and gesture tracking at CES, there are still some significant roadblocks. For one, haptics. Waving your hands in the air or moving your eyes around doesn't offer any feedback, unlike most current input methods. However, touch interfaces managed to take off despite a reduction in tactile response, so it's entirely possible we'll get dexterous enough to use these touchless devices with training. I don't see mobile implementations as being a big challenge; folks from both Oculus and Leap said they were working on Android, and I sincerely hope that a common interface will help apps transfer from one platform to the other. Leap's ecosystem already hosts quite a few mobile-born titles.
The biggest problem is finding compelling use cases. Accessing notifications at a glance is a major one for glasses, but that's competing with smartwatches, which have a much lower mental and financial barrier to entry. In all likelihood, gaming will be the first popular use for virtual reality and touchless gestures, but creating marketable games with these technologies will require significant investment in a risky and unproven market. Despite the risk, it's clear that there's a ton of competition in VR and motion tracking, and only a few players will take an obvious lead over the others. Over the next couple of years, it's going to be a ton of fun watching these guys try to outdo one another.