Analyst Q&A: The significance of Intel's move into Mixed Reality hardware

We catch up with Ian Hughes, Internet of Things Analyst at 451 Research

This week Intel announced Project Alloy – a Mixed Reality or “Merged Reality” headset – at its developer forum in San Francisco. It is not totally immersive like a full Virtual Reality (VR) or Augmented Reality (AR) headset; instead it is designed to bring in the physical world via its external-facing camera. It is also completely wireless, but, as Ian Hughes, Analyst for the Internet of Things at 451 Research, explains, it “is not a fully standalone device, but instead relies on Wi-Fi to connect to a PC”.

“As the Internet of Things (IoT) grows we are more likely to need full mobile AR as the user interface for the location-centric applications it allows,” he says. “Project Alloy as a reference design appears to go part of the way towards this but will likely evolve as the industry matures.”

A full, lightly edited Q&A with Hughes is published below. It looks at the wider significance of this device, Mixed or Merged Reality as a whole, and what this is likely to mean longer term.


What practical ways do you see this technology, or something similar, being used in future?

In the workplace there is scope to remove the fixed screen from the desktop with VR and AR. Multiple-monitor setups are costly and take up expensive office space. Working on the go or in coffee shops is not conducive to large, data-filled multiple screens. If you have tried working on a long flight when the person in front reclines their seat, you will have experienced the screen-size problem too. Headsets or personal projection offer the ability to work anywhere with as much screen space as you need. The VR versions of this are problematic in that you cannot see your cup of coffee or the things around you, but mixed reality keeps that situational awareness. There is a problem of social acceptability, but, rather as with smartphone usage, society will adjust.


Clearly this is an early iteration of this kind of technology but what do you think this release signifies?

We had a surge of VR equipment previously, but that was before its time – before the rise of mobile computing and the web. We also had a surge in virtual worlds from 2006 to 2009. Gaming has sparked the VR resurrection; crowdfunding was the catalyst. Everyone wants to get into VR, rather like chasing the smartphone market. It is an evolution of the screens we use today, but one for each eye this time. It is a stepping stone to technology that breaks from the solid-screen approach completely.


Do you think Mixed or Merged Reality will become more useful or popular than Augmented and Virtual Reality? Why and what timescale are we looking at in your opinion?

Whatever we end up calling it, mixing the digital and physical worlds is more important than pure virtual experiences in most cases. If you need to be completely focussed or immersed then VR is the mode, but most tasks and experiences rely on the things and people around you. A full AR system can still switch to a VR mode by blocking out and replacing more of the real world, so it makes sense that AR will be more successful in the long run. VR is exhibiting the traits of a bubble, but I think it is less likely to burst and more likely to be sidelined by AR headsets in an enterprise setting. These things always take longer than we think; the virtual world boom should have taken hold but was railroaded by people discovering smartphones and Facebook etc. Social media is well bedded in now, as are touch screens. So we can expect a wave of what’s next: for the next two to three years VR will improve, but content will still be catching up, whilst AR headsets surf that wave and pop up as must-haves in four to six years.


Is there anything people are not talking about enough when it comes to this technology?

Often people do not relate this change in display technology to the need for changes in how we interact with and input to systems. In the Internet of Things we see a range of sensors and devices connecting to understand the world around us, but that will also need to include understanding our intentions and needs. Combining the evolution of Artificial Intelligence with new forms of input such as voice, gesture and brain waves, and with more dynamic display technology, appears to be a powerful mix. It is much more about those experiences than simply placing screens closer to our eyes.



Also read:

Intel shows its ‘hands’ in Merged Reality