We all know VR is rapidly evolving, but how quickly it will become commonplace is difficult to predict. Oculus chief scientist Michael Abrash, however, has made some bold predictions about VR development over the next five years during his traditional address at the Connect conference in San Diego, CA.
Mr Abrash told the audience he believed improving the visual fidelity of VR is the most important short-term goal. Modern high-end headsets such as the Rift and Vive offer roughly a 100-degree field of view and 1080×1200 display panels per eye, which works out to around 15 pixels per degree; by comparison, a human has a 200-degree field of view and resolves about 120 pixels per degree. Mr Abrash noted that display and optics technologies are far from matching human vision (forget 4K or 8K, this is beyond 24K per eye). Within five years, he said, we can hope for a doubling of today's pixels per degree, a field of view widened to 140 degrees and a per-eye resolution of around 4000×4000, with the fixed depth of focus of today's headsets becoming variable. Such headsets are impossible to build now, but Mr Abrash believes the problem can be solved.
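The arithmetic behind those figures can be sketched as follows; the numbers are the approximate values quoted above, not exact headset specifications:

```python
# Illustrative arithmetic for the figures quoted above (all approximate).

def pixels_per_degree(horizontal_pixels, fov_degrees):
    """Average angular resolution: pixels spread across a field of view."""
    return horizontal_pixels / fov_degrees

# Human vision: ~120 pixels per degree across a ~200-degree field of view
# implies a panel roughly 24,000 pixels wide, hence "beyond 24K" per eye.
human_equivalent_width = 120 * 200            # 24000

# Abrash's five-year target: ~4000 pixels across ~140 degrees,
# roughly double today's ~15 pixels per degree.
target_ppd = pixels_per_degree(4000, 140)     # ~28.6

print(human_equivalent_width, round(target_ppd, 1))
```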
Rendering 4000×4000 per eye at 90Hz is beyond the capabilities of even the most powerful consumer computers. The key technology for reaching such figures is “foveated rendering” (from the Latin fovea centralis, the centre of the retina), which NVIDIA's research division demonstrated in July 2016.
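To see why such figures are out of reach, here is the raw pixel throughput the target implies (illustrative arithmetic only, before any foveated savings):

```python
# Raw pixel throughput implied by the 4000×4000-per-eye, 90Hz target.
width = height = 4000        # per-eye resolution target
eyes = 2
refresh_hz = 90

pixels_per_second = width * height * eyes * refresh_hz
print(pixels_per_second)     # 2880000000, i.e. ~2.9 billion pixels per second
```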
Foveated rendering is a method in which only the small portion of the image that lands on the part of the retina capable of resolving fine detail is rendered at full quality, with the rest blending to much lower fidelity. Our visual system resolves the most detail at the centre of gaze, trading detail for better motion perception towards the edges of the field of view. Foveated rendering exploits this natural property of sight to cut the cost of rendering an image. But there is a catch: the method requires eye tracking (oculography) accurate enough to pinpoint the gaze direction, and no current device delivers that accuracy yet. But there are five years ahead of us…
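A toy sketch of the idea (the function name, radii and falloff are all illustrative assumptions, not taken from any real renderer): per pixel, decide how much rendering effort to spend based on distance from the tracked gaze point.

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_radius=60, periphery_radius=300):
    """Return the fraction of full resolution to spend on pixel (px, py).

    Inside the foveal radius we render at full quality (1.0); beyond the
    periphery radius we drop to a coarse floor (0.1); in between we blend
    linearly. Radii are in pixels and are purely illustrative.
    """
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d <= fovea_radius:
        return 1.0
    if d >= periphery_radius:
        return 0.1
    t = (d - fovea_radius) / (periphery_radius - fovea_radius)
    return 1.0 - 0.9 * t

# The gaze point would come from the eye tracker; here it is fixed at (0, 0).
print(shading_rate(0, 0, 0, 0))       # at the gaze point -> full quality, 1.0
print(shading_rate(1000, 0, 0, 0))    # far periphery -> coarse floor, 0.1
```

The point of the trade-off is exactly the one described above: accurate gaze data lets the renderer spend its budget only where the eye can actually see the detail.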
Mr Abrash also outlined his vision for graphics and audio. Personalised head-related transfer functions (HRTFs), which describe how sound arriving at the ears depends on head position and anatomy, will make VR more realistic. The Rift's current surround-sound solution applies an HRTF in real time based on head tracking, but it uses the same generic HRTF for every user. An HRTF should vary with head and body size and with the shape of the ears; personalising it would take sound quality to a new level. While Mr Abrash didn't delve into the details of the measurement process, which normally requires testing in an echo-free environment, he predicted that an “easy and fast” home setup method will exist within the next five years. He also expects vast progress in the modelling of reflection, diffraction and interference.
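At its core, HRTF-based spatialisation filters the sound separately for each ear. A minimal and deliberately naive sketch of that step, with made-up two-sample impulse responses standing in for real measured HRTFs:

```python
def convolve(signal, impulse_response):
    """Plain direct convolution; real audio engines use FFT-based methods."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

mono = [1.0, 0.0, 0.0, 0.0]   # a unit impulse standing in for a sound source
hrir_left = [0.9, 0.3]        # made-up per-ear impulse responses; measuring
hrir_right = [0.5, 0.2]       # real ones per listener is the hard part

left = convolve(mono, hrir_left)
right = convolve(mono, hrir_right)
print(left)    # [0.9, 0.3, 0.0, 0.0, 0.0]
```

The personalisation Abrash describes amounts to replacing the generic `hrir_left`/`hrir_right` pair with filters measured for the individual listener's head and ears.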
As for controllers, Mr Abrash is sure that handheld motion devices like Oculus Touch will remain the default interaction technology for the “next 40 years”. No doubt there will be improvements in ergonomics, precision and functionality during that time, but the essence will not change. He also predicted that tracking of bare hands, without gloves or controllers (as with Leap Motion), could become the industry standard within five years.
Along with the improvements in imaging, tracking and data transfer, headsets are also expected to become lighter. In part this will be achieved by better weight distribution of components across the housing, making the devices more natural to wear. More flexible optical correction tailored to an individual user's eyesight is also expected, perhaps growing out of variable depth-of-focus technology.
According to Mr Abrash, headgear will inevitably become wireless, but first wireless data transmission and foveated rendering have to improve.
Mr Abrash also talked about bringing the real world into virtual space, something he referred to as “augmented VR”. The headgear of the future will be able to scan the surrounding space and carry objects from the physical environment into the virtual one. “Teleportation” of a user in real time into filmed locations will also become a reality. In effect this is mixed reality, a field in which Microsoft researchers regularly share their progress. But augmented virtual reality headgear will be very different from augmented reality headgear, because it allows not only the overlay of graphics but also control of every pixel in the mixed scene. Magic Leap does something similar.
Progress in virtual teleportation and in tracking of the body, head and hands will bring a significant improvement in VR avatars. But Mr Abrash considers the realistic display of a human body in real time one of the most difficult tasks, because we are highly attuned to facial expressions and body language. Breakthroughs in realistic environment transmission and cooperative spaces are possible within five years, but an avatar that conveys the feeling of a real person may take decades.
At the end of the address, the Oculus chief scientist revisited the “dream workspace” he discussed last year: a virtual environment in which you can create any atmosphere and add an unlimited number of interactive whiteboards, monitors and other objects. Once virtual humans are added, it will become an equally powerful group working tool. But before that, some improvements are needed: virtual reality must become comfortable enough to replace today's mediating technologies, including monitors, mice, keyboards and a mass of software solutions.
You can watch the video of the full speech below:
You can find the recordings of all sessions in our special material.
Source: Road to VR