The idea of VR was forming long before the first computers arrived. Photography drove the desire to step inside the camera ‘space’. In 1838 Sir Charles Wheatstone invented the stereoscope and the stereo viewer was born. The key feature of the viewer was to provide each eye with a slightly different view of the scene, replicating depth to give a powerful three-dimensional effect. At a time when photography was still a form of magic, stereographs were extremely popular and remained so for over 100 years. Fast forward to the 1990s and it was no surprise that putting ourselves inside the computer space became a new obsession. VR quickly gained cultural significance, marked out by cult books like Snow Crash and films like The Matrix, during a period in which computer power increased more than a million-fold between 1995 and 2012. It was in 2012 that the Oculus Development Kit 1 was released, providing the first viable consumer VR hardware the world had seen. The basic premise of the device remained similar to the early stereo viewers, presenting the world in full 3D, albeit with a much wider field of view for better immersion.
19th century stereo viewer
Valve Index HMD
Like any new technology, VR's adoption trajectory has followed a familiar path. Initial excitement and wonder gives way to a more realistic outlook on the limitations and challenges of the medium. VR is slightly complicated in this respect due to the symbiotic relationship between the hardware and the content designed to provide the experience. Whilst we can rely on the hardware to improve over time, in much the same way we do with phones or televisions for example, there is no similar assurance with content because we are still learning what does or does not work well; VR is a computer without an operating system, a completely blank slate. Despite this, the use cases in many industries are seductive and present opportunities to engage our clients and ourselves in ways that improve how we currently work on a screen. We can finally enhance the act of looking or seeing and give the user an ‘experience’ that, if executed properly, will provide far more bandwidth than traditional media can.
VR hardware is extremely difficult to make work, and for decades of research and development the technology was simply not available to produce a fully functioning VR headset. The best systems cost millions of pounds but were still uncomfortable and not particularly immersive. Only recently, with several technologies maturing at the same time, has a working VR headset become possible. John Carmack, the creator of Doom and a pioneer of real-time 3D graphics engines, built the initial prototype, and since then we have seen some of the sharpest minds in programming and engineering contribute to the hardware we see today. The Oculus Rift and HTC VIVE, amongst others, represent the first generation of VR devices and, despite how advanced they are, in many ways they are analogous to very early black and white television sets; they work fine but there is room for improvement in key areas like resolution and field of view. Even so, we still achieve a feeling of immersion because the latency is low enough and the field of view wide enough (20ms and 90+ degrees respectively) to give the user ‘presence’ (a feeling of being there).
A VR scene must provide 90 images a second for a comfortable experience, thus requiring a much faster method of rendering. Interestingly, during the 90s and 00s (the same period in which we dreamed about VR hardware) came the development of rendering techniques for computer games such as DOOM, which permitted a 3D scene to be navigated in real time. These early software engines were also a catalyst for graphics hardware and drove the resulting GPU arms race, which today is dominated by Nvidia and AMD. Since this inflection point the development path between graphics hardware and real-time engines has been closely aligned, and the boundary between hardware and software functionality overlaps frequently.
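The arithmetic behind that 90-images-a-second requirement is worth spelling out. A minimal sketch (the function name is ours, for illustration only):

```python
# Frame-time budget at a given refresh rate (illustrative arithmetic only).
# A 90 Hz VR display leaves roughly 11 ms to simulate and draw each frame,
# around a third of the time a conventional 30 fps render would get.

def frame_budget_ms(refresh_hz):
    """Milliseconds available per frame at a given refresh rate."""
    return 1000.0 / refresh_hz

print(frame_budget_ms(90))   # roughly 11.1 ms per frame at 90 Hz
print(frame_budget_ms(30))   # roughly 33.3 ms at a typical screen frame rate
```

Every piece of the pipeline, from physics to lighting, has to fit inside that budget, which is why real-time engine techniques matter so much to VR.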
Driven as much by a pure spirit of discovery as by the games industry itself, new announcements and publications continue to advance real-time technology at annual fixtures such as SIGGRAPH and GDC, whilst the hardware continues to outpace Moore's law. At the time of writing it is possible to get near-cinematic quality at 90fps using a decent consumer graphics card.
With all the excitement and hype surrounding VR hardware it is easy to see why the content creation technology has not received as much interest. 3D graphics engines are, however, the primary enablers for VR, and currently Unity and Unreal Engine dominate this space. The ability to render entire scenes at near photo-real quality so they can be inhabited and experienced is tantalising.
Comparison between a real-time scene and a CGI render
A curious aspect of VR is how it regularly betrays our expectations whilst providing experiences we did not plan for; it is counter-intuitive. As artists we are used to controlling the medium, framing the image, using composition and layout to tell a story. There is great value in being able to contrive content like this, hovering between true representation and artistry. The same can be said of animation, where we build narrative with the same techniques through time. VR is different; there is no framing or composition or timing in the conventional sense. At its most basic VR is about placing the user somewhere else – anywhere at any time. Beyond that there are no limitations to what can be done, which makes it, in essence, capable of anything you can imagine. With so much potential it is important to provide some framework for the medium through the following areas:
VR sickness and locomotion
At its heart VR is an illusion; it is a trick of the mind pulled off by having almost no latency anywhere in the system being used. From the moment we move our head, the headset records a new position, predicts a few positions into the future, and makes a call to the rendering engine, which in turn draws a new image on the screen – this whole process, from motion to an updated image, needs to take less than 20ms to be effective. For the most part VR sickness has been eliminated through good hardware design, so we must ensure it doesn't return in our content. Most VR sickness occurs when there is a difference between what our eyes see and what our balance system (the vestibular system) feels. For this reason one of the first areas of research has been how to move around our virtual worlds. Using a normal joystick or game controller doesn't work because whilst you are moving in VR the balance system doesn't feel it and nausea sets in. Teleportation has therefore emerged as the de facto standard for locomotion, and together with physical movement allows us to explore huge worlds with ease.
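The prediction step described above can be sketched very simply. This is a deliberately naive illustration, assuming straight-line extrapolation of one rotation axis; real tracking systems use far more sophisticated filtering, and every number here is hypothetical:

```python
# Naive sketch of motion-to-photon pose prediction (all values hypothetical).
# The tracker extrapolates where the head will be when the frame actually
# reaches the display, so the rendered view matches the user's motion.

def predict_yaw(yaw_deg, yaw_velocity_dps, latency_s):
    """Linearly extrapolate head yaw (degrees) over the pipeline latency."""
    return yaw_deg + yaw_velocity_dps * latency_s

# Head at 10 degrees, turning at 100 deg/s, 20 ms motion-to-photon latency:
print(predict_yaw(10.0, 100.0, 0.020))  # 12.0 -- render for where the head will be
```

The point of the sketch is the principle: the engine does not draw the world where your head is, but where it will be when the photons arrive.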
Most people think about VR as an analogue to the real world – being in a virtual house or room or street with everything rendered at life-size. Scale is an essential part of the design process, and traditional physical models are painstakingly built in order to understand different aspects of our designs. Viewing a building as a 1:100 model whilst seamlessly switching between scales so that you can communicate everything from details to a masterplan with one simulation is extremely powerful. Scale is particularly important in VR because the stereoscopic design of the headsets underpins this primary illusion, and so can be utilised with great effect.
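Switching between a life-size walk-through and a 1:100 model view is, at its core, just a uniform scaling of the scene about a pivot. A minimal sketch, with names and values of our own invention:

```python
# Sketch of viewing the same scene at different scales (names are ours).
# Scaling every point about a pivot turns a life-size scene into a
# table-top 1:100 model without rebuilding any geometry.

def rescale(point, pivot, scale):
    """Uniformly scale a 3D point about a pivot."""
    return tuple(pv + (p - pv) * scale for p, pv in zip(point, pivot))

# A corner 50 m from the pivot sits 0.5 m away at 1:100:
print(rescale((50.0, 0.0, 0.0), (0.0, 0.0, 0.0), 1 / 100))  # (0.5, 0.0, 0.0)
```

Because the headset's stereoscopy conveys absolute size, the same geometry genuinely reads as a building at 1:1 and as a physical model at 1:100.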
Working in real time opens the lid on an extremely agile and flexible design development toolkit, where 3D content can be made and experienced simultaneously, both on-screen and in VR. In this way VR and real-time rendering combine into a process-driven medium that permits unhindered iteration for refining a design. Testing furniture layouts in a given space, for example, would mean standing in that space and switching between them whilst moving around to make a definitive assessment. VR in this way provides a platform in which people can participate in and refine a project rapidly. The explicit nature of VR makes decision making much less ambiguous than when reviewing an image, and because everything can be changed instantly there is no bottleneck for updates.
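The furniture example above reduces to swapping named scene variants on demand. A toy sketch, assuming the engine simply shows or hides the meshes belonging to each variant (all names and data are illustrative):

```python
# Toy sketch of instant design iteration: named layout variants that can
# be swapped while the reviewer stands in the space (data is illustrative).

layouts = {
    "option_a": ["sofa @ north wall", "table @ centre"],
    "option_b": ["sofa @ window", "table @ east wall"],
}

def switch_layout(name):
    """Return the furniture set for a variant; an engine would toggle meshes."""
    return layouts[name]

print(switch_layout("option_b"))  # the reviewer sees the change immediately
```

The mechanism is trivial; the value lies in where the switch happens – inside the space being judged, at full scale, with no re-render wait.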
Much has been said about the headsets being an issue for presentation; they are unsightly, cumbersome or nerdy – people feel isolated when wearing them. The fact remains that VR is a very private experience and, even with some of the social VR platforms and experiences coming to market, the sensation of being alone in VR persists. Rather than fight against this we can play to its strengths; VR is the perfect medium to hold someone's attention, and we can capitalise on the user's focus and use immersion to engage people emotionally. In many ways this is the primary goal of VR: to captivate our audience, whoever they may be, so that when the headset comes off the message has successfully gone in!
The final layer of the VR development stack is the ‘user experience’ and the extent to which it enhances the content being shown. At its most basic, VR is frequently a matter of loading a 3d model and providing a teleportation system for navigating around. The user experience here is simply a navigation mechanic, and in many cases this is optimal. Alternatively we can design a passive experience where the movement through the model is scripted so the user doesn't need to do anything. In both cases we are providing basic navigation, but the user experience is profoundly different. In this way small changes to how we engage the user, perhaps through a user interface or other methods, can have a huge effect on how the content is experienced.
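The passive, scripted alternative mentioned above is often little more than interpolating the camera between authored waypoints. A minimal sketch, with all positions and names of our own invention:

```python
# Sketch of a passive, scripted camera path: the user is carried between
# authored waypoints instead of teleporting (all values illustrative).

def lerp(a, b, t):
    """Linear interpolation between two 3D positions, t in [0, 1]."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

waypoints = [(0.0, 1.6, 0.0), (10.0, 1.6, 0.0)]  # eye height ~1.6 m

# Halfway along the segment the camera sits at:
print(lerp(waypoints[0], waypoints[1], 0.5))  # (5.0, 1.6, 0.0)
```

The same content served through a teleport mechanic or a path like this produces two very different experiences – which is exactly the design lever the paragraph above describes.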