Understanding AR, VR and 360 videos

Augmented Reality, Virtual Reality and company: an introduction to these technologies, how they combine, and their applications in visual storytelling.

Cover image: "Cascading Milky Way" by ESO/S. Brunier, CC BY 4.0

It’s a brave new world out there! The fast development of mobile technology is bringing a lot of innovation to the media world. Feeds are starting to get crowded with new types of content that are consumed very differently than classic media. The difference is so big that we might be on the verge of the rise of a new type of narrative, as in the early 20th century, when cinema was discovering its storytelling potential. But it also presents brand-new challenges, both technical and narrative, that are not easy for the regular videographer to solve.
You wouldn’t want to be left out of this new world, right? To get started, let’s go through each technology and see how they differ. This article covers the basics in plain language.

We might be on the verge of the rise of a new type of narrative

Wait… but why?

Both 3D stereoscopic imagery and VR are, as a matter of fact, relatively old technologies. So why haven’t we seen much of them before? To understand why this shift is happening now, we first have to get a little technical. In my opinion, we’re seeing these technologies now mainly because mobile devices finally pack the necessary ingredients: enough processing power to decode and render this kind of content in real time, high-density screens that hold up an inch from the eye, and cheap, accurate motion sensors.

These are the main technical aspects that led us to today’s landscape. Let’s now see how these rather old technologies mutated into new opportunities, starting from the oldest and moving to the newest.

3D video (Stereoscopic images)

The name is pretty self-explanatory: videos that give a sense of depth. What they do is clear. But how do they work? That’s where the word “stereoscopic” makes its appearance.
Basically, we humans perceive depth thanks to the fact that we have two eyes. Each eye observes reality from a slightly different position (the distance between our eyes). That shift of perspective gives the brain enough information to interpret the relative position of objects: what is in the foreground and what is in the background. 3D videos exploit that mechanism by creating images in the exact same way: using two cameras, a twin set of images is produced, delivering that offset to each eye. What mostly changes between 3D systems, tech-wise, is the way the images are displayed. There are three approaches:

3D glasses

Both images are shown on the same screen, and the separation of the two happens at the eyeglasses. The oldest members of this family are the good old red/blue anaglyph glasses we all know: each image is tinted a different color, and each colored lens filters out one of the two tinted images, so each eye sees only its own. Newer active-shutter systems perform the division differently: the screen alternates the two images faster than the eye can perceive, and the glasses black out one eye and then the other at the same frequency, like when your eye doctor has you read letters from a distance with one eye covered, except that every time one eye is shut, the image changes, roughly 60 times a second.

Glasses-free 3D screens

This technology works on the same principle as the eyeglasses, but here the separation happens right on the screen. Remember those rulers that changed their image whenever you tilted them? That’s a lenticular lens. Screens of this kind carry an extra layer that steers the light from each image in a slightly different direction, making one of the two images visible to your left eye and the other to your right.

VR viewers

This is the most straightforward approach. Each eye watches either its own screen or one half of a shared screen. The fact that the viewer is strapped to the user’s face adds an important extra layer of information, which leads to the next point of this article.
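To put the two-eye offset described above in numbers: the horizontal shift (the “disparity”) between the two images of an object shrinks as the object gets farther away. Here is a minimal sketch under a simple pinhole-camera model; the focal length and baseline values are illustrative assumptions, not measurements:

```python
# Stereo disparity sketch: how far apart the same object appears
# in the left-eye and right-eye images, under a pinhole model.

def disparity_px(focal_px, baseline_m, depth_m):
    """Horizontal shift (in pixels) between the two eye images
    for a point at the given depth: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

focal_px = 1000.0   # assumed camera focal length, in pixels
baseline_m = 0.063  # roughly the average human inter-eye distance

near = disparity_px(focal_px, baseline_m, depth_m=0.5)   # object 0.5 m away
far = disparity_px(focal_px, baseline_m, depth_m=10.0)   # object 10 m away
print(near, far)  # nearby objects shift far more than distant ones
```

Doubling the distance halves the disparity, which is why distant backgrounds look almost flat in 3D footage.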

360 videos and Virtual Reality

This is where things get good. 360 videos are videos that let the user change their point of view within the image. They are made with arrays of cameras filming in six (or more) different directions, covering the whole field of view. The images are then stitched together and wrapped up through a process called “projection”, which produces a large 2D video, just like a regular video, but with a very weird look.
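The “projection” step can be sketched as a mapping from a 3D viewing direction to a pixel of that weird-looking flat frame. Here is a minimal sketch using an equirectangular projection (a common choice for 360 video); the 3840×1920 frame size is an illustrative assumption:

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Project a 3-D viewing direction onto a width x height
    equirectangular frame (longitude -> x, latitude -> y)."""
    yaw = math.atan2(dx, dz)                                   # longitude, -pi..pi
    pitch = math.asin(dy / math.sqrt(dx*dx + dy*dy + dz*dz))   # latitude, -pi/2..pi/2
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return int(u) % width, min(int(v), height - 1)

# A point straight ahead of the rig lands in the middle of the frame:
print(direction_to_equirect(0, 0, 1, 3840, 1920))  # (1920, 960)
# A point straight up maps to the (heavily stretched) top edge:
print(direction_to_equirect(0, 1, 0, 3840, 1920))  # (1920, 0)
```

This stretching near the poles is exactly what gives the stored frame its strange look before the player unwraps it.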

Then the software embedded in the video player “unwraps” the image again, cropping it to the screen size and the desired position. In a 360 video you can “pan” around the point of view, but your position is fixed to wherever the camera stood when the video was made.
Virtual reality is based on the same principle, but instead of being shot from reality, the images are computer-generated, allowing the viewer to interact with the space being watched: not only panning, but also moving through the virtual scene.
In both cases, sensors and interfaces let the user navigate the scene, sometimes quite naturally, as with VR viewers, where a sensor feeds head-movement data to the processor, which creates a “virtual set of eyes” that mimics our movement, making us feel “inside” the scene.
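The player’s “unwrap” works the other way around: for each head orientation the sensors report, it figures out which part of the stored frame to show. Here is a toy nearest-neighbour version that finds the source pixel at the centre of the view; real players resample a whole crop with interpolation, and the frame size here is an illustrative assumption:

```python
def view_ray_to_source_pixel(yaw_deg, pitch_deg, src_w, src_h):
    """Which pixel of the stored equirectangular frame sits at the
    centre of the view for a given head orientation.
    yaw: -180..180 degrees, pitch: -90 (down) .. 90 (up)."""
    x = int(((yaw_deg + 180.0) % 360.0) / 360.0 * src_w) % src_w
    y = min(int((90.0 - pitch_deg) / 180.0 * src_h), src_h - 1)
    return x, y

# Head at rest: sample the middle of a 3840x1920 frame.
print(view_ray_to_source_pixel(0, 0, 3840, 1920))   # (1920, 960)
# Turn the head 90 degrees to the right: sample further along the frame.
print(view_ray_to_source_pixel(90, 0, 3840, 1920))  # (2880, 960)
```

Every time the head-tracking sensor fires, the player just re-runs this lookup and redraws the crop, which is what makes the panning feel instantaneous.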

Augmented reality

Augmented reality is the latest arrival in the media world, and probably the most fascinating. In augmented reality, our actual environment is partially or totally analyzed and “understood” by our devices, allowing them to insert new images and information that become “attached” to the real scene. Technically, a camera or set of cameras “reads” our real environment; complex algorithms analyze the footage and reconstruct a 3D scene along with the camera’s position in it. A render engine then places virtual elements in that 3D scene and overlays them onto the recorded image. Voilà! Augmented reality.
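The overlay step can be sketched as a pinhole projection: once the camera’s position and orientation are recovered, a virtual object’s 3D anchor point is projected into image coordinates and drawn there. This is a minimal sketch that assumes the camera rotates only around the vertical axis; real AR engines track full six-degree-of-freedom poses, and the focal length and image centre below are illustrative assumptions:

```python
import math

def project_point(world_xyz, cam_xyz, cam_yaw_deg, f_px, cx, cy):
    """Project a virtual object's 3-D anchor into the camera image
    (pinhole model; camera rotated only around the vertical axis)."""
    # World -> camera coordinates
    dx = world_xyz[0] - cam_xyz[0]
    dy = world_xyz[1] - cam_xyz[1]
    dz = world_xyz[2] - cam_xyz[2]
    yaw = math.radians(cam_yaw_deg)
    xc = math.cos(yaw) * dx - math.sin(yaw) * dz
    zc = math.sin(yaw) * dx + math.cos(yaw) * dz
    yc = dy
    if zc <= 0:
        return None  # behind the camera: don't draw the overlay
    # Camera -> image (pinhole projection)
    u = cx + f_px * xc / zc
    v = cy - f_px * yc / zc
    return u, v

# A virtual object 2 m straight ahead lands at the image centre:
print(project_point((0, 0, 2), (0, 0, 0), 0, 1000, 960, 540))  # (960.0, 540.0)
```

As the tracked camera pose updates each frame, re-projecting the anchor keeps the overlay “attached” to the real scene.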

Mix it right!

The most amazing part of these technologies is that they can be combined. 360 videos can indeed be stereoscopic, becoming 3D 360 videos, and the level of immersion they achieve is uncanny. When you combine stereoscopic technology with virtual reality, you get the highest level of immersion possible: navigating a 3D scene, perceiving objects and space the same way we perceive them in real life.

Navigating a 3D scene, perceiving objects and space the same way we perceive them in real life.

360 videos and Visual Storytelling

I started my career in the visual effects industry, so for me these mediums were never obscure, only super niche: the cost of the infrastructure needed to both create and play this kind of content was too high for it to go mainstream.
But times have changed. The most interesting aspect of 360 videos, in my opinion, is not so much the hype around them as the challenges they pose to filmmaking narrative technique, something far removed from the tech side, which is plenty complex on its own.

Bringing down walls

The limitations of film were actually the pillars of its grammar. One of the main concepts of filmmaking narrative is what we call “off-camera”. Filmmaking inherited the term “walls” from the proscenium theater: the “walls” are the boundaries of the fictional world, and “breaking the fourth wall” means acknowledging the presence of the audience. 360 video technology, by its nature, will force us to bring down all the walls. Let me explain.

Camera and off-camera

“Off-camera” is the term we use for everything left outside the frame. This “untold” space is used daily in filmmaking, since it can be recreated in the viewer’s mind through storytelling. For instance, imagine a framed person looking off to the left of the frame and talking. We immediately assume this person is talking to someone, but that is only a guess; we can never know for sure unless the filmmaker WANTS us to know, either by panning to reveal the other person or by cutting to another shot where we see them.

We won’t be able to do this anymore

The physical limitations of the frame actually founded the grammar of visual storytelling, because the filmmaker chooses what people watch. With 360 technology, visual storytellers no longer have that choice. We no longer deal with scenes, but rather with hyperscenes: scenes that can be viewed in their entirety, where spectators make their own choices each time they watch the video. The off-camera still exists in a way, but it is no longer fully controlled by the filmmaker.

We no longer deal with scenes, but rather with hyperscenes: scenes that can be viewed in their entirety, where spectators make their own choices each time they watch the video.

The other huge narrative challenge of 360 video is the redefinition of match cuts. In editing, a match cut (or, in French, “raccord”) is the relationship between two different objects, spaces or compositions placed side by side. Match cuts help draw a connection between two shots, creating new meanings that go beyond what each image might mean on its own.
Clearly, match cuts are a decision made by the filmmaker, often based on what is being framed. The question that arises is: how do we build match cuts in a frame that the filmmaker no longer controls? We will need to create hyper-match-cuts instead: cuts that work regardless of the user’s choice of view.

Just as the limitations of the medium in filmmaking’s early days created its grammar, we, the hyperfilmmakers of today, will have to write down the grammar of this new medium. For that we will need to build narrative superpowers. Once again, there’s a brave new world out there, and we’re lucky enough to live in a time when it has not been fully discovered yet. So prepare your boat, and your hammer, to bring down those walls.

To be continued

If you liked (or didn’t like) this article, leave a comment and follow our social channels to keep reading!
