An Introduction to VR for Camera Operators

Above photo: Sphericam 2 Camera, Courtesy: Sphericam

By Alx Klive

In just the last few months, interest in VR (Virtual Reality) has skyrocketed, and announcements such as the recently introduced support for 360° video on Facebook, following YouTube earlier in the year, have lit a touchpaper under the live-action side of VR.
Even those on the gaming side are now looking enviously towards our side of the table, where just a few months ago gaming was all anyone talked about.

These are exciting times for sure, and the danger is that VR somehow gets equated with the disappointments of 3D.

3D, it turns out, was simply too tricky for people to do at home, and not a compelling enough experience for consumers to make the effort. VR is different: anyone can put on a headset, and the experience elicits wows from consumers. It’s new, it’s fun, it’s not your grandad’s tech!

iZugar Z2XC twin GoPro rig w/ 194 degree Entaniya Lenses, Courtesy: iZugar

Creating Content for VR

As camera operators, we’re already used to creating immersive experiences for people. There’s a reason someone comes out of a movie theater feeling like they were in the movie. Compelling storytelling and incredible cinematography ‘pull you in’, despite a field of view of only around 60 degrees.

With VR we now have a 360-degree field of view. It’s no less of a development for our industry than the introduction of sound or color, and it requires new ways of working, although storytelling remains as important as ever.

Currently there is huge demand from brands to create live-action VR experiences. Everyone is experimenting in this nascent medium, from live 360 streaming of sports and concerts, to 360 music videos, short films, documentaries, current affairs and hybrids of CG and live action.

The jury is still out on whether consumers will watch long-form cinematic content in VR: some people experience nausea after prolonged exposure, and constantly turning your head can literally be a pain in the neck. It also tends to go against the fundamentally passive nature of film and television, where we let content flow over us. But being aware of this issue is the first step towards creating content that works.

EYE™ VR Camera, Courtesy: 360 Designs

Rethinking Our Role as Camera Operators

To film in VR means rethinking a lot of what we’re used to as camera operators. We’re used to literally being behind the camera all the time, but that’s not possible with VR.  We’re used to spending a lot of time thinking about camera angles, depth of field and lenses, whereas with VR these factors are far less relevant.

Allowing people to see literally anywhere means hiding the crew while shooting, and finding creative ways to focus the viewer where you would ideally like them to look. Sound, lighting and talent cues, which direct the viewer in subtle ways, have all been used successfully to date.

Professional VR Camera Equipment

The options for professional VR cameras have been thin on the ground until recently. Progress has previously been driven mainly by technology companies, which don’t necessarily grasp the needs of professional camera operators.

All-in-one cameras from the likes of Jaunt, GoPro and Nokia, which are being touted as professional cameras, have fixed lenses and little in the way of manual control. While they have their uses, they leave a lot to be desired for professionals.

It’s this that has led a few within the industry to design their own 360 VR rigs, typically around RED Epic Dragons, Codex action cams, the Sony A7S II and others. One company that has been active here is Radiant Images; another is my own company, 360 Designs. We’re making use, for example, of the new Blackmagic Micro cameras, which offer excellent control and syncing capabilities.

For additional POVs, and for uniquely small spaces, there are plenty of options if you don’t mind using GoPros. Another camera, the Sphericam 2, looks like a likely candidate when it ships in January.

3D-printed rigs are readily available, typically combining six GoPros for a mono rig, although you can get away with just two cameras back to back using wider-than-180-degree lenses, such as those from Entaniya.
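
As a quick sanity check on that back-to-back approach, here is a tiny back-of-the-envelope sketch; the 194-degree figure is the Entaniya lens mentioned above, and the rest is just illustrative arithmetic:

```python
# Rough coverage check for a back-to-back dual-fisheye rig.
# The two lenses together must cover the full 360 degrees, and whatever
# is left over becomes the overlap available for blending at the two seams.

lens_fov = 194                            # degrees per lens (e.g. Entaniya 194)
total_coverage = 2 * lens_fov             # 388 degrees
overlap_per_seam = (total_coverage - 360) / 2

print(f"Total coverage:   {total_coverage} deg")
print(f"Overlap per seam: {overlap_per_seam} deg")   # 14 deg to hide each stitch
```

With plain 180-degree lenses the overlap drops to zero, which is why the wider Entaniya glass makes a two-camera rig practical.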

GoPros don’t sync, obviously, and heat and data issues make reliability a pain, but people are getting around this by shooting at high frame rates and fixing sync in post, jury-rigging fans, or even keeping a fridge nearby to cool the cameras down (seriously). One company that has been leading the way with GoPro rigs is Freedom 360, who make excellent kits.
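
To make the ‘fix it in post’ idea concrete, here is a minimal sketch of one common approach: lining up unsynced clips by cross-correlating their audio tracks. The function name and the use of NumPy are my own illustrative choices, not a reference to any particular tool:

```python
import numpy as np

def estimate_frame_offset(ref_audio, other_audio, sample_rate, fps):
    """Estimate how many frames one clip is offset from another by
    cross-correlating their audio tracks (mono, same sample rate).
    Shooting at a higher frame rate makes the resulting alignment finer."""
    n = min(len(ref_audio), len(other_audio))
    a = ref_audio[:n] - np.mean(ref_audio[:n])
    b = other_audio[:n] - np.mean(other_audio[:n])
    corr = np.correlate(a, b, mode="full")        # brute force; downsample long clips first
    lag_samples = int(np.argmax(corr)) - (n - 1)  # samples by which the event arrives later in ref
    return lag_samples / sample_rate * fps        # offset expressed in frames
```

Shooting at 120 fps rather than 30 fps means the residual error after shifting whole frames is a quarter the size, which is exactly why high frame rates help here.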

Selecting a camera means giving careful thought to whether the output is monoscopic or stereoscopic, and which kind of stitching software you are likely to use.

Custom RED Dragon VR Rig by The Diamond Bros. Photo Credit: Steven Breckon

Stitching Options

Traditional stitching is tried, tested and well understood – it’s the ‘matching up images’ approach. Software from the likes of Kolor, The Foundry, VideoStitch and Dashwood can do an amazing job of stitching mono, and even stereo single-axis VR, this way.

The long-term challenge for live-action VR is finding a way to do ‘full’ stereo 3D VR, i.e. stereo in all three axes of head movement, which is the experience readily offered by CG.

Unfortunately, for video this is a highly technical challenge to solve, and all we can do as camera operators at this stage is capture as much visual data as possible, for future stitching and presentation methods that are yet to be released. Our company has patents pending here, but a market-ready solution requires integration between multiple parts of the food chain, as well as faster display devices, so is likely a couple of years away at least.

One step along the path is computational photography (CP) stitching, which offers a new approach to stitching 360 video, with significant advantages. By employing more camera heads with greater overlap, it’s possible to remove essentially all traces of stitching: clever image analysis and interpolation techniques (similar to time slicing) recreate the entire 360 scene by calculating depth and other cues from every available bit of pixel data.

For stereo, the system can generate left- and right-eye views that lie in between camera positions – you don’t need dedicated left- and right-eye cameras – and this produces a much improved stereo result. Again, stereo in three axes is not possible right now, but if you use a spherical rig with many camera heads, your content will be future-proofed for three-axis stereo later. Parallax issues are significantly reduced with CP stitching too, and the same camera can be used for both mono and stereo – no toe-in required.
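
To give a flavor of what generating in-between views involves, here is a deliberately oversimplified sketch, entirely my own illustration rather than anyone’s shipping CP algorithm: given one camera’s image and a per-pixel disparity map toward its neighbor, a virtual viewpoint part-way between the two can be approximated by shifting pixels a fraction of the way along that disparity.

```python
import numpy as np

def synthesize_intermediate_view(img, disparity, alpha):
    """Crude forward-warp view synthesis between two neighboring camera heads.

    img:       H x W (x channels) array from the left camera
    disparity: H x W array; a point at column x in the left image appears at
               roughly column x - disparity[y, x] in the right camera's image
    alpha:     0.0 = left camera position, 1.0 = right camera position
    """
    h, w = disparity.shape
    ys, xs = np.indices((h, w))
    # Forward warp: move every left-image pixel a fraction of the way toward
    # where it would appear from the right camera's position.
    dst_x = np.clip(np.round(xs - alpha * disparity).astype(int), 0, w - 1)
    out = np.zeros_like(img)
    out[ys, dst_x] = img
    return out  # holes and occlusions are simply ignored in this toy version
```

A real CP pipeline also has to estimate that depth/disparity in the first place, handle occlusions, and blend contributions from several heads, which is exactly where the extra overlap and resolution pay off.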

The only company that has publicly announced CP stitching software is Google, although other companies are working on it too. Most VR cameras are not designed for CP stitching and will not be compatible when Google and others launch their solutions in the months ahead (ours is, for the record), so do be aware of this.

The requirements for CP stitching directly affect the layout of the cameras in a VR rig and the choice of lenses. At its simplest, and as a rule of thumb, you need each point in the 360 scene captured by at least two cameras, not just covered with a simple 10-20% overlap. Resolution also matters here: much higher is better, as it gives the software more to grab onto.
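
As a back-of-the-envelope illustration of that rule of thumb, here is a rough lower bound for a single horizontal ring of cameras; the lens values are made up for illustration, and vertical coverage is ignored:

```python
import math

def min_heads_for_double_coverage(h_fov_deg):
    """Minimum camera heads in one horizontal ring so that every direction
    is seen by at least two lenses, i.e. total coverage >= 2 x 360 degrees."""
    return math.ceil(2 * 360 / h_fov_deg)

print(min_heads_for_double_coverage(95))    # ~95-degree lenses  -> at least 8 heads
print(min_heads_for_double_coverage(120))   # ~120-degree lenses -> at least 6 heads
```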

Summary

If you have any interest in VR filmmaking, now is a great time to start getting involved. You can play around with cameras you already have, using demo software from the stitching companies as a starting point. If you want an off-the-shelf solution, I’d recommend the F360 rig from Freedom 360, or you could simply duct-tape together a twin GoPro rig using Entaniya lenses. If you have access to a 3D printer at home, printing your own rigs, or cheese plates of course, is an option too.

If you like the sound of computational photography stitching, which is definitely the future, follow our company’s progress, or plan to reconfigure those GoPros in the future… You’ll also need a really big pipe to the Internet!

See you in VR!

Light Field Capture

Jumping into our sci-fi future, light field capture (aka plenoptic capture) offers the ability to capture more than just color and luminance. Plenoptic sensors record the direction of light rays as they hit the sensor, which offers potential advantages over purely optical capture. For one thing, you can refocus after the fact (perhaps less relevant when everything is in focus), and some level of six degrees of freedom becomes possible, depending on the camera’s size (although this is also possible with multi-camera-head rigs). The data rates for doing all this are insane, but a camera announced by Lytro promises just this. It requires a refrigerator-sized box to be dragged around with you, to handle the many terabytes of data!

Alx Klive
Alx Klive begged for his first Super 8 at the age of 7 and his first computer at the age of 10, and has been enthralled by cameras and computers ever since. He worked as a camera operator in the 1990s (for Bravo! and CNBC), was founder of the Millennium Photo Project, a crowdsourced effort to document the entire world in a single day, and established the create-your-own TV station platform WorldTV.com. He’s now Founder and Chief Architect at 360 Designs, a San Francisco-based VR company making the EYE™ VR camera and live VR streaming solutions. Photo Credit: Pat Johnson

 
