Introducing: Zubr VR Mixed Reality Video Studio

You might think that we spend most of our time developing AR and VR applications for smartphones – but we actually do plenty of experimental R&D with different cameras, scanners, VR headsets, inputs and physical setups. It’s all part of the philosophy of pioneering new ways of making content, and continuing to create accessible experiences with off-the-shelf hardware, which we then feed into our other projects.

One of these research projects is our ongoing exploration into a Mixed Reality Video Studio.

 

Meet Mixed Reality Video

Making a mixed reality video generally means creating a video which makes it look like the VR user is actually placed inside the virtual world they are seeing.

As more and more studios create awesome VR content to show off, the popularity of producing mixed reality videos has risen sharply. For most purposes, a simple green screen setup and basic camera synchronisation will produce a pretty sweet video. But what about making something a bit more advanced, where you want to embed the video footage right into the middle of the scene, with virtual objects not only behind but also in front of the person?

As with any video production, you can always push your footage through a conventional post-production pipeline, spend some solid hours compositing it with foreground elements in Adobe After Effects, and end up with something which is broadcast-worthy. That’s fine, but it can easily become restrictively expensive, and, well, we’re not a video post-production house.

 

Realtime leads the way! Again!

Which brings us to the next possibility: compositing video footage directly into the VR game engine, in real-time.
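The core trick is easy to state: for each pixel, compare the depth reported by the physical camera against the depth of the virtual scene, and show whichever is closer. Here’s a minimal sketch of that idea in Python/NumPy – purely illustrative, not our engine code, and every input name here is assumed:

```python
import numpy as np

def composite(video_rgb, video_depth, virtual_rgb, virtual_depth, key_mask):
    """Per-pixel depth composite of live footage into a rendered frame.

    video_rgb     : HxWx3 uint8  colour frame from the physical camera
    video_depth   : HxW   float  metric depth per pixel from the depth camera
    virtual_rgb   : HxWx3 uint8  colour render from the game engine
    virtual_depth : HxW   float  depth buffer from the same virtual camera
    key_mask      : HxW   bool   True where the green screen has been keyed
                                 out, i.e. pixels belonging to the subject
    """
    # Show the filmed subject only where it has been keyed in AND it sits
    # nearer to the camera than the virtual geometry at that pixel.
    subject_in_front = key_mask & (video_depth < virtual_depth)
    out = virtual_rgb.copy()
    out[subject_in_front] = video_rgb[subject_in_front]
    return out
```

Run that comparison every frame and you get virtual objects sitting both behind and in front of the person, with no After Effects in sight.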

We’ve been playing with depth-sensing cameras such as the Microsoft Kinect since the beginning. We have Kinects, Zedcams, Realsenses and Tangos permanently scattered all over our studio, being put to use for game inputs, volumetric video capture, 3D scanning…you get the idea. So why not use depth-sensing abilities for compositing mixed reality videos?

Well, that’s what we thought when we started using the Kinect V2 for some early efforts at realtime MR compositing. However, the ‘bubbly’ noise produced by the depth image, coupled with the infrared interference with the HTC Vive, made the Kinect V2 a difficult choice for this. That’s why we started looking into Stereolabs’ ZED camera – which calculates depth values from the separation between its two RGB cameras.
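For the curious, stereo depth boils down to similar triangles: depth = (focal length × baseline) / disparity. The rough illustration below uses OpenCV’s basic block matcher as a stand-in – the ZED SDK does all of this internally with its own, far more sophisticated matching and calibration, and the numbers here are placeholders:

```python
import cv2
import numpy as np

# Placeholder calibration values -- real figures come from the camera's
# factory calibration, not from guesswork like this.
FOCAL_PX = 700.0    # focal length in pixels (assumed)
BASELINE_M = 0.12   # separation between the two lenses in metres (assumed)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far it shifts between the two
# views -- the disparity, in pixels.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

# Depth from similar triangles: depth = focal * baseline / disparity.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
```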

Clearly, many people around the world are exploring similar ideas. Most notably, at the same time that we were getting stuck into it, the geniuses at Owlchemy Labs – a VR games company in Texas – blew away any expectations of what can be achieved with realtime depth compositing with the video clips and explanations on their Mixed Reality Tech blog. With their VR game Job Simulator being a massive hit across all the high-end VR devices, these guys have clearly made a priority out of finding an intuitive way to show audiences what it looks like to see a person immersed in their virtual scenes.

 

 

Anyway – the ZED cam does a very nice job of producing a realtime depth map. It isn’t perfect – part of the depth calculation is an algorithm which basically smooths/estimates some depth values. Notice the soft, blurred areas in the depthmap above. But for the most part, it’s very good. That is not to say it is in any way easy to make this thing work how we wanted it to – blimey!
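To give a flavour of why those soft patches appear: wherever the matcher can’t find a confident depth value, something has to fill the gap. The toy clean-up pass below is nothing like Stereolabs’ actual algorithm, but it shows the general fill-and-smooth idea:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def fill_and_smooth(depth_m):
    """Toy depth clean-up: copy each missing pixel from its nearest valid
    neighbour, then blur gently. Real stereo pipelines estimate the gaps
    far more cleverly -- which is exactly what produces those soft,
    'estimated' regions in the ZED depth map."""
    invalid = depth_m <= 0                      # pixels the matcher gave up on
    _, (iy, ix) = distance_transform_edt(invalid, return_indices=True)
    filled = depth_m[iy, ix]                    # nearest-neighbour hole fill
    return gaussian_filter(filled, sigma=1.5)   # mild smoothing pass
```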

So, we mounted this camera on a special rig along with an HTC Vive controller (for positional tracking, which keeps it synchronised with the position of the virtual camera), an Android smartphone (on which we render a realtime virtual viewfinder app so the cameraperson can see what they’re filming), and a game controller (for the cameraperson to adjust virtual zoom, exposure and lighting).
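Keeping the real and virtual cameras locked together is mostly a matter of reading the tracked controller’s pose every frame and applying a fixed, once-measured offset from the controller to the camera’s optical centre. A bare-bones sketch – illustrative only, with made-up offset values and names:

```python
import numpy as np

# Fixed transform from the tracked controller to the camera's optical centre,
# measured once during calibration. These numbers are placeholders.
CONTROLLER_TO_LENS = np.array([
    [1.0, 0.0, 0.0,  0.00],
    [0.0, 1.0, 0.0, -0.05],   # e.g. lens sits ~5 cm below the controller
    [0.0, 0.0, 1.0,  0.08],   # ...and ~8 cm in front of it
    [0.0, 0.0, 0.0,  1.00],
])

def virtual_camera_pose(controller_pose_world):
    """Given the controller's tracked 4x4 world pose for this frame, return
    the pose the virtual camera should render from, so the physical and
    virtual viewpoints stay in sync."""
    return controller_pose_world @ CONTROLLER_TO_LENS
```

Each frame the result is handed to the game engine’s camera, which is what keeps the composite believable while the cameraperson moves around.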

Have a look for yourself:

 

Our progress so far

 

 

  • Composites video directly into the game engine in realtime
  • MR Camera Rig includes depth camera, HTC Vive tracker, input controller and realtime virtual viewfinder
  • The camera operator can adjust virtual zoom, exposure and lighting controls from the physical camera rig
  • The subject both casts and receives light and shadows from its virtual surroundings
  • The subject is correctly depth-sorted in the scene, even behind transparent and translucent objects

Current areas being worked on:

  • Our greenscreen/keying setup is a quick fix – this is the reason for the dodgy edges in the video, NOT the depth feed! So, basically, we need to upgrade our keying facility (see the sketch after this list for the kind of bare-bones key we mean)
  • Lighting and shadows can be smoother
  • Calibration needs to be easier
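For context, the kind of ‘quick fix’ key we’re talking about is little more than a hue threshold plus a bit of clean-up. A minimal OpenCV sketch – not our actual pipeline, and the green range is chosen arbitrarily:

```python
import cv2
import numpy as np

def quick_green_key(frame_bgr, lower=(35, 60, 60), upper=(85, 255, 255)):
    """Crude chroma key: flag every pixel whose hue falls inside the green
    band as background. Fine edge detail (hair, motion blur) is exactly
    what this throws away -- hence the dodgy edges."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    subject = cv2.bitwise_not(green)
    # Knock out speckle with a small morphological open.
    kernel = np.ones((3, 3), np.uint8)
    subject = cv2.morphologyEx(subject, cv2.MORPH_OPEN, kernel)
    return subject > 0   # boolean mask: True = keep (the person)
```

A proper keyer adds soft edges, spill suppression and better lighting on the screen itself – which is what we mean by upgrading the facility.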

 

Where to next

Increasing the general reliability and flexibility of the system is the next big step before we can see how it performs with real, non-demo content.

Adapting it further for TV/Broadcast users is an important one for us. We want real, human-world camera operators to feel at home with our absurd camera rig.

Integrating a 3D-scanned face overlay into the mixed reality result, inspired by Google’s experiments, is a great way of making sure we can still see the VR user’s face – we’re working on it.

Expanding usage to non-VR – At the moment we’re focusing on how this solution applies to virtual reality content. However, in the near future, we will be expanding its use cases to meet conventional practices in broadcasting and the wider media industries (think along the lines of the hilariously expensive BBC News Virtual Studio, but so versatile you can even use it in the field).

Interested in the Zubr Mixed Reality Video Studio?

We are interested in forming partnerships in the broadcast and media industries to help bring this solution to fruition. Please contact us if you’d like to learn more about the system, arrange a demonstration and perhaps work with us 🙂
