Spatial Journalism in Mixed Reality

Company: The New York Times

Role: Senior R&D Software Engineer

Tech: VR/MR, Web

Tools: Unity, C#, WebGL, WebXR, Three.js

Hardware: Apple Vision Pro, Meta Quest 3, Meta Quest Pro

Overview:

The New York Times wanted to explore storytelling in Mixed Reality on the Apple Vision Pro and to see what was currently possible with visionOS + Unity. A second goal was to build experiences in Unity that could run on both the Apple Vision Pro and Meta Quest headsets. A walkthrough of the prototype development can be found here.


The video above is an example of an MR story prototype that I worked on for the Apple Vision Pro.

The process of learning how to build for the Apple Vision Pro was broken down into the following steps:

  1. Gathering older Mixed Reality NYT content built on other platforms and porting those experiences to the Apple Vision Pro with Unity.

  2. Documenting the workflow of porting those experiences.

  3. Building out a slice of an MR story using media gathered from the Newsroom.

  4. Porting the completed AVP prototype to run on the Meta Quest 3.

For Step 1, I was tasked with porting an MR experience from a few years ago, originally built for the Quest 2 (with an older Interaction SDK), to the Apple Vision Pro. The goal was to learn where the two platforms overlap and where they diverge, while getting a better understanding of how to develop for visionOS via Unity's PolySpatial package. In this process, I learned which features were available in visionOS 1.0 through Unity, as well as some pain points (e.g., many of the original materials/shaders were not compatible with visionOS, so I had to rebuild them in Shader Graph as visionOS-compatible versions).
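As a rough illustration of one porting pattern this involved, the sketch below uses Unity's UNITY_VISIONOS scripting define to assign a rebuilt, visionOS-compatible Shader Graph material on Vision Pro builds while keeping the original material elsewhere. This is a hypothetical simplification (the component and field names are mine, not the NYT code):

```csharp
using UnityEngine;

// Hypothetical helper: picks a per-platform material at load time so one
// prefab can carry both the original Quest material and a visionOS-safe
// Shader Graph rebuild of it.
public class PlatformMaterialSwap : MonoBehaviour
{
    [SerializeField] private Material questMaterial;     // original material
    [SerializeField] private Material visionOSMaterial;  // Shader Graph rebuild

    private void Awake()
    {
        var rend = GetComponent<Renderer>();
#if UNITY_VISIONOS
        rend.material = visionOSMaterial; // Vision Pro / PolySpatial build
#else
        rend.material = questMaterial;    // Quest and other targets
#endif
    }
}
```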

For Step 2, I wrote thorough documentation breaking down the conversion process, so that if any other developers had to go through it in the near or distant future, they would have a guide to follow.

For Step 3, I created a new MR experience from a fresh project using available NYT media, both to explore visionOS from scratch and to experiment with various types of storytelling in mixed reality. The video above demonstrates only a small sliver of that experience. Within it, different media types (traditional text, traditional video, spatial video, spatial audio, 3D models, Gaussian Splats, etc.) were explored. The experience follows a main linear timeline with juncture points that allow the user to take deep dives/side quests into more detailed and granular aspects of the larger story.
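The timeline-with-junctures structure can be sketched as a small data model: a linear sequence of story beats, some of which expose optional branches that play and then return control to the main line. This is a minimal hypothetical sketch, not the production code:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical story model: a main linear timeline whose beats can carry
// optional "juncture" branches (deep dives/side quests).
public class StoryBeat
{
    public string Id;
    public Action Play;                              // plays this beat's media
    public List<StoryBeat> Junctures = new List<StoryBeat>();
}

public class StoryTimeline
{
    private readonly List<StoryBeat> beats;
    private int index;

    public StoryTimeline(List<StoryBeat> beats) { this.beats = beats; }

    public StoryBeat Current => beats[index];

    // Advance along the main linear timeline.
    public bool Next()
    {
        if (index + 1 >= beats.Count) return false;
        index++;
        Current.Play?.Invoke();
        return true;
    }

    // Detour into a side quest; afterwards the user resumes at Current.
    public void EnterJuncture(int junctureIndex)
    {
        Current.Junctures[junctureIndex].Play?.Invoke();
    }
}
```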