A few times a year, our developers dedicate half a day to learning about something specific. We do this by proverbially getting our hands dirty with some coding exercises. We call this a “Code Camp,” and it’s a long-standing, appreciated tradition here at Bontouch.
This time, the topic at hand was spatial computing and, more specifically, Apple visionOS development. Stick around, and we’ll talk about the exercise and some of our takeaways.
Content in visionOS apps can be presented either in 2D windows, essentially like a giant iPad floating next to you, or as part of the user’s surroundings in 3D, in what Apple refers to as an immersive space.
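To make the distinction concrete, here’s a minimal sketch of how the two presentation styles are declared in a visionOS app’s scene definition (the space identifier and the placeholder content are our own inventions, not from any particular app):

```swift
import SwiftUI
import RealityKit

@main
struct SpatialApp: App {
    var body: some Scene {
        // A conventional 2D window, much like a floating iPad screen.
        WindowGroup {
            Text("Hello from a 2D window")
        }

        // An immersive space that places RealityKit entities
        // directly in the user's surroundings.
        ImmersiveSpace(id: "GameSpace") {
            RealityView { content in
                // A placeholder sphere, half a meter in front of the user.
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.1),
                    materials: [SimpleMaterial(color: .white, isMetallic: false)]
                )
                sphere.position = [0, 1.2, -0.5]
                content.add(sphere)
            }
        }
    }
}
```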
For a typical iOS developer, writing a windowed visionOS app is surprisingly similar to how one would go about writing a regular iPhone app. The main difference is that the size of the window may change significantly, and it hovers. Granted, these are both unfamiliar phenomena for an iPhone app developer, but how you go about the development is anything but. You use the same frameworks and largely the same methods.
Writing an immersive space app, on the other hand, is a much more exotic endeavor. For most iOS developers, it means learning Apple’s 3D framework, RealityKit, while simultaneously trying to recall the matrix transforms from their first-year linear algebra class.
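To give a flavor of what that entails, here’s a small sketch (ours, purely illustrative) of placing a RealityKit entity using an explicit transform; positions are in meters, and rotations are quaternions:

```swift
import RealityKit
import simd

// Place an entity half a meter in front of the user at roughly
// table height, rotated 45 degrees around the vertical (y) axis.
func placeBoard(_ board: Entity) {
    board.transform = Transform(
        scale: .one,
        rotation: simd_quatf(angle: .pi / 4, axis: [0, 1, 0]),
        translation: [0, 0.9, -0.5]
    )
}
```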
For our code camp, we decided to dive into the deep end and get right to writing an immersive space app, complete with 3D models, physics, and spatial audio.
The exercise consisted of building a fairly modest Connect 4 game, including a 3D plastic frame to hold the chips. The game itself wasn’t particularly important to the exercise. It only served as a reason to interact with 3D content.
We wrote the game logic and the data model of the game ahead of time so that more time could be spent on the visionOS end during the actual code camp.
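The specifics of our model don’t matter here, but something along these lines (illustrative, not our actual code) is enough to drive the 3D scene:

```swift
// A minimal Connect 4 model, illustrative rather than our actual code.
enum Chip { case red, yellow }

struct Board {
    static let columns = 7, rows = 6

    // Each column is a stack of optional chips, indexed from the bottom.
    private(set) var grid = [[Chip?]](
        repeating: [Chip?](repeating: nil, count: Board.rows),
        count: Board.columns
    )

    // Drops a chip into a column and returns the row it landed in,
    // or nil if the column is already full.
    mutating func drop(_ chip: Chip, inColumn column: Int) -> Int? {
        guard let row = grid[column].firstIndex(where: { $0 == nil }) else {
            return nil
        }
        grid[column][row] = chip
        return row
    }
}
```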
We even included an AI player of very questionable quality to play the game against. Not to brag or anything, but I personally beat the AI player nine times out of ten (and don’t even get me started on what happened that 10th time; I’m pretty sure the AI player cheated).
There’s only so much you have time to learn in half a day, so parts of the visionOS code were also prepared beforehand. The exercises were designed to provide a whirlwind tour of visionOS and the RealityKit framework, covering several different features. Each exercise described a piece of missing functionality that the developers would then add to eventually complete the app.
We learned many things, both during the code camp and while preparing for it. Far more than we can get into now, in fact, but here’s a quick rundown of a few of our more pertinent observations.
We are fortunate enough to have a couple of Vision Pro headsets lying around the office. For visionOS development, having access to the hardware is paramount: it makes it far easier to nail the interactions and the feel of the app than working in the visionOS simulator alone.
Entering a 3D world of your own creation is a thrilling experience. Unfortunately, it’s also an experience that cannot fully be conveyed to someone else; 2D videos just don’t do the sense of presence justice. This is more of an observation about VR/AR in general, but as the saying goes, you had to be there.
If you are unfamiliar with the world of 3D model file formats, you’d be excused for assuming that your 3D artist could simply create a 3D model in Maya, Blender, or similar 3D modeling software and just save it to a file that you could then immediately use in your app. Sadly, this is often not so.
Apple uses a file format created by Pixar known as Universal Scene Description (or USD for short), and it’s slowly being adopted by the 3D software community. The tooling support for USD is still nascent, and the quality and availability of that support vary.
Apple provides rudimentary tools for converting more common 3D file formats, such as FBX, to USD. Unfortunately, the process is more manual, and more fraught with complications, than we would like.
The good news is that Apple’s converter tools are still labeled as beta, and hopefully, Apple will have something more robust up its sleeve to share with the developer community.
Just like in regular iOS apps, polished animations in carefully selected places can really sell the sense of quality in a visionOS app.
For more advanced animations, there’s a strong case for employing the services of a professional 3D animator; it really does make that big of a difference.
There is a technical quirk in RealityKit’s current form: it supports only one animation per USD model file. If you want to animate a 3D model in two different ways, you have to save the entire model twice, one animation per file. This wastes loading time and memory, since the model has to be loaded twice, even though the USD format itself can store multiple animations without duplicating the mesh data.
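In practice, the workaround looks something like the sketch below, where the same model is exported once per animation and each copy carries the single clip it needs (the file names are hypothetical):

```swift
import RealityKit

// Hypothetical workaround: the same chip model exported twice,
// once per animation, because RealityKit currently loads only
// one animation per USD file.
func loadAnimatedChips() throws -> (drop: Entity, win: Entity) {
    let droppingChip = try Entity.load(named: "Chip_DropAnimation")
    let winningChip = try Entity.load(named: "Chip_WinAnimation")

    // Each copy exposes exactly one clip in availableAnimations.
    if let dropClip = droppingChip.availableAnimations.first {
        droppingChip.playAnimation(dropClip)
    }
    return (droppingChip, winningChip)
}
```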
For VR experiences where the “real world” is replaced with a fully virtual environment, you would typically rely on visual cues, such as dynamic shadows and light sources, to sell the illusion of 3D objects being grounded in the virtual world. Unfortunately, light sources that cast shadows are not available in RealityKit on visionOS.
In AR apps, this is fine, because you’d want the lighting to match the real-world environment, which visionOS handles admirably well. RealityKit also supports shadows that are projected onto the ground in AR, which helps.
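Opting an entity into those grounding shadows is a single component. A minimal sketch, assuming a chip entity:

```swift
import RealityKit

// Opt a model entity into RealityKit's grounding shadow, which
// visionOS renders against the surfaces beneath the entity.
func enableGroundingShadow(on chip: ModelEntity) {
    chip.components.set(GroundingShadowComponent(castsShadow: true))
}
```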
There is hope, however! RealityKit on iOS does support light sources and shadow maps. It wouldn’t surprise us if these features also make it to RealityKit for visionOS at some point.
For many mobile app developers, sound is not high on the list of an app’s priorities. For visionOS apps, the situation is different: in some sense, spatial audio plays the role in visionOS that haptics play in iPhone apps.
While you can’t feel the buttons you press in visionOS, spatial audio augments the experience and can provide immediate feedback that is picked up by a different human sense, in lieu of the sense of touch.
Ambient soundscapes also do more for immersion than you might think. In fact, we think good sound design is one of the most important parts of visionOS app development.
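As a rough idea of what playing a spatialized sound effect can look like in RealityKit (the file name and gain value here are placeholders of ours):

```swift
import RealityKit

// Hypothetical example: play a one-shot sound from an entity's
// position, so the effect appears to come from the board itself.
func playChipSound(from entity: Entity) throws {
    // Gain is in decibels; 0 is unity gain.
    entity.components.set(SpatialAudioComponent(gain: -6))

    let resource = try AudioFileResource.load(named: "ChipDrop.wav")
    entity.playAudio(resource)
}
```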
While visionOS and its accompanying tools and frameworks can sometimes seem rough around the edges, it’s important to remember that visionOS 1 is the first iteration of a new operating system for a new class of Apple devices.
Historically, the first generation of an Apple product has often seemed like a proof-of-concept prototype: a preview of what’s to come once the potential has been fully realized. The following generations then evolve rapidly. There’s no reason to think that visionOS and the hardware it runs on won’t follow the same trend.
We have learned a lot about the many exciting features of visionOS, and also about some of its frustrating quirks. Apple is still just getting started, and we’re excited to see where they plan to take visionOS and the hardware. Wherever they’re going, you can be sure we plan to be in the thick of it!