Greetings.

My name is Michael and welcome to my portfolio website and blog. Here I document my adventures in cinematic music creation and more.

Hope you have a nice stay!

Dolby Atmos, Spatial Audio, Netflix Spatial (using Sennheiser’s AMBEO); speaker configurations in 2.1.2, 5.1.4, 7.1.4, 9.1.2, 9.1.4, 13.16.6… It’s confusing right?

I’ve been having many conversations about immersive sound lately, and it seems there is a great deal of confusion around the technology involved. Rightly so: it’s fascinatingly complicated. “Spatial Audio” and “Dolby Atmos” are essentially two different technologies which can, and often do, work together, but can also function independently of one another.

Dolby Atmos is a revolutionary, scalable immersive audio technology which, at the higher end, requires multiple speakers including height channels (speakers on the ceiling) to emit sound from the left, centre, right, left side, right side, left rear, right rear and above. These discrete (individual) channels can then be addressed individually by audio engineers and filmmakers to effectively move objects within your “space”… The technology uses beds (fixed-channel mixes) and objects (metadata-based audio) within the mix to define your immersive, spatial experience… While the beds don’t “move”, as they are traditionally pre-mixed channels, objects can be distributed freely across all speakers in the multichannel environment.

A ‘Bed’ is the ‘channel-based’ main output bus of, say, 7.1.2 (Left, Centre, Right, LFE, side and rear surrounds, and a pair of overhead surround channels which have no front/back separation, just a stereo overhead pair).
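The 7.1.2 bed described above can be sketched as a simple channel list. This is an illustrative labelling only, assuming common channel abbreviations, and is not an official Dolby specification.

```python
# Illustrative 7.1.2 bed layout: 7 ear-level speakers, 1 LFE, 2 overheads.
# Channel names here are assumed abbreviations, not an official spec.
BED_7_1_2 = [
    "L", "C", "R",    # front: Left, Centre, Right
    "Lss", "Rss",     # side surrounds
    "Lrs", "Rrs",     # rear surrounds
    "LFE",            # low-frequency effects (the ".1")
    "Ltm", "Rtm",     # overhead stereo pair (the ".2")
]

def channel_count(layout):
    """Total speaker feeds in a bed layout."""
    return len(layout)
```

Counting the feeds (7 + 1 + 2) gives the ten channels the name implies.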

An ‘Object’ has far more accurate panning, as it’s governed by metadata rather than channel-based rules. This allows sound engineers to place audio within a given virtual 3D space. You can have up to 128 “objects” within the 3D virtual environment. Given that Atmos can support 64 speakers, audio objects move through whichever speakers the Atmos renderer deems closest to where the object should be. Should a bee fly all around the room, at some point it will be heard from each of those speakers in turn, alongside the fixed position of the “Bed”.
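The idea of an object as audio plus position metadata can be sketched as follows. This is a toy proximity-weighted renderer under assumed speaker positions; a real Atmos renderer is far more sophisticated.

```python
import math

# Hypothetical speaker positions (x, y, z) in a unit room, for illustration.
SPEAKERS = {
    "front_left":  (-1.0,  1.0, 0.0),
    "front_right": ( 1.0,  1.0, 0.0),
    "rear_left":   (-1.0, -1.0, 0.0),
    "rear_right":  ( 1.0, -1.0, 0.0),
    "top_left":    (-1.0,  0.0, 1.0),
    "top_right":   ( 1.0,  0.0, 1.0),
}

def object_gains(position, speakers=SPEAKERS):
    """Distribute an object's level across speakers by proximity:
    speakers closest to the object's metadata position get the most gain."""
    weights = {
        name: 1.0 / (math.dist(position, pos) + 1e-6)  # avoid divide-by-zero
        for name, pos in speakers.items()
    }
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}
```

As the bee's position metadata sweeps around the room, the largest gain migrates from speaker to speaker, which is why it is eventually heard from each of them.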

In contrast, Spatial Audio (in the Apple context) is its own immersive sound technology that uses the various sensors (specifically the accelerometers and gyroscopes) in Apple's AirPods Pro or AirPods Max to track the listener's head movements. It then creates a virtual space based on the listener's head and the device they are listening from.
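The head-tracking principle can be reduced to one line of arithmetic: the renderer subtracts the head's rotation from the source's room angle so the sound stays anchored to the device rather than to your head. This is an assumed simplified model, not Apple's actual algorithm.

```python
# Assumed head-tracking model: azimuths in degrees, 0 = straight ahead,
# positive = to the listener's right, wrapped into (-180, 180].
def perceived_azimuth(source_azimuth_deg, head_yaw_deg):
    """Angle of the source relative to the listener's nose after the
    renderer compensates for head rotation."""
    return (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
```

Turn your head 30 degrees towards a source sitting 30 degrees to your right and it lands dead ahead, exactly as a real loudspeaker would.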

Netflix Spatial uses Sennheiser’s AMBEO 2-Channel Spatial Audio renderer to translate both surround (5 or more speakers) and immersive (Atmos) mixes into a two-channel audio experience, adding a wider “space” to your sound within the stereo field.

If we consider that “objects” within an Atmos mix are essentially stored as metadata alongside audio files, it’s easier to understand how “Spatial” technologies then “spatialise” the sound using the principles of psychoacoustics via headphones, or use the metadata for an “enhanced” stereo experience.
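One of those psychoacoustic principles, the level difference between the two ears, can be shown with a toy constant-power panner that folds a positioned source into a stereo field. Real binaural renderers also model timing and spectral cues (HRTFs); this sketch covers level only.

```python
import math

# Toy constant-power pan law: map an azimuth in [-90, +90] degrees
# (full left to full right) onto left/right gains whose squared sum
# stays at 1, keeping perceived loudness steady as the source moves.
def constant_power_pan(azimuth_deg):
    """Return (left_gain, right_gain) for a mono source."""
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)  # 0..pi/2
    return math.cos(theta), math.sin(theta)
```

A centred source gets equal gains in both channels; sweeping the azimuth slides the sound across the stereo image without a dip in level, which is the “enhanced” stereo effect in miniature.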

Is TikTok About To Dominate?