SpaceFit Sound: Tailoring Audio to Your Space
To truly optimize a TV’s sound, the listening environment and its acoustic properties must be taken into account. Samsung’s latest TVs come with the SpaceFit Sound feature, which leverages AI technology to assess the surrounding environment and adjust the sound appropriately. When enabled, the feature identifies factors in the room, such as the distance between the TV and the wall and the room’s acoustic properties, to measure how the TV’s sound is reflected. AI then enhances the sound accordingly.
SpaceFit Sound leverages AI to learn the acoustic properties of the environment and calibrates the sound accordingly.
“Traditionally, TVs use a set of dedicated sounds to check sound through the mic. SpaceFit Sound, on the other hand, utilizes real content to analyze viewing environments and modifies sound accordingly,” Kim explained. “The technology was designed not only for real scenarios but also real-time circumstances as the feature automatically and conveniently adjusts sound.”
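Samsung has not published the internals of this analysis, but the underlying idea can be sketched in a few lines: compare what the TV plays with what its microphone picks up to estimate how the room colors the sound, then compensate for that coloration. The function names, the simple band-wise comparison and the synthetic signals below are illustrative assumptions, not the actual SpaceFit Sound implementation.

```python
import numpy as np

def estimate_room_response(played: np.ndarray, captured: np.ndarray,
                           n_bands: int = 8) -> np.ndarray:
    """Roughly estimate per-band room gain by comparing the signal the TV
    played with what its microphone captured (illustrative only)."""
    spectrum_played = np.abs(np.fft.rfft(played))
    spectrum_captured = np.abs(np.fft.rfft(captured))
    bands_played = np.array_split(spectrum_played, n_bands)
    bands_captured = np.array_split(spectrum_captured, n_bands)
    # A ratio above 1 means the room (walls, furniture) boosts that band.
    return np.array([c.mean() / (p.mean() + 1e-12)
                     for p, c in zip(bands_played, bands_captured)])

def compensation_gains(room_gain: np.ndarray) -> np.ndarray:
    """Invert the measured coloration, clamped to a safe range, so the sound
    at the listening position moves back toward the original mix."""
    return np.clip(1.0 / (room_gain + 1e-12), 0.5, 2.0)

# Toy usage: synthetic signals stand in for real content and the mic capture.
rate = 48_000
t = np.arange(rate) / rate
played = np.sin(2 * np.pi * 440 * t)
captured = 0.6 * played + 0.2 * np.roll(played, 480)  # crude wall reflection
print(compensation_gains(estimate_room_response(played, captured)))
```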
Samsung’s SpaceFit Sound has become the industry’s first technology to earn the Spatial Sound Optimization certification from VDE.
In recognition of the company’s dedication to innovation and user-centric design, Samsung’s SpaceFit Sound also became the first audio technology to receive the Spatial Sound Optimization certification from Verband Deutscher Elektrotechniker (VDE) in 2021.
Q-Symphony 3.0: Harnessing Hyperconnectivity for Three-Dimensional Sound
Whether it’s through bigger screens or deeper acoustics, today’s viewers desire more cinematic experiences. Q-Symphony is a proprietary technology from Samsung that orchestrates a harmonious interplay between a TV’s speakers and a connected soundbar, resulting in a richer, more vibrant audio experience. As the name suggests, the feature allows two audio outputs to synchronize, similar to a carefully conducted concerto, creating a unified, immersive soundstage. More specifically, the soundbar plays primary audio channels, while the TV speakers add background and surround sounds to create a dynamic and three-dimensional audio experience.
Q-Symphony allows soundbar and TV speakers to work in perfect sync, producing harmonious blends of sound for immersive experiences.
Despite its apparent conceptual simplicity, Q-Symphony leverages a wide range of AI technologies to produce and synchronize sound with such accuracy. Any gaps in sound level between the TV speakers and the soundbar must be precisely calibrated to prevent unwanted echoes and achieve perfect audio harmony. Over the years, Samsung engineers have worked tirelessly to refine this technology. Q-Symphony 1.0, which originally utilized the TV’s top speakers, evolved into Q-Symphony 2.0, which controls all speakers with improved channel separation technology for a greater sense of depth and immersion.
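The calibration step described above boils down to two measurable quantities: how far the TV’s output lags or leads the soundbar’s, and how loud each output is relative to the other. A minimal sketch of that alignment, using cross-correlation for timing and RMS levels for loudness, might look like the following. It is a simplified stand-in under stated assumptions, not the algorithm Samsung ships.

```python
import numpy as np

def align_outputs(tv_signal: np.ndarray, soundbar_signal: np.ndarray,
                  sample_rate: int) -> tuple[float, float]:
    """Estimate the delay (ms) and gain needed to keep the TV speakers in
    step with the soundbar. A positive delay means the TV output trails the
    soundbar; the gain is the factor that matches the TV's loudness to it."""
    # The cross-correlation peak marks the timing offset between the outputs.
    corr = np.correlate(tv_signal, soundbar_signal, mode="full")
    lag = int(corr.argmax()) - (len(soundbar_signal) - 1)
    delay_ms = 1000.0 * lag / sample_rate
    # Match loudness by comparing RMS levels.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(soundbar_signal) / (rms(tv_signal) + 1e-12)
    return delay_ms, gain

# Toy usage: the TV output is a quieter copy of the soundbar's, 5 ms late.
rng = np.random.default_rng(1)
soundbar = rng.standard_normal(8_000)
tv = 0.5 * np.concatenate([np.zeros(240), soundbar[:-240]])
delay_ms, gain = align_outputs(tv, soundbar, sample_rate=48_000)
print(f"delay {delay_ms:.1f} ms, gain {gain:.2f}")  # about 5.0 ms and 2.0
```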
The latest Q-Symphony 3.0 takes sound to the next level by integrating the neural processor with AI-based real-time voice separation technology. This advanced Q-Symphony feature provides three-dimensional sound by distinguishing and optimizing various audio elements, including voices, background music and sound effects, based on the type of content and the user’s volume settings. The resulting sound faithfully reproduces the audio track as the creators intended.
Samsung’s AI algorithm can also take the input signals and play them through multiple channels, whether it’s the soundbar or all the TV speakers, customizing each channel for powerful and dynamic sounds. In addition to content featuring Dolby Atmos or 5.1-channel audio, content with regular stereo channels can also be processed to create 20 individual channels. This means that any media can be delivered with exceptionally immersive sound quality on Samsung TVs and soundbars.
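How a mix might be split into a dialogue-like component and an ambience component, and then routed to different outputs, can be illustrated with a classic mid/side decomposition. This is only a rough analogy for what the article describes: Q-Symphony 3.0 uses AI-based voice separation on the TV’s neural processor and far more output channels, whereas the split_and_route function below is a hypothetical, simplified example.

```python
import numpy as np

def split_and_route(stereo: np.ndarray) -> dict[str, np.ndarray]:
    """Split a stereo mix into a centered component (where dialogue usually
    sits) and a side component (ambience and effects), then route them to
    different outputs. A crude stand-in for AI-based voice separation."""
    left, right = stereo[:, 0], stereo[:, 1]
    mid = 0.5 * (left + right)    # correlated content: dialogue, lead music
    side = 0.5 * (left - right)   # decorrelated content: ambience, effects
    return {
        "soundbar_center": mid,   # keep dialogue anchored to the screen
        "tv_left": side,          # spread ambience across the TV speakers
        "tv_right": -side,
    }

# Toy stereo buffer: a centered "voice" plus wide, decorrelated noise.
rng = np.random.default_rng(0)
voice = np.sin(2 * np.pi * 200 * np.arange(48_000) / 48_000)
noise = 0.1 * rng.standard_normal(48_000)
stereo = np.stack([voice + noise, voice - noise], axis=1)
levels = {name: float(np.abs(sig).mean())
          for name, sig in split_and_route(stereo).items()}
print(levels)  # the center channel dominates because the "voice" is centered
```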
“Q-Symphony is a revolutionary algorithm by Samsung that can masterfully synchronize audio volume and timing in perfect harmony,” said Kibeom Kim.
“The sound offered through Q-Symphony is so immersive that viewers feel as though they are physically present on set with so many different background sounds coming alive through the feature,” said Kim.
Unifying Picture and Sound for an Optimal Viewing Experience
Today, AI sits at the core of Samsung’s audio strategy, with its applications extending across numerous products. This widespread adoption of AI has resulted in features like Q-Symphony and SpaceFit Sound that enrich the audio experience, alongside other audio technologies that breathe life into content through dialogue and movement. Additional capabilities include Active Voice Amplifier, which adjusts and improves dialogue clarity with the speaker and surrounding noise in mind; Human Tracking Sound, which dynamically reproduces sound based on the position of the on-screen speaker; and OTS Pro, which creates a dynamic soundscape that follows the movement of objects or speakers on the screen.
These features are the outcome of two symbiotic AI technologies: a content analysis model and a sound separation model. The neural processors work with both auditory and visual cues, among other signals, to create the perfect audio experience that syncs with what is happening on the screen. Despite the complexity of this process, Samsung was able to bring these features to life by forming a cross-departmental team of engineers and pooling resources across picture, sound and other departments.
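As a concrete illustration of how the two models could interact, consider Human Tracking Sound: once the sound separation model has isolated the dialogue and the content analysis model has located the speaker on screen, the dialogue only needs to be steered toward that position. The sketch below uses simple constant-power panning; the function name and the way the position is supplied are assumptions for illustration, not Samsung’s implementation.

```python
import numpy as np

def pan_to_position(dialogue: np.ndarray, x_position: float) -> np.ndarray:
    """Constant-power panning of a separated dialogue track toward the
    on-screen speaker's horizontal position (0.0 = far left, 1.0 = far
    right). In the real feature, the position would come from the TV's
    video analysis model rather than being passed in by hand."""
    angle = x_position * np.pi / 2                  # map [0, 1] to [0, pi/2]
    left_gain, right_gain = np.cos(angle), np.sin(angle)
    return np.stack([left_gain * dialogue, right_gain * dialogue], axis=1)

# Example: the detected speaker stands slightly right of center.
dialogue = np.sin(2 * np.pi * 220 * np.arange(4_800) / 48_000)
stereo_out = pan_to_position(dialogue, x_position=0.65)
print(stereo_out.shape)  # (4800, 2): the right channel is slightly louder
```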
(From left) Seongsu Park, Sunmin Kim and Kibeom Kim at the Sound Device Lab are on a mission to capture audio just as the artists intended.
The Future of Sound: Reshaping Audio Experiences With Samsung’s Sound Device Lab
In the Sound Device Lab, there is a clear and unwavering goal that guides every innovation: recreate the original sound just as the artists intended, with consumers top of mind. AI is a critical tool that enables Samsung to do this.
Sunmin Kim, who heads the Sound Device Lab, believes that sound is just one side of the coin: “The focus on sound quality is a given, but all innovations that made breakthroughs came from user-centric design. Sound settings and features need to be straightforward and user friendly.”
Seongsu Park noted that 70% of TV sound comes from the product while the remaining 30% is shaped by the space in which the sound is played: “Our products will continue to leverage the latest measurement systems and AI algorithms to analyze space and sound settings for optimal sound quality.”
On a final note, Kibeom Kim also shared the Sound Lab’s ambition in the era of multi-device sound experiences initiated by the Q-Symphony feature: “There is unlimited potential in Q-Symphony to produce audio that works in perfect harmony. We will continue to improve inter-device connectivity and enhance the feature for our users.”