Effects of spectral manipulations of music mixes on musical scene analysis abilities of hearing-impaired listeners.

Bibliographic Details
Main Authors: Aravindan Joseph Benjamin, Kai Siedenburg
Format: Article
Language:English
Published: Public Library of Science (PLoS) 2025-01-01
Series:PLoS ONE
Online Access:https://doi.org/10.1371/journal.pone.0316442
collection DOAJ
description Music pre-processing methods are becoming a recognized area of research with the goal of making music more accessible to listeners with a hearing impairment. Our previous study showed that hearing-impaired listeners preferred spectrally manipulated multi-track mixes. Nevertheless, the acoustical basis of mixing for hearing-impaired listeners remains poorly understood. Here, we assess listeners' ability to detect a musical target within mixes with varying degrees of spectral manipulation using the so-called EQ-transform. This transform exaggerates or downplays the spectral distinctiveness of a track with respect to an ensemble average spectrum taken over a number of instruments. In an experiment, 30 young normal-hearing (yNH) and 24 older hearing-impaired (oHI) participants with predominantly moderate to severe hearing loss were tested. The target to be detected in the mixes came from one of the instrument categories Lead vocals, Bass guitar, Drums, Guitar, and Piano. Our results show that both hearing loss and target category affected performance, but there were no main effects of EQ-transform. yNH participants performed consistently better than oHI participants in all target categories, irrespective of the spectral manipulations. Both groups demonstrated the best performance in detecting Lead vocals, with yNH performing flawlessly at 100% median accuracy and oHI at 92.5% (IQR = 86.3-96.3%). In contrast, performance in detecting Bass was the worst among yNH (Mdn = 67.5%, IQR = 60-75%) and oHI (Mdn = 60%, IQR = 50-66.3%), with the latter performing close to the chance level of 50% accuracy. Predictions from a generalized linear mixed-effects model indicated that for every decibel increase in hearing loss level, the odds of correctly detecting the target decreased by 3%. Therefore, baseline performance progressively declined to chance level at moderately severe degrees of hearing loss, independent of target category.
The frequency-domain sparsity of mixes and larger differences between target and mix roll-off points were positively correlated with performance, especially for oHI participants (r = .3, p < .01). Performance of yNH participants, on the other hand, remained robust to changes in mix sparsity. Our findings underscore the multifaceted nature of selective listening in musical scenes and the instrument-specific consequences of spectral adjustments of the audio.
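The EQ-transform described above can be sketched as a simple scaling of a track's deviation from the ensemble-average spectrum in the log (dB) domain. This is a minimal illustration of that idea, assuming per-band dB spectra and a single scaling factor `alpha`; the exact formulation used in the paper may differ.

```python
def eq_transform(track_db, ensemble_db, alpha):
    """Scale a track's spectral deviation from the ensemble-average
    spectrum (both given as per-band levels in dB).

    alpha > 1 exaggerates the track's spectral distinctiveness,
    0 <= alpha < 1 downplays it, and alpha = 1 leaves it unchanged.
    Hypothetical formulation for illustration only.
    """
    return [a + alpha * (t - a) for t, a in zip(track_db, ensemble_db)]

# Toy example with three frequency bands: the track sits 3 dB below
# the ensemble average in the lowest band and 6 dB above it in the
# highest band.
ensemble = [-20.0, -25.0, -30.0]
track = [-23.0, -25.0, -24.0]

flattened = eq_transform(track, ensemble, alpha=0.5)  # halves the deviation
sharpened = eq_transform(track, ensemble, alpha=2.0)  # doubles the deviation
```

With `alpha = 0.5` the track's spectrum is pulled halfway toward the ensemble average; with `alpha = 2.0` its distinctive bands stick out twice as far, which is the "exaggerate" direction of the manipulation.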
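The reported odds ratio (a 3% decrease in the odds of correct detection per dB of hearing loss) compounds multiplicatively in a logistic model, which is why accuracy drifts down to chance at moderately severe hearing loss. A sketch of that arithmetic, using an illustrative baseline of 9:1 odds (90% accuracy) rather than the paper's fitted coefficients:

```python
import math

def predicted_accuracy(baseline_odds, hearing_loss_db, odds_ratio_per_db=0.97):
    """Accuracy predicted by a logistic (GLMM-style) model in which each
    dB of hearing loss multiplies the odds of a correct detection by
    0.97, i.e. a 3% decrease per dB.  baseline_odds is the odds of a
    correct response at 0 dB hearing loss.  Illustrative values only."""
    odds = baseline_odds * odds_ratio_per_db ** hearing_loss_db
    return odds / (1.0 + odds)

# Accuracy reaches chance (50%, odds 1:1) when the compounded factor
# cancels the baseline odds: hearing_loss = ln(baseline_odds) / ln(1/0.97).
chance_hl = math.log(9.0) / math.log(1 / 0.97)  # roughly 72 dB here
```

The point of the sketch is the shape of the decline, not the exact crossover: with different baseline odds the chance-level threshold shifts, but the 0.97-per-dB compounding always drives performance toward 50% eventually.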
id doaj-art-062aea516c6f4a1886d4e2bca16dd046
institution Kabale University
issn 1932-6203