IHR and Department of Electronic & Electrical Engineering, University of Strathclyde, PhD in Electronic and Electrical Engineering

Dynamic speech-based compression for hearing prostheses

with Prof. John Soraghan and Dr William Whitmer
j.soraghan@eee.strath.ac.uk, bill@ihr.gla.ac.uk

Successful communication through speech depends on the ability to follow the fast spectrotemporal information contained within the speech signal. This ability is often compromised by hearing loss. Modern hearing aids amplify sounds using dynamic compression. In some cases, the temporal characteristics of the compression are based on physiological models, but they are not specifically designed to preserve the temporal variations in speech. The current project aims to develop a dynamic compression scheme that provides optimal moment-to-moment speech output for hearing-impaired individuals.
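To illustrate the kind of processing involved, below is a minimal sketch of a feed-forward dynamic range compressor with separate attack and release smoothing of the level estimate; all parameter values and names are illustrative, not those of any particular hearing-aid algorithm.

```python
import numpy as np

def compress(x, fs, threshold_db=-40.0, ratio=3.0,
             attack_ms=5.0, release_ms=50.0):
    """Feed-forward dynamic range compressor (illustrative sketch).

    Levels above `threshold_db` are reduced by `ratio`; the running
    level estimate is smoothed with separate attack/release time
    constants, giving the time-varying gain characteristic of
    hearing-aid compression.
    """
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level_db = -100.0                       # running level estimate (dB)
    y = np.empty_like(x, dtype=float)
    for n, sample in enumerate(x):
        in_db = 20.0 * np.log10(max(abs(sample), 1e-10))
        # track rising levels quickly (attack), falling levels slowly (release)
        coef = att if in_db > level_db else rel
        level_db = coef * level_db + (1.0 - coef) * in_db
        over = max(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)
        y[n] = sample * 10.0 ** (gain_db / 20.0)
    return y
```

Shorter attack and release times track the speech envelope more faithfully but risk distorting its temporal modulations, which is precisely the trade-off the project addresses.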

The project will involve establishing an ideal algorithm for speech detection and speech enhancement using state-of-the-art methods such as local binary pattern analysis and empirical mode decomposition. Furthermore, we will identify the acoustic speech parameters affected by different hearing pathologies, and then apply spectrotemporal amplification to maximise speech intelligibility for a given individual.
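As a taste of the feature extraction involved, here is a minimal sketch of a 1-D local binary pattern operator; the specific variant used for voice-activity detection in the literature may differ (neighbourhood size, thresholding, and histogram framing are assumptions here).

```python
import numpy as np

def lbp_1d(x, p=4):
    """1-D local binary patterns (illustrative sketch).

    Each sample is compared with `p` neighbours on either side;
    a neighbour >= the centre contributes a 1-bit, giving a
    2*p-bit code per sample.  Histograms of these codes over
    short frames can feed a voice-activity detector (VAD).
    """
    n = len(x)
    codes = np.zeros(n, dtype=int)
    for i in range(p, n - p):
        code = 0
        # neighbours left-to-right, skipping the centre sample
        for j in list(range(i - p, i)) + list(range(i + 1, i + p + 1)):
            code = (code << 1) | int(x[j] >= x[i])
        codes[i] = code
    return codes
```

Because the codes depend only on local sample ordering, their frame-wise histograms are relatively insensitive to overall level, which is what makes them attractive for detecting speech in varying noise.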

You should have a first- or upper second-class degree in engineering, physics, mathematics, or computer science, although graduates with other relevant backgrounds may also be considered. Computer programming experience (e.g., MATLAB) would be beneficial. You will receive extensive training in signal processing, hearing-aid design and psychophysical testing. The project will be based in the Department of Electronic and Electrical Engineering at the University of Strathclyde; the psychophysical experiments will be conducted at the IHR Scottish Section.

Kates J (2005). Principles of digital dynamic-range compression. Trends in Amplification 9, 45-76.

Rutledge J et al. (2010). Performance of sinusoidal model based amplitude compression in fluctuating noise. http://ieeexplore.ieee.org.

Zhu Q et al. (2012). 1-D local binary patterns based VAD used in HMM-based improved speech recognition. http://ieeexplore.ieee.org.