Khanagha, V.
Daoudi, K.
Pont, Oriol
Yahia, Hussein
Turiel, Antonio
2014-05
Neurocomputing 132: 136-141 (2014)
http://hdl.handle.net/10261/97550
10.1016/j.neucom.2012.12.061
Seeking new perspectives for analyzing the non-linear dynamics of speech, this paper presents a novel approach based on a microcanonical multiscale formulation that provides a geometric and statistical description of the multiscale properties of these complex dynamics. Speech is a complex system whose dynamics can, to some extent, be accessed geometrically and statistically through the computation of Local Predictability Exponents (LPEs); these exponents identify the most informative subset of samples (the Most Singular Manifold, or MSM), which in turn yields a compact representation from which the signal can be reconstructed. However, the complex intertwining of different dynamics in speech (beyond purely turbulent descriptions) motivates the definition of appropriate multiscale functionals that influence the evaluation of LPEs and thus lead to a more compact MSM. Using the classical and generic Sauer-Allebach algorithm for signal reconstruction from irregularly spaced samples, we show that good-quality speech reconstruction can be achieved from an MSM of low cardinality. Moreover, to further demonstrate the potential of the new methodology, we develop a simple and efficient waveform coder that achieves almost the same level of perceptual quality as a standard coder, at a lower bit rate. © 2013 Elsevier B.V.
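The pipeline the abstract describes (local exponents → most singular manifold → reconstruction from the retained samples) can be illustrated with a minimal toy sketch. This is not the paper's actual method: the local-variation measure standing in for LPEs, the 30% MSM fraction, and the band-limited alternating-projection reconstruction (in the spirit of Sauer and Allebach's iterative schemes) are all simplified illustrative assumptions.

```python
import numpy as np

def local_exponents(x, scales=(1, 2, 4, 8)):
    """Crude stand-in for Local Predictability Exponents (LPEs): the
    slope of log(local variation) versus log(scale) at each sample.
    (The paper uses more elaborate multiscale functionals.)"""
    n = len(x)
    logs = np.log(np.asarray(scales, dtype=float))
    idx = np.arange(n)
    meas = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        left = np.clip(idx - s, 0, n - 1)
        right = np.clip(idx + s, 0, n - 1)
        meas[i] = np.abs(x[right] - x[left]) + 1e-12  # avoid log(0)
    L = np.log(meas)
    dl = logs - logs.mean()
    # per-sample least-squares slope across scales
    return (dl[:, None] * (L - L.mean(axis=0))).sum(axis=0) / (dl ** 2).sum()

def msm_mask(h, frac=0.3):
    """Most Singular Manifold: the fraction `frac` of samples with the
    smallest exponents, i.e. the sharpest (most informative) points."""
    k = max(1, int(frac * len(h)))
    return h <= np.partition(h, k - 1)[k - 1]

def reconstruct(mask, values, n_iter=200, band=0.05):
    """Reconstruction from irregularly spaced samples by alternating
    projections: enforce the known samples, then project onto a
    low-frequency band, and iterate until the signal stabilizes."""
    n = len(mask)
    keep = max(1, int(band * n))      # number of retained rFFT bins
    y = np.zeros(n)
    for _ in range(n_iter):
        y[mask] = values[mask]        # data-consistency projection
        Y = np.fft.rfft(y)
        Y[keep:] = 0.0                # band-limitation projection
        y = np.fft.irfft(Y, n)
    y[mask] = values[mask]
    return y

# Toy demo: a band-limited "signal", its MSM, and the reconstruction.
t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
h = local_exponents(x)
mask = msm_mask(h, frac=0.3)
x_rec = reconstruct(mask, x)
rel_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```

Keeping only the most singular 30% of samples and iterating the two projections recovers a signal that agrees with the original on the retained points; how close the rest gets depends on how the MSM samples are distributed, which is exactly why the paper seeks functionals that make the MSM both small and informative.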
eng
openAccess
Multiscale signal processing
Nonlinear speech processing
Complex signals and systems
Non-linear speech representation based on local predictability exponents
article