**Efficient Estimation Algorithm for Speech Signal**


Karthikeyan SIVAPRAKASAM, Sasikumar SUBRAMANIAN, Madheswaran MUTHUSAMY

*Department of Electronics and Communication Engineering, P.S.N.A. College of Engineering and Technology, Dindigul-624622, Tamilnadu, India*

Email(s): skarthik_82@rediffmail.com, sonapoppy@yahoo.co.in

**Abstract**

In this paper, a modified estimation algorithm, referred to as covariance shaping least-squares (CSLS) estimation, is developed based on quantum mechanical concepts and constraints. The algorithm is applied to the speech signal, and its performance is evaluated using probability theory. The same model is applied with additive white Gaussian noise; the bias in the parameter estimates and the validity of the uncertainty estimates are assessed by Monte Carlo simulation. Building upon the problem of optimal quantum measurement design, the performance of the CSLS estimator is discussed and compared with the LS, James-Stein, shrunken, and ridge estimators for speech analysis, and it is shown that the CSLS estimator performs appreciably better than the others at low to moderate SNR.

**Keywords**

Monte Carlo Simulation, Covariance Shaping, Least Squares Estimation, Signal-to-Noise Ratio.


**Introduction**

The development in the field of signal processing has been tremendous, and quantum signal processing in particular has motivated rigorous growth and research over the past few decades [1, 2, 3]. Estimation using digital signal processing concepts has been an active research area in the recent past. Quantum mechanical concepts have attracted growing interest in signal analysis due to their inherent properties [4, 5, 6]; they rely almost entirely on estimation through signal processing algorithms implemented with various techniques. In many DSP applications we do not have complete or perfect knowledge of the signals we wish to process: we are faced with many unknowns and uncertainties, such as noisy measurements and unknown signal parameters [3].

We consider the class of estimation problems represented by the linear model

y = Hx + w (1)

where x denotes a deterministic vector of unknown parameters, H is a known n × m matrix, and w is a zero-mean random vector with covariance C_w. It is well known that among all possible unbiased linear estimators, the LS estimator minimizes the variance [4]. However, this does not imply that the resulting variance or mean-squared error (MSE) is small, where the MSE of an estimator is the sum of the variance and the squared norm of the bias. Various modifications of the LS estimator for the case in which the data model is assumed to hold perfectly have been proposed [5]. Stein [6] showed that the MSE of the LS estimator can be reduced by other (biased) estimators for certain parameter values. An explicit (nonlinear) estimator with this property, referred to as the James-Stein estimator, was later proposed and analyzed [7]. This work appears to have been the starting point for the study of alternatives to LS estimators. Among the more prominent alternatives are the ridge estimator [8], the shrunken estimator [10], and the stochastic Gaussian maximum likelihood (ML) estimator [11], which deals with sub-Gaussian signals. For the estimation of unknown parameters, the parameterized structure of the maximum a-posteriori probability (MAP) estimator with a Gaussian prior distribution was designed to improve the mean-squared error (MSE) over the least-squares (LS) estimator [9]. Because of uncertainties in these prominent alternatives, the minimum mean-squared error and MAP estimators [12] cannot be used in many cases; the linear minimum mean-squared error estimator does not require this prior density.
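As a concrete baseline sketch (the variable names and dimensions here are ours, not from the paper), the LS estimate for model (1) can be computed directly:

```python
import numpy as np

def ls_estimate(H, y):
    """Ordinary least-squares estimate: the x_hat minimizing ||y - H x||^2.
    For full-column-rank H this equals (H^T H)^{-1} H^T y."""
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)
    return x_hat

# toy example: 2 unknown parameters, 6 noisy observations
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 2))
x_true = np.array([1.0, -2.0])
y = H @ x_true + 0.01 * rng.standard_normal(6)
x_hat = ls_estimate(H, y)
```

At this low noise level the estimate lands very close to the true parameter vector; the alternatives discussed above matter precisely when the noise is not this benign.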

The problem of estimating the deterministic parameter vector x in a linear regression model, with the mean squared error (MSE) as the performance measure, has been studied for both admissible and dominating linear estimators [22]. In the past 30 years, attempts have been made to develop linear methods that may be biased but are close to the true parameter in the MSE sense. These include the Tikhonov regularizer [12], the shrunken estimator [13], and the covariance shaping least-squares estimator [14]. Another recent approach constrains the parameter to a subset and then seeks linear minimax estimators that minimize a worst-case measure of the MSE [15-21]. Next, the problem of estimating random unknown signal parameters in a noisy linear model is treated in [23]. In [24], the problem of estimating an unknown deterministic parameter vector in a linear model with a Gaussian model matrix is analyzed; the maximum likelihood estimator associated with this problem can be found using a simple line search over a unimodal function that can be efficiently evaluated, and its performance is analyzed using the Cramér-Rao bound.

Specifically, it achieves the CRLB for biased estimators [25, 26] when the noise is Gaussian. An efficient estimation algorithm for the ARMA model with white Gaussian noise, based on quantum mechanical concepts and constraints, was presented in [27]. An efficient estimation algorithm for exponential and other trigonometric models with white Gaussian noise was presented in [28].

To improve the performance of the LS estimator at low to moderate SNR, we propose a modification of the LS estimate using the speech signal, in which we choose the estimator of x to minimize the total error variance in the observations y, subject to a constraint on the covariance of the error in the estimate of x. The resulting estimator of x is the CSLS estimator. Here, the problem of estimating an unknown deterministic parameter vector in a linear model with white Gaussian noise is analyzed. The CSLS estimator has a property analogous to that of the LS estimator. Instead of the traditional minimum mean-squared error (MMSE) approach, we propose a linear estimator that minimizes the MSE averaged over the white noise only. Otherwise, the LS estimate may be poor; this effect is especially predominant at low to moderate signal-to-noise ratio (SNR).


*Least Squares Estimation*

This approach uses short-time FFT to transform the input signal into the spectral domain. Similar to the generation of a spectrogram, the FFT is applied on data collected from a short time frame, which advances in time with some overlap. Threshold detection is performed in each spectral profile to determine whether a signal is present with certain likelihood. When such detection occurs in a number of consecutive frames, the frequencies of the detected peaks are used to determine the starting frequency and chirp rate of the signal by means of linear regression, i.e., the least squares method.
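The final regression step can be sketched as follows; this is a hypothetical helper of ours, assuming the per-frame peak frequencies have already been detected by the thresholding stage:

```python
import numpy as np

def chirp_params(frame_times, peak_freqs):
    """Least-squares fit of f(t) = f0 + r*t to the detected peak frequencies.
    Returns the starting frequency f0 and the chirp rate r."""
    # design matrix: a constant column for f0 and a time column for r
    A = np.column_stack([np.ones_like(frame_times), frame_times])
    (f0, r), *_ = np.linalg.lstsq(A, peak_freqs, rcond=None)
    return f0, r

# peaks of an ideal 500 Hz/s chirp starting at 100 Hz, one per frame
t = np.array([0.00, 0.01, 0.02, 0.03, 0.04])
f = 100.0 + 500.0 * t
f0, r = chirp_params(t, f)
```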

The processing sequence of this method is shown in Fig.1.

*Fig. 1. LS method for detection and estimation of the signal*


In our method, the resulting variance, or trace mean-squared error (MSE), of the LS estimator is not necessarily small, where the MSE of an estimator is the sum of the variance and the squared norm of the bias. In such cases, the LS estimate may be poor; this effect is especially predominant at low to moderate signal-to-noise ratio (SNR). In linear algebra, the trace of an n-by-n square matrix A is defined to be the sum of the elements on the main diagonal (the diagonal from the upper left to the lower right) of A, i.e.,

tr(A) = Σ_{i=1}^{n} a_{ii} (2)

where a_{ij} represents the entry in the ith row and jth column of A. Equivalently, the trace of a matrix is the sum of its eigenvalues, making it invariant with respect to the chosen basis. For an m-by-n matrix A with complex (or real) entries and A^{*} being the conjugate transpose, we have

tr(A^{*}A) ≥ 0 (3)

with equality only if A = 0. The assignment

⟨A, B⟩ = tr(A^{*}B) (4)

yields an inner product on the space of all complex (or real) m-by-n matrices. The norm induced by the above inner product is called the Frobenius norm; it is simply the Euclidean norm of the matrix considered as a vector of length mn (n^{2} in the square case m = n).


*Covariance Shaping Least-Squares Estimation*

The CSLS estimator is directed at improving the performance of the LS estimator at low to moderate SNR by choosing the estimate of x to minimize the total error variance in y, subject to a constraint on the covariance of the error in the estimate, so that we control the dynamic range and spectral shape of the covariance of the estimation error.

The CSLS estimate of x, denoted x̂_CSLS, is chosen to minimize the total variance of the weighted error between y and Hx̂, subject to the constraint that the covariance of the error in the estimate x̂ is proportional to a given covariance matrix R. Writing the linear estimate as x̂ = By, the covariance of x̂ is equal to BC_wB^{*}; since x is deterministic, this is also the covariance of the error in the estimate. Thus, B is chosen to minimize

E[(y − HBy)^{*} C_w^{−1} (y − HBy)] (5)

subject to BC_wB^{*} = αR (6)

where R is a given covariance matrix and α > 0 is a constant that is either specified in advance or chosen to minimize the error.
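A closed-form shaping transformation satisfying constraint (6) can be sketched as follows. This is our illustrative reading of the CSLS construction in [14], not the paper's code; the scaling `c` stands in for the constant α, which the paper chooses to minimize the error:

```python
import numpy as np

def _sqrtm_psd(M, inverse=False):
    """(Inverse) square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(M)
    d = 1.0 / np.sqrt(w) if inverse else np.sqrt(w)
    return (V * d) @ V.T

def csls_estimate(H, y, Cw, R, c=1.0):
    """Covariance shaping sketch: x_hat = c * F y with
    F = R^{1/2} (R^{1/2} H^T Cw^{-1} H R^{1/2})^{-1/2} R^{1/2} H^T Cw^{-1}.
    By construction F Cw F^T = R, so the estimate's covariance is c^2 R,
    i.e. proportional to the target shape R as required by (6)."""
    Cwi = np.linalg.inv(Cw)
    Rh = _sqrtm_psd(R)
    M = Rh @ H.T @ Cwi @ H @ Rh
    F = Rh @ _sqrtm_psd(M, inverse=True) @ Rh @ H.T @ Cwi
    return c * (F @ y)

# sanity check: with H = I, Cw = 0.25*I, R = I, F reduces to 2*I
H = np.eye(3)
y = np.array([1.0, 2.0, 3.0])
x_hat = csls_estimate(H, y, Cw=0.25 * np.eye(3), R=np.eye(3), c=1.0)
```

The constraint can be verified algebraically: F C_w F^{T} collapses to R^{1/2} M^{-1/2} M M^{-1/2} R^{1/2} = R, which is what makes this a covariance *shaping* rather than a covariance *minimizing* design.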


*James-Stein Estimator*

Suppose θ is an unknown parameter vector of length m, and let y be an observation of the parameter vector such that y ~ N(θ, σ^{2}I). The James-Stein estimator is given by

θ̂_JS = (1 − (m − 2)σ^{2}/‖y‖^{2}) y (7)

James and Stein showed that the above estimator dominates θ̂_LS for any m ≥ 3, meaning that the James-Stein estimator always achieves lower MSE than the least squares estimator. Stein has shown that, for m ≤ 2, the least squares estimator is admissible, meaning that no estimator dominates it. A
consequence of the above discussion is the following counterintuitive result:
When three or more unrelated parameters are measured, their total MSE can be
reduced by using a combined estimator such as the James-Stein estimator whereas
when each parameter is estimated separately, the least squares (LS) estimator
is admissible. This quirk has caused some to sarcastically ask whether in order
to estimate the speed of light, one should jointly estimate tea consumption in Taiwan and hog weight
in Montana. The response
is that the James-Stein estimator always improves upon the total MSE, i.e., the
sum of the expected errors of each component. Therefore, the total MSE in
measuring light speed, tea consumption and hog weight would improve by using
the James-Stein estimator. However, any particular component (such as the speed
of light) would improve for some parameter values and deteriorate for others.
Thus, although the James-Stein estimator dominates the LS estimator when three
or more parameters are estimated, any single component does not dominate the
respective component of the LS estimator. The conclusion from this hypothetical
example is that measurements should be combined if one is interested in minimizing
their total MSE. For example, in a telecommunication
setting, it is reasonable to combine channel tap measurements in a channel
estimation scenario, as the goal is to minimize the total channel
estimation error. Conversely, it is probably not reasonable to combine channel
estimates of different users since no user would want their channel estimate to
deteriorate in order to improve the average network performance.
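Expression (7) transcribes directly into code (our sketch; the noise variance σ² is assumed known):

```python
import numpy as np

def james_stein(y, sigma2):
    """James-Stein estimate of theta from y ~ N(theta, sigma^2 I), m >= 3:
    shrinks the observation toward the origin by 1 - (m-2)*sigma2/||y||^2."""
    m = y.size
    shrink = 1.0 - (m - 2) * sigma2 / np.dot(y, y)
    return shrink * y

y = np.array([3.0, 4.0, 0.0, 0.0])      # m = 4, ||y||^2 = 25
theta_hat = james_stein(y, sigma2=1.0)  # shrink factor = 1 - 2/25 = 0.92
```

Note that every component is scaled by the same factor, which is exactly why the total MSE improves while any single component may still deteriorate.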


**Results and Discussion**

In Figure 2, the analog input is acquired through channel 1 at a sample rate of 8000 Hz for a duration of 1.25 seconds, so that the number of samples obtained from the speech signal is about 10000. The signal is obtained as a column vector, which is converted into a square matrix. The Hilbert transform is then performed on this matrix so that the numerical (analytic) values of the signal can be obtained, and the FFT is performed so that the spectral values of the signal can be obtained. To satisfy the concept of QSP, the spectral matrix is converted into an orthogonal matrix using the Gram-Schmidt orthogonalization procedure. White noise with zero mean and unit standard deviation is then added to the orthogonalized signal.
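The reshaping and orthogonalization steps of this pipeline can be sketched as follows (a minimal version of ours: the Hilbert-transform step is omitted for brevity, and a QR decomposition is used as a numerically stable stand-in for classical Gram-Schmidt):

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(10000)   # stands in for the 10000-sample speech vector

S = signal.reshape(100, 100)          # column vector -> 100x100 square matrix
spectrum = np.fft.fft(S, axis=0)      # per-column spectral values

# Gram-Schmidt orthogonalization (via QR) of the real spectral matrix
Q, _ = np.linalg.qr(spectrum.real)

# additive white noise, zero mean and unit standard deviation
noisy = Q + rng.standard_normal(Q.shape)
```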

*Fig. 2: Block diagram for converting the speech signal to orthogonal vectors*

*Fig. 3: Original Speech Signal* *Fig. 4: Noise Corrupted Signal*


The input is a continuous speech signal given through a microphone. This signal is plotted with its amplitude with respect to time in Figure 3. Additive white Gaussian noise is added to the original speech signal, and the result is plotted in Figure 4. The noise-corrupted signal is applied to the different estimators: the CSLS estimator, the shrunken estimator, the ridge and James-Stein estimators, and finally the least squares estimator.

*Fig. 5. MSE output for Speech*


Table 1 shows that, at the −60 dB SNR level, the CSLS estimator has low MSE compared to the LS and shrunken estimators, while in this range the ridge estimator gives the maximum error. At −50 dB, the LS estimator has double the MSE of the CSLS estimator. The MSE values gradually decrease up to the 0 dB level, then change only slightly and reach the LS value at the 20 dB level. The CSLS estimator reaches the same MSE at the 40 dB SNR level, after which all the estimators give the same performance. Overall, the error performance of all the estimators is observed, and the CSLS estimator yields the minimum variance over the whole SNR range.


Table 1. MSE in estimating amplitude in the speech model

| SNR (dB) | CSLS (×1.0e+004) | Shrunken (×1.0e+008) | Ridge (×1.0e+007) | James-Stein (×1.0e+006) | LS (×1.0e+007) |
|---|---|---|---|---|---|
| -60 | 1.4206 | 1.1608 | 4.0586 | 8.4547 | 1.8061 |
| -50 | 0.1608 | 0.1314 | 0.4593 | 0.9569 | 0.2045 |
| -40 | 0.0162 | 0.0132 | 0.0462 | 0.0963 | 0.0205 |
| -30 | 0.0017 | 0.0014 | 0.0050 | 0.0104 | 0.0022 |
| -20 | 0.0002 | 0.0001 | 0.0005 | 0.0010 | 0.0002 |
| -10 | 0.0000 | 0.0000 | 0.0001 | 0.0001 | 0.0000 |
| 0 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 10 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 20 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 30 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 40 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 50 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |
| 60 | 0.0000 | 0.0000 | 0.0000 | 0.0000 | 0.0000 |

The error values of the various estimation methodologies at all SNR levels are plotted in Figure 5. It shows that CSLS estimation gives lower MSE compared to the LS, shrunken, James-Stein, and ridge estimators.
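The kind of Monte Carlo MSE-versus-SNR sweep behind Table 1 and Figure 5 can be sketched as follows. This is our own minimal version, shown for the LS estimator only; the other estimators would be plugged in as alternative `estimator` callables:

```python
import numpy as np

def mc_mse(estimator, H, x, snr_db, trials=200, rng=None):
    """Monte Carlo estimate of the MSE of `estimator` at the given SNR (dB)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    signal_power = np.mean((H @ x) ** 2)
    sigma = np.sqrt(signal_power / 10.0 ** (snr_db / 10.0))  # noise std for this SNR
    errors = []
    for _ in range(trials):
        y = H @ x + sigma * rng.standard_normal(H.shape[0])  # model (1) realization
        errors.append(np.sum((estimator(H, y) - x) ** 2))
    return float(np.mean(errors))

ls = lambda H, y: np.linalg.lstsq(H, y, rcond=None)[0]

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 4))
x = np.ones(4)
mse_low_snr = mc_mse(ls, H, x, snr_db=-10.0)
mse_high_snr = mc_mse(ls, H, x, snr_db=30.0)
```

As in the table, the averaged squared error shrinks by orders of magnitude as the SNR rises.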


**Conclusion**

The CSLS estimator has a property analogous to that of the LS estimator. Specifically, it is shown to achieve the Cramér-Rao lower bound (CRLB) for biased estimators when the noise is Gaussian. This implies that for Gaussian noise, there is no linear or nonlinear estimator with a smaller variance, or MSE, and the same bias as the CSLS estimator. The developed algorithm has been applied to a speech signal with additive white Gaussian noise and gives efficient MSE values. For the optimal quantum measurement design, we observed the performance of the CSLS estimator and compared it with the LS, shrunken, ridge, and James-Stein estimators; the CSLS estimator performed significantly better than the others at low to moderate SNR.


**References**

1. S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, 1993.

2. D. J. Griffiths, Introduction to Quantum Mechanics, Prentice Hall, Inc., 1995.

3. Y.C.Eldar and A.V.Oppenheim, “Covariance shaping Least Square Estimation”, IEEE Transactions on Signal Processing, Sep 2001.

4. Y.C. Eldar, Quantum signal processing, Ph.D. thesis, MIT, Cambridge, MA, 2001.

5. Y. C. Eldar and A. V. Oppenheim, "Quantum Signal Processing,'' Signal Processing Magazine, vol. 19, Nov. 2002, pp. 12-32.

6. James E. Buck, “On Stein Estimators: ‘Inadmissibility’ of Admissibility as a Criterion for Selecting Estimators,” PEAS LXXII. 1985.

7. A. E. Hoerl and R. W. Kennard, “Ridge regression: Biased estimation for nonorthogonal problems,” Technometrics, vol. 12, Feb. 1970, pp. 55–67.

8. L. S. Mayer and T. A. Willke, “On biased estimation in linear models,” Technometrics, vol. 15, Aug. 1973, pp. 497–508.

9. A. Benavoli, L. Chisci, “Estimation of constrained parameters with guaranteed MSE improvement” IEEE transactions on signal processing, July 2005.

10. Y. C. Eldar and A. V. Oppenheim, “MMSE whitening and subspace whitening,” IEEE Transactions on Information Theory, vol. 49, July 2003, pp. 1846–1851.

11. Don Johnson Signal Parameter Estimation. Version 1.4: Aug 18, 2003. The Connexions Project and licensed under the Creative Commons Attribution License Connexions module: m1126

12. A.N.Tikhonov and V.Y.Arsenin, Solution of Ill-Posed Problems. Washington, DC: V.H.Winston, 1977.

13. L. S. Mayer and T. A. Willke, “On biased estimation in linear models,” Technometrics, vol. 15, Aug. 1973, pp. 497–508.

14. Y. C. Eldar and A. V. Oppenheim, “Covariance shaping least-squares estimation,” IEEE Transactions on Signal Processing, vol. 51, no. 3, Mar. 2003, pp. 686–697.

15. M.S. Pinsker, “Optimal filtering of square-integrable signals in Gaussian noise,” Problems Inform. Trans., vol. 16, 1980, pp. 120–133.

16. J. Pilz, “Minimax linear regression estimation with symmetric parameter restrictions,” J. Stat. Planning Inference, vol. 13, 1986, pp. 297–318.

17. Y. C. Eldar, A. Ben-Tal and A. Nemirovski, “Robust mean-squared error estimation in the presence of model uncertainties,” IEEE Transactions on Signal Processing, vol. 53, no.1, Jan. 2005, pp. 168–181.

18. Y.C. Eldar, A. Ben-Tal, and A. Nemirovski, “Linear minimax regret estimation of deterministic parameters with bounded data uncertainties,” IEEE Transactions on Signal Processing, vol. 52, no. 8, Aug. 2004, pp. 2177–2188.

19. A. Beck, Y. C. Eldar, and A. Ben-Tal, “Minimax mean-squared error estimation of multichannel signals,” SIAM Journal of Matrix and Analytical Applications.

20. A. Beck, A. Ben-Tal, and Y. C. Eldar, “Robust mean-squared error estimation of multiple signals in linear systems affected by model and noise uncertainties,” Math Prog., B, Springer-Verlag, Dec.2005.

21. Z. Ben-Haim and Y. C. Eldar, “Maximum set estimators with bounded estimation error,” IEEE Transactions on Signal Processing, vol. 53, no. 8, Aug. 2005, pp. 3172–3182.

22. Y. C. Eldar, “Comparing between Estimation Approaches: Admissible and Dominating Linear Estimators,” IEEE Transactions on Signal Processing, vol. 54, no. 5, May 2006.

23. S.A. Vorobyov, Y.C. Eldar and A.B.Gershman, "Probabilistically Constrained Estimation of Random Parameters with Unknown Distribution," Proceedings of 4th IEEE Sensor Array and Multichannel Signal Processing Workshop, SAM'2006, USA, July12-14, 2006, pp.404-408.

24. A. Wiesel and Y. C. Eldar, “Maximum Likelihood Estimation in Random Linear Models: Generalizations and Performance Analysis,” Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2006), France, May 2006, vol. 5, pp. 993–996.

25. Onkar Dabeer, Aditya Karnik “Consistent signal parameter estimation with 1-bit dithered sampling”, School of Technology and Computer Science, Tata Institute of Fundamental Research University of Waterloo Colaba, Mumbai

26. M. H. J. Gruber, Regression Estimators: A Comparative Study. San Diego, CA: Academic, 1990.

27. S. Sasikumar, S. Karthikeyan, M. Suganthi, M. Madheswaran, “A New Approach to Estimation Algorithm for ARMA Model with Quantum Parameters,” Far East Journal of Electronics and Communications, October 10, 2007.

28. S. Sasikumar, S. Karthikeyan, M. Suganthi, M. Madheswaran, “Covariance Shaping Least Square Estimation Algorithm for Exponential and Trigonometric Model with Quantum Parameters,” IETECH Journal of Information Systems, August 2, 2007.