Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.

In digital signal processing, linear prediction is often called linear predictive coding (LPC) and can thus be viewed as a subset of filter theory. In system analysis (a subfield of mathematics), linear prediction can be viewed as a part of mathematical modelling or optimization.
The prediction model
The most common representation is

$$\hat{x}(n) = \sum_{i=1}^{p} a_i x(n-i),$$

where $\hat{x}(n)$ is the predicted signal value, $x(n-i)$ the previous observed values, and $a_i$ the predictor coefficients. The error generated by this estimate is

$$e(n) = x(n) - \hat{x}(n),$$

where $x(n)$ is the true signal value.
These equations are valid for all types of (one-dimensional) linear prediction. The differences are found in the way the parameters are chosen.
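As a sketch of these definitions (assuming NumPy; the function names `lp_predict` and `lp_error` are illustrative, not from any standard library):

```python
import numpy as np

def lp_predict(x, a):
    """One-step linear prediction: x_hat(n) = sum_{i=1..p} a_i * x(n - i).

    a[0] corresponds to a_1 (the most recent sample's coefficient).
    The first p samples, which lack a full history, are left at 0.
    """
    x = np.asarray(x, dtype=float)
    p = len(a)
    x_hat = np.zeros_like(x)
    for n in range(p, len(x)):
        # previous p samples, most recent first
        x_hat[n] = np.dot(a, x[n - p:n][::-1])
    return x_hat

def lp_error(x, a):
    """Prediction error e(n) = x(n) - x_hat(n)."""
    return np.asarray(x, dtype=float) - lp_predict(x, a)
```

For example, the predictor $a_1 = 2$, $a_2 = -1$ extrapolates any straight line exactly, so its error is zero on a ramp signal once two past samples are available.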
For multi-dimensional signals the error metric is often defined as

$$e(n) = \| x(n) - \hat{x}(n) \|,$$

where $\| \cdot \|$ is a suitably chosen vector norm.
Estimating the parameters
The most common choice in optimization of parameters is the root mean square criterion, which is also called the autocorrelation criterion. In this method we minimize the expected value of the squared error $E[e^2(n)]$, which yields the equations

$$\sum_{i=1}^{p} a_i R(j-i) = R(j)$$

for $1 \le j \le p$, where $R$ is the autocorrelation of the signal $x(n)$, defined as

$$R(i) = E\{x(n)\,x(n-i)\},$$

with $E$ denoting the expected value.
The above equations are called the normal equations or Yule-Walker equations. In matrix form the equations can be equivalently written as

$$\mathbf{R}\mathbf{a} = \mathbf{r},$$

where the autocorrelation matrix $\mathbf{R}$ is a symmetric Toeplitz matrix with elements $r_{ij} = R(i-j)$, the vector $\mathbf{r}$ is the autocorrelation vector $r_j = R(j)$, and $\mathbf{a}$ is the vector of predictor coefficients $a_i$.
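Under the autocorrelation criterion, the coefficients can thus be estimated by forming autocorrelation estimates from the data and solving the Toeplitz system $\mathbf{R}\mathbf{a} = \mathbf{r}$ directly. A minimal sketch, assuming NumPy (the function name `autocorr_lpc` and the biased autocorrelation estimator are illustrative choices):

```python
import numpy as np

def autocorr_lpc(x, p):
    """Estimate the p predictor coefficients from the Yule-Walker
    normal equations R a = r, using biased autocorrelation estimates."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # biased estimates of R(0) .. R(p)
    R = np.array([np.dot(x[:N - k], x[k:]) for k in range(p + 1)]) / N
    # Toeplitz autocorrelation matrix: R_mat[i, j] = R(|i - j|)
    R_mat = np.array([[R[abs(i - j)] for j in range(p)] for i in range(p)])
    r = R[1:p + 1]
    return np.linalg.solve(R_mat, r)
```

Applied to a long realization of a first-order autoregressive process $x(n) = 0.9\,x(n-1) + w(n)$, this estimator recovers a coefficient close to 0.9.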
Another, more general, approach is to minimize the mean square of the error

$$e(n) = x(n) - \sum_{i=1}^{p} a_i x(n-i) = -\sum_{i=0}^{p} a_i x(n-i),$$

where we usually constrain the parameters with $a_0 = -1$ to avoid the trivial solution. This constraint yields the same predictor as above, but the normal equations become

$$\sum_{i=0}^{p} a_i R(j-i) = 0, \qquad 1 \le j \le p,$$

or, in matrix form,

$$\mathbf{R}\mathbf{a} = [-\varepsilon, 0, \dots, 0]^{\mathrm{T}},$$

where the index $i$ ranges from 0 to $p$, $\mathbf{R}$ is a $(p+1) \times (p+1)$ autocorrelation matrix, and $\varepsilon = E[e^2(n)]$ is the minimum prediction error power.
Optimization of the parameters is a wide topic, and many other approaches have been proposed.
Solving the matrix equation $\mathbf{R}\mathbf{a} = \mathbf{r}$ is computationally relatively expensive. Gaussian elimination is probably the oldest solution, but it does not exploit the symmetry of $\mathbf{R}$ and $\mathbf{r}$. A faster algorithm is the Levinson recursion, proposed by Norman Levinson in 1947, which recursively calculates the solution in $O(p^2)$ operations. Later, Delsarte et al. proposed an improvement called the split Levinson recursion, which requires about half the number of multiplications and divisions; it exploits a special symmetry property of the parameter vectors on subsequent recursion levels.
References
- G. U. Yule. On a method of investigating periodicities in disturbed series, with special reference to Wolfer's sunspot numbers. Phil. Trans. Roy. Soc. A, 226:267–298, 1927.
- J. Makhoul. Linear prediction: A tutorial review. Proceedings of the IEEE, 63 (5):561–580, April 1975.
- M. H. Hayes. Statistical Digital Signal Processing and Modeling. J. Wiley & Sons, Inc., New York, 1996.
This page uses Creative Commons Licensed content from Wikipedia.