# Autoregressive moving average model


In statistics, **autoregressive moving average** (**ARMA**) **models**, sometimes called **Box-Jenkins models** after George Box and G. M. Jenkins, are typically applied to time series data.

Given a time series of data *X*_{t}, the ARMA model is a tool for understanding and, perhaps, predicting future values in this series. The model consists of two parts, an autoregressive (AR) part and a moving average (MA) part. The model is usually then referred to as the ARMA(*p*,*q*) model where *p* is the order of the autoregressive part and *q* is the order of the moving average part (as defined below).

## Autoregressive model

The notation AR(*p*) refers to the autoregressive model of order *p*. The AR(*p*) model is written

$$X_t = c + \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$$

where φ_{1}, ..., φ_{p} are the *parameters* of the model, c is a constant and ε_{t} is an error term (see below). The constant term is omitted by many authors for simplicity.

An autoregressive model is essentially an infinite impulse response filter with some additional interpretation placed on it.

Some constraints are necessary on the values of the parameters of this model in order that the model remains stationary. For example, processes in the AR(1) model with |φ_{1}| ≥ 1 are not stationary.
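For instance, here is a minimal sketch (Python/NumPy; the coefficients and noise level are arbitrary illustrative choices) that simulates a stationary AR(2) process directly from the defining recursion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative AR(2): X_t = c + phi_1*X_{t-1} + phi_2*X_{t-2} + eps_t
c, sigma, n = 1.0, 1.0, 10_000
phi = np.array([0.5, 0.2])   # example values for which the process is stationary

x = np.zeros(n)
eps = rng.normal(0.0, sigma, size=n)
for t in range(2, n):
    x[t] = c + phi[0] * x[t - 1] + phi[1] * x[t - 2] + eps[t]

print(x.mean())   # settles near c / (1 - phi_1 - phi_2) = 1/0.3 ~ 3.33
```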

### Example: An AR(1) process

An AR(1) process is given by

$$X_t = c + \varphi X_{t-1} + \varepsilon_t$$

where ε_{t} is a white noise process with zero mean and variance σ². (Note: The subscript on φ_{1} has been dropped.) The process is covariance-stationary if |φ| < 1. If φ = 1 then X_{t} exhibits a unit root and can also be considered as a random walk, which is not covariance-stationary. Otherwise, the calculation of the expectation of X_{t} is straightforward. Assuming covariance-stationarity we get

$$E[X_t] = E[c] + \varphi E[X_{t-1}] + E[\varepsilon_t] \;\Rightarrow\; \mu = c + \varphi \mu + 0$$

thus:

$$\mu = \frac{c}{1 - \varphi}$$

where μ is the mean. For c = 0, the mean is 0, and the variance is found to be:

$$\textrm{var}(X_t) = E[X_t^2] - \mu^2 = \frac{\sigma^2}{1 - \varphi^2}$$
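To make the algebra concrete, a quick simulation check (a sketch; c = 0.5, φ = 0.8, σ = 1 are arbitrary example values) compares the sample mean and variance with c/(1 − φ) and σ²/(1 − φ²):

```python
import numpy as np

rng = np.random.default_rng(1)
c, phi, sigma, n = 0.5, 0.8, 1.0, 200_000  # example values with |phi| < 1

x = np.zeros(n)
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + rng.normal(0.0, sigma)

x = x[1000:]                                # discard burn-in
print(x.mean(), c / (1 - phi))              # sample mean vs. c/(1 - phi) = 2.5
print(x.var(), sigma**2 / (1 - phi**2))     # sample variance vs. sigma^2/(1 - phi^2)
```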

The autocovariance is given by

$$B_m = E\left[(X_t - \mu)(X_{t+m} - \mu)\right] = \frac{\sigma^2}{1 - \varphi^2}\,\varphi^{|m|}$$

It can be seen that the autocovariance function decays with a decay time of τ = −1/ln(φ). [To see this, write B_m = Kφ^{|m|} where K is independent of m. Then note that φ^{|m|} = e^{|m| ln φ}, and match this to the exponential decay law e^{−m/τ}.] The spectral density function is the Fourier transform of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:

$$\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\,\sum_{m=-\infty}^{\infty} B_m e^{-i\omega m} = \frac{1}{\sqrt{2\pi}}\,\frac{\sigma^2}{1 + \varphi^2 - 2\varphi\cos(\omega)}$$

This expression contains aliasing due to the discrete nature of the X_{j}, which is manifested as the cosine term in the denominator. If we assume that the sampling time (Δt = 1) is much smaller than the decay time (τ), then we can use a continuum approximation to B_m:

$$B(t) \approx \frac{\sigma^2}{1 - \varphi^2}\,e^{-|t|/\tau}$$

which yields a Lorentzian profile for the spectral density:

$$\Phi(\omega) = \frac{1}{\sqrt{2\pi}}\,\frac{\sigma^2}{1 - \varphi^2}\,\frac{\gamma}{\pi(\gamma^2 + \omega^2)}$$

where γ = 1/τ is the angular frequency associated with the decay time τ.

An alternative expression for X_{t} can be derived by first substituting c + φX_{t−2} + ε_{t−1} for X_{t−1} in the defining equation. Continuing this process *N* times yields

$$X_t = c\sum_{k=0}^{N-1}\varphi^k + \varphi^N X_{t-N} + \sum_{k=0}^{N-1}\varphi^k \varepsilon_{t-k}$$

For *N* approaching infinity, φ^{N} will approach zero and:

$$X_t = \frac{c}{1-\varphi} + \sum_{k=0}^{\infty}\varphi^k \varepsilon_{t-k}$$

It is seen that X_{t} is white noise convolved with the φ^{k} kernel plus the constant mean. By the central limit theorem, the X_{t} will be normally distributed as will any sample of X_{t} which is much longer than the decay time of the autocorrelation function.
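This representation can be checked numerically (a sketch, with arbitrary example parameters): feeding one fixed noise sequence both through the recursion and through a truncated convolution with the φ^{k} kernel gives essentially identical series.

```python
import numpy as np

rng = np.random.default_rng(2)
c, phi, sigma, n = 0.5, 0.8, 1.0, 5_000   # example values

eps = rng.normal(0.0, sigma, size=n)

# Recursive form, started at the stationary mean so both forms match exactly
x = np.zeros(n)
x[0] = c / (1 - phi) + eps[0]
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + eps[t]

# Truncated MA(infinity) form: mean plus noise convolved with the phi^k kernel
kernel = phi ** np.arange(200)            # 200 >> decay time -1/ln(0.8) ~ 4.5
x_ma = c / (1 - phi) + np.convolve(eps, kernel)[:n]

print(np.max(np.abs(x - x_ma)))           # agreement to numerical precision
```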

### Calculation of the AR parameters

The AR(*p*) model is given by the equation

$$X_t = \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$$

It is based on parameters φ_{i} where *i* = 1, ..., *p*. Those parameters may be calculated using the **Yule-Walker equations**:

$$\gamma_m = \sum_{k=1}^{p} \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2 \delta_m$$

where *m* = 0, ..., *p*, yielding *p* + 1 equations. γ_{m} is the autocorrelation function of X, σ_{ε} is the standard deviation of the input noise process, and δ_{m} is the Kronecker delta function.

Because the last part of the equation is non-zero only if *m* = 0, the equations are usually solved by writing those for *m* > 0 in matrix form, thus getting the equation

$$\begin{bmatrix}\gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \end{bmatrix} = \begin{bmatrix}\gamma_0 & \gamma_{-1} & \gamma_{-2} & \cdots \\ \gamma_1 & \gamma_0 & \gamma_{-1} & \cdots \\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix} \begin{bmatrix}\varphi_1 \\ \varphi_2 \\ \varphi_3 \\ \vdots \end{bmatrix}$$

which can be solved for all φ. For *m* = 0 we have

$$\gamma_0 = \sum_{k=1}^{p} \varphi_k \gamma_{-k} + \sigma_\varepsilon^2$$

which allows us to solve for σ_{ε}².
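The following sketch (Python/NumPy; the AR(2) coefficients are illustrative) carries these steps out: estimate the autocovariances from simulated data, solve the *m* > 0 system for the φ_{k}, then recover σ_{ε}² from the *m* = 0 equation.

```python
import numpy as np

rng = np.random.default_rng(3)
phi_true = np.array([0.5, 0.2])   # example AR(2) coefficients
sigma, n, p = 1.0, 100_000, 2

# Simulate the AR(2) process X_t = 0.5*X_{t-1} + 0.2*X_{t-2} + eps_t
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi_true @ x[t - p:t][::-1] + rng.normal(0.0, sigma)

# Sample autocovariances gamma_0 ... gamma_p
xc = x - x.mean()
gamma = np.array([xc[: n - m] @ xc[m:] / n for m in range(p + 1)])

# m > 0 equations in matrix form: Gamma_{ij} = gamma_{|i-j|} (gamma_{-k} = gamma_k)
Gamma = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
phi_hat = np.linalg.solve(Gamma, gamma[1:])

# m = 0 equation recovers the noise variance
sigma2_hat = gamma[0] - phi_hat @ gamma[1:]

print(phi_hat, sigma2_hat)   # close to [0.5, 0.2] and 1.0
```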

#### Derivation

The equation defining the AR process is

$$X_t = \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$$

Multiplying both sides by X_{t−m} and taking expected value yields

$$E[X_t X_{t-m}] = E\!\left[\sum_{i=1}^{p} \varphi_i X_{t-i} X_{t-m}\right] + E[\varepsilon_t X_{t-m}]$$

Now, E[X_{t}X_{t−m}] = γ_{m} by definition of the autocorrelation function. The values of the noise function are independent of each other, and *X*_{t − m} is independent of ε_{t} where *m* is greater than zero. For *m* ≠ 0, E[ε_{t}X_{t−m}] = 0. For *m* = 0,

$$E[\varepsilon_t X_t] = E\!\left[\varepsilon_t \left(\sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t\right)\right] = \sum_{i=1}^{p} \varphi_i E[\varepsilon_t X_{t-i}] + E[\varepsilon_t^2] = 0 + \sigma_\varepsilon^2$$

Now we have

$$\gamma_m = E\!\left[\sum_{i=1}^{p} \varphi_i X_{t-i} X_{t-m}\right] + \sigma_\varepsilon^2 \delta_m$$

Furthermore,

$$E\!\left[\sum_{i=1}^{p} \varphi_i X_{t-i} X_{t-m}\right] = \sum_{i=1}^{p} \varphi_i E[X_{t-i} X_{t-m}] = \sum_{i=1}^{p} \varphi_i \gamma_{m-i}$$

which yields the Yule-Walker equations:

$$\gamma_m = \sum_{i=1}^{p} \varphi_i \gamma_{m-i} + \sigma_\varepsilon^2 \delta_m$$

for *m* ≥ 0.

## Moving average model

The notation MA(*q*) refers to the moving average model of order *q*:

$$X_t = \varepsilon_t + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i}$$

where the θ_{1}, ..., θ_{q} are the parameters of the model and the ε_{t}, ε_{t−1}, ... are, again, the error terms. The moving average model is essentially a finite impulse response filter with some additional interpretation placed on it.
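Because the MA(*q*) model is a finite impulse response filter driven by white noise, it can be simulated with a single convolution. A minimal sketch with illustrative θ values:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.array([0.6, 0.3])             # example MA(2) parameters
sigma, n = 1.0, 10_000

eps = rng.normal(0.0, sigma, size=n)

# X_t = eps_t + theta_1*eps_{t-1} + theta_2*eps_{t-2}: FIR kernel [1, theta_1, theta_2]
x = np.convolve(eps, np.concatenate(([1.0], theta)))[:n]

# The autocovariance of an MA(q) process cuts off after lag q
xc = x - x.mean()
print([xc[: n - m] @ xc[m:] / n for m in range(4)])  # lag-3 term is near zero
```

The cutoff of the autocovariance after lag *q* is the signature that distinguishes an MA(*q*) process from an AR process, whose autocovariance decays geometrically.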

## Autoregressive moving average model

The notation ARMA(*p*, *q*) refers to the model with *p* autoregressive terms and *q* moving average terms. This model contains the AR(*p*) and MA(*q*) models:

$$X_t = \varepsilon_t + \sum_{i=1}^{p} \varphi_i X_{t-i} + \sum_{i=1}^{q} \theta_i \varepsilon_{t-i}$$
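As a sketch (illustrative ARMA(1,1) parameters), the recursion can be simulated directly by combining the two parts:

```python
import numpy as np

rng = np.random.default_rng(5)
c, phi, theta, sigma, n = 0.0, 0.7, 0.4, 1.0, 10_000  # ARMA(1,1) example

eps = rng.normal(0.0, sigma, size=n)
x = np.zeros(n)
for t in range(1, n):
    # X_t = c + phi*X_{t-1} + eps_t + theta*eps_{t-1}
    x[t] = c + phi * x[t - 1] + eps[t] + theta * eps[t - 1]
```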

## Note about the error terms

The error terms ε_{t} are generally assumed to be independent identically-distributed random variables sampled from a normal distribution with zero mean: ε_{t} ~ N(0,σ^{2}) where σ^{2} is
the variance. These assumptions may be weakened but doing so will change the properties of the model. In particular, a change to the i.i.d. assumption would make a rather fundamental difference.

## Specification in terms of lag operator

In some texts the models will be specified in terms of the lag operator *L*. In these terms the AR(*p*) model is given by

$$\varepsilon_t = \left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) X_t = \varphi(L)\, X_t$$

where φ represents the polynomial

$$\varphi(L) = 1 - \sum_{i=1}^{p} \varphi_i L^i$$

The MA(*q*) model is given by

$$X_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t = \theta(L)\, \varepsilon_t$$

where θ represents the polynomial

$$\theta(L) = 1 + \sum_{i=1}^{q} \theta_i L^i$$

Finally, the combined ARMA(*p*, *q*) model is given by

$$\left(1 - \sum_{i=1}^{p} \varphi_i L^i\right) X_t = \left(1 + \sum_{i=1}^{q} \theta_i L^i\right) \varepsilon_t$$

or more concisely,

$$\varphi(L)\, X_t = \theta(L)\, \varepsilon_t$$
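This lag-polynomial form maps directly onto a rational digital filter: φ(L)X_{t} = θ(L)ε_{t} says that X is white noise filtered with numerator coefficients (1, θ_{1}, ..., θ_{q}) and denominator coefficients (1, −φ_{1}, ..., −φ_{p}). A sketch using SciPy's lfilter (parameter values are illustrative):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(6)
phi = [0.7]                  # AR part, example value
theta = [0.4]                # MA part, example value

eps = rng.normal(size=10_000)

# phi(L) X = theta(L) eps  <=>  X = lfilter(b, a, eps)
b = np.concatenate(([1.0], theta))              # 1 + theta_1 L + ...
a = np.concatenate(([1.0], -np.asarray(phi)))   # 1 - phi_1 L - ...
x = lfilter(b, a, eps)
```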

## Fitting models

ARMA models in general can, after choosing *p* and *q*, be fitted by least squares regression to find the values of the parameters which minimize the error term. It is generally considered good practice to find the smallest values of *p* and *q* which provide an acceptable fit to the data. For a pure AR model, the Yule-Walker equations may be used to provide a fit; the least-squares approach is sketched below.
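For the least-squares approach, fitting a pure AR model reduces to an ordinary regression of X_{t} on its own lags. A minimal sketch (Python/NumPy; simulated AR(2) data with illustrative coefficients):

```python
import numpy as np

rng = np.random.default_rng(7)
phi_true, sigma, n, p = np.array([0.5, 0.2]), 1.0, 50_000, 2

# Simulate an AR(2) series to fit
x = np.zeros(n)
for t in range(p, n):
    x[t] = phi_true @ x[t - p:t][::-1] + rng.normal(0.0, sigma)

# Regress X_t on (X_{t-1}, ..., X_{t-p}) by least squares
X = np.column_stack([x[p - i - 1 : n - i - 1] for i in range(p)])
y = x[p:]
phi_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(phi_hat)   # close to [0.5, 0.2]
```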

## Generalizations

The dependence of *X*_{t} on past values and the error terms ε_{t} is assumed to be linear unless specified otherwise. If the dependence is nonlinear, the model is specifically called a *nonlinear moving average* (NMA), *nonlinear autoregressive* (NAR), or *nonlinear autoregressive moving average* (NARMA) model.

Autoregressive moving average models can be generalized in other ways. See also autoregressive conditional heteroskedasticity (ARCH) models and autoregressive integrated moving average (ARIMA) models. If multiple time series are to be fitted then a vector ARIMA (or VARIMA) model may be fitted. If the time series in question exhibits long memory then fractional ARIMA (FARIMA, sometimes called ARFIMA) modelling is appropriate. If the data is thought to contain seasonal effects, it may be modeled by a SARIMA (seasonal ARIMA) model.

Another generalization is the *multiscale autoregressive* (MAR) model. A MAR model is indexed by the nodes of a tree, whereas a standard (discrete time) autoregressive model is indexed by integers. See multiscale autoregressive model for a list of references.


## References

- Box, George and Gwilym M. Jenkins. *Time Series Analysis: Forecasting and Control*, second edition. Oakland, CA: Holden-Day, 1976.
- Mills, Terence C. *Time Series Techniques for Economists*. Cambridge University Press, 1990.
- Percival, Donald B. and Andrew T. Walden. *Spectral Analysis for Physical Applications*. Cambridge University Press, 1993.

This page uses Creative Commons Licensed content from Wikipedia.