Autoregressive (AR) models
Moving average (MA) models
Autoregressive moving average (ARMA) models
Using ACF & PACF for model ID
11 Feb 2019
Autoregressive models are widely used in ecology to treat the current state of nature as a function of its past state(s)
An autoregressive model of order p, or AR(p), is defined as
\[ x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \dots + \phi_p x_{t-p} + w_t \]
where we assume
\(w_t\) is white noise
\(\phi_p \neq 0\) for an order-p process
AR(1)
\(x_t = 0.5 x_{t-1} + w_t\)
AR(1) with \(\phi_1 = 1\) (random walk)
\(x_t = x_{t-1} + w_t\)
AR(2)
\(x_t = -0.2 x_{t-1} + 0.4 x_{t-2} + w_t\)
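The three example models above are easy to simulate directly. Here is a minimal sketch in Python (not from the original materials; the sample size, seed, and the helper `sim_ar` are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

def sim_ar(phi, n, rng):
    """Simulate x_t = phi_1 x_{t-1} + ... + phi_p x_{t-p} + w_t (hypothetical helper)."""
    p = len(phi)
    w = rng.normal(size=n + p)            # w_t: Gaussian white noise
    x = np.zeros(n + p)
    for t in range(p, n + p):
        x[t] = np.dot(phi, x[t - p:t][::-1]) + w[t]
    return x[p:]

ar1 = sim_ar([0.5], n, rng)        # AR(1): x_t = 0.5 x_{t-1} + w_t
rw  = sim_ar([1.0], n, rng)        # phi_1 = 1: random walk (nonstationary)
ar2 = sim_ar([-0.2, 0.4], n, rng)  # AR(2): x_t = -0.2 x_{t-1} + 0.4 x_{t-2} + w_t
```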
Recall that stationary processes have the following properties: a mean that does not depend on time, and an autocovariance that depends only on the lag between observations
AR(1) models are stationary if and only if \(\lvert \phi \rvert < 1\)
[Figures: simulated AR(1) processes with \(\phi\) of the same value but different sign, and with both \(\phi\) positive but different magnitude]
Recall that the autocorrelation function (\(\rho_k\)) measures the correlation between \(\{x_t\}\) and a shifted version of itself \(\{x_{t+k}\}\)
The ACF oscillates for the model with \(-\phi\)
For the model with large \(\phi\), the ACF has a longer tail
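A quick way to see these two ACF behaviors is with the theoretical ACF from `statsmodels`. A hedged sketch (the \(\phi\) values here are illustrative; note that `ArmaProcess` expects the AR lag polynomial \([1, -\phi_1, \dots, -\phi_p]\)):

```python
from statsmodels.tsa.arima_process import ArmaProcess

acf_pos = ArmaProcess(ar=[1, -0.7], ma=[1]).acf(lags=10)  # phi =  0.7
acf_neg = ArmaProcess(ar=[1,  0.7], ma=[1]).acf(lags=10)  # phi = -0.7
acf_big = ArmaProcess(ar=[1, -0.9], ma=[1]).acf(lags=10)  # phi =  0.9

# acf_neg alternates in sign (oscillation); acf_big decays more
# slowly (longer tail) than acf_pos
```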
Recall that the partial autocorrelation function (\(\phi_k\)) measures the correlation between \(\{x_t\}\) and a shifted version of itself \(\{x_{t+k}\}\), with the linear dependence of \(\{x_{t+1},x_{t+2},\dots,x_{t+k-1}\}\) removed
Do you see the link between the order p and lag k?
Model | ACF | PACF |
---|---|---|
AR(p) | Tails off slowly | Cuts off after lag p |
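To illustrate the AR(p) row of this table numerically, here is a sketch (again assuming `statsmodels`) that computes the theoretical ACF and PACF of the AR(2) example from earlier; the PACF should be essentially zero beyond lag 2:

```python
from statsmodels.tsa.arima_process import ArmaProcess

# AR(2) example from above: x_t = -0.2 x_{t-1} + 0.4 x_{t-2} + w_t
ar2 = ArmaProcess(ar=[1, 0.2, -0.4], ma=[1])

print(ar2.acf(lags=8))   # tails off slowly
print(ar2.pacf(lags=8))  # cuts off after lag p = 2
```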
Moving average models are most commonly used for forecasting a future state
A moving average model of order q, or MA(q), is defined as
\[ x_t = w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \dots + \theta_q w_{t-q} \]
where \(w_t\) is white noise
Each \(x_t\) is a weighted sum of the current and \(q\) most recent error terms
Thus, all MA processes are stationary because they are finite sums of stationary white noise processes
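As a check on this, a short sketch simulating an MA(2) process as a weighted sum of white noise (the \(\theta\) values are illustrative); its sample ACF should be near zero beyond lag \(q = 2\):

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
n = 500
theta1, theta2 = 0.6, 0.3   # illustrative MA coefficients
w = rng.normal(size=n + 2)  # white noise
x = w[2:] + theta1 * w[1:-1] + theta2 * w[:-2]  # x_t = w_t + 0.6 w_{t-1} + 0.3 w_{t-2}

print(acf(x, nlags=5))      # near zero beyond lag q = 2
```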
Do you see the link between the order q and lag k?
Model | ACF | PACF |
---|---|---|
AR(p) | Tails off slowly | Cuts off after lag p |
MA(q) | Cuts off after lag q | Tails off slowly |
An autoregressive moving average, or ARMA(p,q), model is written as
\[ x_t = \phi_1 x_{t-1} + \dots + \phi_p x_{t-p} + w_t + \theta_1 w_{t-1} + \dots + \theta_q w_{t-q} \]
We can write an ARMA(p,q) model using the backshift operator
\[ \phi_p (\mathbf{B}) x_t= \theta_q (\mathbf{B}) w_t \]
ARMA models are stationary if all roots of \(\phi_p (\mathbf{B}) = 0\) lie outside the unit circle (i.e., all \(\lvert \mathbf{B} \rvert > 1\))
ARMA models are invertible if all roots of \(\theta_q (\mathbf{B}) = 0\) lie outside the unit circle (i.e., all \(\lvert \mathbf{B} \rvert > 1\))
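These root conditions are easy to check numerically: write out the coefficients of \(\phi_p(\mathbf{B})\) and \(\theta_q(\mathbf{B})\) and find their roots. A sketch with illustrative coefficients (NumPy's `roots` expects the highest-order coefficient first):

```python
import numpy as np

phi   = [0.5, 0.25]  # AR coefficients phi_1, phi_2 (illustrative values)
theta = [0.4]        # MA coefficient theta_1

ar_poly = np.r_[-np.array(phi)[::-1], 1]    # -phi_p, ..., -phi_1, 1
ma_poly = np.r_[ np.array(theta)[::-1], 1]  #  theta_q, ..., theta_1, 1

print(np.abs(np.roots(ar_poly)))  # stationary if all > 1
print(np.abs(np.roots(ma_poly)))  # invertible if all > 1
```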
Model | ACF | PACF |
---|---|---|
AR(p) | Tails off slowly | Cuts off after lag p |
MA(q) | Cuts off after lag q | Tails off slowly |
ARMA(p,q) | Tails off slowly | Tails off slowly |
Nonstationary models
If the data do not appear stationary, differencing can help
This leads to the class of autoregressive integrated moving average (ARIMA) models
ARIMA models are indexed with orders (p,d,q) where d indicates the order of differencing
For \(d > 0\), \(\{x_t\}\) is an ARIMA(p,d,q) process if \((1-\mathbf{B})^d x_t\) is an ARMA(p,q) process
For example, if \(\{x_t\}\) is an ARIMA(1,1,0) process, then the first difference \(\nabla x_t = (1-\mathbf{B}) x_t\) is an ARMA(1,0) = AR(1) process
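A sketch of this last example: simulate an ARIMA(1,1,0) series by integrating an AR(1) process, then confirm that one difference recovers something AR(1)-like (the \(\phi_1 = 0.6\), sample size, and seed are arbitrary; the model fit assumes `statsmodels`):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
n, phi = 300, 0.6
w = rng.normal(size=n)
z = np.zeros(n)
for t in range(1, n):
    z[t] = phi * z[t - 1] + w[t]   # z_t: stationary AR(1)
x = np.cumsum(z)                   # integrate once -> ARIMA(1,1,0)

dx = np.diff(x)                    # (1 - B) x_t, i.e., the first difference
fit = ARIMA(dx, order=(1, 0, 0)).fit()
print(fit.params)                  # AR(1) coefficient should be near 0.6
```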