Deterministic vs stochastic elements
Regression with autocorrelated errors
Regression with temporal random effects
11 Feb 2019
Consider this simple model, consisting of a mean \(\mu\) plus error
\[ y_i = \mu + e_i ~ \text{with} ~ e_i \sim \text{N}(0,\sigma^2) \]
The right-hand side of the equation is composed of deterministic and stochastic pieces
\[ y_i = \underbrace{\mu}_{\text{deterministic}} + \underbrace{e_i}_{\text{stochastic}} \]
Sometimes these pieces are referred to as fixed and random
\[ y_i = \underbrace{\mu}_{\text{fixed}} + \underbrace{e_i}_{\text{random}} \]
This can also be seen by rewriting the model
\[ y_i = \mu + e_i ~ \text{with} ~ e_i \sim \text{N}(0,\sigma^2) \]
as
\[ y_i \sim \text{N}(\mu,\sigma^2) \]
We can expand the deterministic part of the model, as with linear regression
\[ y_i = \underbrace{\alpha + \beta x_i}_{\text{mean}} + e_i ~ \text{with} ~ e_i \sim \text{N}(0,\sigma^2) \]
so
\[ y_i \sim \text{N}(\alpha + \beta x_i,\sigma^2) \]
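Before moving to time series, a minimal base-R sketch of this regression: simulate data from \(y_i = \alpha + \beta x_i + e_i\) and recover the parameters with lm(). All parameter values here are illustrative, not from the text.

```r
set.seed(123)
n <- 100
alpha <- 1; beta <- 0.5; sigma <- 0.2       ## illustrative values
x <- runif(n)
y <- alpha + beta * x + rnorm(n, 0, sigma)  ## y_i ~ N(alpha + beta * x_i, sigma^2)
fit <- lm(y ~ x)
coef(fit)  ## estimates should be close to alpha = 1, beta = 0.5
```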
Consider a simple model with a mean \(\mu\) plus white noise
\[ y_t = \mu + e_t ~ \text{with} ~ e_t \sim \text{N}(0,\sigma^2) \]
We can expand the deterministic part of the model, as before with linear regression
\[ y_t = \underbrace{\alpha + \beta x_t}_{\text{mean}} + e_t ~ \text{with} ~ e_t \sim \text{N}(0,\sigma^2) \]
so
\[ y_t \sim \text{N}(\alpha + \beta x_t,\sigma^2) \]
The residuals from this regression do not look like white noise!
There is significant autocorrelation at lag 1
We can expand the stochastic part of the model to have autocorrelated errors
\[ y_t = \alpha + \beta x_t + e_t \\ e_t = \phi e_{t-1} + w_t \]
with \(w_t \sim \text{N}(0,\sigma^2)\)
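To see the consequence of ignoring this, a base-R sketch: simulate \(y_t = \alpha + \beta x_t + e_t\) with AR(1) errors and fit ordinary least squares; the lm() residuals retain strong lag-1 autocorrelation, near \(\phi\) rather than 0. All parameter values are illustrative.

```r
set.seed(42)
TT <- 200
alpha <- 1; beta <- 0.5; phi <- 0.7  ## illustrative values
x <- rnorm(TT)
e <- as.numeric(arima.sim(list(ar = phi), n = TT, sd = 0.5))  ## e_t = phi * e_{t-1} + w_t
y <- alpha + beta * x + e
res <- residuals(lm(y ~ x))          ## OLS ignores the autocorrelation
r1 <- acf(res, plot = FALSE)$acf[2]  ## lag-1 autocorrelation of the residuals
```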
We can write this model as our standard state-space model
\[ \begin{align} y_t &= \alpha + \beta x_t + e_t \\ &= e_t + \alpha + \beta x_t\\ &\Downarrow \\ y_t &= x_t + a + D d_t + v_t\\ \end{align} \]
with
\(x_t = e_t\), \(a = \alpha\), \(D = \beta\), \(d_t = x_t\), \(v_t = 0\) (note the notational collision: the regression covariate \(x_t\) becomes \(d_t\), while \(x_t\) now denotes the state, i.e. the error \(e_t\))
\[ \begin{align} e_t &= \phi e_{t-1} + w_t \\ &\Downarrow \\ x_t &= B x_{t-1} + w_t\\ \end{align} \]
with
\(x_t = e_t\) and \(B = \phi\)
\[ y_t = \alpha + \beta x_t + e_t \\ e_t = \phi e_{t-1} + w_t \\ \Downarrow \\ y_t = a + D d_t + x_t\\ x_t = B x_{t-1} + w_t \]
Specifying the observation model for MARSS()
\[ y_t = a + D d_t + x_t \\ \Downarrow \\ y_t = Z x_t + a + D d_t + v_t \]
y = data         ## [1 x T] matrix of data
a = matrix("a")  ## intercept
D = matrix("D")  ## slope
d = covariate    ## [1 x T] matrix of measured covariate
Z = matrix(1)    ## no multiplier on x
R = matrix(0)    ## v_t ~ N(0,R); want v_t = 0 for all t
Specifying the state model for MARSS()
\[ x_t = B x_{t-1} + w_t \\ \Downarrow \\ x_t = B x_{t-1} + u + C c_t + w_t \]
B = matrix("b")  ## AR(1) coefficient for model errors
Q = matrix("q")  ## w_t ~ N(0,Q); var for model errors
u = matrix(0)    ## u = 0
C = matrix(0)    ## C = 0
c = matrix(0)    ## c_t = 0 for all t
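Putting the two specification lists together, a hypothetical MARSS() call might look like the sketch below. The names `dat` and `covariate` are placeholders (filled here with simulated stand-ins), the intercept \(a\) is called `A` in MARSS's model list, and the fitting lines are commented out because they require the MARSS package.

```r
## Stand-ins for the real data; replace with your own [1 x T] matrices
TT <- 30
dat <- matrix(rnorm(TT), nrow = 1)        ## response y_t
covariate <- matrix(rnorm(TT), nrow = 1)  ## measured covariate d_t

## model list for regression with AR(1) errors
mod.list <- list(
  ## observation: y_t = a + D d_t + x_t  (no observation error)
  Z = matrix(1), A = matrix("a"), D = matrix("D"), d = covariate, R = matrix(0),
  ## state: x_t = B x_{t-1} + w_t  (AR(1) errors; u, C, c left at their zero defaults)
  B = matrix("b"), U = matrix(0), Q = matrix("q")
)

## fit (requires the MARSS package):
## library(MARSS)
## fit <- MARSS(dat, model = mod.list)
```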
Recall our simple model
\[ y_t = \underbrace{\mu}_{\text{fixed}} + \underbrace{e_t}_{\text{random}} \]
We can expand the random portion
\[ y_t = \underbrace{\mu}_{\text{fixed}} + ~ \underbrace{f_t + e_t}_{\text{random}} \]
\[ e_t \sim \text{N}(0, \sigma) \\ f_t \sim \text{N}(f_{t-1}, \gamma) \]
This is simply a random walk observed with error
\[ y_t = \mu + f_t + e_t ~ \text{with} ~ e_t \sim \text{N}(0, \sigma) \\ f_t = f_{t-1} + w_t ~ \text{with} ~ w_t \sim \text{N}(0, \gamma) \\ \Downarrow \\ y_t = a + x_t + v_t ~ \text{with} ~ v_t \sim \text{N}(0, R) \\ x_t = x_{t-1} + w_t ~ \text{with} ~ w_t \sim \text{N}(0, Q) \]
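A base-R sketch of a random walk observed with error: treating \(\sigma\) and \(\gamma\) as variances (as in the distributions above), the first differences of \(y_t\) are \(w_t + e_t - e_{t-1}\), with variance roughly \(\gamma + 2\sigma\). Parameter values are illustrative.

```r
set.seed(7)
TT <- 500
mu <- 2; sigma <- 0.5; gamma <- 1        ## illustrative variances
f <- cumsum(rnorm(TT, 0, sqrt(gamma)))   ## f_t = f_{t-1} + w_t
y <- mu + f + rnorm(TT, 0, sqrt(sigma))  ## y_t = mu + f_t + e_t
v <- var(diff(y))                        ## roughly gamma + 2 * sigma = 2
```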
We can expand the fixed portion
\[ y_t = \underbrace{\alpha + \beta x_t}_{\text{fixed}} + ~ \underbrace{f_t + e_t}_{\text{random}} \]
\[ e_t \sim \text{N}(0, \sigma) \\ f_t \sim \text{N}(f_{t-1}, \gamma) \]
\[ y_t = \alpha + \beta x_t + f_t + e_t ~ \text{with} ~ e_t \sim \text{N}(0, \sigma) \\ f_t = f_{t-1} + w_t ~ \text{with} ~ w_t \sim \text{N}(0, \gamma) \\ \Downarrow \\ y_t = a + D d_t + x_t + v_t ~ \text{with} ~ v_t \sim \text{N}(0, R) \\ x_t = x_{t-1} + w_t ~ \text{with} ~ w_t \sim \text{N}(0, Q) \]
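A hypothetical MARSS() specification for this final model, in the same hedged style as before: `dat` and `covariate` are placeholder stand-ins, the intercept \(a\) is `A` in MARSS's model list, and the fitting lines are commented out because they require the MARSS package.

```r
## Stand-ins for the real data; replace with your own [1 x T] matrices
TT <- 30
dat <- matrix(rnorm(TT), nrow = 1)        ## response y_t
covariate <- matrix(rnorm(TT), nrow = 1)  ## measured covariate d_t

## model list for regression with a temporal random effect
mod.list <- list(
  ## observation: y_t = a + D d_t + x_t + v_t, with v_t ~ N(0, R)
  Z = matrix(1), A = matrix("a"), D = matrix("D"), d = covariate, R = matrix("r"),
  ## state: x_t = x_{t-1} + w_t, with w_t ~ N(0, Q)  (random walk: B = 1, u = 0)
  B = matrix(1), U = matrix(0), Q = matrix("q")
)

## fit (requires the MARSS package):
## library(MARSS)
## fit <- MARSS(dat, model = mod.list)
```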