GARCH Models in R
Kris Boudt
Professor of finance and econometrics
Robert Engle and Tim Bollerslev, pioneers of ARCH/GARCH volatility modeling
For AR(MA) models for the mean, see the DataCamp course on Time Series Analysis.
To make the GARCH process realistic, we require that:
$\omega$, $\alpha$, and $\beta$ are $> 0$: this ensures that $\sigma^2_t > 0$ at all times.
$\alpha + \beta < 1$: this ensures that the predicted variance $\sigma^2_t$ always reverts to the long-run variance. The GARCH(1,1) recursion is:
$$ \sigma^{2}_{t} = \omega + \alpha e^{2}_{t-1} + \beta \sigma^{2}_{t-1} $$
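Taking unconditional expectations on both sides of this recursion (which is valid when $\alpha + \beta < 1$) gives the long-run variance that the parameter calibration and the unconditional-volatility line in the code below rely on:
$$ \bar{\sigma}^{2} = \frac{\omega}{1 - \alpha - \beta} $$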
# sp500ret is assumed to be an xts series of daily S&P 500 returns (preloaded in the course)
# Load xts, used below to turn the predictions into time series objects
library(xts)

# Set parameter values
alpha <- 0.1
beta <- 0.8
omega <- var(sp500ret) * (1 - alpha - beta)
# Then: var(sp500ret) = omega / (1 - alpha - beta)
# Compute the series of prediction errors
e <- sp500ret - mean(sp500ret) # Constant mean
e2 <- e ^ 2
# Predict the variance of each observation
nobs <- length(sp500ret)
predvar <- rep(NA, nobs)
# Initialize the process at the sample variance
predvar[1] <- var(sp500ret)
# Loop starting at 2 because of the lagged predictor
for (t in 2:nobs){
predvar[t] <- omega + alpha * e2[t - 1] + beta * predvar[t-1]
}
# Volatility is the square root of the predicted variance
predvol <- sqrt(predvar)
predvol <- xts(predvol, order.by = time(sp500ret))
# Compare with the unconditional (long-run) volatility
uncvol <- sqrt(omega / (1 - alpha - beta))
uncvol <- xts(rep(uncvol, nobs), order.by = time(sp500ret))
# Plot the predicted volatility and overlay the unconditional volatility
plot(predvol)
lines(uncvol, col = "red", lwd = 2)
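As a compact way to reuse this logic, the recursion above can be wrapped in a small helper function. The sketch below is only illustrative: the function name garch11_predvar and its interface are assumptions, not part of the course code.
# Illustrative helper (name and arguments are assumptions): filter a return
# series through the GARCH(1,1) variance recursion used above
garch11_predvar <- function(returns, omega, alpha, beta) {
  returns <- as.numeric(returns)
  e2 <- (returns - mean(returns)) ^ 2   # Squared prediction errors (constant mean)
  n <- length(returns)
  predvar <- rep(NA_real_, n)
  predvar[1] <- var(returns)            # Initialize at the sample variance
  for (t in 2:n) {
    predvar[t] <- omega + alpha * e2[t - 1] + beta * predvar[t - 1]
  }
  predvar
}
# Example use, reproducing predvol from above:
# predvol <- sqrt(garch11_predvar(sp500ret, omega, alpha, beta))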