anova method for objects of class "chandwich".
Compares two or more nested models using the adjusted likelihood ratio
test statistic (ALRTS) described in Section 3.5 of Chandler and Bate (2007).
The nesting must result from the simple constraint that a subset of the
parameters of the larger model is held fixed.
# S3 method for chandwich
anova(object, object2, ...)
object: An object of class "chandwich", returned by adjust_loglik.

object2: An object of class "chandwich", returned by adjust_loglik.

...: Further objects of class "chandwich" and/or arguments to be passed to compare_models. The name of any object of class "chandwich" passed via ... must not match any argument of compare_models or any argument of optim.
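Arguments intended for compare_models can therefore be supplied directly in the call to anova. A minimal sketch (it uses the objects large, medium and small created in the Examples below, and it assumes that compare_models accepts an approx argument controlling whether an approximation to the adjusted comparison is used):

# Pass approx = TRUE through ... to compare_models
# (hedged: 'approx' is assumed here to be a compare_models argument)
anova(large, medium, small, approx = TRUE)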
An object of class "anova" inheriting from class "data.frame", with four columns:

Model.Df: The number of parameters in the model.

Df: The decrease in the number of parameters compared to the model in the previous row.

ALRTS: The adjusted likelihood ratio test statistic.

Pr(>ALRTS): The p-value associated with the test that the model is a valid simplification of the model in the previous row.

The row names are the names of the model objects.
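Because the returned object is a data frame, individual columns can be extracted for further use. A small sketch, using the objects created in the Examples and assuming that the column names match the printed header shown there (Model.Df, Df, ALRTS, Pr(>ALRTS)):

res <- anova(large, medium, small, tiny)
# Adjusted test statistics and p-values
# (column names assumed from the printed table)
res$ALRTS
res[["Pr(>ALRTS)"]]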
For details of the adjusted likelihood ratio test see compare_models and Chandler and Bate (2007).

The objects of class "chandwich" need not be provided in nested order: they will be ordered inside anova.chandwich based on the values of attr(., "p_current").
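For example, the following sketch (using the objects created in the Examples, and assuming that the "p_current" attribute records the number of free parameters of each model) supplies the models in a non-nested order and should produce the same table as anova(large, medium, small, tiny):

# Number of free parameters, used to order the models internally
# (assumed interpretation of the "p_current" attribute)
attr(medium, "p_current")
# The models may be supplied in any order
anova(tiny, large, small, medium)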
Chandler, R. E. and Bate, S. (2007). Inference for clustered data using the independence loglikelihood. Biometrika, 94(1), 167-183. doi:10.1093/biomet/asm015
compare_models for an adjusted likelihood ratio test of two models.

adjust_loglik to adjust a user-supplied loglikelihood function.

conf_intervals for confidence intervals for individual parameters.

conf_region for a confidence region for pairs of parameters.
# -------------------------- GEV model, owtemps data -----------------------
# ------------ following Section 5.2 of Chandler and Bate (2007) -----------
gev_loglik <- function(pars, data) {
o_pars <- pars[c(1, 3, 5)] + pars[c(2, 4, 6)]
w_pars <- pars[c(1, 3, 5)] - pars[c(2, 4, 6)]
if (isTRUE(o_pars[2] <= 0 | w_pars[2] <= 0)) return(-Inf)
o_data <- data[, "Oxford"]
w_data <- data[, "Worthing"]
check <- 1 + o_pars[3] * (o_data - o_pars[1]) / o_pars[2]
if (isTRUE(any(check <= 0))) return(-Inf)
check <- 1 + w_pars[3] * (w_data - w_pars[1]) / w_pars[2]
if (isTRUE(any(check <= 0))) return(-Inf)
o_loglik <- log_gev(o_data, o_pars[1], o_pars[2], o_pars[3])
w_loglik <- log_gev(w_data, w_pars[1], w_pars[2], w_pars[3])
return(o_loglik + w_loglik)
}
# Initial estimates (method of moments for the Gumbel case)
sigma <- as.numeric(sqrt(6 * diag(var(owtemps))) / pi)
mu <- as.numeric(colMeans(owtemps) - 0.57722 * sigma)
init <- c(mean(mu), -diff(mu) / 2, mean(sigma), -diff(sigma) / 2, 0, 0)
# Log-likelihood adjustment of the full model
par_names <- c("mu[0]", "mu[1]", "sigma[0]", "sigma[1]", "xi[0]", "xi[1]")
large <- adjust_loglik(gev_loglik, data = owtemps, init = init,
                       par_names = par_names)
# Log-likelihood adjustment of some smaller models: xi[1] = 0 etc
medium <- adjust_loglik(larger = large, fixed_pars = "xi[1]")
small <- adjust_loglik(larger = medium, fixed_pars = c("sigma[1]", "xi[1]"))
tiny <- adjust_loglik(larger = small,
                      fixed_pars = c("mu[1]", "sigma[1]", "xi[1]"))
anova(large, medium, small, tiny)
#> Analysis of (Adjusted) Deviance Table
#>
#>        Model.Df Df  ALRTS Pr(>ALRTS)
#> large         6
#> medium        5  1  6.356    0.01170 *
#> small         4  1  4.251    0.03924 *
#> tiny          3  1 81.713    < 2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
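# When only two nested models are involved, the same adjusted test can be
# obtained from compare_models directly. A sketch, assuming the larger and
# smaller arguments of compare_models:
comp <- compare_models(larger = large, smaller = medium)
comp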