The VGAM package’s vglm() function, like the survey package’s svymle() function, allows maximum likelihood fitting where linear predictors are added to one or more parameters of a distribution, but vglm() is a lot faster and has many distributions already built in. This is how we make svyVGAM handle complex sampling.
I will write $\beta$ for the regression parameters, $\theta$ for the base parameters of the response distribution, and $\eta$ for the linear predictors. So, for example, in a classical linear model there would be two parameters $\theta$: the mean ($\theta_1$) and variance ($\theta_2$). The mean would have a set of regression parameters and the variance would have a single parameter. Collectively, these would be $\beta$ (maybe $\beta_{11}\dots\beta_{1p}$ and $\beta_{21}$), and the two combinations that are plugged in as $\theta$ would be called $\eta_1$ and $\eta_2$. The big advantage of VGAM is that it does a lot of the work for the user: while the user can add new families, there are many pre-prepared ones, and there are built-in ways to constrain parameters to be equal or related in some other way.
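To make the notation concrete, here is a minimal base-R sketch (deliberately not using VGAM itself) of the classical linear model in this form: $\eta_1$ is the linear predictor for the mean and $\eta_2$ is an intercept-only predictor for the log standard deviation, a common parameterisation that keeps the scale parameter positive. All names and data here are illustrative.

```r
# Toy data: the classical linear model viewed as a model with two
# linear predictors (base R only; VGAM itself is not used here).
set.seed(1)
n <- 100
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n, sd = 0.5)

# theta1 = mean, theta2 = sd; eta1 = beta11 + beta12 * x (regression),
# eta2 = beta21 (intercept only, on the log scale so that sd > 0)
negloglik <- function(beta) {
  eta1 <- beta[1] + beta[2] * x   # linear predictor for the mean
  eta2 <- beta[3]                 # linear predictor for log(sd)
  -sum(dnorm(y, mean = eta1, sd = exp(eta2), log = TRUE))
}
fit <- optim(c(0, 0, 0), negloglik, method = "BFGS")
round(fit$par, 2)  # roughly c(2, 3, log(0.5))
```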
To provide survey versions of vglm(), we need to (a) get design-consistent point estimates out of vglm(), and (b) construct design-based standard errors for the fit. The first is easy: vglm() accepts frequency weights, which are equivalent to sampling weights for point estimation with independent observations.
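A quick way to see this equivalence, using base R’s glm() rather than vglm() for simplicity: fitting with integer weights gives the same point estimates as fitting the data set with each row repeated that many times. (The data here are simulated for illustration.)

```r
# Frequency weights versus expanded data: same point estimates.
set.seed(2)
d <- data.frame(x = rnorm(20), w = sample(1:4, 20, replace = TRUE))
d$y <- rbinom(20, 1, plogis(d$x))

fit_w  <- glm(y ~ x, family = binomial, data = d, weights = w)
d_long <- d[rep(seq_len(nrow(d)), d$w), ]        # expand by frequency
fit_e  <- glm(y ~ x, family = binomial, data = d_long)

all.equal(coef(fit_w), coef(fit_e), tolerance = 1e-6)  # TRUE
```

The standard errors from fit_w are of course wrong for sampling weights, which is exactly why part (b) is needed.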
The second can be done in two ways: resampling (which is straightforward, if potentially slow), and linearisation. Linearisation requires computing the influence functions of the parameters $$h_i(\beta) = -\widehat{\cal I}^{-1}_w U_i(\beta)$$ where $\widehat{\cal I}_w$ is the weighted estimate of the population Fisher information, $U_i = \partial_\beta \ell_i(\beta)$ is the loglikelihood contribution of the $i$th observation, and $w_i$ is its weight. The influence functions have the property $$\hat\beta - \beta_0 = \sum_i w_i h_i(\beta_0) + o_p(\|\hat\beta - \beta_0\|)$$ so that the variance of $\hat\beta$ is asymptotically the variance of the population total of the influence functions. The survey package provides a function svyrecvar() to estimate standard errors given the influence functions, or (a bit less efficiently) you can just call svytotal().
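In the simplest possible case, a weighted mean with known unit variance, the influence-function identity can be checked exactly rather than just asymptotically (a base-R toy, not svyVGAM’s actual code): $U_i(\mu) = y_i - \mu$, the weighted information is $\sum_i w_i$, and so $h_i(\mu) = (y_i - \mu)/\sum_j w_j$.

```r
# Influence functions for a weighted mean: the identity
#   muhat - mu0 = sum_i w_i h_i(mu0)
# holds exactly in this linear case.
set.seed(3)
y <- rnorm(50)
w <- runif(50, 1, 10)
muhat <- weighted.mean(y, w)

h   <- function(mu) (y - mu) / sum(w)    # influence functions
mu0 <- 0.3                               # arbitrary reference value
all.equal(muhat - mu0, sum(w * h(mu0)))  # TRUE
```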
A design object of class svyrep.design contains sets of replicate weights analogous to jackknife or bootstrap replicates. We need to call vglm() with each set of weights. It should be helpful to specify the full-sample estimates as starting values.
One complication is that sets of replicate weights will typically include some zeroes, which vglm() does not allow (eg, a bootstrap or jackknife resample will omit some observations). We set these to $10^{-9}$ times the maximum weight, which has the desired effect that the observations are present in the fit but with (effectively) zero weight.
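A minimal sketch of that replacement for one set of replicate weights (the variable names here are illustrative, not svyVGAM internals):

```r
# Replace exact zeros in a replicate-weight vector with a tiny positive
# value, so every observation stays in the fit but contributes
# (effectively) nothing.
w_rep <- c(2.1, 0, 3.4, 0, 1.7)   # e.g. one jackknife replicate
w_fix <- ifelse(w_rep == 0, 1e-9 * max(w_rep), w_rep)
# w_fix is strictly positive; the former zeros are now ~3.4e-09
```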
The cov.unscaled slot of a summary.vglm object contains the inverse of the estimated population Fisher information, $\widehat{\cal I}^{-1}_w$. The vglm object provides $\partial_\eta \ell_i(\eta)$ for the base parameters $\theta$, and also the model matrices that specify $\partial_\beta \eta$, so we can construct $U_i$. We need to take care with the constraints, which can cause a coefficient $\beta$ to appear in more than one linear predictor.
Suppose $\beta_x$ appears in both $\eta_1$ and $\eta_2$, with $x$ values $x_1$ and $x_2$. The chain rule tells us $$\frac{\partial \ell_i}{\partial \beta_x} = \frac{\partial \ell_i}{\partial \eta_1}\frac{\partial \eta_1}{\partial \beta_x} + \frac{\partial \ell_i}{\partial \eta_2}\frac{\partial \eta_2}{\partial \beta_x} = \frac{\partial \ell_i}{\partial \eta_1}x_{1i} + \frac{\partial \ell_i}{\partial \eta_2}x_{2i}.$$ We might have $x_1 \equiv x_2\ (= x)$, but that just means $$\frac{\partial \ell_i}{\partial \beta_x} = \frac{\partial \ell_i}{\partial \eta_1}x_i + \frac{\partial \ell_i}{\partial \eta_2}x_i.$$
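The shared-coefficient chain rule can be checked numerically on a toy log-likelihood with two linear predictors (base R only; the quadratic form of $\ell$ is just for illustration and is not one of VGAM’s families):

```r
# Toy check of the chain rule when one coefficient appears in two
# linear predictors:
#   eta1 = beta * x1,  eta2 = beta * x2
#   l(eta1, eta2) = -(y1 - eta1)^2/2 - (y2 - eta2)^2/2
x1 <- 1.5; x2 <- -0.7; y1 <- 2; y2 <- 1
loglik <- function(beta) {
  eta1 <- beta * x1; eta2 <- beta * x2
  -(y1 - eta1)^2 / 2 - (y2 - eta2)^2 / 2
}
beta <- 0.4
# chain rule: dl/dbeta = (dl/deta1) * x1 + (dl/deta2) * x2
dl_deta1 <- y1 - beta * x1
dl_deta2 <- y2 - beta * x2
analytic <- dl_deta1 * x1 + dl_deta2 * x2
numeric  <- (loglik(beta + 1e-6) - loglik(beta - 1e-6)) / 2e-6
all.equal(analytic, numeric, tolerance = 1e-6)  # TRUE
```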
The constraint matrix $C$ consists of 1s and 0s. If there are $p$ parameters in $M$ equations the matrix is $M \times p$, with $C_{kj} = 1$ if parameter $j$ is in linear predictor $k$. In the default, unconstrained setup, the constraint matrix consists of an $M \times M$ identity matrix for each parameter, pasted columnwise to give an $M \times pM$ matrix. In the proportional odds model, as another example, there are separate intercepts for each linear predictor but the other parameters all appear in every linear predictor: the first $M \times M$ block is the identity, and the rest of the matrix is a column of 1s for each predictor variable. Another way to say this is that $C_{kj} = \partial \eta_k/\partial(\beta_j x_j)$.
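For instance, the combined constraint matrix for a proportional-odds-style model with $M = 3$ linear predictors and two shared slopes can be assembled like this (a base-R sketch of the layout just described; the column ordering is illustrative, not VGAM’s internal storage):

```r
# Constraint matrix layout: separate intercepts, shared slopes.
M <- 3
C_intercept <- diag(M)          # each eta_k gets its own intercept
C_slope     <- matrix(1, M, 1)  # one slope appears in every eta_k
C_all <- cbind(C_intercept, C_slope, C_slope)  # M x p with p = 3 + 1 + 1
C_all
```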
So, if we want $\partial_\beta \ell_i$, the chain rule says $$\frac{\partial \ell_i}{\partial \beta_j} = \sum_k \frac{\partial \ell_i}{\partial \eta_k}\, C_{kj}\, x_{ikj}.$$
There is one further complication. The model.matrix method for vglm objects returns a model matrix of dimension $Mn \times p$ rather than $n \times p$, so we need to sum over the rows for each observation, which can be identified from the row names, and then rescale. The right standardisation appears to come from defining $$\tilde C_{kj}=\frac{C_{kj}}{\sum_k C_{kj}}$$ and then $$\frac{\partial \ell_i}{\partial \beta_j}=\sum_k \frac{\partial \ell_i}{\partial \eta_k} \tilde C_{kj}x_{ikj}.$$
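The row-summing and standardisation steps might be sketched as follows (base R on synthetic inputs; obs_id stands in for the observation labels that would be recovered from the model matrix row names):

```r
# Collapse an Mn x p model matrix to n x p by summing the M rows that
# belong to each observation, then column-standardise a constraint
# matrix so each column of Ctilde sums to 1.
set.seed(4)
M <- 2; n <- 4; p <- 3
bigX   <- matrix(rnorm(M * n * p), M * n, p)   # stand-in for model.matrix(fit)
obs_id <- rep(seq_len(n), each = M)            # which observation each row is
X <- rowsum(bigX, obs_id)                      # now n x p

# Ctilde[k, j] = C[k, j] / sum_k C[k, j]
C <- cbind(diag(M), matrix(1, M, 1))
Ctilde <- sweep(C, 2, colSums(C), "/")
```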