(I'm not sure whether this is a comment or an answer, but it's a bit long and might be an answer.)
- The proximal cause of your difficulty in reproducing the result is that `lme4` uses both environments and reference classes: these are tricky to "serialize", i.e. to translate to a linear stream that can be saved via `dput()` or `save()`. (Could you please try `save()` and see if it works better than `dput()`? See the sketch after this list.)
- In addition, both environments and reference classes use pass-by-reference semantics, so operating on the saved model can change it. `anova()` automatically refits the model, which makes some tiny but non-zero changes in the internal structure of the saved model object (we are still trying to track this down).
- @alexkeil's comment is wrong: the nonlinear optimizers used within `lme4` make no calls to the pseudo-random number generator. They are deterministic (but the two points above explain why things might still look a bit weird).
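On the first point, here is a minimal sketch of the `save()`/`load()` route (the file name is just for illustration); unlike `dput()`, binary serialization preserves environments:

library(lme4)
fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
save(fm, file = "fm.rda")  ## binary serialization keeps the environments intact
rm(fm)
load("fm.rda")             ## restores 'fm' in the workspace
fixef(fm)                  ## the restored object is usable again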
To allay your concerns about the fit, I would check it by computing the gradient and Hessian of the deviance function at the fitted parameters, e.g.
library(lme4)
library(numDeriv)
fm1 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
dd <- update(fm1, devFunOnly = TRUE)  ## deviance as a function of the parameters
params <- getME(fm1, "theta")         ## also need "beta" for glmer fits
grad(dd, params)
## all values 'small', say < 1e-3
## [1] 0.0002462423 0.0003276917 0.0003415010
eigen(solve(hessian(dd, params)), only.values = TRUE)$values
## all values positive and of similar magnitude
## [1] 0.029051631 0.002757233 0.001182232
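(Interpretation: a near-zero gradient means the optimizer stopped at a stationary point of the deviance, and a positive definite Hessian, equivalently an inverse Hessian with all-positive eigenvalues, means that point is a genuine local minimum.) A related quick check of the determinism point above: refitting the same model from scratch in the same session should reproduce the parameters exactly. A minimal sketch, reusing `fm1` from the block above:

fm1b <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
## should be TRUE: no RNG calls, so refits on the same machine are bit-identical
identical(getME(fm1, "theta"), getME(fm1b, "theta"))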
We are in the process of implementing similar checks to run automatically within `lme4`.
That said, I would still love to see your example, if there's a way to reproduce it relatively easily.
PS: in order to be using `bobyqa`, you must either be using `glmer` or have used `lmerControl` to modify the default optimizer choice ... ??
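For reference, here is a minimal sketch of the `lmerControl` route (assuming a current `lme4`, where `"bobyqa"` is one of the built-in optimizer choices):

## explicitly request bobyqa instead of the default lmer optimizer
fm_b <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
             control = lmerControl(optimizer = "bobyqa"))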