Not all questions require code examples. You don't deserve to be snarked at for being new, and I'm sorry people did. Here is the answer:
The difference between the two models is not significant.
Here is what you can do about it:
- Check to make sure that the terms of one model object are a superset of the terms of the other; otherwise, the default `anova` test is invalid to begin with (you could instead compare such non-nested models using AIC, but that belongs in a separate question). A quick way to run this check in code is included in the sketch after this list. I'm actually really curious to see a nested pair of models that manages to be that non-significant, but again, it's not necessary for answering this question.
- If you checked, and the models are nested, and this is an analysis you are doing manually, write p=1.0 in your report and call it a day.
- If the models are nested, and the above feels like cheating, here's how to do it the hard way. What you are really asking `anova` is whether that one variable by which they differ makes a significant contribution to fit. Take the "larger" model and do `summary(BAR)`. The p-value corresponding to the variable present in `BAR` but missing in `FOO` is your p-value! And it's probably equal to 1. And the square of the t-statistic is the F-value (see the sketch after this list).
- If the models are nested, this is an analysis you are doing programmatically, and the absence of a p-value breaks stuff elsewhere in your script, just do `anova(FOO, BAR)[, 5:6]` to get `NA`s instead of blanks... but then again, if you were doing it programmatically you would have already tried that.
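
Here is a minimal sketch of the bullets above, keeping the `FOO`/`BAR` names but with the data and predictors (`x1`, `x2`) invented purely for illustration:

```r
## Toy data: x2 contributes nothing, so the larger model adds no real fit.
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 2 * d$x1 + rnorm(100)

FOO <- lm(y ~ x1, data = d)           # smaller model
BAR <- lm(y ~ x1 + x2, data = d)      # larger model, adds x2

## Nesting check: every term of FOO should appear among the terms of BAR.
all(attr(terms(FOO), "term.labels") %in% attr(terms(BAR), "term.labels"))

## The usual nested comparison; the second row tests the added x2 term.
anova(FOO, BAR)

## The "hard way": the x2 row of summary(BAR) carries the same test --
## its p-value matches Pr(>F) above, and the squared t-statistic is the F.
summary(BAR)$coefficients["x2", ]
summary(BAR)$coefficients["x2", "t value"]^2

## Programmatic use: columns 5:6 are F and Pr(>F); subsetting them gives
## NA rather than blank cells, so downstream code has something to test.
anova(FOO, BAR)[, 5:6]
```

If the nesting check comes back `FALSE`, the `anova` comparison is not meaningful in the first place, and something like `AIC(FOO, BAR)` is the more appropriate comparison, as noted in the first bullet.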
Good luck!