Question

I have results from a regression analysis conducted with another program and I would like to test with R whether they are significant. I know that ls.diag() calculates standard errors and t-tests for regression results, but it requires a very specific input format (i.e., the result of lsfit()), so I don't think I can use it. Is there any function in R that calculates standard errors and t-tests for a regression analysis and lets me simply pass the relevant coefficients to the function by hand?


Solution

I'm not sure this is exactly what you're looking for, but here's an outline.

# this is a model obtained from ?lm 
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)
summary(lm.D9) # this is our target

Suppose we only have the regression coefficients, their standard errors, and the sample size:

beta <- coef(lm.D9)
errorBeta <- summary(lm.D9)$coefficients[,2]
n <- length(weight) # the sample size
k <- length(beta) # number of regression parameters

I think this matches your case. If you don't have the coefficient standard errors, you will have to estimate them first, which is straightforward if you have the underlying data (see the sketch below).
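For completeness, here is a minimal sketch of that estimation, assuming you have access to the design matrix and residuals (taken here from lm.D9 purely for illustration):

X <- model.matrix(lm.D9)            # n x k design matrix
res <- residuals(lm.D9)             # residual vector
sigma2 <- sum(res^2) / (n - k)      # residual variance estimate
errorBeta_est <- sqrt(diag(sigma2 * solve(t(X) %*% X)))
all.equal(errorBeta_est, errorBeta) # TRUE: matches summary(lm.D9)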

Once you have the regression coefficients and their standard errors, you can compute the t-statistics:

t_stats <- beta/errorBeta

The rule of thumb is that if |t_stats| >= 2, the coefficient is statistically significant at the 5% level. If you want the exact p-values, use:

pt(abs(t_stats), n-k, lower.tail=FALSE)*2

If a p-value is greater than 0.05, the associated coefficient is not statistically significant at that level.
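To answer your question directly, you can wrap these steps in a small helper of your own that takes the hand-supplied values (coef_test is just a hypothetical name, not a built-in R function):

coef_test <- function(beta, se, n) {
  k <- length(beta)                    # number of regression parameters
  t_stats <- beta / se                 # t-statistics
  p_vals <- 2 * pt(abs(t_stats), df = n - k, lower.tail = FALSE)
  cbind(Estimate = beta, `Std. Error` = se,
        `t value` = t_stats, `Pr(>|t|)` = p_vals)
}

coef_test(beta, errorBeta, n)          # reproduces summary(lm.D9)$coefficients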

All you need are the coefficients, their standard errors, and the sample size. Without those, you can't do it.

Licensed under: CC-BY-SA with attribution