Question

I'm using statsmodels for logistic regression analysis in Python. For example:

import statsmodels.api as sm
import numpy as np
x = np.arange(0, 1, 0.01)
y = np.random.rand(100)
y[y<=x] = 1
y[y!=1] = 0
x = sm.add_constant(x)
lr = sm.Logit(y,x)
result = lr.fit().summary()

But I want to define different weightings for my observations. I'm combining 4 datasets of different sizes, and want to weight the analysis such that the observations from the largest dataset do not dominate the model.


Solution

Took me a while to work this out, but it is actually quite easy to create a logit model in statsmodels with weighted rows / multiple observations per row. Here's how it's done:

import statsmodels.api as sm
# endog is a two-column array of (Successes, Failures) counts per row,
# so each row can represent many observations; the Binomial family uses
# the logit link by default
logmodel = sm.GLM(
    trainingdata[['Successes', 'Failures']],
    trainingdata[['const', 'A', 'B', 'C', 'D']],
    family=sm.families.Binomial(),
).fit()

OTHER TIPS

Not sure about statsmodels, but with scikit-learn it is very easy. You could use an SGDClassifier with sample_weight.

Example:

from sklearn import linear_model

X = [[0., 0.], [1., 1.]]
y = [0, 1]
weight = [0.5, 0.5]
# loss="log_loss" gives logistic regression
# (it was named loss="log" in scikit-learn versions before 1.3)
clf = linear_model.SGDClassifier(loss="log_loss")
clf.fit(X, y, sample_weight=weight)
print(clf.predict([[-0.8, -1]]))
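To address the original question of combining datasets of different sizes, one common approach is to weight each observation inversely to the size of its source dataset, so every dataset contributes the same total weight to the fit. A sketch under that assumption, using scikit-learn's LogisticRegression (which also accepts sample_weight in fit); the dataset sizes and synthetic data are made up for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend we concatenated 4 datasets of these sizes (illustrative numbers)
sizes = [1000, 200, 100, 50]
n = sum(sizes)

# Synthetic features and a binary outcome driven by the first feature
X = rng.normal(size=(n, 2))
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Weight each row by 1/size of its source dataset, so each dataset
# contributes equal total weight and the largest cannot dominate
weight = np.concatenate([np.full(s, 1.0 / s) for s in sizes])

clf = LogisticRegression()
clf.fit(X, y, sample_weight=weight)
print(clf.coef_, clf.intercept_)
```

The same weight vector could be passed to the SGDClassifier example above, or (scaled appropriately) to statsmodels' GLM via freq_weights.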
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow