This line calls the prediction function from libsvm. It looks like this (but please take a look at the whole _dense_predict function):
libsvm.predict(
X, self.support_, self.support_vectors_, self.n_support_,
self.dual_coef_, self._intercept_,
self.probA_, self.probB_, svm_type=svm_type, kernel=kernel,
degree=self.degree, coef0=self.coef0, gamma=self._gamma,
cache_size=self.cache_size)
You can call this function directly, passing it all the relevant information, and you will obtain a raw prediction. In order to do this, you must import libsvm: from sklearn.svm import libsvm.
If your initial fitted classifier is called svc, then you can obtain all the relevant information from it by replacing all the self keywords with svc and keeping the values. If svc._impl gives you "c_svc", then you set svm_type=0.
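As a sketch of what this boils down to: in recent scikit-learn versions the sklearn.svm.libsvm module is no longer importable (it was removed as a public module), but you can reconstruct the raw decision from the same fitted attributes, using the documented relation decision = K(x, support_vectors_) · dual_coef_ + intercept_ for the binary case. This is an illustration of the idea, not the author's exact call:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
svc = SVC(kernel="rbf", gamma=0.1).fit(X, y)  # gamma fixed so we can reuse it below

# Kernel between the new points and the support vectors
K = rbf_kernel(X, svc.support_vectors_, gamma=0.1)

# Binary case: the decision value is dual_coef_ . K + intercept_,
# and the predicted class is classes_[1] when it is positive
decision = K @ svc.dual_coef_.ravel() + svc.intercept_
pred = svc.classes_[(decision > 0).astype(int)]

print(np.array_equal(pred, svc.predict(X)))  # → True
```

This reproduces svc.predict without going through the libsvm wrapper at all, which is often enough when you only need the raw decision values.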
Note that at the beginning of the _dense_predict function you have X = self._compute_kernel(X). If your data is X, then you need to transform it by doing K = svc._compute_kernel(X), and call the libsvm.predict function with K as the first argument.
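To make the behaviour of this step concrete: for the built-in string kernels ("rbf", "linear", ...) _compute_kernel simply returns X unchanged; it only computes a Gram matrix when the kernel is a callable. A small sketch (note that _compute_kernel is a private method and may change between scikit-learn versions):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = (X[:, 0] > 0).astype(int)

# With a callable kernel, _compute_kernel evaluates the kernel between
# the new data and the training data...
svc = SVC(kernel=lambda A, B: A @ B.T).fit(X, y)
K = svc._compute_kernel(X)
print(K.shape)  # (60, 60): one row per new sample, one column per training sample

# ...whereas with a built-in string kernel it returns X unchanged
svc_rbf = SVC(kernel="rbf").fit(X, y)
print(np.array_equal(svc_rbf._compute_kernel(X), X))  # → True
```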
Scoring is independent of all this. Take a look at sklearn.metrics, where you will find e.g. accuracy_score, which is the default score in SVM.
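For instance, the score method of a fitted SVC is exactly the mean accuracy that accuracy_score computes, so the two agree (a minimal check):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, random_state=0)
svc = SVC().fit(X, y)

# The default score of a classifier is mean accuracy on the given data
print(svc.score(X, y) == accuracy_score(y, svc.predict(X)))  # → True
```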
This is of course a somewhat suboptimal way of doing things, but in this specific case, if it is impossible (I didn't check very hard) to set the coefficients directly, then going into the code, seeing what it does, and extracting the relevant part is surely an option.