Linear classifiers are easy: they have a coef_ and an intercept_ attribute, described in the class docstrings. Those are regular NumPy arrays, so you can dump them to disk with standard NumPy functions.
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
>>> from sklearn.svm import LinearSVC
>>> clf = LinearSVC().fit(iris.data, iris.target)
Now let's dump this to a pseudo-file:
>>> import numpy as np
>>> from io import BytesIO
>>> outfile = BytesIO()
>>> np.savetxt(outfile, clf.coef_)
>>> print(outfile.getvalue().decode())
1.842426121444650788e-01 4.512319840786759295e-01 -8.079381916413134190e-01 -4.507115611351246720e-01
5.201335313639676022e-02 -8.941985347763323766e-01 4.052446671573840531e-01 -9.380586070674181709e-01
-8.506908158338851722e-01 -9.867329247779884627e-01 1.380997337625912147e+00 1.865393234038096981e+00
That's something you can parse from Java, right?
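As a quick sanity check that the format round-trips (this part is a sketch of my own, not from the snippet above), you can read the same text back with np.loadtxt and compare it against the fitted coefficients:

```python
import numpy as np
from io import BytesIO
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

iris = load_iris()
clf = LinearSVC().fit(iris.data, iris.target)

# Write coef_ as plain text, then parse it back.
buf = BytesIO()
np.savetxt(buf, clf.coef_)
buf.seek(0)
restored = np.loadtxt(buf)

# The round-tripped array matches the original coefficients.
assert np.allclose(restored, clf.coef_)
```

Any Java-side parser that splits lines on whitespace and calls Double.parseDouble should recover the same matrix.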
Now to get a score for the k'th class on a sample x, you need to evaluate
np.dot(x, clf.coef_[k]) + clf.intercept_[k]
# ==
(sum(x[i] * clf.coef_[k, i] for i in range(clf.coef_.shape[1]))
 + clf.intercept_[k])
which is also doable, I hope. The class with the highest score wins.
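To make "highest score wins" concrete, here is a small sketch (my own check, assuming the one-vs-rest layout that LinearSVC uses) showing that the hand-rolled scores agree with clf.predict:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import LinearSVC

iris = load_iris()
clf = LinearSVC().fit(iris.data, iris.target)

x = iris.data[0]
# Score each class by hand: dot product with that class's row
# of coef_, plus the matching intercept.
scores = [np.dot(x, clf.coef_[k]) + clf.intercept_[k]
          for k in range(clf.coef_.shape[0])]

# The argmax of the scores is the predicted class.
assert np.argmax(scores) == clf.predict([x])[0]
```

This is exactly the loop a Java port would run: one dot product and one addition per class, then an argmax.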
For kernel SVMs, the situation is more complicated because you need to replicate the one-vs-one decision function, as well as the kernels, in the Java code. The SVM model is stored on SVC objects in the attributes support_vectors_ and dual_coef_.
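To give a flavor of what the Java side has to replicate, here is a hedged sketch of the binary RBF decision function in terms of those attributes. I restrict it to two classes and fix gamma=0.5 (an arbitrary choice for illustration) to sidestep the one-vs-one bookkeeping, which adds another layer on top of this for three or more classes:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

iris = load_iris()
# Binary subproblem (classes 0 and 1) to avoid one-vs-one voting.
mask = iris.target < 2
X, y = iris.data[mask], iris.target[mask]

gamma = 0.5  # fixed explicitly so the kernel below uses the same value
clf = SVC(kernel="rbf", gamma=gamma).fit(X, y)

x = X[0]
# RBF kernel between x and every support vector:
# K(sv, x) = exp(-gamma * ||sv - x||^2)
k = np.exp(-gamma * np.sum((clf.support_vectors_ - x) ** 2, axis=1))

# Decision value: kernel values weighted by the dual coefficients,
# plus the intercept. Its sign picks the class.
decision = np.dot(clf.dual_coef_[0], k) + clf.intercept_[0]

assert np.allclose(decision, clf.decision_function([x])[0])
```

A Java port would need the same pieces: the support vectors, the dual coefficients, the intercepts, and the kernel function with its fitted parameters.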