I believe that by "Neural Networks" you mean a Multi-layer Perceptron (MLP). In your case, it would classify fingerprints. If your algorithm already recognizes them, then you don't need an MLP. Otherwise, the extracted features (containing minutiae points), say X, are the input of your network (Figure 1), and y are the fingerprint labels. The MLP will learn a function that takes the features as input and returns the probability of each class they might belong to (computed by the softmax function). This is easily done in scikit-learn. To demonstrate, I am using Logistic Regression in the snippet below, but all supervised classifiers in scikit-learn follow the same fit/predict procedure.
from sklearn import linear_model
# Initialize Logistic Regression
lr = linear_model.LogisticRegression(C=10)
# Let X be the training data (Fingerprint Features)
X = [[0,1],[2,3],[1,5]]
# Let y be their labels
y = [0, 1, 2]
# Train Logistic Regression through X and y
lr.fit(X,y)
Out[284]:
LogisticRegression(C=10, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, penalty='l2', random_state=None, tol=0.0001)
# Return their probabilities
lr.predict_proba(X)
Out[285]:
array([[ 0.64974581, 0.144104 , 0.20615019],
[ 0.04442027, 0.81437946, 0.14120027],
[ 0.04096602, 0.0840223 , 0.87501168]])
The first value, 0.64974581, is the probability assigned to class 0 for the sample [0,1] in X. Since it is the highest in that row vector, predict would return the value 0 for it; for the second and third row vectors it would return 1 and 2, respectively.
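To make that argmax behavior concrete, here is the same snippet as a self-contained sketch that calls predict directly; it should return, for each row, the class whose probability is highest in the corresponding row of predict_proba:

```python
from sklearn import linear_model

# Same toy training data as above
X = [[0, 1], [2, 3], [1, 5]]
y = [0, 1, 2]

lr = linear_model.LogisticRegression(C=10)
lr.fit(X, y)

# predict() returns, per sample, the class with the highest
# probability in the corresponding row of predict_proba()
print(lr.predict(X))
print(lr.predict_proba(X))
```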
(Figure 1 : Multi-layer Perceptron)
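If you want an actual MLP rather than Logistic Regression, and your scikit-learn version is 0.18 or later, you can swap in MLPClassifier with the exact same fit/predict_proba calls. A minimal sketch (the hidden layer size and max_iter here are illustrative, not tuned):

```python
from sklearn.neural_network import MLPClassifier

# Same toy fingerprint features and labels as above
X = [[0, 1], [2, 3], [1, 5]]
y = [0, 1, 2]

# One hidden layer of 10 units; for multi-class problems the output
# layer applies softmax, so predict_proba returns class probabilities
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X, y)

print(mlp.predict_proba(X))
```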