Question

We have a project to develop an AFIS (Automated Fingerprint Identification System). We have read many papers in the field and are still reading the Handbook of Fingerprint Recognition. What we aren't able to understand is: where exactly are we going to use neural networks?

So far we have:

1) Histogram equalization + DFT (if necessary) to improve contrast and remove noise.
2) Image binarization.
3) Image thinning (morphological thinning).

These steps are okay. After that comes feature extraction: working in 3x3 windows, we search for minutiae points, looking for patterns such as "a 1 in the middle with only one other 1 neighbor, so it is a termination". Then, having the minutiae points, we use the Poincaré index method to get the singular points (both techniques are sketched below). But after that, while we have the minutiae and singular points, where are we going to use a neural network? If for classification, how? Since we extracted the singular points with the Poincaré index, why do we need an ANN to classify? If for extracting minutiae points, haven't we already done that with feature extraction? Any resources you might want to point out? Thanks.
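For reference, the 3x3 scan described above is the classical crossing-number test. Below is a minimal sketch of it, assuming the thinned image is a NumPy array with ridge pixels equal to 1; the function name and structure are illustrative, not taken from any particular library.

import numpy as np

def minutiae_by_crossing_number(skeleton):
    """Classify ridge pixels of a thinned binary image by their crossing
    number: CN == 1 is a ridge ending (termination), CN == 3 a bifurcation."""
    # The 8 neighbors, walked in circular order around the center pixel
    ring_offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]
    terminations, bifurcations = [], []
    rows, cols = skeleton.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            ring = [int(skeleton[r + dr, c + dc]) for dr, dc in ring_offsets]
            # Crossing number: half the number of 0/1 transitions on the ring
            cn = sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                terminations.append((r, c))
            elif cn == 3:
                bifurcations.append((r, c))
    return terminations, bifurcations

And a sketch of the Poincaré index at one block of the orientation field, assuming the field is a NumPy array of block orientations in radians, defined modulo pi:

def poincare_index(orientation, r, c):
    """Poincare index at block (r, c): about +1/2 marks a core,
    -1/2 a delta, +1 a whorl center, and 0 an ordinary block."""
    ring_offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                    (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [orientation[r + dr, c + dc] for dr, dc in ring_offsets]
    total = 0.0
    for i in range(8):
        d = angles[(i + 1) % 8] - angles[i]
        # Orientations live modulo pi; wrap differences into (-pi/2, pi/2]
        if d > np.pi / 2:
            d -= np.pi
        elif d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)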


Solution

I believe when you say neural networks you mean a multi-layer perceptron (MLP). In your case, its purpose would be to classify fingerprints. If your algorithm already recognizes them, then you don't need an MLP. Otherwise, the extracted features (containing the minutiae points), say X, are the input of your network (Figure 1), and y are the fingerprint labels. The MLP will learn a function that takes the features as input and returns the probability of each class (computed by the softmax function). This is easily done in scikit-learn. To demonstrate, I am using logistic regression in the snippet below, but supervised classifiers in scikit-learn all follow the same fit/predict interface.

from sklearn import linear_model

# Initialize logistic regression (C is the inverse regularization strength)
lr = linear_model.LogisticRegression(C=10)

# Let X be the training data (fingerprint features, one row per sample)
X = [[0, 1], [2, 3], [1, 5]]
# Let y be their labels
y = [0, 1, 2]

# Train the logistic regression on X and y
lr.fit(X, y)

# Return the per-class probabilities for each sample
lr.predict_proba(X)
# array([[ 0.64974581,  0.144104  ,  0.20615019],
#        [ 0.04442027,  0.81437946,  0.14120027],
#        [ 0.04096602,  0.0840223 ,  0.87501168]])

The first value, 0.64974581, is the probability assigned to class 0 for the sample [0, 1] in X. Since it is the highest in that row vector, the model would predict class 0; the second and third row vectors would yield 1 and 2, respectively.
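In other words, predict() simply picks the class whose probability is highest in each row. A quick way to verify that correspondence, reusing the lr model fitted above:

import numpy as np

proba = lr.predict_proba(X)
# predict() returns the class whose column holds the highest probability
assert (lr.predict(X) == lr.classes_[np.argmax(proba, axis=1)]).all()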

(Figure 1: Multi-layer Perceptron)