Question

I found some code on GitHub to compute SIFT descriptors, with three functions:

    sp_find_sift_grid(I, grid_x, grid_y, patch_size, sigma_edge)
    GenerateSiftDescriptors(opts, descriptor_opts)
    normalize_sift(siftArr)

I also found that some researchers use these functions in MATLAB to compute SIFT.

But I have some questions:

1. There is no keypoint detection; instead, a grid is created to obtain a certain number of patches, and therefore a certain number of descriptors. Why?
2. There seems to be no generation of scale-space octaves. Why?

I am still confused about the SIFT process. I appreciate any help, thanks.
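For reference, here is a minimal sketch of the kind of dense grid I mean (my own illustration with made-up parameters and a stand-in image, not the repository's code):

    % Hypothetical sketch (not the actual repository code): building a dense
    % grid of patch locations. The number of descriptors is fixed by the
    % image size, patch size and grid spacing -- no keypoint detection.
    I = rand(240, 320);          % stand-in for a grayscale image
    [h, w] = size(I);
    patch_size   = 16;           % assumed patch size in pixels
    grid_spacing = 8;            % assumed step between patches

    % Top-left corners of the patches on a regular grid
    [grid_x, grid_y] = meshgrid(1:grid_spacing:(w - patch_size + 1), ...
                                1:grid_spacing:(h - patch_size + 1));

    num_patches = numel(grid_x); % same count for every image of this size
    fprintf('%d patches -> %d descriptors per image\n', num_patches, num_patches);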


Solution

I think it is so that you can compare images using the same number of features.

If you extract "standard" SIFT, you don't know how many interest points you will obtain. So if you want to compare two images with different numbers of features (different numbers of points), it becomes complicated: you can't directly feed them to an SVM or a neural network, because the number of features has to be the same for every image.
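As a rough illustration of that point (placeholder descriptor matrices, not the output of the repository's functions): with a fixed grid, every image yields the same number of descriptors, so each image can be flattened into a feature vector of identical length.

    % Sketch: a fixed grid gives every image the same number of descriptors,
    % so each image becomes one feature vector of identical length that can
    % be fed to a classifier such as an SVM.
    num_patches = 100;                   % fixed by the grid, same for all images
    descA = rand(num_patches, 128);      % descriptors of image A (placeholder)
    descB = rand(num_patches, 128);      % descriptors of image B (placeholder)

    xA = descA(:)';                      % 1 x (100*128) feature vector
    xB = descB(:)';                      % same length, so directly comparable
    X  = [xA; xB];                       % rows = images, ready for training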

With standard SIFT you need to match the points, find inliers, and so on, or compute a Bag of Visual Words, before you can compute the similarity between two images.
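A minimal Bag of Visual Words sketch, assuming a vocabulary size k and placeholder descriptor matrices (kmeans and knnsearch come from MATLAB's Statistics and Machine Learning Toolbox):

    % Sketch of a Bag of Visual Words on top of standard (variable-count) SIFT:
    % cluster all training descriptors into a visual vocabulary, then describe
    % each image by a fixed-length histogram of word counts.
    k = 50;                                 % assumed vocabulary size
    all_desc = rand(5000, 128);             % placeholder: descriptors from the training set
    [~, vocab] = kmeans(all_desc, k);       % k x 128 visual words

    img_desc = rand(137, 128);              % placeholder: one image, arbitrary descriptor count
    words    = knnsearch(vocab, img_desc);  % nearest visual word for each descriptor
    hist_bow = histcounts(words, 1:k+1);    % fixed-length (1 x k) representation
    hist_bow = hist_bow / sum(hist_bow);    % normalize before comparing images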

If you just want to know how SIFT works, you can check Wikipedia and David Lowe's articles.
