Question

I have trained a CvSVM on HOG features with my own positive and negative samples:

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;
params.kernel_type = CvSVM::RBF;

CvSVM svm;
svm.train_auto(descriptors, labels, cv::Mat(), cv::Mat(), params,
               SVM_CROSS_VALIDATION_K);

I can use it just fine to classify images:

cv::HOGDescriptor hog;
hog.winSize = cv::Size(HOG_PARAMS.width(), HOG_PARAMS.height());

//compute the HOG features
hog.compute(image, ders,
            cv::Size(HOG_PARAMS.stride(),HOG_PARAMS.stride()),
            cv::Size(0,0), locs);

//convert the feature to a Mat
cv::Mat desc_mat;
desc_mat.create(ders.size(), 1, CV_32FC1);
for(unsigned int i = 0; i < ders.size(); i++)
  desc_mat.at<float>(i, 0) = ders[i];

float response = svm.predict(desc_mat);

Now I would like to use HOGDescriptor::detectMultiScale() to detect objects of interest in images. To convert the CvSVM into the primal form that HOGDescriptor needs, I use the approach suggested by https://stackoverflow.com/a/17118561/2197564:

detector_svm.h:

#ifndef DETECTOR_SVM_H
#define DETECTOR_SVM_H

#include <opencv2/core/core.hpp>
#include <opencv2/ml/ml.hpp>

class Detector_svm : public CvSVM
{
  public:
  std::vector<float> get_primal_form() const;
};  

#endif //DETECTOR_SVM_H

detector_svm.cpp:

#include "detector_svm.h"

std::vector<float> Detector_svm::get_primal_form() const
{
  std::vector<float> support_vector;

  int sv_count = get_support_vector_count();

  const CvSVMDecisionFunc* df = decision_func;
  const double* alphas = df[0].alpha;
  double rho = df[0].rho;
  int var_count = get_var_count();

  support_vector.resize(var_count, 0);

  for (unsigned int r = 0; r < (unsigned)sv_count; r++) 
  {
    float myalpha = alphas[r];
    const float* v = get_support_vector(r);
    for (int j = 0; j < var_count; j++,v++) 
    {
      support_vector[j] += (-myalpha) * (*v);
    }
  }

  support_vector.push_back(rho);

  return support_vector;
}

However, when I try to set the SVM Detector

HOGDescriptor hog;
hog.setSVMDetector(primal_svm); //primal_svm is a std::vector<float>

I get failed asserts:

OpenCV Error: Assertion failed (checkDetectorSize()) in setSVMDetector, file /home/username/libs/OpenCV-2.3.1/modules/objdetect/src/hog.cpp, line 89
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/username/libs/OpenCV-2.3.1/modules/objdetect/src/hog.cpp:89: error: (-215) checkDetectorSize() in function setSVMDetector

I've tried running this with OpenCV 2.3.1 and 2.4.7; the result is the same.

What am I doing wrong?


Solution 5

I no longer have access to the original code.

To get around the issue, I wrote my own multiscale detector, which was less work than getting the primal SVM form.

My suggestion to people with similar issues now is to try upgrading to OpenCV 3.x.

Other tips

I had the same issue. What I realized was that I was giving the HOGDescriptor constructor the wrong winSize. The winSize should match the dimensions of your training images; in my case I trained on 32x64 images, so I needed winSize = Size(32, 64). My code for setting the detector looked as follows:

vector<float> primal;
svm.getSupportVector(primal);
cv::HOGDescriptor hog(cv::Size(32, 64), cv::Size(8, 8), cv::Size(4, 4), cv::Size(4, 4), 9);
hog.setSVMDetector(primal);

Your trained vector size is probably too small. You need to make sure the window size you train with matches the descriptor parameters: win_size = Size(64, 128), block_size = Size(16, 16), block_stride = Size(8, 8), cell_size = Size(8, 8), nbins = 9.

That error gets thrown when your primal_svm.size() matches neither hog.getDescriptorSize() nor hog.getDescriptorSize() + 1 (the extra element being the bias term).

I don't see anything immediately wrong in your code, but some boilerplate is obviously missing.

You need to initialize your hog like hog(cv::Size(64, 64), cv::Size(16, 16), cv::Size(8, 8), cv::Size(8, 8), 9), and make sure the parameter values match the ones you trained with.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow