Problem

I'm trying to implement the P. Viola and M. Jones detection framework in C++ (initially a simple sequential classifier, not the cascaded version). I think I have designed all the required classes and modules (e.g. integral images, Haar features) except the most important one: the AdaBoost core algorithm.

I have read the original P. Viola and M. Jones paper and many other publications. Unfortunately, I still don't understand how to find the best threshold for a single weak classifier. I have found only brief references to "weighted median" and "Gaussian distribution" algorithms, and many mathematical formulas...

I have tried to use the OpenCV Train Cascade module sources as a template, but they are so comprehensive that reverse engineering the code is very time-consuming. I have also written my own simple code to understand the idea of Adaptive Boosting.

The question is: could you explain the best way to calculate the optimal threshold for a single weak classifier?

Below is the AdaBoost pseudocode, rewritten from a sample found via Google, but I'm not convinced it is the correct approach. Calculating one weak classifier is very slow (a few hours), and I especially have doubts about the method of calculating the best threshold.

(1) AdaBoost::FindNewWeakClassifier
(2) AdaBoost::CalculateFeatures
(3) AdaBoost::FindBestThreshold
(4) AdaBoost::FindFeatureError
(5) AdaBoost::NormalizeWeights
(6) AdaBoost::FindLowestError
(7) AdaBoost::ClassifyExamples
(8) AdaBoost::UpdateWeights

DESCRIPTION (1)
-Generates all possible arrangements of features in the detection window and puts them into a vector
DO IN LOOP
    -Runs the main calculating function (2)
END

DESCRIPTION(2)
-Normalizes the weights (5)
DO FOR EACH HAAR FEATURE
    -Applies the next feature from the list to all integral images
    -Finds the best threshold for that feature (3)
    -Finds the error for that feature at its best threshold (4)
    -Saves the error for the feature in an array
    -Saves the threshold for the feature in an array
    -Saves the threshold sign for the feature in an array
END LOOP
-Finds the classifier index with the lowest error among those computed by the above loop (6)
-Gets the error value of the best feature
-Evaluates the best feature on all integral images (7)
-Updates the weights (8)
-Adds the new weak classifier to the vector

DESCRIPTION (3)
-Calculates an error for each feature threshold on the positive integral images, separately for the "+" and "-" signs (4)
-Returns the threshold and sign of the feature with the lowest error

DESCRIPTION(4)
- Returns the feature error over all samples, by evaluating the inequality f(x) * sign < sign * threshold

DESCRIPTION (5)
-Ensures that the sample weights form a probability distribution

DESCRIPTION (6)
-Finds the classifier with the lowest error

DESCRIPTION (7)
-Evaluates the best feature on all integral images
-Counts the number of false positives and false negatives

DESCRIPTION (8)
-Corrects the weights depending on the classification results

Thank you for any help.

Solution

In the original Viola-Jones paper, section 3.1 Learning Discussion (paragraph 4, to be precise), you will find the procedure for finding the optimal threshold.

I'll quickly sum up the method below.


The optimal threshold for each feature depends on the sample weights and is therefore recalculated in every iteration of AdaBoost. The best weak classifier's threshold is saved, as mentioned in the pseudocode.

In every round, for each weak classifier, you sort the N training samples by their feature value. Placing a threshold splits this sequence into two parts. Each part will have a majority of either positive or negative samples, along with a few samples of the other type. Define:

  • T+ : total sum of positive sample weights
  • T- : total sum of negative sample weights
  • S+ : sum of positive sample weights below the threshold
  • S- : sum of negative sample weights below the threshold

The error for this particular threshold is:

e = MIN((S+) + (T-) - (S-), (S-) + (T+) - (S+))

Why the minimum? Here's an example. Suppose the sorted samples and the threshold look like this:

+ + + + + - - | + + - - - - -

In the first round, if all weights are equal (= w), taking the minimum gives you an error of 4*w instead of 10*w.

You calculate this error for all N possible ways of splitting the sorted samples.
The position with the minimum error gives you the range of valid threshold values. The actual threshold is probably the average of the two adjacent feature values (I'm not sure though; do some research on this).
This is the second step in your DO FOR EACH HAAR FEATURE loop.
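The procedure above can be sketched as a single pass over the sorted samples, maintaining the running sums S+ and S-. This is a sketch under assumptions not in the answer (names are hypothetical; labels are +1/-1; sign = +1 means "predict positive below the threshold"; the threshold is taken as the midpoint of adjacent feature values, which the answer itself hedges on):

```cpp
#include <vector>
#include <algorithm>
#include <limits>
#include <cstddef>

struct ThresholdResult {
    double threshold;
    int sign;      // +1: predict positive when f(x) < threshold; -1: when f(x) > threshold
    double error;  // weighted error at the optimum
};

// Single-pass optimal threshold search for one feature (sketch).
ThresholdResult findBestThreshold(const std::vector<double>& f,       // feature values
                                  const std::vector<int>& labels,     // +1 / -1
                                  const std::vector<double>& weights) // normalized
{
    const std::size_t n = f.size();
    std::vector<std::size_t> order(n);
    for (std::size_t i = 0; i < n; ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](std::size_t a, std::size_t b) { return f[a] < f[b]; });

    double Tpos = 0.0, Tneg = 0.0;            // T+ and T-
    for (std::size_t i = 0; i < n; ++i)
        (labels[i] > 0 ? Tpos : Tneg) += weights[i];

    double Spos = 0.0, Sneg = 0.0;            // S+ and S- below the candidate cut
    ThresholdResult best{0.0, 1, std::numeric_limits<double>::max()};
    for (std::size_t k = 0; k < n; ++k) {
        std::size_t i = order[k];
        (labels[i] > 0 ? Spos : Sneg) += weights[i];
        // Candidate cut sits just above f[order[k]]: midpoint to the next value.
        double cut = (k + 1 < n) ? 0.5 * (f[i] + f[order[k + 1]]) : f[i];
        double errBelowNeg = Spos + (Tneg - Sneg); // predict "-" below, "+" above
        double errBelowPos = Sneg + (Tpos - Spos); // predict "+" below, "-" above
        double e = std::min(errBelowNeg, errBelowPos);
        if (e < best.error)
            best = {cut, (errBelowPos <= errBelowNeg) ? 1 : -1, e};
    }
    return best;
}
```

Because the sums are updated incrementally, this costs one sort (O(N log N)) plus an O(N) sweep per feature, instead of re-counting misclassifications from scratch for each of the N candidate thresholds, which is likely why the naive version takes hours.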
The cascades shipped with OpenCV were created by Rainer Lienhart, and I don't know which method he used. You could closely follow the OpenCV source code for any further improvements on this procedure.

License: CC-BY-SA with attribution
Not affiliated with StackOverflow