Question

I am trying to implement a fast object tracking app on Android

My logic is as follows

  1. Remove all colours except the desired colour range.
  2. Smooth image using GaussianBlur
  3. Find largest radius Circle with HoughCircles

The app sort of works, but the performance is poor and I would like to make it at least 5 times faster. I borrowed much of the logic from this link.

Fast Object Tracking example

public void apply(Mat src, Mat dst) {
    Mat circles = new Mat();
    Mat mHsv = new Mat(src.size(), CvType.CV_8UC3);
    Mat mMask1 = new Mat(src.size(), CvType.CV_8UC1);
    Mat mMask2 = new Mat(src.size(), CvType.CV_8UC1);

    Imgproc.cvtColor(src, mHsv, Imgproc.COLOR_RGB2HSV, 3);

    // Threshold both ends of the red hue range into separate masks.
    // (Writing the first mask back into mHsv would destroy the HSV data
    // before the second inRange reads it.)
    Core.inRange(mHsv, new Scalar(0, 86, 72), new Scalar(39, 255, 255), mMask1);      // low red hues
    Core.inRange(mHsv, new Scalar(150, 125, 100), new Scalar(180, 255, 255), mMask2); // high red hues
    Core.bitwise_or(mMask1, mMask2, mMask1);

    // Reduce the noise so we avoid false circle detection
    Imgproc.GaussianBlur(mMask1, mMask1, new Size(7, 7), 2);
    Imgproc.HoughCircles(mMask1, circles, Imgproc.CV_HOUGH_GRADIENT, 2.0, 100);

    int maxRadius = 0;
    Point pt = new Point(0, 0);
    for (int x = 0; x < circles.cols(); x++) {
        double[] vCircle = circles.get(0, x);
        if (vCircle == null)
            break;

        int radius = (int) Math.round(vCircle[2]);
        if (radius > maxRadius) {
            maxRadius = radius;
            pt = new Point(Math.round(vCircle[0]), Math.round(vCircle[1]));
        }
    }

    if (maxRadius > 0) {
        int iLineThickness = 5;
        Scalar red = new Scalar(255, 0, 0);
        // draw the largest circle found
        Core.circle(dst, pt, maxRadius, red, iLineThickness);
    }

    // Release temporaries -- allocating and leaking Mats on every frame is
    // itself a performance problem on Android.
    mHsv.release();
    mMask1.release();
    mMask2.release();
    circles.release();
}

I have been thinking of ways to increase my performance and I would like advice on which are likely to be viable and significant.

1) Using multi-threading. I could use one thread to capture from the camera and another to process the image. The OpenCV Android release notes say "Enabled multi-threading support with TBB (just few functions are optimized at the moment)", but I do not understand this. Is TBB only for Intel chips? Which functions are optimized? Are there relevant examples for Android and OpenCV?

2) Using a more powerful Android device. I am currently running on a 2012 Nexus 7, using the front-facing camera. I am not really clued up on which specs matter here. The Nexus 7 (2012) has a 1.3 GHz quad-core Nvidia Tegra 3 CPU and a 416 MHz Nvidia GeForce ULP GPU.

If I were to run on the fastest Android handset currently available, how much difference would it make?

Which specs are most relevant to this type of app?

  1. CPU.
  2. GPU.
  3. Number of cores.
  4. Frame Rate of the Camera.

3) Would using Native C++ code positively impact my performance ?

4) Are there alternatives to OpenCV I could use ?

Solution

0) I'd first profile (or measure the run-time of) all the functions you use, to see what actually needs optimizing, and plan further optimization from there.

1) Multi-threading can improve the frame rate, but not the lag: one core processes one frame in x ms, so with N cores you get N frames very quickly, then you have to wait x ms again. I'm not sure about OpenCV's build options, but as far as I know, Gaussian blur and the Hough transform don't use multiple cores.
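To make the frame-rate-versus-lag point concrete, here is a minimal sketch in plain Java of the capture/process split from the question: one thread "captures" frames while the other processes the previous one, so the two stages overlap. The queue, the sleeps, and the frame type are stand-ins, not Android camera APIs; the bounded queue keeps capture from running arbitrarily far ahead of processing.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CaptureProcessPipeline {
    // Two-thread pipeline: capture and processing overlap, so throughput
    // approaches 1 frame per processing interval -- but each individual
    // frame's lag is still capture time + processing time.
    public static int runPipeline(int totalFrames) {
        // Bounded queue: capture may run at most 2 frames ahead,
        // so lag cannot grow without limit.
        BlockingQueue<int[]> frames = new ArrayBlockingQueue<>(2);
        final int[] processed = {0};

        Thread capture = new Thread(() -> {
            try {
                for (int i = 0; i < totalFrames; i++) {
                    Thread.sleep(2);              // simulated capture time
                    frames.put(new int[]{i});     // stand-in for a camera frame
                }
            } catch (InterruptedException ignored) { }
        });

        Thread process = new Thread(() -> {
            try {
                for (int i = 0; i < totalFrames; i++) {
                    frames.take();
                    Thread.sleep(4);              // simulated processing time
                    processed[0]++;
                }
            } catch (InterruptedException ignored) { }
        });

        capture.start();
        process.start();
        try {
            capture.join();
            process.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed[0];
    }

    public static void main(String[] args) {
        System.out.println("processed " + runPipeline(10) + " frames");
    }
}
```

On a real device the capture thread would hand camera buffers to the processing thread the same way; the win is that the camera is never idle while a frame is being processed.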

2) Intel TBB is not only for Intel chips; people have used it on ARM as well as on AMD chips. See OpenCV configure with TBB for ARM (Ubuntu, 3.0.63)

3-4) You use quite simple algorithms; everything could be implemented yourself, without OpenCV, and OpenCV's Hough transform and Gaussian blur are already quite fast. Native C++ is faster than Java or Python only in terms of the whole program's runtime: the Java and Python OpenCV bindings are just wrappers over the C++ libraries, so the individual library calls already run at native speed.

OTHER TIPS

First, as was said before, profile your code. The Android SDK profiler is excellent, probably the best of the few I have tried.

Here are a few things that will easily let you see some improvements:

  • Do not allocate those data structures (Mat, Scalar, Point) inside your processing code (the code which is called for every image you capture). Reuse them instead.

  • You do not need the full-resolution image for object tracking: you can resize (scale down) every frame, or use an image ROI to process only a smaller region of the image.

  • Your Nexus 7's Tegra 3 cores support NEON, ARM's SIMD instruction set, and OpenCV can be built with NEON (and Tegra-specific) optimizations; look into that. Basically you will need an OpenCV build compiled with NEON support; you will find documentation if you look for it.
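On the resize tip: detecting on a half-size frame roughly quarters the per-pixel work, and the only extra step is mapping the detected circle back to full-resolution coordinates. A minimal sketch in plain Java (the OpenCV `Imgproc.resize` call itself is omitted; `toFullRes` and its parameters are illustrative names, not library API):

```java
public class ScaledDetection {
    // If detection ran on a frame scaled down by 'scale' (e.g. 0.5 for
    // half resolution), map the circle's center and radius back to
    // full-resolution coordinates by dividing by the scale factor.
    public static int[] toFullRes(int cx, int cy, int r, double scale) {
        return new int[] {
            (int) Math.round(cx / scale),
            (int) Math.round(cy / scale),
            (int) Math.round(r / scale)
        };
    }

    public static void main(String[] args) {
        // Circle found at (160, 120) with radius 30 in a half-size frame
        int[] full = toFullRes(160, 120, 30, 0.5);
        System.out.println(full[0] + "," + full[1] + "," + full[2]); // 320,240,60
    }
}
```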

Edit:

Because you mentioned that GaussianBlur is an issue, you can try other types of blur (median, normalized box), which are faster. Also keep the kernel size (3rd parameter) small: for a Gaussian blur, a larger kernel means more work per pixel. A normalized box filter's cost is roughly independent of kernel size, which is one reason it is faster.
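To show why a normalized box blur is so cheap (and, per the answer above, simple enough to implement yourself): each output pixel is just the mean of its k×k neighborhood. A naive sketch in plain Java on a small grayscale array — illustrative only, not OpenCV's optimized `Imgproc.blur`, which additionally keeps running sums so its cost barely depends on kernel size:

```java
public class BoxBlur {
    // Naive normalized box blur: each output pixel is the mean of the
    // k x k window around it, with the window clipped at the borders.
    public static double[][] blur(double[][] img, int k) {
        int h = img.length, w = img[0].length, half = k / 2;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0;
                int count = 0;
                for (int dy = -half; dy <= half; dy++) {
                    for (int dx = -half; dx <= half; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += img[yy][xx];
                            count++;
                        }
                    }
                }
                out[y][x] = sum / count;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] img = {
            {0, 0, 0},
            {0, 9, 0},
            {0, 0, 0}
        };
        // Center pixel becomes the mean of the full 3x3 window: 9/9 = 1
        System.out.println(blur(img, 3)[1][1]); // 1.0
    }
}
```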

http://docs.opencv.org/doc/tutorials/imgproc/gausian_median_blur_bilateral_filter/gausian_median_blur_bilateral_filter.html

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow