I want to filter the position and size of the detected face, to smooth them over time.

The problem is that the video runs at about 30 FPS, but the method that receives the face-detection results, captureOutput:didOutputSampleBuffer:fromConnection:, is only called about 5 times per second, and the resulting rectangle (a CGRect) is sent to the main thread to be drawn using dispatch_async(dispatch_get_main_queue(), ...). So it does not matter whether I apply a low-pass filter or a Kalman filter in that part of the program, because the rectangle only updates 5 times per second; that's why I see the rectangle vibrating, with noise in its position.
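Here is a simplified version of the relevant code; -faceRectFromSampleBuffer: and faceView are stand-ins for my actual detection code and overlay view:

    - (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection
    {
        // Detection only yields a result about 5 times per second.
        CGRect faceRect = [self faceRectFromSampleBuffer:sampleBuffer];

        dispatch_async(dispatch_get_main_queue(), ^{
            // The overlay is updated directly, so it jumps between detections.
            self.faceView.frame = faceRect;
        });
    }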

What should I do?

Should I add something like a timer (an NSTimer) firing at 30 ticks per second, and apply the filter there? What program architecture would you use?
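Something along these lines is what I have in mind; the 0.2 smoothing factor is just a first guess, and latestDetectedRect / smoothedRect are properties I would add:

    // The capture callback would only store the raw rect:
    //     self.latestDetectedRect = faceRect;
    // and a 30 Hz timer on the main thread would move the drawn rect toward it.
    - (void)startSmoothing
    {
        [NSTimer scheduledTimerWithTimeInterval:1.0 / 30.0
                                         target:self
                                       selector:@selector(filterTick:)
                                       userInfo:nil
                                        repeats:YES];
    }

    - (void)filterTick:(NSTimer *)timer
    {
        CGRect target = self.latestDetectedRect;
        CGRect r = self.smoothedRect;
        CGFloat a = 0.2; // simple exponential low-pass coefficient
        r.origin.x    += a * (target.origin.x    - r.origin.x);
        r.origin.y    += a * (target.origin.y    - r.origin.y);
        r.size.width  += a * (target.size.width  - r.size.width);
        r.size.height += a * (target.size.height - r.size.height);
        self.smoothedRect = r;
        self.faceView.frame = r;
    }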

Thanks in advance.


Solution

I'm not very familiar with AVFoundation, but poking around the API, you might be able to use addPeriodicTimeObserverForInterval:queue:usingBlock: on AVPlayer.

You can give it a CMTime of 1/30 second, and it will be timed to the video rather than to absolute time.
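Something like this, perhaps (untested, since I haven't used it myself; runFilterStep is a placeholder for whatever per-frame update you do):

    // 'player' is your AVPlayer.
    CMTime interval = CMTimeMake(1, 30); // 1/30 of a second, in the video's timebase
    self.timeObserver =
        [player addPeriodicTimeObserverForInterval:interval
                                             queue:dispatch_get_main_queue()
                                        usingBlock:^(CMTime time) {
            // Called roughly 30 times per second during playback,
            // synced to the video's timeline.
            [self runFilterStep];
        }];
    // Remember to call [player removeTimeObserver:self.timeObserver] when done.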

Then it's a matter of coordinating with the output capture delegate. It sounds like the two may run on different threads(?), so you'll probably want some kind of thread-safe data structure to pass events from the periodic time observer to the output capture. Nothing like that appears to be built into the library, so you might have to get creative with this piece.
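If all you actually need to hand over is the most recent rectangle rather than a full queue of events, even a simple lock-protected holder might be enough. A sketch (LatestRect is just a made-up name; one side writes from its thread, the other reads from its own):

    #import <Foundation/Foundation.h>
    #import <CoreGraphics/CoreGraphics.h>

    @interface LatestRect : NSObject
    - (void)update:(CGRect)rect; // called by the writing thread
    - (CGRect)read;              // called by the reading thread
    @end

    @implementation LatestRect {
        NSLock *_lock;
        CGRect _rect;
    }

    - (instancetype)init
    {
        if ((self = [super init])) {
            _lock = [[NSLock alloc] init];
        }
        return self;
    }

    - (void)update:(CGRect)rect
    {
        [_lock lock];
        _rect = rect;
        [_lock unlock];
    }

    - (CGRect)read
    {
        [_lock lock];
        CGRect r = _rect; // copy out under the lock so the read is consistent
        [_lock unlock];
        return r;
    }

    @end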
