Question

We are having trouble after the iOS upgrade from 7.0.6 to 7.1.0. I don't see this issue on an iPhone 4s, 5, 5c, or 5s running iOS 7.1. So much for all the non-fragmentation talk. I am posting the camera initialization code:

- (void)initCapture
{
    //Setting up the AVCaptureDevice (camera)
    AVCaptureDevice* inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError* cameraError;
    if ([inputDevice lockForConfiguration:&cameraError])
    {
        if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
        {
            NSLog(@"AVCaptureDevice is set to video with continuous auto focus");
            CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
            [inputDevice setFocusPointOfInterest:autofocusPoint];
            [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
        }

        [inputDevice unlockForConfiguration];
    }

    //setting up the input streams
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:nil];

    //setting up up the AVCaptureVideoDataOutput
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    //setting up video settings
    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];

    //passing the settings to the AVCaptureVideoDataOutput
    [captureOutput setVideoSettings:videoSettings];

    //setting up the AVCaptureSession
    captureSession = [[AVCaptureSession alloc] init];
    captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    [captureSession addInput:captureInput];
    [captureSession addOutput:captureOutput];

    if (!prevLayer)
    {
        prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    }
    NSLog(@"initCapture preview Layer %p %@", self.prevLayer, self.prevLayer);
    self.prevLayer.frame = self.view.bounds;
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer: self.prevLayer];

    [self.captureSession startRunning];
}

Any help would be greatly appreciated...


Solution 2

To close this thread up: we were using the camera to scan QR codes with libzxing. We decided to implement the native iOS 7 AVCaptureMetadataOutputObjectsDelegate instead of the older AVCaptureVideoDataOutputSampleBufferDelegate. The metadata delegate is much simpler and cleaner, and we found the example at http://nshipster.com/ios7/ very helpful.
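For reference, here is a minimal sketch of that metadata-based approach (iOS 7+), assuming a view controller that owns a captureSession property and adopts AVCaptureMetadataOutputObjectsDelegate; the method name setupMetadataCapture is ours, not from the original answer:

- (void)setupMetadataCapture
{
    self.captureSession = [[AVCaptureSession alloc] init];

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        NSLog(@"Could not create capture input: %@", error);
        return;
    }
    [self.captureSession addInput:input];

    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    [self.captureSession addOutput:output];
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // metadataObjectTypes must be set after the output is added to the session,
    // otherwise the QR type is not yet available
    output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];

    [self.captureSession startRunning];
}

// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataMachineReadableCodeObject *code in metadataObjects)
    {
        if ([code.type isEqualToString:AVMetadataObjectTypeQRCode])
        {
            NSLog(@"Scanned QR code: %@", code.stringValue);
        }
    }
}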

OTHER TIPS

The Apple sample code you are using is outdated; it has since been fully rewritten. I'd try my luck and go with the new workflow.

Check it out here.

Here are some ideas to diagnose your problem (a sketch that folds these checks in follows the list):

  • You have no else case for if ([inputDevice lockForConfiguration:&cameraError]). Add one.
  • In that else case, log the error contained in cameraError.
  • You have no else case for if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]). Add one and log it, or set a breakpoint there to test while debugging.
  • You don't check focusPointOfInterestSupported before calling setFocusPointOfInterest. Consider calling setFocusMode before setFocusPointOfInterest (not sure if it matters, but that's what I have).
  • In general, you may want to do all your checks before attempting to lock the configuration.
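A sketch of the focus configuration with those checks folded in (variable names follow the question's code; the setFocusMode/setFocusPointOfInterest ordering follows the last point above and may not matter):

NSError *cameraError = nil;
if ([inputDevice lockForConfiguration:&cameraError])
{
    if ([inputDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus])
    {
        [inputDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
    }
    else
    {
        NSLog(@"Continuous auto focus is not supported on this device");
    }

    if ([inputDevice isFocusPointOfInterestSupported])
    {
        [inputDevice setFocusPointOfInterest:CGPointMake(0.5f, 0.5f)];
    }
    else
    {
        NSLog(@"Focus point of interest is not supported on this device");
    }

    [inputDevice unlockForConfiguration];
}
else
{
    NSLog(@"lockForConfiguration failed: %@", cameraError);
}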

Following neuman8's comment stating that something in libzxing is preventing the refocus, I did some investigating myself.

I found the following line in the Decoder.mm file to be the culprit.

ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);

It seems that ArrayRef is a class in the zxing/common/Array.h file that attempts to allocate an array of the specified size. It did not seem to do anything wrong, but I guessed that allocating an array of roughly 170k char elements may take long enough to slow down the blocking call and prevent other threads from running.

So, I tried to just put in a brute force solution to test the hypothesis. I added a sleep just after the allocation.

[NSThread sleepForTimeInterval:0.02];
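In context, the change looks roughly like this (a sketch; the surrounding code in Decoder.mm may differ between libzxing versions):

// Decoder.mm (Objective-C++)
ArrayRef<char> subsetData (subsetBytesPerRow * subsetHeight);
// brute-force workaround: yield briefly so the capture/focus threads get a chance to run
[NSThread sleepForTimeInterval:0.02];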

The camera started focusing again and was able to decipher the QR codes.

I am still unable to find a better way to resolve this. Can anyone figure out a more efficient way to allocate the large array, or a more elegant way of yielding the thread so the camera can focus?
Otherwise this should solve the problem for now, even if it is ugly.

Licensed under: CC-BY-SA with attribution