Question

I am currently working on an app that captures images at different exposurePointOfInterest values. Basically the steps are:

  1. Set focus on point A
  2. Capture
  3. Set focus on point B
  4. Capture

I had to put a dummy for loop between steps 1 & 2 and between steps 3 & 4 to give the lens time to actually adjust to the intended points; otherwise both captures at steps 2 & 4 would produce the same picture. This works, but I believe it is not the best way to solve the problem.

I have tried putting this code instead of the for loop:

[self performSelector:@selector(captureStillImage) withObject:@"Grand Central Dispatch" afterDelay:1.0];

But when I ran it, it behaved as if the selector captureStillImage were never executed. Did I do something wrong? Or is there a better solution anyone can suggest?

The method I call to capture multiple images looks like this:

-(void)captureMultipleImg
{
    //CAPTURE FIRST IMAGE WITH EXPOSURE POINT (0,0)
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f)];

    NSLog(@"Looping..");
    // Busy-wait to give the lens time to adjust
    for(int i = 0; i < 100000000; i++){
    }
    NSLog(@"Finish Looping");
    [self captureStillImage];

    //CAPTURE SECOND IMAGE WITH EXPOSURE POINT (0.5,0.5)
    [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f)];

    NSLog(@"Looping..");
    for(int i = 0; i < 100000000; i++){
    }
    NSLog(@"Finish Looping");

    [self captureStillImage];
}

And the code for captureStillImage looks like this:

-(void)captureStillImage
{
    AVCaptureConnection *connection = [stillImage connectionWithMediaType:AVMediaTypeVideo];

    typedef void(^MyBufBlock)(CMSampleBufferRef, NSError*);

    MyBufBlock h = ^(CMSampleBufferRef buf, NSError *err){
        NSData *data = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:buf];
        [self setToSaveImage:[UIImage imageWithData:data]];

        NSLog(@"Saving to Camera Roll..");
        // Save the photo to the camera roll
        UIImageWriteToSavedPhotosAlbum(toSaveImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
        toSaveImage = nil;
    };

    [stillImage captureStillImageAsynchronouslyFromConnection:connection completionHandler:h];
}

And the code for the continuousExposeAtPoint: method:

-(void)continuousExposeAtPoint:(CGPoint)point
{
    if([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]){
        if([device lockForConfiguration:NULL]){
            [device setExposurePointOfInterest:point];
            [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
            [device unlockForConfiguration];
            NSLog(@"Exposure point of interest has been set to (%f,%f)", point.x, point.y);
        }
    }
}

Thanks in advance!


Solution 2

I'm going out on a limb here and suggesting a different approach, one that completely avoids both "busy waiting" and "run loop waiting".

If I understand the camera correctly, it may take some time after you set the exposure point before the camera has actually finished adjusting to it. The device has a property, adjustingExposure, which reflects this state. This property is KVO compliant, so we can use KVO to observe its value.

So the idea is to set the exposure point and then observe adjustingExposure: when its value changes to NO, the camera has finished adjusting to the new exposure point.

Now we can leverage KVO to call a completion handler immediately after the adjustment completes. Your method to set the exposure point becomes asynchronous, taking a completion handler:

typedef void (^completion_t)(void);
-(void)continuousExposeAtPoint:(CGPoint)point
                    completion:(completion_t)completionHandler;

Assuming you have properly implemented KVO in the method above, you can use it as follows:

-(void)captureMultipleImg
{
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f) completion:^{
        [self captureStillImage];
        [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f) completion:^{
            [self captureStillImage];
        }];
    }];
}

Edit:

Note that the method captureMultipleImg has now become asynchronous as well: a method that invokes an asynchronous method becomes asynchronous itself.

Thus, in order to let the call site know when its underlying asynchronous task is finished, we can provide a completion handler here too:

typedef void (^completion_t)(void);
-(void)captureMultipleImagesWithCompletion:(completion_t)completionHandler
{
    [self continuousExposeAtPoint:CGPointMake(0.0f, 0.0f) completion:^{
        [self captureStillImage];
        [self continuousExposeAtPoint:CGPointMake(0.5f, 0.5f) completion:^{
            [self captureStillImage];
            if (completionHandler) {
                completionHandler();
            }
        }];
    }];
}

A button action may be implemented as follows:

- (void)captureImages {
    [self showLabel];
    self.captureImagesButton.enabled = NO;
    [manager captureMultipleImagesWithCompletion:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            [self hideLabel];
            // Re-enable the button once all captures have finished
            self.captureImagesButton.enabled = YES;
        });
    }];
}

Edit:

For a jump start, you may implement the KVO and your method as shown below. Caution: not tested!

-(void)continuousExposeAtPoint:(CGPoint)point
                    completion:(completion_t)completionHandler
{
    AVCaptureDevice* device; // = ...;

    if([device isExposurePointOfInterestSupported] && [device isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]){
        if([device lockForConfiguration:NULL]){
            // Transfer ownership of the copied handler to the KVO context;
            // it is released again in observeValueForKeyPath:ofObject:change:context:
            [device addObserver:self forKeyPath:@"adjustingExposure"
                        options:NSKeyValueObservingOptionNew | NSKeyValueObservingOptionOld
                        context:(__bridge_retained void*)([completionHandler copy])];
            [device setExposurePointOfInterest:point];
            [device setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
        }
    }
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqual:@"adjustingExposure"]) {
        AVCaptureDevice* device = (AVCaptureDevice*)object;
        if ([[change objectForKey:NSKeyValueChangeNewKey] boolValue] == NO) {
            CGPoint point = device.exposurePointOfInterest;
            NSLog(@"Exposure point of interest has been set to (%f,%f)", point.x, point.y);

            [device removeObserver:self forKeyPath:@"adjustingExposure"];
            [device unlockForConfiguration];
            // Take ownership of the handler back from the context and invoke it
            completion_t block = CFBridgingRelease(context);
            if (block) {
                block();
            }
        }
        return;
    }
    // Forward only unhandled notifications to the superclass. Note that
    // NSObject's default implementation raises an exception for notifications
    // it does not recognize, so don't call super for key paths you handle.
    [super observeValueForKeyPath:keyPath
                         ofObject:object
                           change:change
                          context:context];
}

The caveat here is that KVO is tricky to set up. But once you manage to wrap it into a method with a completion handler, it looks much nicer ;)

OTHER TIPS

Instead of the dummy loop you can use performSelector:withObject:afterDelay:

Maybe your code is running while the run loop is in a mode other than the default mode? Try this:

        [self performSelector:@selector(mywork:) withObject:nil
               afterDelay:delay
                  inModes:@[[[NSRunLoop currentRunLoop] currentMode]]];

Use dispatch_after:

double delayInSeconds = 2.0;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
    // Code
});

I personally tend to use delays in blocks on the main thread like this:

double delayInSeconds = 0.5;
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
     //Do your thing here
});

Have you tried using a timer?

If performSelector:withObject:afterDelay: doesn't work, you can try:

[NSTimer scheduledTimerWithTimeInterval:1.5 target:self selector:@selector(captureStillImage) userInfo:nil repeats:NO];

Try this with a shorter delay, for example 0.2 seconds:

[self performSelector:@selector(yourMethod:) withObject:yourObject afterDelay:0.2];

You should know that performSelector:withObject:afterDelay: schedules the selector on the calling thread's run loop; if that run loop is not running when the delay expires, the selector is never invoked.

So I think the reason performSelector:withObject:afterDelay: does not work for you is that the thread executing captureMultipleImg is no longer running its run loop after the delay time.

If you call captureMultipleImg with dispatch_async, the same reasoning applies.

Let's say you call the method in dispatch_async

- (void)testCode
{
    [self performSelector:@selector(mywork:) withObject:nil afterDelay:0.1];
    [self endWork];
}

After endWork is executed, the calling thread may be released, so -(void)mywork:(id)obj is never called.
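Putting those observations together, one possible fix (a sketch, not tested) is to hop onto the main thread before scheduling the delayed selector; the main run loop is always running, so the delayed perform will actually fire:

    // Sketch: schedule the delayed capture from the main thread, whose run
    // loop is guaranteed to be running, so the delayed selector is invoked.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self performSelector:@selector(captureStillImage)
                   withObject:nil
                   afterDelay:1.0];
    });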

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow