Question

I am trying to monitor a stream of video output from a FireWire camera. I have created an Interface Builder interface with buttons and an NSImageView. While image monitoring is occurring within an endless loop, I want to:

  • change some camera parameters on the fly (gain, gamma, etc.)
  • tell the monitoring to stop so I can save an image to a file (set a flag that stops the while loop)

Using buttons, I have been unable to keep the video-frame monitor looping while still watching for a button press (much like using the keypressed feature from C). Two options present themselves:

  1. Initiate a new run loop (for which I cannot get an autorelease pool to function ...)
  2. Initiate an NSOperation - how do I do this in a way that lets me connect it to a button push from my Interface Builder interface?

The documentation is very opaque about the creation of such objects. If I create an NSOperation as per the examples I've found, there seems to be no way to communicate with it from an Interface Builder object. When I create an NSRunLoop, I get an object-leak error, and I can find no example of how to create an autorelease pool that actually works with the run loop I've created. Never mind that I haven't even attempted to choose which objects get sampled by the secondary run loop ...

Because Objective-C is (obviously!) not my native tongue, I am looking for solutions with baby steps, sorry to say ... Thanks in advance.


Solution

I've needed to do almost exactly the same thing as you, only with a continuous video display from the FireWire camera. In my case, I used the libdc1394 library to perform the frame capture and camera property adjustment for our FireWire cameras. I know you can also do this using some of the Carbon QuickTime functions, but I found libdc1394 to be a little easier to understand.

For the video capture loop, I tried a number of different approaches, from a separate thread that polls the camera and has locks around shared resources, to using one NSOperationQueue for interaction with the camera, and finally settled on using a CVDisplayLink to poll the camera in a way that matches the refresh rate of the screen.

The CVDisplayLink is configured using the following code:

// displayLink is assumed to be a CVDisplayLinkRef instance variable.
CGDirectDisplayID   displayID = CGMainDisplayID();
CVReturn            error = kCVReturnSuccess;

// Create a display link tied to the refresh rate of the main display.
error = CVDisplayLinkCreateWithCGDisplay(displayID, &displayLink);
if (error)
{
    NSLog(@"DisplayLink created with error:%d", error);
    displayLink = NULL;
}

// renderCallback (below) will be invoked on every display refresh, with self as its context.
CVDisplayLinkSetOutputCallback(displayLink, renderCallback, self);

and it calls the following function to trigger the retrieval of a new camera frame:

// C callback invoked by the CVDisplayLink on its own thread; it forwards to the
// Objective-C view, which grabs and renders a new camera frame.
static CVReturn renderCallback(CVDisplayLinkRef displayLink, 
                               const CVTimeStamp *inNow, 
                               const CVTimeStamp *inOutputTime, 
                               CVOptionFlags flagsIn, 
                               CVOptionFlags *flagsOut, 
                               void *displayLinkContext)
{
    return [(SPVideoView *)displayLinkContext renderTime:inOutputTime];
}

The CVDisplayLink is started and stopped using the following:

- (void)startRequestingFrames
{
    CVDisplayLinkStart(displayLink);
}

- (void)stopRequestingFrames
{
    CVDisplayLinkStop(displayLink);
}
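
To connect this back to the Interface Builder side of the question: the start and stop methods above can simply be called from action methods wired to your buttons, so the main run loop stays free to handle UI events while the display link polls the camera. A minimal sketch, assuming a hypothetical controller with a videoView outlet pointing at the view that owns the display link:

// Hypothetical action methods wired to buttons in Interface Builder;
// videoView is an outlet to the view that owns the CVDisplayLink.
- (IBAction)startMonitoring:(id)sender
{
    [videoView startRequestingFrames];
}

- (IBAction)grabImage:(id)sender
{
    [videoView stopRequestingFrames];
    // Save the most recently captured frame to disk here, then restart if desired.
}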

Rather than using a lock on the FireWire camera communications, whenever I need to adjust the exposure, gain, etc., I change the corresponding instance variables and set the appropriate bits within a flags variable to indicate which settings have changed. On the next retrieval of a frame, the callback method from the CVDisplayLink changes the appropriate settings on the camera to match the locally stored instance variables and clears that flag.
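
A rough sketch of that pattern, with illustrative flag names, OSAtomic bit operations standing in for whatever thread-safety scheme you prefer, and the actual libdc1394 calls elided:

#import <libkern/OSAtomic.h>

// Illustrative bit flags; settingsToChange is assumed to be a volatile uint32_t ivar.
enum {
    SPCameraSettingGain  = 1 << 0,
    SPCameraSettingGamma = 1 << 1
};

// Called from the UI (main thread) when a control changes.
- (void)setGain:(float)newGain
{
    gain = newGain;
    OSAtomicOr32Barrier(SPCameraSettingGain, &settingsToChange);
}

// Called from the display link callback before grabbing the next frame.
- (void)applyPendingCameraSettings
{
    uint32_t pending = settingsToChange;
    if (pending & SPCameraSettingGain)
    {
        // Push the stored gain value to the camera via libdc1394 here.
    }
    if (pending & SPCameraSettingGamma)
    {
        // Push the stored gamma value to the camera via libdc1394 here.
    }
    OSAtomicAnd32Barrier((uint32_t)~pending, &settingsToChange);
}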

Display to the screen is handled through an NSOpenGLView (a CAOpenGLLayer introduced too many visual artifacts when updating at this rate, and its update callbacks ran on the main thread). Apple has some OpenGL extensions you can use to provide these frames as textures using DMA for better performance.
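
The extensions in question are GL_APPLE_client_storage and GL_APPLE_texture_range. A rough sketch of that upload path, assuming an 8-bit monochrome frame held in a cameraFrameBytes buffer (the variable names here are illustrative, not from the code above):

// Inside the NSOpenGLView, with its OpenGL context current.
glEnable(GL_TEXTURE_RECTANGLE_EXT);
glBindTexture(GL_TEXTURE_RECTANGLE_EXT, cameraTexture);

// Hint that the GPU should pull texture data straight from our buffer (DMA)
// rather than making an internal copy.
glTextureRangeAPPLE(GL_TEXTURE_RECTANGLE_EXT, frameWidth * frameHeight, cameraFrameBytes);
glTexParameteri(GL_TEXTURE_RECTANGLE_EXT, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_SHARED_APPLE);
glPixelStorei(GL_UNPACK_CLIENT_STORAGE_APPLE, GL_TRUE);

glTexImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, GL_LUMINANCE, frameWidth, frameHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, cameraFrameBytes);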

Unfortunately, nothing that I've described here is introductory-level stuff. I have about 2,000 lines of code for these camera-handling functions in our software and this took a long time to puzzle out. If Apple could add the manual camera settings adjustments to the QTKit Capture APIs, I could remove almost all of this.

OTHER TIPS

If all you're trying to do is see/grab the output of a connected camera, the answer is probably neither.

Use QTKit's QTCaptureView. Problem solved. Want to grab a frame? Also no problem. Don't try to roll your own - QTKit's stuff is optimized and part of the OS. I'm pretty sure you can adjust the camera properties you want, but if not, Plan B should work.
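
For reference, the QTKit side is only a handful of lines. A minimal sketch, assuming QTKit.framework is linked and captureView is a QTCaptureView outlet from Interface Builder (error handling trimmed):

NSError *error = nil;
QTCaptureSession *session = [[QTCaptureSession alloc] init];

// Grab the default video device; QTMediaTypeMuxed may be needed for some FireWire cameras.
QTCaptureDevice *device = [QTCaptureDevice defaultInputDeviceWithMediaType:QTMediaTypeVideo];
[device open:&error];

QTCaptureDeviceInput *input = [[QTCaptureDeviceInput alloc] initWithDevice:device];
[session addInput:input error:&error];

[captureView setCaptureSession:session];
[session startRunning];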

Plan B: Use a scheduled, recurring NSTimer to ask QTKit to grab a frame every so often ("how" linked above) and apply your image manipulations to the frame (maybe with Core Image) before displaying it in your NSImageView.
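
A minimal sketch of that timer, assuming an imageView outlet and a hypothetical -currentProcessedFrame helper that does the QTKit grab and Core Image work:

- (void)startFrameTimer
{
    // Fire roughly 30 times a second on the main run loop; frameTimer is an ivar.
    frameTimer = [NSTimer scheduledTimerWithTimeInterval:(1.0 / 30.0)
                                                  target:self
                                                selector:@selector(updateFrame:)
                                                userInfo:nil
                                                 repeats:YES];
}

- (void)updateFrame:(NSTimer *)timer
{
    // Grab the latest frame from QTKit, run it through Core Image, and display it.
    NSImage *frame = [self currentProcessedFrame];   // hypothetical helper
    [imageView setImage:frame];
}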

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow