Question

Background

I am currently working on a small application that grabs the RGB and depth map streams from a Microsoft Kinect device and saves them to disk for later analysis. When the program runs, it should output each frame as a separate image on disk.

The framerate of the Kinect is 30 fps, but there are two streams, making this (approximately) 60 fps. If I naively try to save each frame as it arrives, I get dropped frames, as demonstrated by the bundled freenect/record.c application.

I rewrote the application to use one thread that grabs frames from the device and pushes them onto the back of a double-ended queue (std::deque). Two further threads each pop frames from the front of the queue and save them to disk. When recording is turned off, a potentially large number of frames may still be left in the queue, so before exiting we let the two save threads finish their work.
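The capture/save split described above can be sketched with POSIX threads. This is a minimal illustration rather than the actual application: `struct frame` and the queue functions are placeholders for the real Kinect buffers and whatever container the program actually uses.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical frame record; the real one would hold the Kinect buffer. */
struct frame {
    int id;
    struct frame *next;
};

/* Simple FIFO protected by a mutex and a condition variable. */
struct frame_queue {
    struct frame *head, *tail;
    int closed;               /* set when recording stops */
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
};

static void queue_init(struct frame_queue *q)
{
    q->head = q->tail = NULL;
    q->closed = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

/* Capture thread: push a frame onto the back of the queue. */
static void queue_push(struct frame_queue *q, struct frame *f)
{
    f->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = f;
    else
        q->head = f;
    q->tail = f;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Saver threads: pop from the front; NULL means drained and closed. */
static struct frame *queue_pop(struct frame_queue *q)
{
    struct frame *f;
    pthread_mutex_lock(&q->lock);
    while (!q->head && !q->closed)
        pthread_cond_wait(&q->nonempty, &q->lock);
    f = q->head;
    if (f) {
        q->head = f->next;
        if (!q->head)
            q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return f;
}

/* Called when recording stops: wake all savers so they can drain and exit. */
static void queue_close(struct frame_queue *q)
{
    pthread_mutex_lock(&q->lock);
    q->closed = 1;
    pthread_cond_broadcast(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}
```

Because `queue_pop` blocks until a frame arrives or the queue is closed, the two saver threads naturally drain the backlog after recording stops.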

Now the actual problem

Although the problem of dropped frames is solved, writes to the filesystem are still quite slow. Is there a good way to speed up file creation on disk? Currently, the function dump_frame looks like this:

static void
dump_frame(struct frame* frame)
{
    FILE* fp;
    char filename[512]; /* plenty of space! */

    snprintf(filename, sizeof filename, "d-%f-%u.pgm",
             get_time(), frame->timestamp);

    fp = fopen(filename, "w");
    if (fp == NULL)
        return;
    fprintf(fp, "P5 %d %d 65535\n", frame->width, frame->height);
    fwrite(frame->data, frame->size, 1, fp);
    fclose(fp);
}

I am running Fedora 14 x64, so the solution only has to work on Linux.


Solution

You need to measure what takes the time in your specific case: is it creating the many files, or actually writing the image data to disk?

When I tested on my local system (OS X with an Intel X25-M SSD) I noticed a huge variation in write performance between writing many 1 MB files and writing one multi-megabyte file. This is probably due to filesystem housekeeping and will vary depending on the filesystem you use.

To avoid the housekeeping you could write all your images to the same file and split it afterwards. However, the data you are saving requires about 60 MB/s of sustained write speed, which is quite high.
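A minimal sketch of the single-file approach: prefix each frame with a small header (timestamp plus payload size) so the recording can be split back into individual images later. The function names and header layout here are illustrative, not part of the original program.

```c
#include <stdio.h>
#include <stdint.h>

/* Append one frame to an already-open recording file.  A small
 * per-frame header (timestamp + payload size) makes splitting the
 * file into individual images trivial in a later pass. */
static int append_frame(FILE *out, uint32_t timestamp,
                        const void *data, size_t size)
{
    uint32_t hdr[2] = { timestamp, (uint32_t)size };
    if (fwrite(hdr, sizeof hdr, 1, out) != 1)
        return -1;
    if (fwrite(data, size, 1, out) != 1)
        return -1;
    return 0;
}

/* Read the next frame header; returns 0 on success, -1 on EOF/error.
 * The caller then reads `*size` bytes of payload. */
static int read_frame_header(FILE *in, uint32_t *timestamp, uint32_t *size)
{
    uint32_t hdr[2];
    if (fread(hdr, sizeof hdr, 1, in) != 1)
        return -1;
    *timestamp = hdr[0];
    *size = hdr[1];
    return 0;
}
```

Keeping the file open for the whole recording means the filesystem only creates one directory entry, and the splitter can be run offline when write bandwidth no longer matters.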

An alternative, if you have a lot of memory, is to create a RAM disk, store the images there first, and move them to the persistent filesystem afterwards. With a 6 GB RAM disk you could store about 100 seconds of video.

OTHER TIPS

A possible improvement would be to explicitly set the buffering of fp to fully buffered using setvbuf:

const size_t BUFFER_SIZE = 1024 * 16;
fp = fopen(filename, "w");
setvbuf(fp, NULL, _IOFBF, BUFFER_SIZE); /* Must be done before any other I/O on fp. */
fprintf(fp, "P5 %d %d 65535\n", frame->width, frame->height);
fwrite(frame->data, frame->size, 1, fp);
fclose(fp);

You could profile using different buffer sizes to determine which provides the best performance.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow