Thank you very much for your help!
The following is the code I use to get the size and clean rect of a CVImageBuffer, and the clean texture coordinates of the CVOpenGLESTexture created from that CVImageBuffer. The CVImageBuffer was grabbed from the iPhone 5 camera running iOS 7.
```objc
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
// Geometry metadata of the image buffer
CGRect cleanRect = CVImageBufferGetCleanRect(cameraFrame);
CGSize nominalOutputDisplaySize = CVImageBufferGetDisplaySize(cameraFrame);
CGSize encodedSize = CVImageBufferGetEncodedSize(cameraFrame);
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
// Clean texture coordinates for the four corners of the created texture
GLfloat lowerLeft[2], lowerRight[2], upperRight[2], upperLeft[2];
CVOpenGLESTextureGetCleanTexCoords(luminanceTextureRef, lowerLeft, lowerRight, upperRight, upperLeft);
NSLog(@"Clean Rect: %@", NSStringFromCGRect(cleanRect));
NSLog(@"Nominal Output Display Size: %@", NSStringFromCGSize(nominalOutputDisplaySize));
NSLog(@"Encoded full Size: %@", NSStringFromCGSize(encodedSize));
NSLog(@"Lower left coordinate: %f, %f", lowerLeft[0], lowerLeft[1]);
NSLog(@"Lower right coordinate: %f, %f", lowerRight[0], lowerRight[1]);
NSLog(@"Upper right coordinate: %f, %f", upperRight[0], upperRight[1]);
NSLog(@"Upper left coordinate: %f, %f", upperLeft[0], upperLeft[1]);
```
The output is as follows:

```
Clean Rect: {{0, 0}, {640, 480}}
Nominal Output Display Size: {640, 480}
Encoded full Size: {640, 480}
Lower left coordinate: 0.000000, 1.000000
Lower right coordinate: 1.000000, 1.000000
Upper right coordinate: 1.000000, 0.000000
Upper left coordinate: 0.000000, 0.000000
```
This particular output is not very revealing, because the encoded full size is the same as the nominal output display size, so the clean rect covers the whole buffer and the clean texture coordinates span the full 0–1 range. My guess is that the clean texture coordinates are meant to crop the clean aperture out of the full encoded image buffer, i.e. they would differ from 0 and 1 whenever the clean rect is smaller than the encoded size.
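For example, here is a minimal sketch of how I imagine they would be used when rendering, assuming an already-linked shader program whose attribute locations are held in the hypothetical variables `positionAttribute` and `textureCoordinateAttribute`, and the texture from `luminanceTextureRef` already bound. Passing the clean coordinates as the quad's texture coordinates should make OpenGL sample only the clean aperture of the encoded image:

```objc
// Sketch only: draw a full-screen quad that samples just the clean aperture.
// positionAttribute / textureCoordinateAttribute are hypothetical attribute
// locations from a shader program set up elsewhere.
static const GLfloat squareVertices[] = {
    -1.0f, -1.0f,   // lower left
     1.0f, -1.0f,   // lower right
    -1.0f,  1.0f,   // upper left
     1.0f,  1.0f,   // upper right
};

// Texture coordinates taken from CVOpenGLESTextureGetCleanTexCoords above;
// ordering matches the vertex ordering of the triangle strip.
const GLfloat cleanTextureCoordinates[] = {
    lowerLeft[0],  lowerLeft[1],
    lowerRight[0], lowerRight[1],
    upperLeft[0],  upperLeft[1],
    upperRight[0], upperRight[1],
};

glVertexAttribPointer(positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0, cleanTextureCoordinates);
glEnableVertexAttribArray(textureCoordinateAttribute);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```

With the output above this crops nothing, since the clean coordinates already cover the whole texture; the difference would only show up on a buffer whose clean rect is smaller than its encoded size.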