Question

Some iOS devices have cameras capable of 720p capture, while others support 1080p.

Holding the screen size fixed, the 1080p stream will obviously provide a better picture, since we are fitting more pixels into the same screen area.

But if we wanted to manipulate pixels using:

-(void) captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection 

and, for the sake of argument, suppose we will not be rendering the frames anywhere but only running calculations on them.

Obviously, the buffer height and width will be larger. But does the 1080p camera capture more pixels because it has a wider "field of vision" (in which case there is no enhancement in quality), or does it work within the same field of vision as the 720p camera and simply capture more pixels per inch? In the latter case, even if I never output the buffer to an image, I should expect more grain/detail in my frame buffer.

Thanks


Solution

They have the same field of vision; the only difference is that 1080p captures more pixels from the same area. This is why the frames are bigger: if you were to print the raw frames, you would see that the 1080p image is larger than the 720p one, but the framing is the same. So when you display both in the same window, the 1080p version looks prettier. However, the memory required is higher and the frame acquisition rate is lower; you may also notice more dropped frames at 1080p if you have the "drop late frames" option enabled.

Depending on the speed of your calculations, you may have to lower the resolution even further; for example, if you were to perform heavy-duty OpenCV-style image processing, using 1080p would simply be impossible if smoothness is required.

By the way, this is not really an iOS or OpenGL question; it is just how resolutions work. Even the quality of a TV broadcast follows the same principle.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow