Core Image's face detector works in the image coordinate space, not in view space. The coordinates it returns are pixels in the image, not points in your view.

Here's a tutorial on what these coordinate spaces are and how to convert between them. It should clear things up for you.
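As a rough sketch of the kind of conversion involved (assuming the image is displayed in a view that matches its aspect ratio exactly; the helper and parameter names here are just illustrative):

```swift
import UIKit

// Sketch: move a face rectangle from Core Image's coordinate space
// (pixels, origin at the bottom-left) into UIKit view coordinates
// (points, origin at the top-left).
func convertToViewCoordinates(faceBounds: CGRect,
                              imageSize: CGSize,
                              viewSize: CGSize) -> CGRect {
    // Flip the y axis: Core Image's origin is the bottom-left corner,
    // UIKit's is the top-left corner.
    var rect = faceBounds
    rect.origin.y = imageSize.height - rect.origin.y - rect.size.height

    // Scale from image pixels down to view points.
    let scaleX = viewSize.width / imageSize.width
    let scaleY = viewSize.height / imageSize.height
    return rect.applying(CGAffineTransform(scaleX: scaleX, y: scaleY))
}
```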
As far as orientation goes: you got it right, it might be reversed.
When the user takes a picture, whether in landscape or portrait, the actual image written to disk always has the same dimensions. A flag is simply set in the file (the `Exif.Image.Orientation` tag, to be precise) that says which orientation it should be displayed in. `UIImageView` respects that flag, but it is lost when you convert to `CGImage` and then `CIImage`.
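One way to compensate (a sketch, not the only approach) is to map the original `UIImage`'s orientation back to its EXIF value and hand it to the detector through the `CIDetectorImageOrientation` option:

```swift
import UIKit
import CoreImage

// Map UIImage.Orientation to the corresponding EXIF orientation value.
func exifOrientation(for orientation: UIImage.Orientation) -> Int {
    switch orientation {
    case .up:            return 1
    case .upMirrored:    return 2
    case .down:          return 3
    case .downMirrored:  return 4
    case .leftMirrored:  return 5
    case .right:         return 6
    case .rightMirrored: return 7
    case .left:          return 8
    @unknown default:    return 1
    }
}

func detectFaces(in image: UIImage) -> [CIFaceFeature] {
    guard let cgImage = image.cgImage else { return [] }
    let ciImage = CIImage(cgImage: cgImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // Re-supply the orientation that was lost in the CGImage/CIImage conversion.
    let options = [CIDetectorImageOrientation: exifOrientation(for: image.imageOrientation)]
    return detector?.features(in: ciImage, options: options) as? [CIFaceFeature] ?? []
}
```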
You can tell whether or not to flip the `x` and `y` values by looking at the original `UIImage`'s `imageOrientation` property. If you want to learn more about what this flag is exactly, and how a surprisingly large number of people get it wrong, head over to here
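For illustration, a check like this (names are made up) tells you whether the stored pixels are rotated relative to what the user saw, i.e. whether the detector's x and y need swapping:

```swift
import UIKit

// Rough sketch: decide from imageOrientation whether the detector's
// coordinates are rotated relative to how the image is displayed.
func needsCoordinateSwap(for image: UIImage) -> Bool {
    switch image.imageOrientation {
    case .left, .leftMirrored, .right, .rightMirrored:
        // Pixels are stored rotated 90°, so the detector's x corresponds
        // to the displayed image's y (and vice versa).
        return true
    default:
        return false
    }
}
```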