You actually have to translate the point into the image's coordinate system in both orientations. In your example it works for portrait only because the image has the same aspect ratio as the screen. If you had a square image, the point would have no 1:1 relation to the image and could just as easily land outside it.
I can think of two methods to accomplish your goal off the top of my head:
1) Instead of having the bottom image view take up the whole screen and aspect fill, resize it manually to span either the whole width or the whole height of the screen, depending on the image. E.g. if you have a 1000x1000 image, you want the image view to be 320x320 for portrait (iPhone 3) or 300x300 for landscape. Then make the second image a subview of the first. Its location will now be relative to the bottom image's coordinate system.
2) Use some simple math to translate the point on screen into the image's coordinate system. You know size and aspect ratio of the bottom image view, and you know the same about the image.
Let's say the image is 640x920. In landscape, the bottom image view will be 480x300.
1) Scale the image to the size it will have in the image view (209x300).
2) Since the image is centered, it will start about 136 points from the left.
3) A point of (280, 300) on the screen therefore translates to roughly (144, 280) relative to the smaller image (subtracting 20 points for the status bar), which translates to about (441, 859) in the full image's coordinate system.
I hope this points you in the right direction.
EDIT:
Ok, working off your example logs:
[btw, you can print out a CGRect more easily like this: NSLog(@"%@", NSStringFromCGRect(self.photoImageView.frame))]
1) Scale the image dimensions so the image fits into the image view:

    ratio = photoImageView.frame.size.height / photoImageView.image.size.height;
    height = photoImageView.image.size.height * ratio;
    width = photoImageView.image.size.width * ratio;

2) Calculate the frame of the image inside the image view:

    x = (photoImageView.frame.size.width - width) / 2;
    y = 0;
    frame = CGRectMake(x, y, width, height);

3) Assuming you use [touch locationInView:photoImageView] to get the location point, you can check whether the touch is within the image frame with:

    CGRectContainsPoint(frame, location)
EDIT - including the actual code used - slightly different from the above, as it handles the orientation of the image view as well.
    if (self.photoImageView.frame.size.height < self.photoImageView.frame.size.width) {
        // The device and image view are in landscape
        if (self.pushedPhoto.size.height > self.pushedPhoto.size.width) {
            // The pushed photo is portrait
            self.photoImageViewRatio = self.photoImageView.frame.size.height / self.pushedPhoto.size.height;
        } else {
            // The pushed photo is landscape
            self.photoImageViewRatio = self.photoImageView.frame.size.width / self.pushedPhoto.size.width;
        }
    } else {
        // The device and image view are in portrait
        if (self.pushedPhoto.size.height > self.pushedPhoto.size.width) {
            // The pushed photo is portrait
            self.photoImageViewRatio = self.photoImageView.frame.size.width / self.pushedPhoto.size.width;
        } else {
            // The pushed photo is landscape
            self.photoImageViewRatio = self.photoImageView.frame.size.height / self.pushedPhoto.size.height;
        }
    }
    self.valueHeight = self.pushedPhoto.size.height * self.photoImageViewRatio;
    self.valueWidth = self.pushedPhoto.size.width * self.photoImageViewRatio;
    float x = (self.photoImageView.frame.size.width - self.valueWidth) / 2;
    float y = (self.photoImageView.frame.size.height - self.valueHeight) / 2;