Question

I currently have an image with defined distances between several values, measured in pixels. What is the most future-proof way to convert these pixel positions into point positions on iOS? I need to overlay specific images at these spots based on a calculation. Would anyone know the best way to do this?


Solution 2

I found the answer to this question a long time ago. There are two easy ways to preserve the full image scale factor while working in pixels. One is to set the image as the contents of a CALayer and work with the raw pixel size, without setting a content size on the layer, so it keeps a 1:1 scale factor. The other is to convert the pixel values to point coordinates using the scale factor provided by iOS.
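The second approach (dividing pixel values by the iOS scale factor) can be sketched as below. On a device you would read the scale from `UIScreen.main.scale`; here it is hard-coded as an assumption so the sketch is self-contained, and the type names are made up for illustration:

```swift
import Foundation

// Simple stand-in types for a coordinate measured in image pixels
// versus one expressed in iOS points (hypothetical names).
struct PixelPoint { let x: Double; let y: Double }
struct DisplayPoint { let x: Double; let y: Double }

// Convert a pixel position to a point position. One point spans
// `scale` pixels in each dimension, so divide by the scale factor.
func pointsFromPixels(_ p: PixelPoint, scale: Double) -> DisplayPoint {
    return DisplayPoint(x: p.x / scale, y: p.y / scale)
}

// Assumption: an @2x Retina display, where UIScreen.main.scale == 2.0.
let screenScale = 2.0
let measured = PixelPoint(x: 240, y: 640)   // measured in the @2x image
let position = pointsFromPixels(measured, scale: screenScale)
print(position.x, position.y)               // 120.0 320.0
```

The same division applies to distances between values, not just absolute positions, since scaling is linear.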

Other tips

Your distance values should be based on the standard image size, not the retina size. The retina image is used to fill in all of the on-screen pixels, but because of the way Apple has organised the sizes (always @2x), the displayed size is the same as the standard image.

Note that for this to work properly, you need to ensure the image isn't scaled or resized for display, i.e. the image view should be the same size as the standard image.
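As a minimal sketch of the tip above (the asset dimensions and measured distance are made-up example values): distances measured in an @2x asset are in pixels, so they must be divided by the asset scale to get the point distances that layout code uses, and the image view's point size should equal the standard image size:

```swift
import Foundation

// A distance measured directly in the @2x retina asset is in pixels.
let retinaAssetDistancePx = 150.0   // example measurement in the @2x image
let assetScale = 2.0                // the @2x suffix

// Standard-image (point) distance that on-screen layout should use.
let distanceInPoints = retinaAssetDistancePx / assetScale

// The image view must not resize the image: its point size should equal
// the standard image size, i.e. the retina pixel size divided by the scale.
let retinaPixelSize = (width: 640.0, height: 960.0)
let imageViewSize = (width: retinaPixelSize.width / assetScale,
                     height: retinaPixelSize.height / assetScale)

print(distanceInPoints)                              // 75.0
print(imageViewSize.width, imageViewSize.height)     // 320.0 480.0
```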

License: CC-BY-SA with attribution
Not affiliated with StackOverflow