Question

I currently have an image with defined distances between multiple values, measured in pixels. What would be the most future-proof way of converting these pixel positions into point positions on iOS? I have to overlay specific images at these spots based on a performed calculation. Would anyone know the best way to do this?


Solution 2

I found the answer to this question a long time ago. There are two straightforward ways to keep the full image scale factor while working in pixels. One is to assign the bitmap directly to a CALayer's contents and size the layer to the raw pixel dimensions, so the layer keeps a 1:1 mapping between its units and the image's pixels. The other is to convert the pixel coordinates to point coordinates using the scale factor provided by iOS.
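A minimal sketch of both approaches in Swift, under my reading of the answer above; "overlay" is a hypothetical asset name and the exact layer setup is illustrative, not the original poster's code:

    import UIKit

    // Approach 1: give a CALayer the raw bitmap and size the layer to the
    // bitmap's pixel dimensions, so one layer unit corresponds to one pixel.
    let image = UIImage(named: "overlay")!   // hypothetical asset
    let pixelLayer = CALayer()
    pixelLayer.contents = image.cgImage
    pixelLayer.contentsScale = 1.0           // default; layer stays 1:1 with pixels
    pixelLayer.frame = CGRect(x: 0, y: 0,
                              width: image.size.width * image.scale,
                              height: image.size.height * image.scale)

    // Approach 2: convert pixel coordinates to point coordinates using the
    // scale factor iOS reports (2.0 on @2x displays, 3.0 on @3x).
    let scale = UIScreen.main.scale
    let pixelPosition = CGPoint(x: 512, y: 384)
    let pointPosition = CGPoint(x: pixelPosition.x / scale,
                                y: pixelPosition.y / scale)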

OTHER TIPS

Your distance values should be based on the standard (1x) image size, not the retina size. The retina bitmap is only used to fill in the extra on-screen pixels; because of the way Apple organises assets (the retina version is always @2x), the image's display size in points is the same as the standard image's size.

Note that for this to work properly you need to ensure the image isn't scaled or resized for display; that is, the image view should be the same size (in points) as the standard image.
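As an illustration of that tip, the sketch below takes a position measured on the @2x bitmap and divides by the image's scale to get the point position; "photo" and "marker" are hypothetical asset names:

    import UIKit

    // Load an asset that ships as photo.png (1x) and photo@2x.png (retina).
    let photo = UIImage(named: "photo")!
    // The image view adopts the image's point size, so nothing is resized.
    let imageView = UIImageView(image: photo)

    // A position measured in pixels on the @2x bitmap...
    let pixelPosition = CGPoint(x: 240, y: 360)
    // ...divided by the image's scale (2.0 for @2x) gives the point position.
    let pointPosition = CGPoint(x: pixelPosition.x / photo.scale,
                                y: pixelPosition.y / photo.scale)

    // Overlay a marker image at that point.
    let marker = UIImageView(image: UIImage(named: "marker"))
    marker.center = pointPosition
    imageView.addSubview(marker)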
