Question

We are maintaining a database of image media in a large-scale web application. The high-resolution JPEGs are big (> 15 MB) and must not be made available for download in any way. Now we need to give clients access to the details (crops) of the images, like a zoom-in function. The client should see a downscaled version of the image and be able to select an area of it to be viewed at full scale (100%).

How could this be implemented in the most performant way (traffic- and CPU-wise)? We are open to any solution as long as the high-resolution image file remains protected. The application is developed in C# on .NET Framework 3.5.

Any ideas? Thanks in advance!


Solution

The first thing you will need to do is compress and watermark the images before uploading them to the server, then present those to the user. This will take the least CPU resources since the images will be static.

I personally would then crop the images for the full-size versions and put them up alongside the compressed ones. This way the client gets a view of the full image (albeit compressed and watermarked) alongside a small sample of the full hi-res version.

You probably want to avoid on-the-fly image manipulation unless you have a low number of clients and a very beefy server.
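A minimal sketch of what that pre-rendering step could look like with System.Drawing (GDI+), which ships with .NET 3.5; the paths, maximum width, JPEG quality and watermark text are placeholder assumptions, not part of the original answer:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

public static class PreviewGenerator
{
    // Renders a downscaled, watermarked, compressed JPEG preview of a high-res master.
    public static void CreatePreview(string sourcePath, string previewPath, int maxWidth)
    {
        using (Image original = Image.FromFile(sourcePath))
        {
            int width = Math.Min(maxWidth, original.Width);
            int height = (int)(original.Height * (width / (double)original.Width));

            using (Bitmap preview = new Bitmap(width, height))
            using (Graphics g = Graphics.FromImage(preview))
            {
                g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                g.DrawImage(original, 0, 0, width, height);

                // Simple semi-transparent text watermark in the lower-left corner.
                using (Font font = new Font("Arial", 24, FontStyle.Bold))
                using (Brush brush = new SolidBrush(Color.FromArgb(96, Color.White)))
                {
                    g.DrawString("(c) Example Media", font, brush, 10, height - 50);
                }

                // Save as a compressed JPEG (quality 70 chosen arbitrarily).
                using (EncoderParameters p = new EncoderParameters(1))
                {
                    p.Param[0] = new EncoderParameter(Encoder.Quality, 70L);
                    preview.Save(previewPath, GetJpegCodec(), p);
                }
            }
        }
    }

    private static ImageCodecInfo GetJpegCodec()
    {
        foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
            if (codec.MimeType == "image/jpeg") return codec;
        return null;
    }
}
```

Run once per uploaded master (e.g. in a batch job), so serving the previews is just static file traffic.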

OTHER TIPS

I would serve a low-res version of the image to the browser and have a client-side crop UI that sends a request back to the server, which crops out the selection and sends it back in high res.
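A rough sketch of the server-side half of that round trip as an ASP.NET IHttpHandler; the query-string parameters, folder layout and missing authorization check are assumptions made for illustration:

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Web;

// Returns a high-res crop of a protected image. The master file lives under
// App_Data, outside the directly requestable web content, so only this handler reads it.
public class CropHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Authorization check omitted for brevity -- do not skip it in production.
        string id = context.Request.QueryString["id"];
        int x = int.Parse(context.Request.QueryString["x"]);
        int y = int.Parse(context.Request.QueryString["y"]);
        int w = int.Parse(context.Request.QueryString["w"]);
        int h = int.Parse(context.Request.QueryString["h"]);

        string sourcePath = context.Server.MapPath("~/App_Data/masters/" + id + ".jpg");

        using (Bitmap original = new Bitmap(sourcePath))
        {
            // Clamp the requested rectangle to the image bounds.
            Rectangle crop = Rectangle.Intersect(
                new Rectangle(x, y, w, h),
                new Rectangle(0, 0, original.Width, original.Height));

            using (Bitmap result = original.Clone(crop, original.PixelFormat))
            using (MemoryStream ms = new MemoryStream())
            {
                result.Save(ms, ImageFormat.Jpeg);
                context.Response.ContentType = "image/jpeg";
                context.Response.BinaryWrite(ms.ToArray());
            }
        }
    }

    public bool IsReusable { get { return false; } }
}
```

Note that this is exactly the on-the-fly manipulation the accepted answer warns about, so it only scales if the number of concurrent croppers is modest or the results are cached.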

As I say to my father (who does not understand how the internet works): if you can see it on a web page, you can save it; it's just a question of how to do it.

There is an Ajax version of Deep Zoom that you might like, Seadragon:

http://livelabs.com/seadragon-ajax/gallery/

The user is presented with a low-res version of the image; they can then zoom in on any part of it that they like.

Firstly, I would pre-render watermarked versions of all the full-size images, saving them in a compressed file format, as well as pre-rendered low-res versions.

I would serve the low-res images for browsing, then the watermarked high-res image for the user to set up their cropping.

At the point of confirmation I would have a second, dedicated image-processing server crop the non-watermarked image and pass the cropped image to the web server, which sends it to the client.
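One way that hand-off could be wired up, sketched as a handler on the web server that proxies the crop request to an internal image-processing host; the internal URL and the pass-through of the query string are hypothetical details, not part of the answer:

```csharp
using System.IO;
using System.Net;
using System.Web;

// Web-facing handler: authorizes the user, then streams the crop from an
// internal image-processing server that is not reachable from the internet.
public class CropProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (!context.Request.IsAuthenticated)
        {
            context.Response.StatusCode = 403;
            return;
        }

        // Hypothetical internal endpoint; only the web server can reach it.
        string internalUrl = "http://imgproc.internal/crop?" + context.Request.QueryString;

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(internalUrl);
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream source = response.GetResponseStream())
        {
            context.Response.ContentType = "image/jpeg";
            byte[] buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, read);
        }
    }

    public bool IsReusable { get { return false; } }
}
```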

That being said, it would still be possible to create a client-side script that harvested the cropped portions and stitched them together into a full-size copy of the non-watermarked image.

must not be made available for download in any way.

is at odds with:

The client should see a downscaled version of the image and be able to select an area of it to be viewed at full scale (100%).

... at the point you allow all areas of the image to be viewed at full res, the entire image could be stitched together, so you're effectively (if very inconveniently) making the full-size image available.

None of this helps you achieve the goal, though.

The way I'd do it would be to provide a 72 dpi watermarked copy for use in selecting the area of the image to download. You could scale this to a percentage of the original if screen real estate were an issue. Have the user choose top-left and bottom-right coordinates, then use something like ImageMagick to copy this area out of the original to be served to the user.

If you need to conserve resources, you could have the users download from a predefined grid, so the first time grid coordinate 14:11 is chosen, image_1411_crop.jpg gets written to the file system, and the next time that coordinate is selected, the file already exists.
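A hedged sketch of that grid cache, shelling out to ImageMagick's convert for the actual crop; the tile size, paths, file-name scheme and the assumption that convert is on the PATH are all illustrative:

```csharp
using System.Diagnostics;
using System.IO;

public static class GridCropCache
{
    private const int TileSize = 512; // assumed grid cell size in source pixels

    // Returns the path of the cached crop for grid cell (col, row),
    // generating it with ImageMagick on first request only.
    public static string GetCrop(string masterPath, string cacheDir, int col, int row)
    {
        string name = Path.GetFileNameWithoutExtension(masterPath);
        string cropPath = Path.Combine(cacheDir,
            string.Format("{0}_{1:D2}{2:D2}_crop.jpg", name, col, row));

        if (!File.Exists(cropPath))
        {
            // ImageMagick: convert in.jpg -crop WxH+X+Y out.jpg
            string args = string.Format("\"{0}\" -crop {1}x{1}+{2}+{3} \"{4}\"",
                masterPath, TileSize, col * TileSize, row * TileSize, cropPath);

            using (Process p = Process.Start("convert", args))
            {
                p.WaitForExit();
            }
        }
        return cropPath;
    }
}
```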

Edit: read some of your comments on other answers...

No matter how you go about generating and server-side caching, you're going to use the same amount of bandwidth and traffic. A 300 dpi JPEG is a 300 dpi JPEG whether it has just been generated or is sitting on the file system.

You have to find out whether you need to conserve CPU or disk space. If you've got a million gigs of images and only forty users, you can afford the CPU hit; if you've got forty gigs of images and a million users, go for the HDD.

I'd use S3 for storage. Create two buckets (public and protected), and give out URLs to the protected bucket's images once you have authorized the user to download them. S3 URLs can be signed with an expiration date.
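For what it's worth, a sketch of handing out such a time-limited link with the AWS SDK for .NET; the bucket name and expiry window are placeholders, and the exact request syntax has changed between SDK versions (older releases used fluent With* setters instead of properties):

```csharp
using System;
using Amazon.S3;
using Amazon.S3.Model;

public static class ProtectedImageLinks
{
    // Generates a URL for an object in the protected bucket that expires after
    // ten minutes; the object itself is never publicly readable.
    public static string GetTemporaryUrl(string key)
    {
        using (AmazonS3Client client = new AmazonS3Client())
        {
            GetPreSignedUrlRequest request = new GetPreSignedUrlRequest
            {
                BucketName = "my-protected-images",   // placeholder bucket name
                Key = key,
                Expires = DateTime.UtcNow.AddMinutes(10)
            };
            return client.GetPreSignedURL(request);
        }
    }
}
```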

With 15 MB images you'll likely realize that you need to pre-generate the scaled/cropped versions ahead of time.

I'd use a watermark of some sort on all but the original file (like Google Maps does).

[Edit: Added Deep Zoom for zooming]

Check out Silverlight Deep Zoom for managing the cropping and zooming (Demo). They even have a utility to generate all the cropped images.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow