I'm not using S3 but a local cloud provider in China to store images and their thumbnails. In my case I used ImageMagick via the `imagemagick-stream` and `memorystream` modules. `imagemagick-stream` lets you process an image with ImageMagick through streams, so the image never has to be written to local disk; `memorystream` holds the source and thumbnail binaries in memory and exposes them as readable/writable streams.
So the logic I have is:

1. Retrieve the image binary from the client's POST request.
2. Save the image into memory using `memorystream`.
3. Upload it to (in your case) S3.
4. Define the image-processing action in `imagemagick-stream`, for example resize to 180x180.
5. Create a read stream from the original image binary of step 2 using `memorystream`, pipe it into the `imagemagick-stream` resizer created in step 4, and then pipe that into a new writable memory stream (again via `memorystream`) that collects the thumbnail.
6. Upload the thumbnail from step 5 to S3.
The only problem with this solution is that the virtual machine can run out of memory if many huge images arrive at once. I know that won't happen in my case, so it's fine for me, but you'd better evaluate it for your own workload.
Hope this helps a bit.