Question

I am using the knox package to connect my S3 account and pull an image, like this:

var picturestring = ''; // initialize, otherwise the first chunk is appended to "undefined"
knoxclient.get(key).on('response', function(res){
    console.log(res.statusCode);
    console.log(res.headers);
    res.setEncoding('base64');
    res.on('data', function(chunk){
        picturestring += chunk;
    });
    res.on('end', function () {
        console.log(picturestring);
        resizeimage(picturestring, done); // use the resize library in this function
    });
}).end();

After that, I want to use a library that can take in that string (picturestring), resize the image, and return a new base64 string representing the resized image. At that point, I plan on uploading the resized image to S3.

I wrote a similar script in Golang that let me resize images like this, but every JS resizing library I've reviewed gives examples only of resizing images from the local file system.

Is there any way I can avoid reading the image from S3 into the file system, and deal with the returned string exclusively?

***************UPDATE***********************

function pullFromS3 (key, done) {
    console.log("This is the key being pulled from Amazon: ", key);
    var originalstream = new MemoryStream(null, {readable: false});
    client.get(key).on('response', function(res){
        console.log("This is the res status code: ", res.statusCode);
        res.setEncoding('base64');
        res.pipe(originalstream);
        res.on('end', function () {
            resizeImage(originalstream, key, done);
        });
    }).end();
};

function resizeImage (originalstream, key, done) {
    console.log("This is the original stream: ", originalstream.toString());
    var resizedstream = new MemoryStream(null, {readable: false});
    var resize = im().resize('160x160').quality(90);
    // getting stuck here ******
    originalstream.pipe(resize).pipe(resizedstream);
    done();
};

I can't seem to get a grip on how the piping works from originalstream --> the ImageMagick resize function --> resizedstream. Ideally, resizedstream should hold the base64 string for the resized image, which I can then upload to S3.

1) How do I wait for the piping to finish, and THEN use the data in resizedstream?

2) Am I doing the piping correctly? I can't debug it because I am unsure how to wait for the piping to finish!


Solution

I'm not using S3 but a local cloud provider in China to store images and their thumbnails. In my case I used the imagemagick library together with the imagemagick-stream and memorystream modules.

imagemagick-stream provides a way to process an image with ImageMagick through streams, so I don't need to save the image to local disk.

memorystream provides a way to store the source image and thumbnail image binaries in memory, with the ability to read from and write to streams.

So the logic I have is:

1. Retrieve the image binary from the client POST request.

2. Save the image into memory using memorystream.

3. Upload it to, in your case, S3.

4. Define the image processing action in imagemagick-stream, for example resize to 180x180.

5. Create a readable stream from the original image binary from step 1 using memorystream, pipe it into the imagemagick-stream created in step 4, and then pipe that into a new memory writable created by memorystream, which stores the thumbnail.

6. Upload the thumbnail from step 5 to S3.

The only problem in my solution is that your virtual machine might run out of memory if many huge images come in at once. I know this shouldn't happen in my case, so that's OK, but you'd better evaluate it for yourself.

Hope this helps a bit.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow