I'm downloading a bitmap from a URL with the following code. If I do this cyclically (like streaming images from a camera), the bitmap will be reallocated again and again. So I wonder if there is a way to write the newly downloaded byte array into the existing bitmap that is already allocated in memory.

public static Bitmap downloadBitmap(String url) {
    try {
        URL newUrl = new URL(url);
        return BitmapFactory.decodeStream(newUrl.openConnection()
                .getInputStream());
    } catch (MalformedURLException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }

    return null;
}

Solution 2

The app is slowed down because it allocates and de-allocates memory in each cycle. There are three ways to avoid that.

The first version works without OpenCV but still allocates some memory in each cycle. The amount is much smaller, though, which makes it at least twice as fast. How? By re-using an existing, already allocated buffer (byte[]). I'm using it with a pre-allocated StreamInfo buffer of 1,000,000 bytes (about double the size I'm expecting).

By the way - reading the input stream in chunks and decoding with BitmapFactory.decodeByteArray is much faster than feeding the URL's input stream directly into BitmapFactory.decodeStream.

public static class StreamInfo {
    public byte[] buffer;
    public int length;

    public StreamInfo(int length) {
        buffer = new byte[length];
    }
}

public static StreamInfo imageByte(StreamInfo buffer, String url) {
    try {

        URL newUrl = new URL(url);
        InputStream is = (InputStream) newUrl.getContent();
        byte[] tempBuffer = new byte[8192];
        int bytesRead;
        int position = 0;

        if (buffer != null) {
            // re-using existing buffer

            while ((bytesRead = is.read(tempBuffer)) != -1) {
                System.arraycopy(tempBuffer, 0, buffer.buffer, position,
                        bytesRead);
                position += bytesRead;
            }

            buffer.length = position;
            return buffer;
        } else {
            // allocating new buffer

            ByteArrayOutputStream output = new ByteArrayOutputStream();
            while ((bytesRead = is.read(tempBuffer)) != -1) {
                output.write(tempBuffer, 0, bytesRead);
                position += bytesRead;
            }

            byte[] result = output.toByteArray();
            buffer = new StreamInfo(result.length * 2);
            buffer.length = position;

            System.arraycopy(result, 0, buffer.buffer, 0, result.length);
            return buffer;
        }
    } catch (MalformedURLException e) {
        e.printStackTrace();
        return null;
    } catch (IOException e) {
        e.printStackTrace();
        return null;
    }
}
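The calling pattern is easiest to see with the network taken out of the picture. Below is a minimal sketch of the same re-use idea in plain Java: StreamInfo and readStream are simplified stand-ins for the classes above (readStream takes an InputStream instead of a URL, and a ByteArrayInputStream stands in for the camera stream), so it runs outside Android. The first cycle allocates a buffer with double the needed capacity; every later cycle fills the same array.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReusePattern {
    static class StreamInfo {
        byte[] buffer;
        int length;
        StreamInfo(int length) { buffer = new byte[length]; }
    }

    // Simplified stand-in for imageByte: fills the existing StreamInfo if one
    // is passed in, otherwise allocates one with twice the needed capacity.
    static StreamInfo readStream(StreamInfo info, InputStream is) throws IOException {
        byte[] chunk = new byte[8192];
        int bytesRead;
        if (info == null) {
            // first cycle: size is unknown, collect into a growing stream
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            while ((bytesRead = is.read(chunk)) != -1)
                out.write(chunk, 0, bytesRead);
            byte[] result = out.toByteArray();
            info = new StreamInfo(result.length * 2);
            System.arraycopy(result, 0, info.buffer, 0, result.length);
            info.length = result.length;
            return info;
        }
        // later cycles: copy chunks straight into the pre-allocated array
        int position = 0;
        while ((bytesRead = is.read(chunk)) != -1) {
            System.arraycopy(chunk, 0, info.buffer, position, bytesRead);
            position += bytesRead;
        }
        info.length = position;
        return info;
    }

    public static void main(String[] args) throws IOException {
        byte[] frame1 = new byte[10000], frame2 = new byte[9000];

        StreamInfo info = readStream(null, new ByteArrayInputStream(frame1));
        byte[] firstBuffer = info.buffer;

        info = readStream(info, new ByteArrayInputStream(frame2));
        System.out.println(info.buffer == firstBuffer);  // prints true: no new allocation
        System.out.println(info.length);                 // prints 9000
    }
}
```

Note that the re-use branch assumes the incoming frame fits into the pre-allocated buffer; that is why the real code above over-allocates to roughly double the expected size.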

The second version uses an OpenCV Mat and a pre-allocated Bitmap. Receiving the stream is done as in version one, so it no longer needs any further memory allocation (for details check out this link). This version works fine but is a bit slower because it converts between OpenCV Mat and Bitmap.

private NetworkCameraFrame frame;
private HttpUtils.StreamInfo buffer = new HttpUtils.StreamInfo(1000000);
private MatOfByte matForConversion;

    private NetworkCameraFrame receive() {

        buffer = HttpUtils.imageByte(buffer, uri);

        if (buffer == null || buffer.length == 0)
            return null;

        Log.d(TAG, "Received image with byte-array of length: "
                + buffer.length / 1024 + "kb");

        if (frame == null) {
            // decode just the bounds to get the image dimensions
            // without allocating pixel memory
            final BitmapFactory.Options options = new BitmapFactory.Options();
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeByteArray(buffer.buffer, 0, buffer.length,
                    options);

            frame = new NetworkCameraFrame(options.outWidth, options.outHeight);
            Log.d(TAG, "NetworkCameraFrame created");
        }

        if (matForConversion == null)
            matForConversion = new MatOfByte(buffer.buffer);
        else
            matForConversion.fromArray(buffer.buffer);

        Mat newImage = Highgui.imdecode(matForConversion,
                Highgui.IMREAD_UNCHANGED);
        frame.put(newImage);
        return frame;
    }

private class NetworkCameraFrame implements CameraFrame {
    Mat mat;
    private int mWidth;
    private int mHeight;
    private Bitmap mCachedBitmap;
    private boolean mBitmapConverted;

    public NetworkCameraFrame(int width, int height) {

        this.mWidth = width;
        this.mHeight = height;
        this.mat = new Mat(new Size(width, height), CvType.CV_8U);

        this.mCachedBitmap = Bitmap.createBitmap(width, height,
                Bitmap.Config.ARGB_8888);
    }

    @Override
    public Mat gray() {
        return mat.submat(0, mHeight, 0, mWidth);
    }

    @Override
    public Mat rgba() {
        return mat;
    }

    // @Override
    // public Mat yuv() {
    // return mYuvFrameData;
    // }

    @Override
    public synchronized Bitmap toBitmap() {
        if (mBitmapConverted)
            return mCachedBitmap;

        Mat rgba = this.rgba();
        Utils.matToBitmap(rgba, mCachedBitmap);

        mBitmapConverted = true;
        return mCachedBitmap;
    }

    public synchronized void put(Mat frame) {
        mat = frame;
        invalidate();
    }

    public void release() {
        mat.release();
        mCachedBitmap.recycle();
    }

    public void invalidate() {
        mBitmapConverted = false;
    }
};
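The toBitmap()/invalidate() pair above implements a lazy-conversion cache: the expensive conversion runs at most once per received frame. That idea can be sketched without any Android or OpenCV types (Frame and the conversions counter below are illustrative, not part of the code above):

```java
public class LazyConversion {
    static int conversions = 0;  // counts how often the "expensive" conversion runs

    static class Frame {
        private int[] raw = new int[4];
        private int[] cached = new int[4];
        private boolean converted;

        // analogue of toBitmap(): convert only if the cache is stale
        synchronized int[] toPixels() {
            if (converted) return cached;
            System.arraycopy(raw, 0, cached, 0, raw.length);  // stands in for matToBitmap
            conversions++;
            converted = true;
            return cached;
        }

        // analogue of put(Mat): store new data and invalidate the cache
        synchronized void put(int[] newRaw) {
            raw = newRaw;
            converted = false;
        }
    }

    public static void main(String[] args) {
        Frame f = new Frame();
        f.toPixels();
        f.toPixels();                    // second call hits the cache
        System.out.println(conversions); // prints 1
        f.put(new int[] {1, 2, 3, 4});
        System.out.println(f.toPixels()[2]); // prints 3
        System.out.println(conversions);     // prints 2
    }
}
```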

The third version follows the "Usage of BitmapFactory" instructions: it uses BitmapFactory.Options and a mutable Bitmap that is then re-used while decoding. It even worked for me on Android Jelly Bean. Make sure you're using the correct BitmapFactory.Options when creating the very first Bitmap.

        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inMutable = true;    // the first Bitmap must be mutable to be reusable
        options.inSampleSize = 1;

        // the first decode allocates the Bitmap...
        Bitmap bmp = BitmapFactory.decodeByteArray(buffer, 0, buffer.length, options);

        // ...every following decode re-uses its memory via inBitmap
        options.inBitmap = bmp;
        bmp = BitmapFactory.decodeByteArray(buffer, 0, buffer.length, options);

This turned out to be the fastest way to stream.

Other tips

In the bitmap memory-management documentation, the section entitled 'Manage Memory on Android 3.0 and Higher' describes how to re-use a bitmap's allocation so that the Bitmap itself does not need to be re-allocated. If you are streaming frames from a camera, the frames all have the same size, so this works back to Honeycomb; on Android 3.0-4.3, inBitmap only accepts an image of exactly the same size, while from 4.4 (KitKat) onward the re-used bitmap merely has to be at least as large as the new image.

But you could also keep a local WeakReference to the bitmap (so it can still be collected in case of memory issues) within the downloadBitmap class, decode into that existing allocation, and return it instead of creating a new bitmap on every call.
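A minimal sketch of that weak-caching pattern, using a plain byte[] in place of a Bitmap so it runs outside Android (the class and method names are illustrative): the cache is re-used while it is alive and big enough, and the GC is free to reclaim it under memory pressure, in which case a fresh buffer is simply allocated.

```java
import java.lang.ref.WeakReference;

public class BitmapCache {
    // Weakly cached buffer: the GC may reclaim it under memory pressure,
    // in which case obtainBuffer allocates a fresh one.
    private static WeakReference<byte[]> cached;

    static byte[] obtainBuffer(int size) {
        byte[] buf = (cached != null) ? cached.get() : null;
        if (buf == null || buf.length < size) {
            buf = new byte[size];               // (re)allocate only when needed
            cached = new WeakReference<>(buf);
        }
        return buf;
    }

    public static void main(String[] args) {
        byte[] a = obtainBuffer(1000);
        byte[] b = obtainBuffer(800);   // smaller request: the same array is re-used
        System.out.println(a == b);     // prints true
        byte[] c = obtainBuffer(2000);  // larger request forces a new allocation
        System.out.println(a == c);     // prints false
    }
}
```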

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow