Question

All over the place, I read what a serious performance hit Blending was, until I came across a comment that it was not so expensive on iOS devices due to their architecture.

Now, Apple's wonderfully über-controlled world is a bit different from Android's, but I've done some tests and it looks like Blending costs me only half as much performance as switching from RGB555 to RGBA8888 (on the two devices I tried).

Questions:

  • Is there a rule of thumb saying that, while Android devices can differ substantially in hardware, their ratio of GPU computational power to screen resolution does not fall below a certain threshold?

  • Does such a rule also apply to Blending?

  • Is there a list of cornerstone test devices somewhere, resulting from some systematic market analysis — in the form of: if it runs on these devices, it'll run on pretty much any reasonable device?

  • Do you use blending, and what experience does it give your customers?

I see alternatives to using blending, so I'm interested in knowing either what to invest in or whether I should avoid, hmm, the unknown.


Solution

I doubt there is any rule of thumb, even in the Apple world; tomorrow they may decide to switch to another architecture, and all your assumptions are screwed. For Android, there are too many vendors and architectures to decide which blending threshold is supposed to be good for each one. The only rule of thumb is to use blending only when you really need to. So the answer to your question is: it's NOT safe to assume blending is relatively cheap on mobile devices. Blending requires reading from memory, and memory is slow on mobile devices.
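To make the memory-read point concrete, here is a back-of-the-envelope sketch (the resolution, pixel format, and frame rate are illustrative assumptions, not measurements): a blended pixel must read the destination color from the framebuffer before writing the result, which roughly doubles framebuffer traffic compared with an opaque overwrite.

```python
# Rough framebuffer-bandwidth estimate for one full-screen pass.
# All figures are illustrative assumptions, not benchmarks.
WIDTH, HEIGHT = 800, 480      # assumed screen resolution
BYTES_PER_PIXEL = 4           # RGBA8888
FPS = 60                      # assumed target frame rate

pixels_per_frame = WIDTH * HEIGHT

# Opaque draw: one framebuffer write per pixel.
opaque_bytes = pixels_per_frame * BYTES_PER_PIXEL

# Alpha blending: read destination + write result -> twice the traffic.
blended_bytes = pixels_per_frame * BYTES_PER_PIXEL * 2

opaque_mb_s = opaque_bytes * FPS / 1e6
blended_mb_s = blended_bytes * FPS / 1e6

print(f"opaque:  {opaque_mb_s:.1f} MB/s")   # 92.2 MB/s
print(f"blended: {blended_mb_s:.1f} MB/s")  # 184.3 MB/s
```

This ignores caches, tile-based deferred rendering (which is exactly why blending is cheaper on the PowerVR chips in iOS devices), and texture reads — but it shows why blending on a bandwidth-starved immediate-mode GPU hurts.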

Other tips

I've done some additional research which I thought I'd share. If you appreciate my investigation then let me know (upvote?) and I'll probably share more as I learn more.

With only 1.8% of Android devices running a version prior to API 8/2.2/Froyo, 98.2% of Android devices support OpenGL ES 2.0 (although the 2.2 API has some flaws).

I found the following peak/max/... figures for the weakest GLES 2.0-capable chips of the major GPU brands (at maximum clock frequency):

  • Adreno 200 (22 million triangles per second, 133 million pixels per second)
  • Mali-200 (16 mtps, 275 mpps)
  • PowerVR SGX520 (7 mtps, 100 mpps)
  • Tegra APX 2500 (40 mtps, 400 or 600 mpps depending on source)

The cheapest Android device I could find as of today uses a

  • MARVELL PXA910 800MHz (10 or 20 mtps, 200 mpps)

where the 20 million triangles per second appears to be a theoretical maximum attainable only if half of the triangles need not be drawn (by means of culling).
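Those mtps figures translate into a per-frame triangle budget once you fix a target frame rate (a sketch; the 60 fps target and the choice of the conservative 10 mtps figure are my assumptions):

```python
# Convert a peak triangles-per-second figure into a per-frame budget.
def triangles_per_frame(tps, fps=60):
    """Theoretical triangle budget per frame at a given frame rate."""
    return tps / fps

# Marvell PXA910, taking the conservative 10 mtps figure:
print(int(triangles_per_frame(10e6)))   # 166666

# Taking the optimistic 20 mtps figure, but assuming half are culled,
# lands you right back at the conservative budget:
print(int(triangles_per_frame(20e6) / 2))  # 166666
```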

I find the fact that Marvell differentiates between two peak values a bit suspicious; maybe one should also be a bit sceptical about the Tegra figures, which I found on a marketing slide and in a forum. Also, I have a device with a PowerVR SGX530 (14 mtps, 200 mpps) which renders my test app on an 800x480 screen at least as fast as my Tegra 2 T20 devices (71 mtps, 1200 mpps), which I tried at 1024x600.

I have two identical devices with the Tegra 2 T20, and the one running Android 3 renders my test app faster than the one running Android 2. Both run unofficial Android releases, though. I thought this might point to suboptimal GPU utilization, but the CPU load is shown as ridiculously low. Maybe there's SurfaceView overhead which starts to become significant beyond a certain frame rate -- but the PowerVR device also runs Android 2.

This has little to do with alpha blending so far (except that I read that on Nvidia chips it's implicitly done in the fragment shaders), but I felt it would be worth documenting these starting points. Stay tuned for updates.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow