Question

I am creating custom images that I later convert into an image pyramid for Seadragon AJAX. The images and the image pyramid are created with PIL. It currently takes a few hours to generate the images and the image pyramid for approximately 100 pictures with combined dimensions of about 32,000,000 by 1,000 pixels (yes, the image is very long and narrow). The performance is roughly similar to another algorithm I have tried (i.e. deepzoom.py). I plan to see whether python-gd would perform better, since most of its functionality is coded in C (from the GD library). I would assume a significant performance increase, but I am curious to hear the opinions of others. In particular, resizing and cropping are slow in PIL (with Image.ANTIALIAS). Will this improve considerably if I use python-gd?
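For context, the slow path looks roughly like the sketch below: downscaling each pyramid level with the ANTIALIAS filter and then cropping it into tiles. The file names, tile size, and function name are placeholders for illustration, not my actual code.

from PIL import Image

TILE_SIZE = 254  # placeholder tile size

def build_level(src_path, scale):
    """Downscale one pyramid level and cut it into tiles (illustrative only)."""
    img = Image.open(src_path)
    w, h = img.size
    # The ANTIALIAS resize is the expensive step I am asking about.
    level = img.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                       Image.ANTIALIAS)
    tiles = []
    for x in range(0, level.size[0], TILE_SIZE):
        for y in range(0, level.size[1], TILE_SIZE):
            box = (x, y, min(x + TILE_SIZE, level.size[0]),
                   min(y + TILE_SIZE, level.size[1]))
            tiles.append(level.crop(box))
    return tiles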

Thanks in advance for the comments and suggestions.

EDIT: The performance difference between PIL and python-gd seems minimal. I will refactor my code to reduce performance bottlenecks and add support for multiple processors. I have tested the Python 'multiprocessing' module, and the results are encouraging.
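A minimal sketch of the multiprocessing approach I have been testing, with one worker per source picture; the helper body and file names are placeholders rather than my real code.

import multiprocessing
from PIL import Image

def make_pyramid(src_path):
    # Placeholder: generate the pyramid/tiles for one source picture with PIL.
    img = Image.open(src_path)
    img.thumbnail((1024, 1024), Image.ANTIALIAS)
    img.save(src_path + ".thumb.png")
    return src_path

if __name__ == "__main__":
    paths = ["picture_%03d.png" % i for i in range(100)]  # placeholder file names
    pool = multiprocessing.Pool()        # defaults to one process per CPU core
    for done in pool.imap_unordered(make_pyramid, paths):
        print("finished", done)
    pool.close()
    pool.join()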


Solution

PIL is mostly in C.

Antialiasing is slow. When you turn off antialiasing, what happens to the speed?
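One way to check, assuming the resize call is the bottleneck, is to time the same resize with different filters; the file name here is a placeholder.

import time
from PIL import Image

img = Image.open("sample.png")  # placeholder image
target = (img.size[0] // 2, img.size[1] // 2)

for name, flt in [("NEAREST", Image.NEAREST),
                  ("BILINEAR", Image.BILINEAR),
                  ("ANTIALIAS", Image.ANTIALIAS)]:
    start = time.time()
    img.resize(target, flt)
    print(name, round(time.time() - start, 3), "seconds")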

OTHER TIPS

VIPS includes a fast deepzoom creator. I timed deepzoom.py and on my machine I see:

$ time ./wtc.py 
real    0m29.601s
user    0m29.158s
sys     0m0.408s
peak RES 450mb

where wtc.jpg is a 10,000 x 10,000 pixel RGB JPG image, and wtc.py is using these settings.
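For reference, driving deepzoom.py typically looks something like the sketch below; the parameter values are illustrative and not necessarily the ones wtc.py used.

import deepzoom  # deepzoom.py from the python-deepzoom project

# Illustrative settings; wtc.py may use different values.
creator = deepzoom.ImageCreator(tile_size=128,
                                tile_overlap=2,
                                tile_format="png",
                                resize_filter="antialias")
creator.create("wtc.jpg", "wtc.dzi")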

VIPS is around three times faster and needs a quarter of the memory:

$ time vips dzsave wtc.jpg wtc --overlap 2 --tile-size 128 --suffix .png[compression=0]
real    0m10.819s
user    0m37.084s
sys     0m15.314s
peak RES 100mb

I'm not sure why sys is so much higher.
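The same operation is also available from Python via pyvips; the sketch below should be roughly equivalent to the command line above, assuming pyvips is installed.

import pyvips

image = pyvips.Image.new_from_file("wtc.jpg", access="sequential")
# Mirrors the dzsave invocation above: 2-pixel overlap, 128-pixel tiles,
# uncompressed PNG tiles.
image.dzsave("wtc", overlap=2, tile_size=128, suffix=".png[compression=0]")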

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow