Question

This is really a two-part question, since I don't fully understand how these things work just yet:

My situation: I'm writing a web app which lets the user upload an image. My app then resizes it to something displayable (e.g. 640x480-ish) and saves the file for later use.

My questions:

  1. Given an arbitrary JPEG file, is it possible to tell what the quality level is, so that I can use that same quality when saving the resized image?
  2. Does this even matter?? Should I be saving all the images at a decent level (e.g. 75-80), regardless of the original quality?

I'm not so sure about this because, as I figure it (let's take an extreme example), if someone had a 5-megapixel image saved at quality 0, it would be blocky as anything. Reducing the image size to 640x480 would smooth out the blockiness and make it barely noticeable... until I saved it at quality 0 again...

On the other end of the spectrum, if there was an image which was 800x600 with q=0, resizing to 640x480 isn't going to change the fact that it looks like utter crap, so saving with q=80 would be redundant.

Am I even close?

I'm using the GD2 library with PHP, if that is of any use.


Solution

  1. JPEG is a lossy format. Every time you re-save the same image as a JPEG, regardless of quality level, you will reduce the actual image quality. Therefore, even if you did obtain a quality level from the file, you could not maintain that same quality when saving the JPEG again (even at quality=100).

  2. You should save your JPEG at as high a quality as you can afford in terms of file size, or use a lossless format such as PNG.

Low-quality JPEG files do not simply become blockier. Instead, colour depth is reduced and detail in parts of the image is discarded. You can't rely on lower-quality images being merely blocky and looking OK at smaller sizes.

According to the JFIF spec, the quality number (0-100) is not stored in the image header, although the horizontal and vertical pixel density is.
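
If you want to see that generation loss for yourself, here is a small GD sketch (the file names and the quality value are arbitrary choices) that decodes and re-encodes the same image a few times and prints the file size after each pass:

    <?php
    // Re-save the same JPEG several times at a fixed quality; each decode/re-encode
    // cycle is a chance to discard more information, even at quality 85.
    copy('input.jpg', 'generation.jpg');

    for ($i = 1; $i <= 5; $i++) {
        $im = imagecreatefromjpeg('generation.jpg');
        imagejpeg($im, 'generation.jpg', 85);   // decode + re-encode = fresh loss
        imagedestroy($im);
        clearstatcache();
        echo "pass $i: ", filesize('generation.jpg'), " bytes\n";
    }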

OTHER TIPS

You can check the compression level with ImageMagick. Download and installation instructions can be found on the official website.

After you install it, run the following command from the command line:

identify -format '%Q' yourimage.jpg

You should get a value from 0 (low quality, small file size) to 100 (high quality, large file size).

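If you would rather run this check from PHP (the asker's setup), a minimal sketch that shells out to identify, assuming ImageMagick is installed and on the PATH (the helper name is my own):

    <?php
    // Ask ImageMagick's identify for the quality estimate of a JPEG.
    // Returns an int 0-100, or null if identify is unavailable or gives no output.
    function jpeg_quality_via_identify(string $path): ?int
    {
        $out = shell_exec('identify -format %Q ' . escapeshellarg($path) . ' 2>/dev/null');
        return (is_string($out) && trim($out) !== '') ? (int) trim($out) : null;
    }

    var_dump(jpeg_quality_via_identify('yourimage.jpg'));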

For future visitors: to check the quality of a given JPEG, you can just use the ImageMagick tooling:

$> identify -format '%Q' filename.jpg
   92%
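
If the Imagick PHP extension is available, you can read the same estimate without shelling out; a short sketch using Imagick::getImageCompressionQuality() (the file name is a placeholder):

    <?php
    // Read the JPEG quality estimate through the Imagick extension instead of the CLI.
    $image = new Imagick('filename.jpg');
    echo $image->getImageCompressionQuality(), "\n";   // e.g. 92
    $image->clear();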

The JPEG compression algorithm has several parameters that influence the quality of the resulting image.

One such parameter is the quantization table, which defines how many bits are spent on each coefficient. Different programs use different quantization tables.

Some programs allow the user to set a quality level of 0-100, but there is no common definition of this number: an image saved in Photoshop at 60% quality takes 46 KB, while the same image saved in GIMP takes only 26 KB. Their quantization tables are also different.

There are other parameters as well, such as chroma subsampling, the DCT method, and so on.

So you can't describe all of them with a single quality-level number, and you can't compare the quality of JPEG images by a single number. But, like Photoshop or GIMP, you can define such a number to describe the compromise between size and quality.

More information: http://patrakov.blogspot.com/2008/12/jpeg-quality-is-meaningless-number.html

Common practice is to resize the image to the appropriate dimensions first and apply JPEG compression after that. That way, huge and medium-sized originals end up with the same dimensions and quality.
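
In GD2 terms (the asker's setup), that workflow looks roughly like the sketch below; the 640x480 box and the quality of 80 are just placeholder choices:

    <?php
    // Resize an uploaded JPEG to fit inside a bounding box, then encode it exactly once.
    function resize_and_save(string $srcPath, string $dstPath, int $maxW = 640, int $maxH = 480, int $quality = 80): void
    {
        [$w, $h] = getimagesize($srcPath);
        $scale = min($maxW / $w, $maxH / $h, 1);   // never upscale
        $newW  = (int) round($w * $scale);
        $newH  = (int) round($h * $scale);

        $src = imagecreatefromjpeg($srcPath);
        $dst = imagecreatetruecolor($newW, $newH);
        imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
        imagejpeg($dst, $dstPath, $quality);       // the only lossy step
        imagedestroy($src);
        imagedestroy($dst);
    }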

As there are already two answers using identify, here's one that also outputs the file name (for scanning multiple files at once):

If you wish to have a simple output of filename: quality for use on multiple images, you can use

identify -format '%f: %Q' *

to show the filename + compression of all files within the current directory.

So, there are basically two cases you care about:

  1. If an incoming image has quality set too high, it may take up an inappropriate amount of space. Therefore, you might want, for example, to reduce incoming q=99 to q=85.

  2. If an incoming image has quality set too low, it might be a waste of space to raise its quality. Except that an image that's had a large amount of data discarded won't magically take up more space when the quality is raised; blocky images compress very nicely even at high quality settings. So, in my opinion, it's perfectly OK to raise incoming q=1 to q=85.

From this I would think simply forcing a decent quality setting is a perfectly acceptable thing to do.

Here is a formula I've found to work well:

  1. jpg100size (the size in bytes it should not exceed at 98-100% quality) = width * height / 1.7

  2. jpgxsize = jpg100size * x (where x is the quality as a fraction, e.g. 0.65 for 65%)

So you could use these to estimate statistically what quality your JPEG was last saved at. If you want to get it down to, say, 65% quality and want to avoid re-encoding unnecessarily, compare the file size first to make sure it isn't already that low, and only then reduce the quality (see the sketch below).
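
A sketch of that heuristic in PHP; the 1.7 divisor and the 0.65 target come straight from the formula above and should be treated as rough empirical constants:

    <?php
    // Estimate what a ~65%-quality JPEG of these dimensions "should" weigh,
    // and only consider re-encoding when the current file is clearly heavier.
    function should_reencode(string $path, float $targetFraction = 0.65): bool
    {
        [$width, $height] = getimagesize($path);
        $jpg100size    = $width * $height / 1.7;           // rough size at 98-100% quality
        $jpgTargetSize = $jpg100size * $targetFraction;    // rough size at the target quality

        return filesize($path) > $jpgTargetSize;
    }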

Every new save of the file will further decrease overall quality; by using higher quality values you will preserve more of the image, regardless of what the original image quality was.

If you resave a JPEG using the same software that created it originally, using the same settings, you'll find that the damage is minimized - the algorithm will tend to throw out the same information it threw out the first time. I don't think there's any way to know what level was selected just by looking at the file; even if you could, different software almost guarantees different parameters and rounding, making a match almost impossible.

This may be a silly question, but why would you be concerned about micromanaging the quality of the image? I believe that if you use ImageMagick to do the conversion, it will manage the quality of the JPEG for you for best effect. http://www.php.net/manual/en/intro.imagick.php

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow